Question about LionsOS
Hello! Looking at the LionsOS website and documentation you posted, it seems like Lions is an operating system built using Microkit out of components/drivers that use the sDDF? Do I have the right shape of things in my head?

Also, would sDDF drivers for a static OS like Lions (or another OS built with Microkit, since that's what it's built for) be potentially portable to other operating systems using the seL4 kernel and sDDF? Like, if I design a non-static operating system that uses sDDF drivers, could I use existing code from Lions, or would I face issues with code assuming that the system's components can't change at runtime?

Thanks,
Isaac
Hi Isaac

"Isaac" == Isaac Beckett writes:
Isaac> Hello! Looking at the LionsOS website and documentation you posted, it seems like Lions is an operating system built using Microkit out of components/drivers that use the sDDF? Do I have the right shape of things in my head?

Pretty much. LionsOS at the moment is really a principled way to build on sDDF and Microkit components, plus some examples.

Isaac> Also, would sDDF drivers for a static OS like Lions (or other OS built in Microkit, since that's what it's built for) be potentially portable to other operating systems using the seL4 kernel and sDDF?

Yes. Providing the interfaces are the same, you can use the same driver anywhere --- and because LionsOS-style drivers are as simple as possible, and have few (or no) dependencies apart from the Microkit and the sDDF, they're likely to work for systems other than LionsOS.

Isaac> Like, if I design a non-static operating system that uses sDDF drivers, could I use existing code from Lions or would I face issues with code assuming that the system's components can't change at runtime?

It's quite hard to make a fully dynamic OS on top of Microkit, because the architecture (all the memory regions, PDs etc.) is hard-coded at build time into the System Description File. About the best you can do is make it possible to populate individual protection domains with code at runtime --- there's no way to create new PDs. The overall architecture (which PDs exist and how they communicate) is fixed.

This is what's meant by a 'static' system, and in general it is what you want for a secure embedded OS, as it is amenable to analysis for information flow, and cannot change to make that analysis invalid.

--
Dr Peter Chubb   https://trustworthy.systems/
Trustworthy Systems Group, CSE, UNSW
Core hours: Mon 8am-3pm; Wed: 8am-5pm; Fri 8am-12pm.
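To make the "few or no dependencies" point concrete, here is roughly what a Microkit/sDDF-style driver skeleton looks like. The init()/notified() entry points, microkit_notify() and microkit_irq_ack() are the Microkit protection-domain interface; the channel numbers and the commented-out queue helpers are illustrative placeholders (the real sDDF queue functions depend on the device class). Device registers and shared queue regions are mapped at fixed addresses by the System Description File, so the driver itself contains no discovery or allocation code:

#include <microkit.h>

#define IRQ_CH    0   /* channel wired to the device interrupt in the SDF (placeholder number) */
#define CLIENT_CH 1   /* channel to the virtualiser/client component (placeholder number) */

void init(void)
{
    /* Device registers and sDDF queue regions are already mapped at
     * build-time-fixed virtual addresses; nothing to probe or allocate. */
}

void notified(microkit_channel ch)
{
    switch (ch) {
    case IRQ_CH:
        /* drain completed descriptors into the shared sDDF queues here
         * (hypothetical helper, e.g. handle_irq()) */
        microkit_irq_ack(ch);
        microkit_notify(CLIENT_CH);   /* tell the client new data is available */
        break;
    case CLIENT_CH:
        /* process new requests the client placed in the shared queues
         * (hypothetical helper, e.g. handle_request()) */
        break;
    }
}

Because the only environment such a driver sees is these two entry points plus shared-memory queues, it carries essentially no LionsOS-specific baggage and can be reused by any seL4 system that reproduces the same interfaces.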
Sorry, I should clarify. I meant making a dynamic OS that uses the sDDF. Not on Microkit, since it's not meant for that. But something that is more flexible at runtime, designed for general purpose use so that it can load components when they're needed instead of having a fixed set of components running at all times.

So like, detecting a device being attached, and loading drivers for it from the disk. Or loading an emulation layer/OS personality for another operating system, like for Linux or Windows programs.

While I was thinking of that I also got another idea. Some graphics cards support technology similar to hardware virtualization extensions on CPUs, such that they can be configured (with appropriate drivers, of course) to behave as multiple logical devices on one physical device. The term for this I believe is SR-IOV.

Support for this is patchy, from what I hear, since this feature was originally marketed for enterprise customers and not made available to individuals, so setting it up involved (or used to involve) hacks to trick the driver and/or firmware into allowing those features to be used.

So I was thinking, it may make sense to write a smaller, simpler native driver for seL4 that does very little but configure logical/virtual devices, and then those can be given to either native drivers written from scratch, or to ported Linux code, or to Linux VMs. Of course this doesn't consider the possibility (probably very likely) of hardware side channels in the GPU, but then again neither does seL4.
On Apr 17, 2024, at 5:43 PM, Peter Chubb wrote: It's quite hard to make a fully dynamic OS on top of Microkit [...]
On 4/17/24 18:02, Isaac Beckett wrote:
Sorry, I should clarify. I meant making a dynamic OS that uses the sDDF. [...]
If you are going to be using seL4 for serious work on an out-of-order CPU vulnerable to Spectre, and are not able to ensure that all memory containing potentially-sensitive data is marked as device memory, I recommend consulting with an expert on the CPU architecture in question to ensure that seL4 properly implements mitigations. Time protection is a principled solution to side-channel attacks, but it requires that the time consumed by operations on sensitive data is not observable. This may or may not be practical for a given workload.

--
Sincerely,
Demi Marie Obenour (she/her/hers)
On 4/17/24 23:20, Gernot Heiser via Devel wrote:
On 18 Apr 2024, at 13:13, Demi Marie Obenour wrote: [...] properly implements mitigations. Time protection is a principled solution to side-channel attacks, but it requires that the time consumed by operations on sensitive data is not observable.
This is actually not a correct summary of time protection (TP). In contrast, TP *ensures* that kernel operations are constant time, and that userspace operations do not produce observable timing variations across security domains.
Having said that, TP isn’t in the mainline kernel and is still experimental. We’re planning to restart that project mid-year.
Gernot
How does time protection handle these two cases?

1. Untrusted code can request a service from trusted code that involves processing sensitive data, and this request may take an unbounded amount of time. In this case, it is not possible to pad the time actually consumed to the maximum possible value, because the maximum possible value does not exist. Cryptographic operations involving rejection sampling may be an example here. It is possible to pad the time to a very large value, since the probability that N operations will be needed decreases exponentially with N, but this may be prohibitive performance-wise.

2. Operations on sensitive data must be able to consume all available CPU resources. The main example I can think of is human-interactive systems. These may be so heavily oversubscribed that it is simply not possible to statically allocate resources to different security domains. Instead, even security domains involving sensitive data must be able to compete with each other.

A web browser is a good example of this. The number of security domains is at least the number of origins in use, which can be extremely large. Furthermore, some origins might be CPU-intensive. Therefore, the overall system load is an unavoidable side-channel, at least if one wants fair scheduling.

--
Sincerely,
Demi Marie Obenour (she/her/hers)
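To make the "pad the time to a very large value" option in point 1 concrete: run the rejection loop a fixed number of times regardless of when a draw is first accepted, and keep the first accepted value using constant-time selection. A rough sketch follows; the 64-iteration budget and the assumption that each draw is accepted with probability at least 1/2 (e.g. bound >= 2^31 for a 32-bit generator) are illustrative, not taken from any particular cryptographic library:

#include <stdint.h>

/* Sample uniformly from [0, bound) by rejection, padded to a fixed number of
 * iterations so the running time does not depend on how early a draw was
 * accepted.  If each draw is accepted with probability >= 1/2, the chance
 * that all 64 draws are rejected is at most 2^-64. */
uint32_t sample_uniform_padded(uint32_t bound, uint32_t (*rng)(void))
{
    enum { ITERATIONS = 64 };      /* illustrative padding budget */
    uint32_t result = 0;
    uint32_t have_result = 0;      /* 0 or 1, used to build a constant-time mask */

    for (int i = 0; i < ITERATIONS; i++) {
        uint32_t candidate = rng();                       /* always draw, even if done */
        uint32_t accept = (uint32_t)(candidate < bound);  /* 0 or 1 */
        uint32_t take = accept & (~have_result & 1u);     /* first acceptance only */
        uint32_t mask = 0u - take;                        /* 0x00000000 or 0xFFFFFFFF */
        result = (result & ~mask) | (candidate & mask);
        have_result |= take;
    }
    /* A real implementation would also report failure if have_result is 0. */
    return result;
}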
On 18 Apr 2024, at 13:40, Demi Marie Obenour wrote:
How does time protection handle these two cases?
1. Untrusted code can request a service from trusted code that involves processing sensitive data, and this request may take an unbounded amount of time. In this case, it is not possible to pad the time actually consumed to the maximum possible value, because the maximum possible value does not exist. […]
That's an *algorithmic* timing channel (and therefore requires different approaches). Time protection is about *microarchitectural* timing channels. Please check the paper on what threats it addresses: https://trustworthy.systems/publications/abstracts/Ge_YCH_19.abstract
2. Operations on sensitive data must be able to consume all available CPU resources. The main example I can think of is human-interactive systems. These may be so heavily oversubscribed that it is simply not possible to statically allocate resources to different security domains. Instead, even security domains involving sensitive data must be able to compete with each other.
This isn't a microarchitectural channel either.

Gernot
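For readers who want a concrete picture of the mechanism the linked paper describes, the core idea on a partition switch is: flush the microarchitectural state the outgoing domain could have touched, then pad the switch to a precomputed worst-case latency so its duration reveals nothing about what ran before. The following is only a rough userspace illustration of that padding idea, not seL4 code; the budget constant and the flush callback are stand-ins for what the kernel would do with hardware support:

#include <stdint.h>
#include <time.h>

static uint64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
}

/* Assumed worst-case cost of the switch path, measured offline (placeholder). */
#define SWITCH_BUDGET_NS 200000ull

void domain_switch(void (*flush_microarch_state)(void))
{
    uint64_t start = now_ns();
    flush_microarch_state();   /* e.g. caches, TLB, branch predictor (platform-specific) */
    while (now_ns() - start < SWITCH_BUDGET_NS) {
        /* spin until the fixed budget has elapsed, so the switch always
         * takes the same time regardless of what ran before */
    }
}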
"A web browser is a good example of this. The number of security
domains is at least the number of origins in use, which can be
extremely large. Furthermore, some origins might be CPU-intensive."
Yes. That's the challenge when you try to use a solution aimed to static
systems for dynamic systems... Not sure how other will solve this, we will
just try to make some sacrifice, in example, by assuming a total fixed
memory and CPU available for a domain and how will you offer that to the
end user. What is not compatible is the old fashioned way of happily and
chaotically assigning resources (expressions like "must be able to compete
with each other" I don't think are too much compatible with a predictable
behavior...). What is not "normal" to me is how browsers behave and how
much we allow to them (that is a party for attackers) in example, with
crazy stuff like javascript, etc. So, at some point, the solution designer
must decide what to offer to end user. In example, you can set up fixed
resources per domain and fixed number of domains. Yes, it is a waste of
resources. But it is secure. So you offer this to the customers and explain
how it works: no more free browser resources party, no more browsers taking
100% cpu and lots of GB of RAM (with 20 tabs...). Are we crazy? Instead,
you can just make a reservation of resources for the secure static system
and put your secure stuff there. And forget of those resources, they are
not free to be used by other programs, nor by the OS,... Just partition the
full system, have a static partition for serious stuff and and dynamic
partition for toys. Just a silly example.
On 18 Apr 2024, at 19:18, Hugo V.C. wrote:
"A web browser is a good example of this. The number of security domains is at least the number of origins in use, which can be extremely large. Furthermore, some origins might be CPU-intensive."
Yes. That's the challenge when you try to use a solution aimed at static systems for dynamic systems... Not sure how others will solve this,
Operating systems have been dealing with this kind of problem for well over half a century, it's called the working-set model. You may have 1000s of processes, but very few are active during a time window, and you allocate the resources to them.

Remember what virtual memory was originally introduced for? Allowing program memories to exceed the size of physical memory.

All this is a matter for user mode. The microkernel's job is to do the things usermode cannot.

Gernot
"All this is a matter for user mode. "
Well... in seL4. In standard OSs (Windows, Linux...) this is a matter of
kernel mode. And here is the party. Of course if you use seL4 as TCB then
you can partition whatever you want on top.
The challenge that I probably didn't expressed correctly, is to integrate
seL4 on top of other dynamic systems (in example to virtualize a browser),
as seL4 needs to know in advance how much memory will have available...
Running seL4 on bare metal is a "kids game" (don't want to offend anyone),
the challenge is to integrate it on systems where resources are dynamically
assigned.
I think we are thinking about different threat models Gernot, I'm looking
to isolate what comes from the outside World (Internet) from inside World
(standard OS). I think you are talking about seL4 managing all the systems
resources, as a TCB. Absolutely different stuff.
On 18/04/24 12:12, Gernot Heiser via Devel wrote:
I don’t think I understand. Running seL4 in any way other than on bare metal makes no sense whatsoever.
Gernot
Hi Gernot, I agree.

Hugo, what would be the point in doing what you suggest? The whole point of seL4 is to be as close to the silicon as possible.

Regards,
David
"Running seL4 in any way other than on bare metal makes no sense
whatsoever."
There are scenarios where it makes sense. The power of seL4 is isolation
guarantee. That is, software running virtualized on top of seL4 can't
escape from seL4. This means it can be used to safely run a web browser
(virtualized) in a standard OS. A browser is only exposed to external
threats, that come from the Internet, they reach the host tcp/ip stack and
then the browser. Here an attacker can exploit an OS flaw (in this
scenario, the tcp/ip stack) or can exploit a browser flaw. What attackers
are doing now is to exploit browsers, not tcp/ip. Why? Any bug hunter knows
the answer: it's, by far, much easier to find (and succesfully weaponize)
bugs in browsers!
So, what vendors have been trying is to "virtualize" (really just
sandboxing) browsers (very inefective as they rely on OS own mechanisms,
which are not always available or robust), or running cloud browsers, which
is painful, not user friendly, and expensive...
A solution better than that would be to run the browser virtualized on
seL4. You don't solve the host tcp/ip stack surface attack, nor the "glue"
software surface attack required to run seL4 virtualized, but, again, any
bug hunter would always prefer the scenario of exploiting a browser than
being forced to exploit the tcp/ip stack or "glue" software. Exploitation
is a matter of stats.
That is the scenario I'm talking about.
On 4/18/24 07:12, Gernot Heiser via Devel wrote:
I don’t think I understand. Running seL4 in any way other than on bare metal makes no sense whatsoever.
Gernot
I think the point is that if a system needs dynamic resource allocation, _something_ needs to do it. Traditional systems do this in the kernel. seL4 declares this to be the responsibility of userspace, but I'm not aware of a userspace component that can actually do that job.

Also, running seL4 in a virtualized environment makes a huge amount of sense for testing and debugging, or if one is shipping a virtual appliance.

--
Sincerely,
Demi Marie Obenour (she/her/hers)
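For context on "the responsibility of userspace": all free memory seL4 hands to the initial task is untyped, and a userspace resource manager turns it into kernel objects on demand with seL4_Untyped_Retype. The call itself is simple; the hard part Demi alludes to is the bookkeeping around it (tracking untypeds, free CSpace slots, revocation), which is elided here. A sketch, assuming the caller already owns an untyped capability and an empty slot, with an ARM-specific object type chosen purely for illustration:

#include <sel4/sel4.h>

/* Turn part of an untyped capability into one 4 KiB frame at runtime.
 * 'untyped' is a cap to untyped memory; 'cnode'/'dest_slot' name an empty
 * slot in the resource manager's CSpace.  Choosing them is the caller's
 * bookkeeping problem. */
seL4_Error alloc_small_frame(seL4_CPtr untyped, seL4_CNode cnode, seL4_Word dest_slot)
{
    return seL4_Untyped_Retype(untyped,
                               seL4_ARM_SmallPageObject, /* object type (arch-specific) */
                               0,          /* size_bits: ignored for fixed-size objects */
                               cnode,      /* destination CNode */
                               0, 0,       /* node_index/node_depth: use 'cnode' directly */
                               dest_slot,  /* slot where the new frame cap is placed */
                               1);         /* number of objects to create */
}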
"running seL4 in a virtualized environment makes a huge amount
of sense for testing and debugging, or if one is shipping a virtual
appliance."
or running on top of any other standard OS... Those are latest news (April
17th):
"*Google and Mozilla on Tuesday announced security updates that address
more than 35 vulnerabilities in their browsers, including a dozen
high-severity flaws.*"
https://www.securityweek.com/chrome-124-firefox-125-patch-high-severity-vuln...
I'm not sure if people understands what is the scenario right now. This is
a Circus. Browsers are pathetic in terms of security. Too complex, too big,
too much JIT... too much interaction with Internet. The worst ever scenario
we could figure out, and this only gets worse every year as, instead of
simplifying the protocols we use for Internet, we make them more complex
(aka "more features"). So:
1) Let's forget the naive idea of users changing behavior or vendors making
secure software. Users are "ignorant" and vendors are unethical. That's raw
reality.
2) Let's forget about users using (wonderful) secure platforms (like
QubesOS) or... ups..., looks there's nothing else, as seL4 is aimed for
embedded world (right now)
3) Let's try to build solutions, far from perfect, but that improve
(current) real requirements security. In example, using seL4 integrated on
other (insecure) platforms.
The magic of seL4 is correctness. Anyway, this does not magically solves
the Universe problems. But keeps being correct.
There's no engineering limitation that avoids running seL4 on top of any
other OS using virtualization functionality. And besides unfounded non
engineering based facts, the reality is that an attacker will almost ALWAYS
(exceptions are exceptions by definition) prefer a scenarios where seL4 is
not present to the same scenario where seL4 is there. If anyone can prove
I'm wrong here, please correct me.
So, my proposal, is, of course, let's fight to try putting seL4 as close
as possible to the silicon and anywhere if possible, but... if not
possible, let's fight so put it somewhere else!
In example: we have a scenario with Host OS (with nested virtualization
enabled) ==> hypervisor XYZ ==> XYZ OS layer ==> Clown Buggy Browser
we can convert this to:
Host OS (with nested virtualization enabled) ==> seL4 ==> XYZ OS layer ==>
Clown Buggy Browser
As simple as this. In the last scenario is, by far, more complex (need to
exploit drivers, tcp/ip stack, "glue" software) to jump from "Clown Buggy
Browser" to "Host OS". And that's enough for me.
Moreover, in the last scenario, I can have:
Host OS (with nested virtualization enabled) ==> seL4 | ==> XYZ OS layer
==> Clown Buggy Browser (Work)
| ==> ABC OS layer ==> Clown Buggy Browser (Fun)
| ==> ABC OS layer ==> Clown Buggy RandomTrendyApp
Does it sound familiar to you guys...?
And yes, I obviously prefer Mediterranean food and:
MyPerfectSilicon (RISCV) ==> seL4 ==> Native seL4 Code == > Clown Buggy
Browser/App
But this is not going to happen in the short term, so, meanwhile, I think
that it is not crazy to find half-way solutions to just be able to survive
this era.
Hello Hugo,

On 2024-04-19 10:06, Hugo V.C. wrote:
1) Let's forget the naive idea of users changing behavior or vendors making secure software. Users are "ignorant" and vendors are unethical. That's raw reality.
Both Chromium and Firefox are open-source. Calling Mozilla and all the volunteers unethical is unkind. Nothing is stopping you and like minded people from forking and creating a version which is more secure.
2) Let's forget about users using (wonderful) secure platforms (like QubesOS) or... ups..., looks there's nothing else, as seL4 is aimed for embedded world (right now)
This doesn't solve the problem of insecurity on the browser side, it at best helps to contain the damage if a browser process is compromised. But it only contains damage towards the OS side, not towards the user. Considering how much is done in browsers nowadays, an attacker is pretty much done if they have the same information and access as the browser has.
The magic of seL4 is correctness. Anyway, this does not magically solve the Universe's problems. But it keeps being correct.

There's no engineering limitation that prevents running seL4 on top of any other OS using virtualization functionality. And unfounded, non-engineering-based claims aside, the reality is that an attacker will almost ALWAYS (exceptions are exceptions by definition) prefer a scenario where seL4 is not present to the same scenario where seL4 is there. If anyone can prove I'm wrong here, please correct me.
Of course adding another layer of indirection makes life harder for attackers. Following this line of thought, why stop with one nested virtual machine, why not keep going? You could nest Linux on top of seL4 on top of Linux. On the other hand, it makes everything more complex and that will give more attack vectors in itself. Using an seL4 guest OS instead of a Linux guest OS can only stop attacks that rely on weaknesses in the host OS that can be exploited by guest kernels, but not from (guest) user space. You can achieve the same level of security by disallowing most of the browser code from making system calls and not using any virtualisation at all.
So, my proposal, is, of course, let's fight to try putting seL4 as close as possible to the silicon and anywhere if possible, but... if not possible, let's fight so put it somewhere else!
In example: we have a scenario with Host OS (with nested virtualization enabled) ==> hypervisor XYZ ==> XYZ OS layer ==> Clown Buggy Browser
we can convert this to:
Host OS (with nested virtualization enabled) ==> seL4 ==> XYZ OS layer ==> Clown Buggy Browser
As simple as this. In the last scenario it is, by far, more complex (you need to exploit drivers, the TCP/IP stack, "glue" software) to jump from "Clown Buggy Browser" to "Host OS". And that's enough for me.
But the same is true if a browser runs as a separate user: if it is compromised, it is contained to whatever permissions it has, which can be very little. With your solution you just need to do it twice instead of once. Security-wise, having to do the same thing twice isn't really more secure than having to do it once.

Greetings,
Indan
Hello.
There have been many interesting points, but it was a strawman fallacy to
suggest Hugo accused "all the volunteers" of being unethical.
I think Hugo has a salient point when he says a world of bespoke seL4
images for every application is a distant one. I share the feeling that a
hybrid, intermediate step is meaningful. I also echo the idea that
exploitation at scale is a matter of statistics.
Even if virtualizing the seL4 undermines its security properties, it is
still a fast and correct microkernel. Putting a usable seL4 image in the
hands of enthusiasts, budding developers, or even seasoned developers would
be a boost to the ecosystem.
One reason Linux is so popular is that you can use Linux to develop Linux.
I would be thrilled to boot any meaningful home or development environment
on top of the seL4. I would never stop talking about it.
Cheers,
Michael
On Fri, Apr 19, 2024 at 3:07 PM Michael Neises wrote:
I think Hugo has a salient point when he says a world of bespoke seL4 images for every application is a distant one. I share the feeling that a hybrid, intermediate step is meaningful. I also echo the idea that exploitation at scale is a matter of statistics.
Even if virtualizing the seL4 undermines its security properties, it is still a fast and correct microkernel. Putting a usable seL4 image in the hands of enthusiasts, budding developers, or even seasoned developers would be a boost to the ecosystem.
One reason Linux is so popular is that you can use Linux to develop Linux. I would be thrilled to boot any meaningful home or development environment on top of the seL4. I would never stop talking about it.
I don't really think the static Microkit/CAmkES type of architecture will ever be viable for desktops or self-hosting. Basically nobody's going to put up with rebooting into a different image for each application like an 80s-era home computer (if that's the kind of thing you meant by "bespoke seL4 images"). The closest you'd get would be something like Genode, which is more or less a dynamic system that's managed like a static one, and even something like that would be too awkward for most people. Even for some types of embedded systems, a static architecture is going to be rather limiting (e.g. a CNC machine that can have additional hardware like automatic parts loaders or additional axes added). Running per-application static seL4 images in a dynamic hypervisor also seems kind of pointless; you might as well use a regular (non-hypervisor) dynamic microkernel OS instead.

IMO the best approach for a microkernel-based desktop OS is something architecturally more like QNX than Microkit/CAmkES, since porting existing Linux code to something like that is going to be relatively easy. Even though such a system is going to be difficult or impossible to formally verify as far as security goes, since it's so open-ended, it's still going to be a lot more architecturally secure than anything mainstream.
On Sat, 20 Apr 2024 at 09:33, Andrew Warkentin wrote:
I don't really think the static Microkit/CAmkES type of architecture will ever be viable for desktops or self-hosting. Basically nobody's going to put up with rebooting into a different image for each application like an 80s-era home computer (if that's the kind of thing you meant by "bespoke seL4 images").
On the other hand, Microkit feels a bit like the image build systems used by the EROS family, which could be used as dynamic systems. Because the systems were orthogonally persistent, the image generated was the /initial/ image for the system; new objects would be saved to the image as if they had been declared in the mki file. You could also use these images as a sort of initrd, with enough features available to bootstrap the rest of the system from disk.

--
William ML Leslie
Hi Indan.
"Both Chromium and Firefox are open-source. Calling Mozilla and all the
volunteers unethical is unkind. Nothing is stopping you and like minded
people from forking and creating a version which is more secure."
I talked about "vendors" in general. Never specified about any particular
vendor. It is a generalization.
If we can't generalize then switch off the debate... Water does not boil at
100 degrees... always...
I love Mozilla work. They invented Rust. I love and appreciate volunteers
work around the World.
Sadly, this do not changes the fact that vendors (in general) are
unethical.
"This doesn't solve the problem of insecurity on the browser side, it at
best helps to contain the damage if a browser process is compromised.
But it only contains damage towards the OS side, not towards the user.
Considering how much is done in browsers nowadays, an attacker is pretty
much done if they have the same information and access as the browser
has."
Step by Step Indan... Yes, I know that.
But containing damage to the OS this is a BIG step ahead. In example, to
stop ransomware.
Ransomware doesn't come hide in donuts... it comes in software users run in
the OS, so
very firsts step is stop such software to be able to reach the OS. Next
step would be to
limit what can to the browser. And next to separate usage. This is
something browsers
have tried ad nauseam unsuccessfully (they keep using OS API for this
purpose and attackers
just bypass those "sandboxes"). So Yes. First, we contain the damage. The
problem you are
talking about is for a "next phase".
"Of course adding another layer of indirection makes live harder for
attackers. Following this line of through, why stop with one nested
virtual machine, why not keep going?"
You know the answer: performance.
"You could nest Linux on top
of seL4 on top of Linux."
I already do that Indan. On production. seL4 amazing performance makes
nested virtualization
penalty assumable to me and there are scenarios (critical services) where
security comes first, at least
in my business.
"On the other hand, it makes everything more complex"
I agree. I attach to the KISS principle. But there's not a solution that
fits everywhere.
Until I have a decent way of deploying seL4 bare metal solutions (which
doesn't require me to
recruit SpaceX engineers) I run seL4 virtualized. Then I move the
virtualized environment to different
OSs. This boosts my dev. It is a matter of being. On the other hand, unless
you convince some billions
of Windows users to switch to seL4 running on a C3P0 hardware without GPU
support, they are left
without a solution other than: "your OS is a s**t" we don't develop
solution on top of s**t. I don't
think this is the path to convince people to adopt new technologies. I
prefer to give options. I prefer to
give the option of running seL4 on top of Linux or Windows or Mac and then,
give time to time...
seL4 is the future, it will penetrate every market, but we need to make
people know it.
"and that will give more attack vectors in itself."
I have had this discussion many times. Yes. Theoretically is what you say.
In the real World (the one
where an attacker evaluates the attack surface), whe you have ested
virtualization, you are left to:
1) reachable tcp/ip stack (you don't reach ALL the tcp/ip stacks of the
virtualized layers, just one, the rest is "glue" software)
2) permissions of the glue software of component of (1). Is it a qemu
driver? Don't like qemu driver security? Nor me. Then let's focus on this,
as
THIS is a real entry point. Once you pass the packets to seL4, game is over
for the attacker. You can't do dirty hyercalls tricks (like with other
hypervisors) to escape.
3) the app/OS in the guest. I don't care, to me all this layer is just
untrusted. I just care about separate untrusted environments in different
virtualized roles
Or in other words: yes, looks like there are more opportunities, but in
reality the "added" opportunities are in a layer (emulation, hypervisors,
etc) that represent a negligible amount of remote compromises, while all
the compromises are on the layer of OS/app... virtualize it, and forget.
"You can achieve the same level of security by disallowing most of the
browser code from making system calls and not using any virtualisation
at all."
Here i don't understand. You can't do nothing without system calls...
Everything is a system call. Maybe the tricky word is "most". This means
maintaining a fork of a browser. Every browser.
Not affordable. Nos scalable. Not real. It is easier to use QubesOS at this
point... want to convince all the people to use QubesOS? Good luck.
"But the same is true if a browser runs as a separate user, if it is
compromised
it is contained to what permissions it has, which can be very little.
With your
solution you just need to do it twice instead of once. Security wise,
having to
do the same thing twice isn't really more secure than having to do it
once."
No. This is the same to what QubesOS currently does. You can have a Work,
Fun, Finnace... environemnts (Browsers). Which is very different than
"users", are virtulized by Xen, please don't compare OS "user" separation
to hypervisor separation... nothing to do.
I propose the same QubesOS is doing, thanks to seL4, but on top of standard
OSs so people do not need to change OS.
Hello Hugo,

On 2024-04-19 13:59, Hugo V.C. wrote:
I talked about "vendors" in general. Never specified about any particular vendor. It is a generalization.
The context was browsers, there are not many browser vendors. Furthermore, you used it as an argument to propose your idea. If you didn't mean to include Mozilla you should have used a different argument to use your complex scheme instead of improving Firefox.
But containing damage to the OS this is a BIG step ahead. In example, to stop ransomware.
It's not a big step ahead compared to a process contained with the existing OS containment features there are, like running it as a different user and using file permissions. In both cases either a kernel exploit or infrastructure exploit is needed to gain access to the rest of the system. Perhaps the attack surface is a bit smaller, but that depends on the implementation.
"Of course adding another layer of indirection makes live harder for attackers. Following this line of through, why stop with one nested virtual machine, why not keep going?"
You know the answer: performance.
I would think the answer would be because it's pointless.
"You could nest Linux on top of seL4 on top of Linux."
I already do that Indan. On production. seL4 amazing performance makes nested virtualization penalty assumable to me and there are scenarios (critical services) where security comes first, at least in my business.
You clearly don't take security really seriously if you run security-critical software mixed with untrusted software on x86. That's what I would call taking calculated risks.
Until I have a decent way of deploying seL4 bare metal solutions (which doesn't require me to recruit SpaceX engineers) I run seL4 virtualized.
Running seL4 on bare metal is a lot simpler than porting existing software to seL4. To me it sounds like you only use seL4 to run a Linux guest OS, which only runs a browser. In this scenario the added value of using seL4 is low.
seL4 is the future, it will penetrate every market, but we need to make people know it.
seL4 is not magic pixie dust, it's a microkernel that does as little as possible and doesn't yet have any extensive user space frameworks that normal developers expect from an OS when developing software. The quality of an seL4 system depends on the design and implementation of the system as a whole, it's easy to create an insecure seL4 system.
"and that will give more attack vectors in itself."
I have had this discussion many times. Yes, theoretically it is what you say. In the real world (the one where an attacker evaluates the attack surface), when you have nested virtualization, you are left with: 1) the reachable TCP/IP stack (you don't reach ALL the TCP/IP stacks of the virtualized layers, just one; the rest is "glue" software) 2) the permissions of the glue software of the component in (1). Is it a QEMU driver? Don't like QEMU driver security? Neither do I. Then let's focus on this, as THIS is a real entry point. Once you pass the packets to seL4, the game is over for the attacker. You can't do dirty hypercall tricks (like with other hypervisors) to escape.

3) the app/OS in the guest. I don't care; to me all of this layer is just untrusted. I just care about separating untrusted environments into different virtualized roles.

Or in other words: yes, it looks like there are more opportunities, but in reality the "added" opportunities are in a layer (emulation, hypervisors, etc.) that represents a negligible share of remote compromises, while all the compromises are in the OS/app layer... virtualize it, and forget.
Great, but that's virtualisation. What does seL4 add to that in how you use it? What features of seL4 are you using to improve security?
"You can achieve the same level of security by disallowing most of the browser code from making system calls and not using any virtualisation at all."
Here i don't understand. You can't do nothing without system calls... Everything is a system call. Maybe the tricky word is "most".
The point I am trying to make is that the reason you want seL4 and virtualisation is because you don't trust Linux to handle all system calls securely. You trust glue code if it's limited enough in size. Well, you can easily limit system calls in Linux. All the calls you don't trust you can delegate to a small, trusted browser process which does a limited set of system calls necessary for the browser to function. But all the expensive, complex code you can run contained with no generic system call access, just access to a few selected calls that are necessary to function and to communicate with the syscall task.
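To make the delegation idea concrete, here is one way the untrusted, compute-heavy part of a browser could be confined on Linux with libseccomp: everything outside a small allow-list fails with EPERM and has to be requested from the small trusted broker process instead. The allow-list below is deliberately tiny and purely illustrative; a real browser worker needs a carefully audited, larger set:

/* Build with -lseccomp. */
#include <seccomp.h>
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

static void lock_down_worker(void)
{
    /* Default action: refuse unknown syscalls with EPERM instead of killing,
     * so the worker can fall back to asking the trusted broker over a pipe. */
    scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_ERRNO(EPERM));
    if (ctx == NULL) {
        perror("seccomp_init");
        exit(1);
    }

    const int allowed[] = {
        SCMP_SYS(read),        /* talk to the broker over inherited fds */
        SCMP_SYS(write),
        SCMP_SYS(futex),       /* thread synchronisation */
        SCMP_SYS(mmap),        /* heap allocation */
        SCMP_SYS(munmap),
        SCMP_SYS(exit_group),
    };
    for (size_t i = 0; i < sizeof(allowed) / sizeof(allowed[0]); i++) {
        if (seccomp_rule_add(ctx, SCMP_ACT_ALLOW, allowed[i], 0) < 0) {
            fprintf(stderr, "seccomp_rule_add failed\n");
            exit(1);
        }
    }

    /* The filter applies to the calling thread and is inherited by anything
     * it subsequently clones or execs. */
    if (seccomp_load(ctx) < 0) {
        fprintf(stderr, "seccomp_load failed\n");
        exit(1);
    }
    seccomp_release(ctx);
}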
This means maintaining a fork of a browser. Every browser.
Not affordable. Not scalable. Not real. It is easier to use QubesOS at this point... want to convince all the people to use QubesOS? Good luck.
You only need to change one browser and people that care about that will use it instead of something else. Exactly the same as with your proposed solution.
"But the same is true if a browser runs as a separate user, if it is compromised it is contained to what permissions it has, which can be very little. With your solution you just need to do it twice instead of once. Security wise, having to do the same thing twice isn't really more secure than having to do it once."
No. This is the same as what QubesOS currently does. You can have Work, Fun, Finance... environments (browsers), which is very different from "users": they are virtualized by Xen. Please don't compare OS "user" separation to hypervisor separation... they have nothing to do with each other.
I am saying that OS "user" separation largely achieves what you achieve with virtualisation from a functional point of view. I'm trying to get an argument out of you why virtualisation + infrastructure to make it usable is significantly better than just using the OS mechanisms that are already provided. Both hardware and software have been proven to be broken in the past. Using existing OS features to contain processes is already not done in practice because it's too much hassle. Virtualisation only adds more complexity.
I propose the same QubesOS is doing, thanks to seL4, but on top of standard OSs so people do not need to change OS.
To me it sounds like you should just use QubesOS.

Greetings,
Indan
Hi Indan,

I appreciate your attempt to change my opinion on how to design security solutions. Anyway, my opinion is based not only on theory, but also on experience breaking stuff. This allowed me to build an idea of how hard it is to break something, I mean, not only finding a vulnerability, but also weaponizing the exploitation. Here we enter the field of 0-days, chained exploits, hardening techniques, and so on. I did this for two decades, pioneered exploitation, etc... if you ever wrote an exploit based on ROP/JOP techniques, well... I was doing that before those techniques ever got baptized (check Wikipedia), on some of the hardest targets you can figure out.

For that reason, I trust my vision of security and my experience in exploitation. And this is useful, because it allows me to decide if one security solution is harder to exploit than another. As an attacker, I would prefer to face a protection based on a "sandbox" built from OS mechanisms than to face virtualization. By far, with my sunglasses and a mojito. You, as an experienced exploit writer, as I guess you are per your words... may prefer to fight a virtualization security solution. No problem.

The only thing we both agree on is that seL4 on bare metal is the most secure solution. It's easy to agree on that. Beyond that, I believe it will be more than difficult for us to agree on anything related to hardening or exploitation. Fortunately for both of us, this will not be a problem. seL4 is open, so anyone can choose how to implement it.

Good luck escaping hypervisors; it is a fascinating topic, especially if the hypervisor is seL4, as you need to deal with very complex exploits (usually ring-0 chains).

Best,
El vie., 19 abr. 2024 18:13, Indan Zupancic
Hello Hugo,
On 2024-04-19 13:59, Hugo V.C. wrote:
I talked about "vendors" in general. Never specified about any particular vendor. It is a generalization.
The context was browsers, there are not many browser vendors. Furthermore, you used it as an argument to propose your idea. If you didn't mean to include Mozilla you should have used a different argument to use your complex scheme instead of improving Firefox.
But containing damage to the OS this is a BIG step ahead. In example, to stop ransomware.
It's not a big step ahead compared to a process contained with the existing OS containment features there are, like running it as a different user and using file permissions. In both cases either a kernel exploit or infrastructure exploit is needed to gain access to the rest of the system. Perhaps the attack surface is a bit smaller, but that depends on the implementation.
"Of course adding another layer of indirection makes live harder for attackers. Following this line of through, why stop with one nested virtual machine, why not keep going?"
You know the answer: performance.
I would think the answer would be because it's pointless.
"You could nest Linux on top of seL4 on top of Linux."
I already do that Indan. On production. seL4 amazing performance makes nested virtualization penalty assumable to me and there are scenarios (critical services) where security comes first, at least in my business.
You clearly don't take security really seriously if run security critical software mixed with untrusted software on x86. That's what I would call taking calculated risks.
Until I have a decent way of deploying seL4 bare metal solutions (which doesn't require me to recruit SpaceX engineers) I run seL4 virtualized.
Running seL4 on bare metal is a lot simpler than porting existing software to seL4.
To me it sounds like you only use seL4 to run a Linux guest OS, which only runs a browser. In this scenario the added value of using seL4 is low.
seL4 is the future, it will penetrate every market, but we need to make people know it.
seL4 is not magic pixie dust, it's a microkernel that does as little as possible and doesn't yet have any extensive user space frameworks that normal developers expect from an OS when developing software.
The quality of an seL4 system depends on the design and implementation of the system as a whole, it's easy to create an insecure seL4 system.
"and that will give more attack vectors in itself."
I have had this discussion many times. Yes. Theoretically is what you say. In the real World (the one where an attacker evaluates the attack surface), whe you have ested virtualization, you are left to: 1) reachable tcp/ip stack (you don't reach ALL the tcp/ip stacks of the virtualized layers, just one, the rest is "glue" software) 2) permissions of the glue software of component of (1). Is it a qemu driver? Don't like qemu driver security? Nor me. Then let's focus on this, as THIS is a real entry point. Once you pass the packets to seL4, game is over for the attacker. You can't do dirty hyercalls tricks (like with other hypervisors) to escape.
3) the app/OS in the guest. I don't care, to me all this layer is just untrusted. I just care about separate untrusted environments in different virtualized roles
Or in other words: yes, looks like there are more opportunities, but in reality the "added" opportunities are in a layer (emulation, hypervisors, etc) that represent a negligible amount of remote compromises, while all the compromises are on the layer of OS/app... virtualize it, and forget.
Great, but that's virtualisation. What does seL4 add to that in how you use it? What features of seL4 are you using to improve security?
"You can achieve the same level of security by disallowing most of the browser code from making system calls and not using any virtualisation at all."
Here i don't understand. You can't do nothing without system calls... Everything is a system call. Maybe the tricky word is "most".
The point I am trying to make is that the reason you want seL4 and virtualisation is because you don't trust Linux to handle all system calls securely. You trust glue code if it's limited enough in size.
Well, you can easily limit system calls in Linux. All the calls you don't trust you can delegate to a small, trusted browser process which does a limited set of system calls necessary for the browser to function. But all the expensive, complex code you can run contained with no generic system call access, just access to a few selected calls that are necessary to function and to communicate with the syscall task.
This means maintaining a fork of a browser. Every browser.
Not affordable. Nos scalable. Not real. It is easier to use QubesOS at this point... want to convince all the people to use QubesOS? Good luck.
You only need to change one browser and people that care about that will use it instead of something else. Exactly the same as with your proposed solution.
"But the same is true if a browser runs as a separate user, if it is compromised it is contained to what permissions it has, which can be very little. With your solution you just need to do it twice instead of once. Security wise, having to do the same thing twice isn't really more secure than having to do it once."
No. This is the same to what QubesOS currently does. You can have a Work, Fun, Finnace... environemnts (Browsers). Which is very different than "users", are virtulized by Xen, please don't compare OS "user" separation to hypervisor separation... nothing to do.
I am saying that OS "user" separation largely achieves what you achieve with virtualisation from a functional point of view. I'm trying to get an argument out of you why virtualisation + infrastructure to make it usable is significantly better than just using the OS mechanisms that are already provided. Both hardware and software have been proven to be broken in the past.
Using existing OS features to contain processes is already not done in practice because it's too much hassle. Virtualisation only adds more complexity.
I propose the same thing QubesOS is doing, thanks to seL4, but on top of standard OSes so people do not need to change their OS.
To me it sounds like you should just use QubesOS.
Greetings,
Indan
I totally agree with the notion that we want to be able to run seL4 in a
virtualized environment for development, debugging, testing, and perhaps even
production scenarios where that makes the most sense. However, as Gernot
seems to be stressing, seL4 is supposed to be *secure*, and that means the
most minimal possible codebase, footprint, and dependencies. There is a
danger that seL4 takes a hit for being insecure because habits develop of
using it in insecure configurations while claiming that seL4 gives security
in ways it cannot.
I took a brief look at the Trademark Guidelines and it seems to me that at
least in the spirit of the guidelines you can't put together
something whose operation is suspect and still claim that it is 'seL4'. I'm
a total copyright and patent abolitionist, but trademarks have their place
and this is a good example of that.
https://sel4.systems/Foundation/Trademark/
On Thu, Apr 18, 2024 at 3:51 PM Demi Marie Obenour
On 18 Apr 2024, at 20:38, Hugo V.C.
wrote: The challenge that I probably didn't express correctly is to integrate seL4 on top of other dynamic systems (for example, to virtualize a browser), as seL4 needs to know in advance how much memory it will have available... Running seL4 on bare metal is a "kids' game" (I don't want to offend anyone); the challenge is to integrate it into systems where resources are dynamically assigned.
On 4/18/24 07:12, Gernot Heiser via Devel wrote:
I don’t think I understand. Running seL4 in any way other than on bare metal makes no sense whatsoever.
Gernot
I think the point is that if a system needs dynamic resource allocation, _something_ needs to do it. Traditional systems do this in the kernel. seL4 declares this to be the responsibility of userspace, but I’m not aware of a userspace component that can actually do that job.
Also, running seL4 in a virtualized environment makes a huge amount of sense for testing and debugging, or if one is shipping a virtual appliance. -- Sincerely, Demi Marie Obenour (she/her/hers)
-- Bob Trower --- From Gmail webmail account. ---
I agree with Gernot.
Ultimately, in deployment as a secure system, isn't the whole point of this
to have confidence that it is in fact secure?
As a practical matter of convenience for development, I would find it much
easier to work with seL4 in a virtualized environment. However, I would not
have confidence for production delivery if it is not on bare metal.
Hopefully, we have assurance from the silicon on up.
Security is incredibly difficult and it promises to get much more difficult
quickly as AI weaponry comes online.
For my long-term project I have been looking at a RISC-V platform that we
can specify such that it can be visually inspected microscopically to
verify that the silicon itself is as specified.
I am not convinced that absolute security is possible even in theory, but
the higher your security, the more you send attackers after lower hanging
fruit.
I took a brief look at the sources on GitHub and I was surprised to see
that the coding standards violate a few of my rules as to single point of
entry, single point of exit, wrapped return statements, etc. However,
because the code base is small it can be read and reviewed and to the
extent that stuff like that might pose problems, it can be systematically
repaired.
Gernot -- apropos of the single point of entry/exit in code, I was given to
understand years ago that violating this made certain types of proofs as to
code behavior theoretically impossible. I am curious as to how you work
around this. Am I simply mistaken? Ignore if you wish, I will not be
offended.
On Thu, Apr 18, 2024 at 7:14 AM Gernot Heiser via Devel
On 18 Apr 2024, at 20:38, Hugo V.C.
wrote: The challenge that I probably didn't express correctly is to integrate seL4 on top of other dynamic systems (for example, to virtualize a browser), as seL4 needs to know in advance how much memory it will have available... Running seL4 on bare metal is a "kids' game" (I don't want to offend anyone); the challenge is to integrate it into systems where resources are dynamically assigned.
I don’t think I understand. Running seL4 in any way other than on bare metal makes no sense whatsoever.
Gernot
-- Bob Trower --- From Gmail webmail account. ---
On 4/18/24 05:34, Gernot Heiser via Devel wrote:
On 18 Apr 2024, at 19:18, Hugo V.C.
wrote: "A web browser is a good example of this. The number of security domains is at least the number of origins in use, which can be extremely large. Furthermore, some origins might be CPU-intensive."
Yes. That's the challenge when you try to use a solution aimed at static systems for dynamic systems... Not sure how others will solve this,
Operating systems have been dealing with this kind of problem for well over half a century; it's called the working-set model.
You may have 1000s of processes, but very few are active during a time window, and you allocate the resources to them.
Remember what virtual memory was originally introduced for? Allowing program memories to exceed the size of physical memory.
All this is a matter for user mode. The microkernel’s job is to do the things usermode cannot.
Is there a user mode component that can do this? -- Sincerely, Demi Marie Obenour (she/her/hers)
Re: "Yes, it is a waste of resources. But it is secure. "
I agree.
I have been thinking of this tradeoff for another security scenario and my
current thinking is:
- It's just necessary
- It's not really a waste. It is a price paid for security and the
security is worth the price.
- What a caller sees need not be time spent spinning in the callee, but rather deliberate delays offset by servicing other requests (a rough sketch of this padding idea follows this list).
- We are at the level of paranoid security, and we need to be. Anything at all that leaks information is a security issue, and as we are increasingly seeing, adversaries will use the smallest sliver of an opening to attack.
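Purely as an illustration of the "deliberate delays" idea above, not of seL4's time-protection mechanism: if a worst-case bound for a sensitive operation is known, every call can be padded out to that bound. The budget and the operation below are hypothetical.

/* Sketch: pad a sensitive operation to a fixed time budget so the
 * caller cannot learn how long it really took. Assumes a known
 * worst-case bound; the operation itself is a placeholder. */
#include <time.h>

#define BUDGET_NS 5000000L             /* hypothetical bound: 5 ms */

extern void sensitive_operation(void); /* placeholder */

void padded_call(void)
{
    struct timespec deadline;
    clock_gettime(CLOCK_MONOTONIC, &deadline);

    /* Absolute deadline = now + budget, with nanosecond carry. */
    deadline.tv_nsec += BUDGET_NS;
    deadline.tv_sec  += deadline.tv_nsec / 1000000000L;
    deadline.tv_nsec %= 1000000000L;

    sensitive_operation();

    /* Wait until the deadline no matter how quickly the operation
     * finished; a real server could service other requests here
     * instead of sleeping. */
    clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &deadline, NULL);
}

This only works when a worst-case bound exists, which is exactly the limitation Demi raises below for rejection-sampling style operations.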
On Thu, Apr 18, 2024 at 5:20 AM Hugo V.C.
"A web browser is a good example of this. The number of security domains is at least the number of origins in use, which can be extremely large. Furthermore, some origins might be CPU-intensive."
Yes. That's the challenge when you try to use a solution aimed at static systems for dynamic systems... Not sure how others will solve this; we will just try to make some sacrifices, for example by assuming a fixed total of memory and CPU available for a domain, and deciding how to offer that to the end user. What is not compatible is the old-fashioned way of happily and chaotically assigning resources (expressions like "must be able to compete with each other" don't strike me as very compatible with predictable behaviour...). What is not "normal" to me is how browsers behave and how much we allow them to do (that is a party for attackers), for example with crazy stuff like JavaScript, etc. So, at some point, the solution designer must decide what to offer to the end user. For example, you can set up fixed resources per domain and a fixed number of domains. Yes, it is a waste of resources. But it is secure. So you offer this to the customers and explain how it works: no more free browser resources party, no more browsers taking 100% CPU and lots of GB of RAM (with 20 tabs...). Are we crazy? Instead, you can just make a reservation of resources for the secure static system and put your secure stuff there. And forget about those resources; they are not free to be used by other programs, nor by the OS... Just partition the full system: have a static partition for serious stuff and a dynamic partition for toys. Just a silly example.
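To make the "fixed resources per domain, fixed number of domains" idea concrete, here is a tiny sketch (names and sizes are made up, and this is not LionsOS or seL4 code): budgets are carved out of a static pool once, and a request that exceeds a domain's budget is simply rejected rather than satisfied from someone else's share.

/* Sketch: static partitioning of a fixed pool at boot time. Each
 * security domain gets a fixed memory budget and CPU share; nothing
 * can grow at runtime. All names and numbers are illustrative. */
#include <stdbool.h>
#include <stddef.h>

#define NUM_DOMAINS     4
#define MEM_PER_DOMAIN  (256u * 1024 * 1024)  /* 256 MiB each */
#define CPU_SHARE_PCT   (100 / NUM_DOMAINS)   /* fixed CPU share */

struct domain_budget {
    size_t   mem_limit;      /* fixed after boot */
    size_t   mem_used;
    unsigned cpu_share_pct;  /* fixed scheduling share */
};

static struct domain_budget domains[NUM_DOMAINS];

void partition_at_boot(void)
{
    for (int i = 0; i < NUM_DOMAINS; i++) {
        domains[i].mem_limit     = MEM_PER_DOMAIN;
        domains[i].mem_used      = 0;
        domains[i].cpu_share_pct = CPU_SHARE_PCT;
    }
}

/* A domain may spend its own budget but can never borrow from another,
 * so a greedy browser domain cannot starve the rest of the system. */
bool domain_alloc(int dom, size_t bytes)
{
    if (dom < 0 || dom >= NUM_DOMAINS)
        return false;
    if (bytes > domains[dom].mem_limit - domains[dom].mem_used)
        return false;  /* rejected, not satisfied from elsewhere */
    domains[dom].mem_used += bytes;
    return true;
}

The waste Hugo accepts is visible here: unused budget in one domain simply sits idle, which is the price paid for predictability.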
On Thu, 18 Apr 2024 at 5:43, Demi Marie Obenour (
) wrote:
On 4/17/24 23:20, Gernot Heiser via Devel wrote:
On 18 Apr 2024, at 13:13, Demi Marie Obenour
wrote: properly implements mitigations. Time protection is a principled solution to side-channel attacks, but it requires that the time consumed by operations on sensitive data is not observable.
This is actually not a correct summary of time protection (TP). In contrast, TP *ensures* that kernel operations are constant time, and that userspace operations do not produce observable timing variations across security domains.
Having said that, TP isn’t in the mainline kernel and is still experimental. We’re planning to restart that project mid-year.
Gernot
How does time protection handle these two cases?
1. Untrusted code can request a service from trusted code that involves processing sensitive data, and this request may take an unbounded amount of time. In this case, it is not possible to pad the time actually consumed to the maximum possible value, because the maximum possible value does not exist.
Cryptographic operations involving rejection sampling may be an example here. It is possible to pad the time to a very large value, since the probability that N operations will be needed decreases exponentially with N, but this may be prohibitive performance-wise.
2. Operations on sensitive data must be able to consume all available CPU resources. The main example I can think of is human-interactive systems. These may be so heavily oversubscribed that it is simply not possible to statically allocate resources to different security domains. Instead, even security domains involving sensitive data must be able to compete with each other.
A web browser is a good example of this. The number of security domains is at least the number of origins in use, which can be extremely large. Furthermore, some origins might be CPU-intensive. Therefore, the overall system load is an unavoidable side-channel, at least if one wants fair scheduling. -- Sincerely, Demi Marie Obenour (she/her/hers)
-- Bob Trower --- From Gmail webmail account. ---
participants (11)
- Andrew Warkentin
- Bob Trower
- David Barrass
- Demi Marie Obenour
- Gernot Heiser
- Hugo V.C.
- Indan Zupancic
- Isaac Beckett
- Michael Neises
- Peter Chubb
- William ML Leslie