Confidentiality and realtime requirements
One of the goals of the MCS scheduler is to allow untrusted parts of the system (such as device drivers) to still have low interrupt latency. However, this seems to interact badly with the domain scheduler, as interrupts can arrive when the domain that will serve them is not scheduled. Worse, it appears that interrupts will generally require an IBPB (or equivalent) on both entry and exit, since they may interrupt any code. Is this accurate? If so, it seems that the “flush all μArch state” instruction coming to some RISC-V CPUs is insufficient, and full speculative taint tracking is required.

More generally, requiring mutually distrusting domains to be explicitly marked seems to be problematic for anything that is not a static system: in a dynamic system (one that can run third-party code), one must typically assume that different address spaces are mutually distrusting, with the result that IPC latency will be severely impacted. Am I missing something, or will a general-purpose OS need full speculative taint tracking in hardware if it is to have fast IPCs between mutually-distrusting code on out-of-order CPUs?

-- Sincerely, Demi Marie Obenour (she/her/hers)
On 9 Aug 2023, at 04:02, Demi Marie Obenour wrote:
One of the goals of the MCS scheduler is to allow untrusted parts of the system (such as device drivers) to still have low interrupt latency. However, this seems to interact badly with the domain scheduler, as interrupts can arrive when the domain that will serve them is not scheduled. Worse, it appears that interrupts will generally require an IBPB (or equivalent) on both entry and exit, since they may interrupt any code.
The domain scheduler is designed to support a strict separation-kernel configuration (strict time and space partitioning). These generally run with interrupts disabled (that’s the configuration for the seL4 confidentiality proofs). Typically you check for pending interrupts at the beginning of a time partition.

Obviously this means that your interrupt latency is the full time period. Not exactly fast interrupt response, but that’s how eg ARINC systems run.

We demonstrated [Ge et al, EuroSys’19] that you can allow interrupts without introducing timing channels by partitioning IRQs between partitions. But that doesn’t change the WCIL, which is still the partition period. (This isn’t verified yet – work in progress.)

We did brainstorm a while back ways of getting better IRQ latencies for some cases, eg if the IRQ-handling domain itself has no secrets to leak. But that was never really thought through, and I’m waiting for a good PhD student to get back to this topic.
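The deferred-IRQ pattern described here can be sketched as follows. This is a minimal illustration, not seL4 code; all names (`partition`, `flush_uarch_state`, `switch_partition`) are hypothetical:

```c
#include <stdio.h>
#include <stdbool.h>

#define NUM_PARTITIONS 2

/* Hypothetical per-partition state; a real kernel would hold
 * capability and IRQ-routing data here. */
struct partition {
    int  id;
    bool irq_pending;   /* latched while the partition was not running */
};

static struct partition parts[NUM_PARTITIONS] = {{0, false}, {1, true}};

/* Placeholder for a full microarchitectural flush (e.g. fence.t). */
static void flush_uarch_state(void) { /* no-op in this sketch */ }

/* One tick of a strict time-partitioned schedule: the switch happens
 * at a fixed time, all microarchitectural state is scrubbed, and only
 * then are interrupts that arrived for the incoming partition
 * delivered.  Worst-case interrupt latency is therefore the full
 * partition period. */
static int switch_partition(int current)
{
    int next = (current + 1) % NUM_PARTITIONS;
    flush_uarch_state();               /* close timing channels */
    if (parts[next].irq_pending) {     /* deliver deferred IRQs */
        parts[next].irq_pending = false;
        printf("partition %d: handling deferred IRQ\n", next);
    }
    return next;
}
```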
Is this accurate? If so, it seems that the “flush all μArch state” instruction coming to some RISC-V CPUs is insufficient,
I assume you’re referring to the fence.t proposal by Wistoff et al [DATE’21]?
and full speculative taint tracking is required.
I don’t follow. If you clean all µarch state you don’t have to worry about speculation traces, that’s (among others) the gist of Ge et al.
More generally, requiring mutually distrusting domains to be explicitly marked seems to be problematic for anything that is not a static system: in a dynamic system (one that can run third-party code), one must typically assume that different address spaces are mutually distrusting, with the result that IPC latency will be severely impacted.
You can’t have isolation without paying for it. But endpoints between mutually-distrusting domains make no sense (our experimental time-protection implementation has them only for performance evaluation). The work on verified time protection has no cross-domain communication at present, which is, of course, overly restrictive. It can be extended to cross-domain shared memory and signal-like mechanisms, but we haven’t done that yet. (See above re looking for a good PhD student…)

As I keep saying, the seL4 mechanism that is (unfortunately, somewhat misleadingly and for purely historic reasons) called “IPC” shouldn’t be considered a general-purpose communication mechanism, but a protected procedure call – the microkernel equivalent to a Linux system call. As such, the trust relationship is not symmetric: you use it to invoke some more privileged operation (and definitely need to trust the callee to a degree).
Am I missing something, or will a general-purpose OS need full speculative taint tracking in hardware if it is to have fast IPCs between mutually-distrusting code on out-of-order CPUs?
No, I don’t think so. To the contrary, I think that speculation tracking is an instance of the fallacy of throwing hardware at a problem that can be solved much more simply if you instead provide simple mechanisms that allow the OS to do its job. The OS, not the hardware, knows the system’s security policy. Pretending otherwise leads to complexity and waste. Which is fine if your business model is based on making hardware more complex (no names ;-) but it’s not fine if your objective is secure systems.

fence.t (or something similar) is the mechanism you need to let the OS do its job, and it is simple and cheap to implement, and costs you no more than the L1-D flush you need anyway, as Wistoff et al demonstrated.

Gernot
On 8/8/23 15:32, Gernot Heiser wrote:
On 9 Aug 2023, at 04:02, Demi Marie Obenour
wrote: One of the goals of the MCS scheduler is to allow untrusted parts of the system (such as device drivers) to still have low interrupt latency. However, this seems to interact badly with the domain scheduler, as interrupts can arrive when the domain that will serve them is not scheduled. Worse, it appears that interrupts will generally require an IBPB (or equivalent) on both entry and exit, since they may interrupt any code.
The domain scheduler is designed to support a strict separation-kernel configuration (strict time and space partitioning). These generally run with interrupts disabled (that’s the configuration for the seL4 confidentiality proofs). Typically you check for pending interrupts at the beginning of a time partition.
Obviously this means that your interrupt latency is the full time period. Not exactly fast interrupt response, but that’s how eg ARINC systems run.
We demonstrated [Ge et al, EuroSys’19] that you can allow interrupts without introducing timing channels by partitioning IRQs between partitions. But that doesn’t change the WCIL, which is still the partition period. (This isn’t verified yet – work in progress.)
We did brainstorm a while back ways of getting better IRQ latencies for some cases, eg if the IRQ handling domain itself has no secrets to leak. But that was never really thought through, and I’m waiting for a good PhD student to get back to this topic.
Is this accurate? If so, it seems that the “flush all μArch state” instruction coming to some RISC-V CPUs is insufficient,
I assume you’re referring to the fence.t proposal by Wistoff et al [DATE’21]?
Perhaps? It’s whatever was referred to at the last developer hangout.
and full speculative taint tracking is required.
I don’t follow. If you clean all µarch state you don’t have to worry about speculation traces, that’s (among others) the gist of Ge et al.
Does it prevent Spectre v1? A bounds check will almost always predict as in-bounds and that is potentially a problem. Taint tracking does prevent Spectre v1 because the speculatively read data is guaranteed to be unobservable. Strict temporal isolation also mitigates this, but IIUC it is also incompatible with load-balancing and therefore only practical in limited cases.
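The pattern at issue is the classic bounds-check-bypass gadget. Here is a minimal sketch in C (array names follow the convention of the original Spectre paper); architecturally this code is harmless — the concern is entirely what a speculating core does when the branch is predicted in-bounds:

```c
#include <stdint.h>
#include <stddef.h>

static uint8_t array1[16];
static size_t  array1_size = 16;
static uint8_t array2[256 * 64];   /* probe array: one cache line per byte value */

/* Architecturally, the bounds check is always respected.  The Spectre-v1
 * risk is that a trained predictor speculates past it for an
 * out-of-bounds x, and the dependent load of array2 leaves a
 * secret-dependent footprint in the D-cache. */
uint8_t victim(size_t x)
{
    if (x < array1_size) {                  /* predicted taken... */
        return array2[array1[x] * 64];      /* speculative leak point */
    }
    return 0;
}
```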
More generally, requiring mutually distrusting domains to be explicitly marked seems to be problematic for anything that is not a static system: in a dynamic system (one that can run third-party code), one must typically assume that different address spaces are mutually distrusting, with the result that IPC latency will be severely impacted.
You can’t have isolation without paying for it.
But endpoints between mutually-distrusting domains make no sense (our experimental time-protection implementation has it only for performance evaluation). The work on verified time protection has no cross-domain communication at present, which is, of course, overly restrictive. It can be extended to cross-domain shared memory and signals-like mechanisms, but we haven’t done that yet. (See above re looking for a good PhD student…)
As I keep saying, the seL4 mechanism that is (unfortunately, somewhat misleadingly and for purely historic reasons) called “IPC” shouldn’t be considered a general-purpose communication mechanism, but a protected procedure call – the microkernel equivalent to a Linux system call. As such, the trust relationship is not symmetric: you use it to invoke some more privileged operation (and definitely need to trust the callee to a degree).
I should have said “not-mutually-trusting”, then.
Am I missing something, or will a general-purpose OS need full speculative taint tracking in hardware if it is to have fast IPCs between mutually-distrusting code on out-of-order CPUs?
No, I don’t think so. To the contrary, I think that speculation tracking is an instance of the fallacy of throwing hardware at a problem that can be solved much more simply if you instead provide simple mechanisms that allow the OS to do its job. The OS, not the hardware, knows the system’s security policy. Pretending otherwise leads to complexity and waste. Which is fine if your business model is based on making hardware more complex (no names ;-) but it’s not fine if your objective is secure systems.
fence.t (or something similar) is the mechanism you need to let the OS do its job, and it is simple and cheap to implement, and costs you no more than the L1-D flush you need anyway, as Wistoff et al demonstrated.
I missed the “costs you no more than the L1-D flush you need anyway” part. On x86, instructions like IBPB can easily take thousands of cycles IIUC. Would fence.t have equally catastrophic overhead on an out-of-order RISC-V processor? https://riscv-europe.org/media/proceedings/posters/2023-06-08-Nils-WISTOFF-a... seems simple to implement in hardware, but does not seem efficient. https://carrv.github.io/2020/papers/CARRV2020_paper_10_Wistoff.pdf claims to be decently efficient, but is for an in-order CPU.

Also, in the future, would you mind including the full URL of any articles? I don’t know what “Wistoff et al” and “Ge et al” refer to, and my mail client is configured to only display plain text (not HTML) because the attack surface of HTML rendering is absurdly high.

-- Sincerely, Demi Marie Obenour (she/her/hers)
On 8/9/23 04:47, Gernot Heiser wrote:
On 9 Aug 2023, at 06:28, Demi Marie Obenour
wrote:
and full speculative taint tracking is required.
I don’t follow. If you clean all µarch state you don’t have to worry about speculation traces, that’s (among others) the gist of Ge et al.
Does it prevent Spectre v1? A bounds check will almost always predict as in-bounds and that is potentially a problem. Taint tracking does prevent Spectre v1 because the speculatively read data is guaranteed to be unobservable. Strict temporal isolation also mitigates this, but IIUC it is also incompatible with load-balancing and therefore only practical in limited cases.
Spectre v1 uses speculation to put secrets into the cache, combined with a covert timing channel to move it across security domains. Without the covert channel it’s harmless. Time protection prevents the covert channel.
You cannot use time protection in a fully dynamic system, _especially_ not a desktop system. I should have made it clear that I was referring to dynamic systems.
Speculation taint-tracking is a complex point-defence against one specific attack pattern, which needs to take a pessimistic approach to enforcing what the hardware thinks *might* be security boundaries, irrespective of what the actual security policy is.
Time protection is a general, policy-free mechanism that prevents µarch timing channels under control of the OS, which can deploy it where needed.
The problem with time protection is that it is all-or-nothing. A general purpose system _cannot_ enforce time protection, because doing so requires statically allocating CPU time to different security domains. This is obviously impossible in any desktop system, because it is the human at the console who decides what needs to run and when.
As I keep saying, the seL4 mechanism that is (unfortunately, somewhat misleadingly and for purely historic reasons) called “IPC” shouldn’t be considered a general-purpose communication mechanism, but a protected procedure call – the microkernel equivalent to a Linux system call. As such, the trust relationship is not symmetric: you use it to invoke some more privileged operation (and definitely need to trust the callee to a degree).
I should have said “not-mutually-trusting”, then.
Makes a big difference: Enforcement then comes down to enforcing the security policy, which may or may not require temporal isolation. If it does, there’s an (unavoidable) cost. If not then not.
Or maybe the security policy requires something between “nothing” and “full temporal isolation”. Consider a server that performs cryptographic operations. The security policy is that clients cannot access or alter data belonging to other clients and that secret keys cannot be extracted by any client. Since the cryptographic operations are constant-time there is no need for temporal isolation, _provided that speculative execution does not cause problems_. Enforcing temporal isolation would likely cause such a large performance penalty that the whole concept is not viable.
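For concreteness, the constant-time discipline such a server depends on looks like the following; a minimal sketch (function name hypothetical), and note that this is only an architectural guarantee — it says nothing about what a speculating core may do with the data:

```c
#include <stdint.h>
#include <stddef.h>

/* Constant-time comparison of two secret buffers (e.g. MAC tags).
 * Execution time is independent of where (or whether) the buffers
 * differ, so a client cannot learn key material from response
 * latency: the loop always runs to completion and only accumulates
 * differences, never branching on secret data. */
int ct_equal(const uint8_t *a, const uint8_t *b, size_t len)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < len; i++)
        diff |= a[i] ^ b[i];          /* accumulate, never early-exit */
    return diff == 0;
}
```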
fence.t (or something similar) is the mechanism you need to let the OS do its job, and it is simple and cheap to implement, and costs you no more than the L1-D flush you need anyway, as Wistoff et al demonstrated.
I missed the “costs you no more than the L1-D flush you need anyway” part. On x86, instructions like IBPB can easily take thousands of cycles IIUC.
Invalidating the L1-D cache takes 1000s of cycles, and is unavoidable for temporal isolation – D-cache timing attacks are the easiest ones to do. The point is that compared to the inevitable D-cache flush, everything else is cheap, and can be done in parallel to the D-cache flush, so doesn’t affect overall latency.
I’m not sure why IBPB should take 1000s of cycles (unless Intel executes complex microcode to do it). Resetting flip-flops is cheap. What makes the D-cache expensive is the need to write back dirty data. Other µarch state can be reset without any write-back as it caches R/O information.
There is a separate issue of indirect costs, which can be significant, but not in a time-partitioned system. If the security policy requires isolating a server from its client, then these costs would become significant, but that’s inherent in the problem.
For literally 100% of the cases I deal with, time partitioning is completely impractical. The inability of a time-partitioned system to adapt to workload changes means that it is not even worth considering. Any feasible solution needs to be able to allocate 90+% of system CPU time to security domain X, and then allocate 90+% of system CPU time to security domain Y, _without knowing in advance that these changes will happen_. Time partitioning is awesome for your static embedded systems that will only ever run workloads known in advance, but for the systems I work on, it is a complete non-starter both now and in the foreseeable future.
Would fence.t have equally catastrophic overhead on an out-of-order RISC-V processor? https://riscv-europe.org/media/proceedings/posters/2023-06-08-Nils-WISTOFF-a... seems simple to implement in hardware, but does not seem efficient. https://carrv.github.io/2020/papers/CARRV2020_paper_10_Wistoff.pdf claims to be decently efficient, but is for an in-order CPU.
It is highly efficient, and completely hidden behind the D-cache flush.
Implementation on an OoO processor isn’t published yet, but confirms the results obtained on the IO CV6.
Also, in the future, would you mind including the full URL of any articles? I don’t know what “Wistoff et al” and “Ge et al” refer to, and my mail client is configured to only display plain text (not HTML) because the attack surface of HTML rendering is absurdly high.
All papers are listed on the time-protection project page.
[Ge et al, EuroSys’19]: https://trustworthy.systems/publications/abstracts/Ge_YCH_19.abstract [Wistoff et al, DATE’21]: https://trustworthy.systems/publications/abstracts/Wistoff_SGBH_21.abstract
Both won Best-Paper awards, btw
I’m not surprised! For the systems that _can_ use it, it is awesome. -- Sincerely, Demi Marie Obenour (she/her/hers)
"For literally 100% of the cases I deal with, time partitioning is completely
impractical. The inability of a time-partitioned system to adapt to
workload changes means that it is not even worth considering."
I agree, Demi. Maybe the problem is trying to solve everything with the same hardware and software. With the current kind of hardware/CPUs it is very difficult to solve both the challenges of desktop (human-operated), general-purpose software and those of embedded, task-specific software, as they have, by nature, different problems to be solved. Human interaction with a computer is not deterministic, so forget about deterministic solutions... Instead, try to solve just the most sensitive parts of the full puzzle: use specific software/hardware to do the job where errors are not an option, and keep the other pieces of the puzzle on "standard" software/hardware. Mixing it all into one big soup of software is, nowadays, and with the horrible hardware support, an impossible mission.
_______________________________________________ Devel mailing list -- devel@sel4.systems To unsubscribe send an email to devel-leave@sel4.systems
This discussion is confusing some issues, among others, time protection (TP) and temporal isolation.

The definition of time protection (from Ge et al [2019]): “A collection of OS mechanisms which jointly prevent interference between security domains that would make execution speed in one domain dependent on the activities of another.”

IOW, time protection is the mechanism that allows you to implement temporal isolation, which is the prevention of interference between domains that affects execution speed. It’s up to the OS to deploy TP to enforce its security policies.

Full temporal isolation, i.e. the absence of any temporal information flow, requires strict time partitioning, no ifs, no buts. If the time of a context switch can be controlled by application code, you have a timing channel (and it’s one that is trivial to exploit).

Yes, strict temporal partitioning is extremely restrictive, way too restrictive for general-purpose computing. (But it happens to be what is used to ensure commercial aircraft don’t fall out of the sky.) Nothing new there.

If you want to relax this, you’ll need a very careful analysis of your security requirements, and deploy TP where needed and omit it where it’s not needed. But you can’t prevent µarch timing channels without TP. Everybody knows about D-cache channels these days and how easy they are to exploit, and similarly with LLC channels. And if you want to prevent at least those, then full TP has no extra cost.

The main problem with using a more relaxed security policy (compared to full temporal isolation) is that I’m not aware of a theoretical framework that will allow you to make *any* guarantee, i.e. you’re back to ad-hoc security (which in the end tends to be not much different from no security). Developing such a framework is on my agenda (it’s a core part of what I said I’m waiting for a good PhD student to work on). Basically I want to be able to make certain security guarantees where components have overt communication channels.
An example is tab isolation in a browser: we want a fully functional web browser that allows processing of sensitive information in one tab, but guarantees that it cannot be leaked to a different tab. You can easily think of ad-hoc approaches that seem to be able to deal with this. Until someone breaks them.

I hope this helps.

Gernot
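The fixed-time-switch requirement above (“if the time of a context switch can be controlled by application code, you have a timing channel”) amounts to padding. A toy illustration with hypothetical names, not seL4 code:

```c
#include <stdint.h>

/* Toy illustration of time padding.  The outgoing domain's variable
 * amount of switch work finishes at `now`; the switch is only allowed
 * to complete at the fixed `deadline`, so the observable switch time
 * is independent of what the previous domain did.  A real kernel
 * would spin on (or program a timer for) the deadline, and must
 * choose `deadline` as a worst-case bound, since an overrun would
 * itself be observable. */
uint64_t padded_switch_time(uint64_t now, uint64_t deadline)
{
    return (now < deadline) ? deadline : now;
}
```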
On 10 Aug 2023, at 07:48, Hugo V.C.
wrote: "For literally 100% of the cases I deal with, time partitioning is completely impractical. The inability of a time-partitioned system to adapt to workload changes means that it is not even worth considering."
I agree Demi. Maybe the problem is trying to solve everything wirh the same hardware and software. With current kind of hardware/CPUs is very difficult to solve both Desktop (human operated) devices general purpose software and embedded, task specific devices software challenges as they have, by nature, different problems to be solved. Human interaction with a computer is not deterministic, so forget about deterministic solutions... Instead try to solve just the most sensible parts of the full puzzle, so you can use specific software/hardware doing the job where errors are not an option and keep other pieces of the puzzle with "standard" software/hardware. Mixing all in a big soap of software is, nowadays, and with the horrible hardware support, an impossible mission.
El mié., 9 ago. 2023 19:38, Demi Marie Obenour
escribió: On 8/9/23 04:47, Gernot Heiser wrote:
On 9 Aug 2023, at 06:28, Demi Marie Obenour
wrote:
and full speculative taint tracking is required.
I don’t follow. If you clean all µarch state you don’t have to worry
about speculation traces, that’s (among others) the gist of Ge et al.
Does it prevent Spectre v1? A bounds check will almost always predict
as in-bounds and that is potentially a problem.
Taint tracking does prevent Spectre v1 because the speculatively read data is guaranteed to be unobservable. Strict temporal isolation also mitigates this, but IIUC it is also incompatible with load-balancing and therefore only practical in limited cases.
Spectre v1 uses speculation to put secrets into the cache, combined with a covert timing channel to move it across security domains. Without the covert channel it’s harmless. Time protection prevents the covert channel.
You cannot use time protection in a fully dynamic system, _especially_ not a desktop system. I should have made it clear that I was referring to dynamic systems.
Speculation taint-tracking is a complex point-defence against one specific attack patterns, that needs to take a pessimistic approach to enforcing what the hardware thinks *might* be security boundaries, irrespective what the actual security policy is.
Time protection is a general, policy-free mechanism that prevents µarch timing channels under control of the OS, which can deploy it where needed.
The problem with time protection is that it is all-or-nothing. A general purpose system _cannot_ enforce time protection, because doing so requires statically allocating CPU time to different security domains. This is obviously impossible in any desktop system, because it is the human at the console who decides what needs to run and when.
As I keep saying, the seL4 mechanism that is (unfortunately, somewhat misleadingly and for purely historic reasons) called “IPC” shouldn’t be considered a general-purpose communication mechanism, but a protected procedure call – the microkernel equivalent to a Linux system call. As such, the trust relationship is not symmetric: you use it to invoke some more privileged operation (and definitely need to trust the callee to a degree).
I should have said “not-mutually-trusting”, then.
Makes a big difference: Enforcement then comes down to enforcing the security policy, which may or may not require temporal isolation. If it does, there’s an (unavoidable) cost. If not then not.
Or maybe the security policy requires something between “nothing” and “full temporal isolation”.
Consider a server that performs cryptographic operations. The security policy is that clients cannot access or alter data belonging to other clients and that secret keys cannot be extracted by any client. Since the cryptographic operations are constant-time there is no need for temporal isolation, _provided that speculative execution does not cause problems_. Enforcing temporal isolation would likely cause such a large performance penalty that the whole concept is not viable.
fence.t (or something similar) is the mechanism you need to let the OS do it’s job, and it is simple and cheap to implement, and costs you no more than the L1-D flush you need anyway, as Wistoff et al demonstrated.
I missed the “costs you no more than the L1-D flush you need anyway” part. On x86, instructions like IBPB can easily take thousands of cycles IIUC.
Invalidating the L1-D cache takes 1000s of cycles, and is unavoidable for temporal isolation – D-cache timing attacks are the easiest ones to do. The point is that compared to the inevitable D-cache flush, everything else is cheap, and can be done in parallel to the D-cache flush, so doesn’t affect overall latency.
I’m not sure why IBPB should take 1000s of cycles (unless Intel executes complex microcode to do it). Resetting flip-flops is cheap. What makes the D-cache expensive is the need to write back dirty data. Other µarch state can be reset without any write-back as it caches R/O information.
There is a separate issue of indirect costs, which can be significant, but not in a time-partitioned system. If the security policy requires isolating a server from it’s client, then these costs would become significant, but that’s inherent in the problem.
For literally 100% of the cases I deal with, time partitioning is completely impractical. The inability of a time-partitioned system to adapt to workload changes means that it is not even worth considering. Any feasible solution needs to be able to allocate 90+% of system CPU time to security domain X, and then allocate 90+% of system CPU time to security domain Y, _without knowing in advance that these changes will happen_. Time partitioning is awesome for your static embedded systems that will only ever run workloads known in advance, but for the systems I work on, it is a complete non-starter both now and in the foreseeable future.
Would fence.t have equally catastrophic overhead on an out-of-order RISC-V processor? https://riscv-europe.org/media/proceedings/posters/2023-06-08-Nils-WISTOFF-a... seems simple to implement in hardware, but does not seem efficient. https://carrv.github.io/2020/papers/CARRV2020_paper_10_Wistoff.pdf claims to be decently efficient, but is for an in-order CPU.
It is highly efficient, and completely hidden behind the D-cache flush.
Implementation on an OoO processor isn’t published yet, but confirms the results obtained on the IO CV6.
Also, in the future, would you mind including the full URL of any articles? I don’t know what “Wistoff et al” and “Ge et al” refer to, and my mail client is configured to only display plain text (not HTML) because the attack surface of HTML rendering is absurdly high.
All papers are listed on the time-protection project page.
[Ge et al, EuroSys’19]: https://trustworthy.systems/publications/abstracts/Ge_YCH_19.abstract [Wistoff et al, DATE’21]: https://trustworthy.systems/publications/abstracts/Wistoff_SGBH_21.abstract
Both won Best-Paper awards, btw
I’m not surprised! For the systems that _can_ use it, it is awesome. -- Sincerely, Demi Marie Obenour (she/her/hers)
_______________________________________________ Devel mailing list -- devel@sel4.systems To unsubscribe send an email to devel-leave@sel4.systems
"If you want to relax this, you’ll need a very careful analysis of your security requirements, and deploy TP where needed and omit where it’s not needed."
That's what I tried to say Gernot... but you express it much better :-)
"The main problem with using a more relaxed security policy (compared to full temporal isolation) is that I’m not aware of a theoretical framework that will allow you to make *any* guarantee, i.e. you’re back to ad-hoc security (which in the end tends to be not much different from no security)."
That's it. And here is where I think we all in the security industry are failing. I don't think we can solve that nowadays with the current hardware/CPUs and "mix" things; moreover, even if someone dares to do it, I guess it will be extremely complex to make guarantees. Instead of "relaxing" the security policy, I would solve that by, literally, hardware partitioning, with different OSs: the general-purpose one and the one with guarantees, and then transfer sensitive workloads to the hardware partition with the OS that gives you guarantees. I'm aware that the interaction between those two systems introduces new challenges, but IMHO it simplifies the design a lot.
"Basically I want to be able to make certain security guarantees where components have overt communication channels. An example is tab isolation in a browser: We want to have a fully functional web browser that allows processing of sensitive information in one tab, but guarantee that it cannot be leaked to a different tab."
That sounds very interesting, but I think it requires hardware support as mentioned above. I insist: I think we all in the security industry are going in the wrong direction trying to solve all the problems with the same hardware. It would be nice to see a laptop vendor create a laptop with two *fully isolated* hardware environments, so two different OSs (a general-purpose one and the one with guarantees) can share the screen via a physical switch. That would make life much easier than trying to solve everything from a single (big) piece of software running on the same hardware (with all the hardware bugs waiting to break the software layer...).
On Fri, 11 Aug 2023 at 10:52, Gernot Heiser (
This discussion is confusing some issues, among others, time protection (TP) and temporal isolation.
The definition of time protection (from Ge et al [2019]): "A collection of OS mechanisms which jointly prevent interference between security domains that would make execution speed in one domain dependent on the activities of another.”
IOW, time protection is the mechanism that allows you to implement temporal isolation, which is a prevention of interference between domains that affects execution speed. It’s up to the OS to deploy TP to enforce its security policies.
Full temporal isolation, i.e. the absence of any temporal information flow, requires strict time partitioning, no ifs, no buts. If the time of a context switch can be controlled by application code, you have a timing channel (and it’s one that is trivial to exploit).
Yes, strict temporal partitioning is extremely restrictive, way too restrictive for general-purpose computing. (But it happens to be what is used to ensure commercial aircraft don’t fall out of the sky.) Nothing new there.
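For reference, this is roughly how strict time partitioning is expressed in seL4: a static domain schedule compiled into the kernel, which the scheduler cycles through unconditionally. A sketch only — the domain count and slot lengths below are made-up illustrative values (lengths in timer ticks):

```c
/* Sketch of an seL4-style static domain schedule (illustrative values).
 * The kernel round-robins through these slots regardless of workload,
 * which is what makes the partitioning immune to application behaviour --
 * and also why it cannot adapt to dynamic load. */
const dschedule_t ksDomSchedule[] = {
    { .domain = 0, .length = 60 },  /* partition A: 60 ticks */
    { .domain = 1, .length = 60 },  /* partition B: 60 ticks */
    { .domain = 2, .length = 30 },  /* partition C: 30 ticks */
};
const word_t ksDomScheduleLength = sizeof(ksDomSchedule) / sizeof(dschedule_t);
```

The fixed slot lengths are exactly the property the later discussion objects to: CPU time cannot be shifted between domains at run time.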
If you want to relax this, you’ll need a very careful analysis of your security requirements, and deploy TP where needed and omit where it’s not needed. But you can't prevent µarch timing channels without TP. Everybody knows about D-cache channels these days and how easy they are to exploit, and similar with LLC channels. And if you want to prevent at least those, then full TP has no extra cost.
The main problem with using a more relaxed security policy (compared to full temporal isolation) is that I’m not aware of a theoretical framework that will allow you to make *any* guarantee, i.e. you’re back to ad-hoc security (which in the end tends to be not much different from no security).
Developing such a framework is on my agenda (it’s a core part of what I said I’m waiting for a good PhD student to work on). Basically I want to be able to make certain security guarantees where components have overt communication channels. An example is tab isolation in a browser: We want to have a fully functional web browser that allows processing of sensitive information in one tab, but guarantee that it cannot be leaked to a different tab.
You can easily think of ad-hoc approaches that seem to be able to deal with this. Until someone breaks them.
I hope this helps.
Gernot
On 10 Aug 2023, at 07:48, Hugo V.C.
wrote: "For literally 100% of the cases I deal with, time partitioning is completely impractical. The inability of a time-partitioned system to adapt to workload changes means that it is not even worth considering."
I agree Demi. Maybe the problem is trying to solve everything with the same hardware and software. With the current kind of hardware/CPUs it is very difficult to solve both the challenges of desktop (human-operated) general-purpose software and those of embedded, task-specific software, as they have, by nature, different problems to be solved. Human interaction with a computer is not deterministic, so forget about deterministic solutions... Instead, try to solve just the most sensitive parts of the full puzzle, so you can use specific software/hardware doing the job where errors are not an option, and keep other pieces of the puzzle with "standard" software/hardware. Mixing all in a big soup of software is, nowadays, and with the horrible hardware support, an impossible mission.
On Wed, 9 Aug 2023 at 19:38, Demi Marie Obenour wrote: On 8/9/23 04:47, Gernot Heiser wrote:
On 9 Aug 2023, at 06:28, Demi Marie Obenour
wrote:
> and full speculative taint tracking is required.
I don’t follow. If you clean all µarch state you don’t have to worry about speculation traces, that’s (among others) the gist of Ge et al.
Does it prevent Spectre v1? A bounds check will almost always predict as in-bounds and that is potentially a problem.
Taint tracking does prevent Spectre v1 because the speculatively read data is guaranteed to be unobservable. Strict temporal isolation also mitigates this, but IIUC it is also incompatible with load-balancing and therefore only practical in limited cases.
Spectre v1 uses speculation to put secrets into the cache, combined with a covert timing channel to move it across security domains. Without the covert channel it’s harmless. Time protection prevents the covert channel.
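For concreteness, the v1 pattern being discussed is the well-known bounds-check gadget (the array names follow the example in Kocher et al.'s original Spectre paper; the sizes here are arbitrary):

```c
#include <stdint.h>
#include <stddef.h>

uint8_t array1[16];
uint8_t array2[256 * 512];
size_t  array1_size = 16;

/* Classic Spectre-v1 pattern: the branch predictor learns "in bounds",
 * so an out-of-bounds x can be speculatively used to index array2,
 * leaving a secret-dependent line in the D-cache. Architecturally the
 * function is correct; the leak exists only in µarch state. */
uint8_t victim(size_t x)
{
    if (x < array1_size)                 /* almost always predicted taken */
        return array2[array1[x] * 512];  /* secret-dependent cache access */
    return 0;
}
```

The point of the exchange above: the speculatively touched line in `array2` only becomes observable if the attacker can later *time* accesses to it — which is exactly the covert channel that time protection closes.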
You cannot use time protection in a fully dynamic system, _especially_ not a desktop system. I should have made it clear that I was referring to dynamic systems.
Speculation taint-tracking is a complex point-defence against one specific attack pattern, which needs to take a pessimistic approach to enforcing what the hardware thinks *might* be security boundaries, irrespective of the actual security policy.
Time protection is a general, policy-free mechanism that prevents µarch timing channels under control of the OS, which can deploy it where needed.
The problem with time protection is that it is all-or-nothing. A general purpose system _cannot_ enforce time protection, because doing so requires statically allocating CPU time to different security domains. This is obviously impossible in any desktop system, because it is the human at the console who decides what needs to run and when.
As I keep saying, the seL4 mechanism that is (unfortunately, somewhat misleadingly and for purely historic reasons) called “IPC” shouldn’t be considered a general-purpose communication mechanism, but a protected procedure call – the microkernel equivalent to a Linux system call. As such, the trust relationship is not symmetric: you use it to invoke some more privileged operation (and definitely need to trust the callee to a degree).
I should have said “not-mutually-trusting”, then.
Makes a big difference: Enforcement then comes down to enforcing the security policy, which may or may not require temporal isolation. If it does, there’s an (unavoidable) cost. If not then not.
Or maybe the security policy requires something between “nothing” and “full temporal isolation”.
On 8/11/23 07:46, Gernot Heiser wrote:
On 11 Aug 2023, at 21:33, Hugo V.C.
wrote: That's it. And here is where I think we all in the security industry are failing. I don't think we can solve that nowadays with the current hardware/CPUs and "mix" things; moreover, even if someone dares to do it, I guess it will be extremely complex to make guarantees. Instead of "relaxing" the security policy, I would solve that by, literally, hardware partitioning, with different OSs: the general-purpose one and the one with guarantees, and then transfer sensitive workloads to the hardware partition with the OS that gives you guarantees. I'm aware that the interaction between those two systems introduces new challenges, but IMHO it simplifies the design a lot.
I’m not convinced that there’s a case for more HW support than the simple mechanisms we propose in the TP paper, and which Nils instantiated in fence.t. Unless you go for something that is *very* complex, and will just create more opportunities for loopholes.
"Simple is better” applies in the security context even more than in other contexts. Pick the simplest mechanism that does the job, and then use it judiciously.
I agree, but in this case, I don’t know if a simple solution exists. The workloads people want to run aren’t simple, and the security policies they want to enforce aren’t simple either. -- Sincerely, Demi Marie Obenour (she/her/hers)
On 12 Aug 2023, at 02:10, Demi Marie Obenour
I agree, but in this case, I don’t know if a simple solution exists. The workloads people want to run aren’t simple, and the security policies they want to enforce aren’t simple either.
I’m yet to see a system that cannot be built on top of simple mechanisms. Policy-mechanism separation is one of the most powerful concepts in system design. Unfortunately, most people just try to solve problems by adding features (and thus complexity) instead of stepping back and trying to understand the root causes of a problem and how it can be solved at the root. Featuritis would have never produced something of the power of seL4, but instead has produced all the security debacles we see day after day. Gernot
On 8/11/23 04:33, Gernot Heiser wrote:
This discussion is confusing some issues, among others, time protection (TP) and temporal isolation.
The definition of time protection (from Ge et al [2019]): "A collection of OS mechanisms which jointly prevent interference between security domains that would make execution speed in one domain dependent on the activities of another.”
IOW, time protection is the mechanism that allows you to implement temporal isolation, which is a prevention of interference between domains that affects execution speed. It’s up to the OS to deploy TP to enforce its security policies.
Full temporal isolation, i.e. the absence of any temporal information flow, requires strict time partitioning, no ifs, no buts. If the time of a context switch can be controlled by application code, you have a timing channel (and it’s one that is trivial to exploit).
That is correct.
Yes, strict temporal partitioning is extremely restrictive, way too restrictive for general-purpose computing. (But it happens to be what is used to ensure commercial aircraft don’t fall out of the sky.) Nothing new there.
Strict temporal partitioning also makes sense for cloud environments that do not oversubscribe CPU or memory.
If you want to relax this, you’ll need a very careful analysis of your security requirements, and deploy TP where needed and omit where it’s not needed. But you can't prevent µarch timing channels without TP. Everybody knows about D-cache channels these days and how easy they are to exploit, and similar with LLC channels. And if you want to prevent at least those, then full TP has no extra cost.
Does time protection require temporal isolation?
The main problem with using a more relaxed security policy (compared to full temporal isolation) is that I’m not aware of a theoretical framework that will allow you to make *any* guarantee, i.e. you’re back to ad-hoc security (which in the end tends to be not much different from no security).
Such a framework would be awesome!
Developing such a framework is on my agenda (it’s a core part of what I said I’m waiting for a good PhD student to work on). Basically I want to be able to make certain security guarantees where components have overt communication channels. An example is tab isolation in a browser: We want to have a fully functional web browser that allows processing of sensitive information in one tab, but guarantee that it cannot be leaked to a different tab.
Another good example is different apps on a mobile device. Android allows any two applications on the same profile to communicate with mutual consent, and yet also allows one application to keep information secret from another. One critical requirement is that such a framework must allow an efficient implementation, both in terms of performance and power consumption. If it isn’t efficient enough to be deployed, then it won’t do people any good.
You can easily think of ad-hoc approaches that seem to be able to deal with this. Until someone breaks them.
I hope this helps.
Gernot
-- Sincerely, Demi Marie Obenour (she/her/hers)
On 12 Aug 2023, at 02:07, Demi Marie Obenour
If you want to relax this, you’ll need a very careful analysis of your security requirements, and deploy TP where needed and omit where it’s not needed. But you can't prevent µarch timing channels without TP. Everybody knows about D-cache channels these days and how easy they are to exploit, and similar with LLC channels. And if you want to prevent at least those, then full TP has no extra cost.
Does time protection require temporal isolation?
Nope. Time protection is the mechanism that allows providing temporal isolation.
Developing such a framework is on my agenda (it’s a core part of what I said I’m waiting for a good PhD student to work on). Basically I want to be able to make certain security guarantees where components have overt communication channels. An example is tab isolation in a browser: We want to have a fully functional web browser that allows processing of sensitive information in one tab, but guarantee that it cannot be leaked to a different tab.
Another good example is different apps on a mobile device. Android allows any two applications on the same profile to communicate with mutual consent, and yet also allows one application to keep information secret from another.
Yes. The smartphone space is really interesting. While built on top of the classical Unix security model (but implementable on a much simpler model), it demonstrates that stricter security policies (compared to desktops) can be made acceptable to users.
One critical requirement is that such a framework must allow an efficient implementation, both in terms of performance and power consumption. If it isn’t efficient enough to be deployed, then it won’t do people any good.
Energy is (to first order) proportional to computation, so just looking at “performance” is good enough. And performance has always been our driver – see “security is no excuse for bad performance!” ;-) As I argued earlier, the performance cost of time protection is not higher than what you need to eliminate just the most easily exploitable cache channels. Gernot
On 8/13/23 12:18, Gernot Heiser wrote:
As I argued earlier, the performance cost of time protection is not higher than what you need to eliminate just the most easily exploitable cache channels.
There is also the question of whether current hardware allows eliminating cache channels without an unacceptable performance cost. DRAM access is horribly expensive nowadays, so I am not sure if the overhead of cache partitioning and/or flushing is something that modern devices can afford. I’m also unsure if context switches that do not cross security boundaries are particularly common at all. Is the fast case (no flushing needed) actually that common, or will most context switches require flushing? If the latter, should synchronous IPC be discouraged, with asynchronous designs based on large ring buffers as the recommended alternative? -- Sincerely, Demi Marie Obenour (she/her/hers)
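As a sketch of the asynchronous alternative raised at the end: a single-producer/single-consumer ring buffer in shared memory lets many messages be batched between security-boundary crossings, so one expensive flush is amortised over a whole batch. This is a generic illustration of the idea, not seL4 API (C11 atomics, power-of-two capacity assumed):

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* Minimal SPSC ring buffer: the kind of shared-memory channel an
 * asynchronous design could batch messages through. Indices increase
 * monotonically; slot index is taken modulo the (power-of-two) size. */
#define RING_SLOTS 256u

struct ring {
    _Atomic uint32_t head;  /* advanced by consumer */
    _Atomic uint32_t tail;  /* advanced by producer */
    uint64_t slots[RING_SLOTS];
};

static bool ring_push(struct ring *r, uint64_t msg)
{
    uint32_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
    uint32_t head = atomic_load_explicit(&r->head, memory_order_acquire);
    if (tail - head == RING_SLOTS)
        return false;                      /* full */
    r->slots[tail % RING_SLOTS] = msg;     /* fill slot first */
    atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
    return true;
}

static bool ring_pop(struct ring *r, uint64_t *msg)
{
    uint32_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
    uint32_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);
    if (head == tail)
        return false;                      /* empty */
    *msg = r->slots[head % RING_SLOTS];
    atomic_store_explicit(&r->head, head + 1, memory_order_release);
    return true;
}
```

The acquire/release pairs ensure the consumer never reads a slot before the producer's store to it is visible, and neither side ever blocks the other — so any required µarch flush can be paid once per batch rather than once per message.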
participants (3)
- Demi Marie Obenour
- Gernot Heiser
- Hugo V.C.