On Mon, 29 Aug 2016 23:01:45 +0000, Gernot.Heiser wrote:
On 30 Aug 2016, at 8:38, Alex Elsayed wrote:
As a result, I think it'd be _very_ prudent to continue looking at purely-software, paravirtualized hypervisor implementations, especially for high-assurance systems.
I think it’s an illusion to think one can partition hardware into trusted and untrusted bits. In the end, the OS is at the mercy of the hardware: if it’s faulty, you lose; there’s no way around it.
You may *think* that the manufacturers don’t make changes to the more conventional bits of the hardware, and thus they are “correct”, but that isn’t true, of course. And on top of that we know that there are intentional backdoors in commercial hardware.
The only way around this is high-assurance hardware, and this doesn’t come at an affordable cost.
Sure, and I don't disagree. It's part of why I'm so excited by the RISC-V work, especially MIT's progress on formal verification - that, at least, pushes the space in which problems can occur down by one more level.

But to a certain extent, I think there's value in reducing the amount of new API exposed. Furthermore, there's a large body of code exercising the "more conventional" bits in the wild, and a considerably smaller body of code exercising the newer bits. Bugs in the more conventional bits are thus more likely to break existing code - and thus to be caught early.

It's not protection in the same sense as high-assurance hardware, no. But at least statistically, looking at where severe hardware-based security vulnerabilities have cropped up, it seems to be better than the alternative of trusting the hardware virtualization extensions to actually isolate.
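To make the terminology concrete - and this is only a sketch, with made-up names (hv_call(), HV_CONSOLE_PUTC and friends), not anyone's actual ABI - a paravirtualized guest is modified to request privileged services through a small, explicit hypercall interface that traps via conventional mechanisms, rather than depending on the newer hardware virtualization extensions to intercept privileged instructions behind its back:

/* Illustrative sketch only: hv_call(), HV_CONSOLE_PUTC, etc. are made-up
 * names, not any real hypervisor's ABI.  The point is the shape of the
 * interface: the guest asks for privileged services explicitly, through a
 * narrow software call, instead of relying on hardware virtualization
 * extensions to trap and emulate privileged instructions transparently. */

#include <stdint.h>

/* Hypothetical hypercall numbers. */
enum hv_call_nr {
    HV_CONSOLE_PUTC = 0,
    HV_MAP_FRAME    = 1,
    HV_YIELD        = 2,
};

/* On real hardware this would be a trapping instruction (an
 * architecture-specific trap into the hypervisor); it is stubbed out
 * here so the sketch compiles stand-alone. */
static long hv_call(enum hv_call_nr nr, uintptr_t a0, uintptr_t a1)
{
    (void)nr; (void)a0; (void)a1;
    return 0;
}

/* The guest kernel is modified ("paravirtualized") to use the explicit
 * interface rather than executing privileged instructions directly. */
static void guest_console_putc(char c)
{
    hv_call(HV_CONSOLE_PUTC, (uintptr_t)c, 0);
}

int main(void)
{
    guest_console_putc('x');  /* would trap into the hypervisor for real */
    return 0;
}

The attack surface the guest depends on is then that small, well-exercised call interface plus the conventional trap machinery, which is the distinction I'm leaning on above.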