On 24 Jan 2023, at 14:58, Demi Marie Obenour wrote:
What is the appropriate design for dynamic systems with a potentially unbounded number of critical components? In Qubes OS, for example, there is no way to know what is and is not critical, as that depends on the (human) user.
It depends on the definition of "critical", i.e. on which safety or security property you're trying to enforce. For example, that property would usually not be that the user sees the website they requested in a browser, but that whatever any website does, it cannot compromise the rest of the machine. That property does not depend on the human user.

I don't know Qubes in detail, but I'd expect the code that needs to be at the highest criticality level for that property to hold is relatively small. It'd mainly be the component responsible for starting up and isolating new components. That could be one central component, or an architecture where things can be created recursively in containers.

I'd give drivers and other hardware-access code the next level of criticality (high, but maybe not formal-verification high), probably also the communication code between partitions, and would treat everything else (Linux, user code, etc.) as entirely untrusted. "Untrusted" wouldn't mean "doesn't work", only "is not required to work for security to hold".

My point is that once you have defined what critical means, even in a dynamic system the number of critical components is not usually unbounded (maybe the number of instances is, but not the code). Conversely, if you can't define what critical means, you can't really design for it. In the extreme, what is critically important for one system purpose might be actively damaging for another.

Of course there are other properties where this doesn't work as well; I'm not saying there is a silver bullet. It just works in more cases than one might initially think.

Cheers,
Gerwin
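
To make the layering above concrete, here is a minimal sketch in Rust. The component names and the three-level split are illustrative assumptions for one fixed property ("no partition can compromise the rest of the machine"); this is not actual Qubes OS or seL4 code.

// A minimal sketch of the layering described above. Component names
// and the three-level split are hypothetical, not Qubes or seL4 code.

/// Criticality of a component with respect to one fixed property:
/// "no partition can compromise the rest of the machine".
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Criticality {
    /// Must be correct for the property to hold at all;
    /// formal-verification territory.
    Core,
    /// High assurance (drivers, inter-partition communication),
    /// but maybe not formal-verification high.
    High,
    /// May misbehave arbitrarily without breaking the property.
    Untrusted,
}

/// Classify a component by name (the names here are hypothetical).
fn classify(component: &str) -> Criticality {
    match component {
        // The code that starts up and isolates new partitions.
        "partition-spawner" => Criticality::Core,
        // Drivers, other hardware access, and channels between partitions.
        "net-driver" | "block-driver" | "ipc-channel" => Criticality::High,
        // Everything else (guest kernels, user code, browsers) defaults
        // to untrusted: not required to work for security to hold.
        _ => Criticality::Untrusted,
    }
}

fn main() {
    for c in ["partition-spawner", "net-driver", "linux-guest", "browser"] {
        println!("{c:20} -> {:?}", classify(c));
    }
}

The catch-all arm is the design point: anything not explicitly classified defaults to untrusted, so the number of critical code components stays fixed even as the number of untrusted instances grows.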