I should probably also read up on the domain features, that’s not something I had heard of before, somehow. Thanks again.
On Mar 6, 2022, at 5:00 PM, Gerwin Klein wrote:

Hi Isaac,
On 7 Mar 2022, at 04:46, Isaac Beckett wrote:

From the infoflow proof paper (https://trustworthy.systems/publications/nicta_full_text/6464.pdf) where it is called the partition scheduler: "Similarly, the original seL4 scheduler, before partition scheduling was implemented, was known not to enforce isolation. We could not prove information flow security until we had fully and correctly specified the partition scheduler in the updated abstract specification. Proving information flow security then required us to show that the scheduler's choice about which partition to schedule next can never be affected by any other partition, as one would expect."
Hello! It was my understanding that seL4 used to not have an in-kernel scheduler, and scheduling was up to the root task. How did that work?
seL4 still has an in-kernel scheduler; scheduling was never up to user-level tasks (incl the root task).
User-level threads may have capabilities which allow them to set scheduling parameters (esp on MCS), but scheduling itself happens in the kernel.
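To make that concrete, here is a rough sketch of what setting scheduling parameters looks like from user level on an MCS kernel. All capability names here (sched_control_cap, sc_cap, tcb_cap, self_tcb_cap) are made up for illustration, and the exact invocation names and signatures vary between kernel versions:

#include <sel4/sel4.h>

/* Hypothetical capabilities handed to this task by its parent:
 * sched_control_cap - SchedControl cap for the core we schedule on
 * sc_cap            - scheduling context of the target thread
 * tcb_cap           - TCB of the target thread
 * self_tcb_cap      - our own TCB, used as the priority authority
 */
void set_sched_params(seL4_SchedControl sched_control_cap,
                      seL4_SchedContext sc_cap,
                      seL4_TCB tcb_cap,
                      seL4_TCB self_tcb_cap)
{
    /* Give the thread a 2 ms budget out of every 10 ms period
     * (times in microseconds). The kernel, not this thread,
     * still decides when the target actually runs. */
    seL4_SchedControl_Configure(sched_control_cap, sc_cap,
                                2 * 1000,   /* budget */
                                10 * 1000,  /* period */
                                0,          /* extra refills */
                                0);         /* badge */

    /* Priority is capped by the authority TCB's maximum
     * controlled priority. */
    seL4_TCB_SetPriority(tcb_cap, self_tcb_cap, 42);
}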
If you push it, you can effectively implement user-level scheduling with this on MCS by combining it with other kernel mechanisms such as notifications and scheduling-context donation. This is useful for implementing specific real-time scheduling regimes, for instance, but usual systems would just set standard scheduling parameters (period and budget) for the kernel.
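As a hedged illustration of that "push it" case: a user-level dispatcher that controls a spare scheduling context can decide which of two passive worker threads runs by moving that scheduling context between them. This is only a sketch; all caps are hypothetical, the dispatcher is assumed to have its own scheduling context, and the workers are assumed to have been created as passive threads:

#include <sel4/sel4.h>

/* worker_a_tcb, worker_b_tcb: passive threads (no scheduling context
 * of their own). sc_cap: a scheduling context this dispatcher controls.
 * ntfn_cap: a notification the current worker signals when it yields. */
void dispatch_loop(seL4_SchedContext sc_cap, seL4_TCB worker_a_tcb,
                   seL4_TCB worker_b_tcb, seL4_CPtr ntfn_cap)
{
    seL4_TCB workers[2] = { worker_a_tcb, worker_b_tcb };
    int current = 0;

    for (;;) {
        /* Hand the scheduling context to the chosen worker; the
         * in-kernel scheduler then runs it within its budget/period. */
        seL4_SchedContext_Bind(sc_cap, workers[current]);

        /* Block until the worker signals that it is done for now. */
        seL4_Wait(ntfn_cap, NULL);

        /* Reclaim the scheduling context and pick the next worker.
         * This choice is the user-level scheduling policy. */
        seL4_SchedContext_Unbind(sc_cap);
        current = 1 - current;
    }
}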
The combination of MCS and the domain scheduler is still not fully explored, which also means that information flow protection under MCS is not fully explored (or proved). There is an obvious-ish way to combine the two, which is to run MCS within each domain. That will not be compatible with usual system schedulability analysis methods, but might still be useful for specific systems.
Since the MCS proofs are not finished yet, the static domain scheduler + priority round-robin within domains is still the default configuration.
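For reference, the static domain schedule is a C table fixed at kernel build time; the default is a single domain that runs forever. A two-partition schedule would look roughly like the following (the exact headers and file layout may differ between kernel versions):

#include <object/structures.h>
#include <model/statedata.h>

/* Example static domain schedule: alternate between two partitions,
 * giving domain 0 three ticks and domain 1 two ticks per round. The
 * kernel cycles through this table forever; within the current domain,
 * threads run under priority round-robin. */
const dschedule_t ksDomSchedule[] = {
    { .domain = 0, .length = 3 },
    { .domain = 1, .length = 2 },
};

const word_t ksDomScheduleLength = sizeof(ksDomSchedule) / sizeof(dschedule_t);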
And why was the choice made to add the MCS features? Basically, why the change from having scheduling in the kernel to having it in userspace?
There was no such change. MCS is just a much more flexible in-kernel scheduling mechanism, and by default user level, even on MCS, is not involved in scheduling apart from setting initial parameters.
In general, the philosophy is that the kernel should provide mechanisms, not force policy. For scheduling, priority round-robin can be seen as a specific scheduling policy. MCS is a mechanism that allows the user to configure a more diverse set of policies.
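One way to read that: under MCS, the choice of budget and period is the policy knob. A small sketch, with hypothetical caps as before and times in microseconds, showing two different policies expressed through the same mechanism:

#include <sel4/sel4.h>

void configure_policies(seL4_SchedControl sched_ctrl,
                        seL4_SchedContext sc_best_effort,
                        seL4_SchedContext sc_rate_limited)
{
    /* budget == period: the thread may use its whole timeslice,
     * behaving much like a thread under plain priority round-robin. */
    seL4_SchedControl_Configure(sched_ctrl, sc_best_effort,
                                5 * 1000, 5 * 1000, 0, 0);

    /* budget < period: the thread gets at most 1 ms in every 10 ms,
     * a rate-limiting policy, so it cannot monopolise the CPU even
     * at high priority. */
    seL4_SchedControl_Configure(sched_ctrl, sc_rate_limited,
                                1 * 1000, 10 * 1000, 0, 0);
}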
Cheers, Gerwin