Hi Anna,
Thank you for the reply! And thank you for pointing me toward your dissertation; this will certainly be valuable for the research I am doing. I’ll take a look at the Quest-V paper as well.
I thought I would note another key difference, and ask whether I’m interpreting it correctly as well: there is a minimum budget defined, equal to the kernel’s worst-case execution time on the given hardware, and a scheduling context cannot be configured with a budget below this minimum. Further, a replenishment remnant smaller than this minimum budget is merged with the next replenishment in the circular buffer. Is this correct?
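In case it helps to check my reading, here is a rough C sketch of the merge rule as I understand it. MIN_BUDGET, refill_t, and refill_ring_t are illustrative names of my own, not the kernel’s actual identifiers:

    #include <stdint.h>
    #include <stddef.h>

    #define MIN_BUDGET 10u   /* stand-in for the kernel-WCET-derived minimum */
    #define REFILL_MAX 8     /* stand-in for the user-configured buffer size */

    typedef struct {
        uint64_t amount;     /* budget carried by this replenishment */
        uint64_t time;       /* absolute time at which it becomes available */
    } refill_t;

    typedef struct {
        refill_t ring[REFILL_MAX];
        size_t head;         /* index of the earliest replenishment */
        size_t count;        /* number of live entries */
    } refill_ring_t;

    /* After consumed budget has been charged against the head entry,
     * fold a remnant smaller than the minimum budget into the next
     * replenishment rather than leaving it as its own entry. */
    void merge_small_remnant(refill_ring_t *r)
    {
        if (r->count > 1 && r->ring[r->head].amount < MIN_BUDGET) {
            size_t next = (r->head + 1) % REFILL_MAX;
            r->ring[next].amount += r->ring[r->head].amount;
            r->head = next;
            r->count--;
        }
    }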
Thanks again. I’d love to stay in the loop as improvements are made and to contribute where possible. I’ve signed up for the Discourse forum (pending approval) and can continue this conversation there if preferred.
Marion Sudvarg
From: Anna Lyons
However, looking at the seL4 kernel source code, it seems that the actual implementation closely follows the improved POSIX sporadic server mechanism described by Stanovich et al. in 2010 (https://www.cs.fsu.edu/~awang/papers/rtas2010.pdf).
Correct. I mostly followed the Stanovich paper and work done in one of the Quest-V papers, which also uses sporadic servers. This is a very timely email, as we've recently discovered some inconsistencies between the implementation in seL4 and the sporadic server algorithm. Namely, the sporadic server algorithm in seL4 is not work conserving, which can result in very low utilisation, as you have also discovered. We're working out how to adjust this in a way that satisfies all uses of seL4, particularly in the case of adversarial threads, and will provide an RFC or an update to the documentation with the proposed changes. This is mostly led by Curtis Millar, who is doing all the hard work, with advice from me and others.
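To make the low-utilisation effect concrete (an illustrative example, not a measurement): consider a scheduling context with a 10 ms budget and a 100 ms period, nominally 10% utilisation. If the thread is preempted after every 1 ms of execution, and each pre-emption point pushes the replenishment of that 1 ms chunk a full period into the future, the thread can quickly run out of immediately available budget and sit idle even when no other thread wants the CPU, whereas a work-conserving server would let it keep running in that slack.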
Can somebody explain to me in more detail how the sporadic server has been implemented in seL4?
There's a more detailed explanation in my PhD thesis (the implementation chapter is probably the most useful): https://github.com/pingerino/phd/blob/master/phd.pdf

The key differences are:

* threads do not run at a lower priority once they are out of replenishments
* currently, execution time is bounded by the sliding-window constraint, due to the lack of work conservation; this is implemented via the fact that any pre-emption point causes replenishments to be delayed (a rough sketch of this is appended at the end of this message)
* the number of replenishments is set by user level
* the kernel does not do an admission test, and only guarantees that the highest-priority threads with available replenishments are scheduled

It would be best to continue these enquiries either here on the mailing list or on the seL4 Discourse (https://sel4.discourse.group/), as many others have insights, and I am not deep into scheduling theory now.

Thanks,
Anna
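Sketch of the delayed-replenishment behaviour referenced above. Names like refill_ring_t and REFILL_MAX are illustrative, not the kernel’s actual identifiers, and edge cases (an emptied head entry, sub-minimum remnants) are omitted:

    #include <stdint.h>
    #include <stddef.h>

    #define REFILL_MAX 8            /* stand-in for the user-configured size */

    typedef struct {
        uint64_t amount;            /* budget carried by this replenishment */
        uint64_t time;              /* absolute time it becomes available */
    } refill_t;

    typedef struct {
        refill_t ring[REFILL_MAX];
        size_t head;                /* index of the earliest replenishment */
        size_t count;               /* number of live entries */
    } refill_ring_t;

    /* When a thread stops at a pre-emption point, deduct what it consumed
     * from the head replenishment and queue that amount to become
     * available again one full period after the chunk started executing.
     * Pushing the replenishment out like this is what enforces the
     * sliding-window constraint at every pre-emption point. */
    void charge_at_preemption(refill_ring_t *r, uint64_t chunk_start,
                              uint64_t consumed, uint64_t period)
    {
        r->ring[r->head].amount -= consumed;

        if (r->count < REFILL_MAX) {
            size_t tail = (r->head + r->count) % REFILL_MAX;
            r->ring[tail].amount = consumed;
            r->ring[tail].time = chunk_start + period;
            r->count++;
        } else {
            /* Ring full: fold the charge into the newest entry so no
             * budget is lost, pushing its availability out as well. */
            size_t last = (r->head + r->count - 1) % REFILL_MAX;
            r->ring[last].amount += consumed;
            r->ring[last].time = chunk_start + period;
        }
    }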