The article “Mixed-criticality support in seL4” (https://lwn.net/Articles/745946/) says that seL4's solution to the problem of a client exhausting its CPU budget while inside a non-reentrant shared data-structure server is timeout exceptions. The trouble with that approach is that the server can time out by chance, through no fault of its own: whether the call completes depends on how much budget the client happens to have left when it calls, not on anything the server does.

A more robust solution is to disallow calling any such server that advertises a WCET longer than the client's per-period CPU budget, and to have the kernel check, whenever a call is made, whether the client's remaining CPU budget ≥ the server's WCET. If it is, the call proceeds as normal. If it isn't, the kernel verifies that the client's per-period CPU budget ≥ the server's WCET. If that verification succeeds, the kernel terminates the client's budget for the current period early and actually enters the server at the beginning of the client's _next_ period. If the verification fails, the kernel tells the client that the call is disallowed. (A code sketch of this check appears below.)

Under this scheme, if the server fails to return before exhausting the client's CPU budget, the server has violated its own advertised WCET, which is a bug. A non-buggy server therefore never times out, so timeout exception handlers handle only bugs, not routine system operation.

If the server's actual WCET is W, then from the client's point of view the server's apparent WCET is almost 2W: the client might make the call with remaining CPU budget X just barely less than W, in which case the kernel terminates the client's current budget early (so the client loses X) and enters the server at the beginning of the next period, and the server might then take W to return, making the client wait X + W altogether (plus, of course, the remainder of the period, during which other VMs run). With seL4's current solution, the server's apparent WCET is even longer than 2W, because the server could run for X (again nearly W), time out, be restarted (taking time R), and then run for W, making the client wait X + R + W altogether.
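To make the latency comparison concrete, here is the worst-case arithmetic from the paragraph above written out, where W is the server's WCET, X < W is the client's remaining budget at the time of the call, and R is the time to restart the server after a timeout; the wait for the remainder of the period, during which other VMs run, is the same in both cases and is omitted:

    \begin{align*}
    \text{proposed scheme:} \quad & X + W \;<\; 2W, && \text{since } X < W,\\
    \text{seL4's timeout exceptions:} \quad & X + R + W \;\longrightarrow\; 2W + R \;>\; 2W, && \text{as } X \to W.
    \end{align*}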
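Here is a minimal C sketch of the proposed admission check. This is not seL4 code; every name in it (ticks_t, tcb_t, budget_remaining, budget_per_period, advertised_wcet, admit_call, and the call_decision_t values) is hypothetical and only illustrates the three-way decision described above.

    /*
     * Sketch of the proposed admission check. All names are hypothetical;
     * none of this is an seL4 API.
     */
    typedef unsigned long long ticks_t;

    typedef struct tcb {
        ticks_t budget_per_period;   /* CPU budget granted each period */
        ticks_t budget_remaining;    /* budget left in the current period */
    } tcb_t;

    typedef struct server {
        ticks_t advertised_wcet;     /* WCET the server advertises per call */
    } server_t;

    typedef enum {
        CALL_PROCEED,      /* enter the server now */
        CALL_DEFERRED,     /* budget terminated; enter server at next period */
        CALL_DISALLOWED    /* server's WCET exceeds client's per-period budget */
    } call_decision_t;

    static call_decision_t admit_call(tcb_t *client, const server_t *server)
    {
        /* Enough budget left in this period: proceed immediately. */
        if (client->budget_remaining >= server->advertised_wcet)
            return CALL_PROCEED;

        /*
         * Not enough budget left this period, but a full period's budget
         * would suffice: give up the rest of this period and enter the
         * server at the start of the client's next period.
         */
        if (client->budget_per_period >= server->advertised_wcet) {
            client->budget_remaining = 0;   /* terminate this period's budget */
            return CALL_DEFERRED;
        }

        /* The call could never fit within one period: refuse it outright. */
        return CALL_DISALLOWED;
    }

The point of the structure is that CALL_DISALLOWED can be reported to the client once, at the first attempted call, rather than surfacing later as a surprise timeout.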