On 24 Feb 2018, at 04:25, Kelly Dean wrote:
Gernot.Heiser@data61.csiro.au writes:
The server’s apparent WCET is only 2W if the client fails to ensure that it’s got enough budget left. That’s a client bug; the client can ensure that it invokes the server with a fresh budget. It’s not the kernel’s job to ensure that buggy clients get optimal service.
I see. Though wouldn't it make more sense for the server to check, instead of the client? The server needs to check anyway, so that a misbehaving client can't DoS other clients by repeatedly calling the server with insufficient budget and forcing the server to repeatedly time out and recover. And the server is in a better position to handle its data-dependent execution times (i.e., as you said, not just pessimistically bail if remaining budget < WCET). So the client can call unconditionally, the server can return an error code whenever it discovers it won't be able to finish in time, and the client's error checking doesn't depend on timing info about the server.
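(For concreteness, a minimal sketch in C of the server-side check I have in mind. remaining_budget(), wcet_estimate(), do_work(), and error_reply() are all hypothetical helpers; as far as I know seL4 offers no call for a thread to query the budget left on its current scheduling context, so the server would have to track that itself.)

    #include <sel4/sel4.h>

    /* Hypothetical helpers; seL4_Time is the MCS time type (microseconds). */
    seL4_Time remaining_budget(void);
    seL4_Time wcet_estimate(seL4_MessageInfo_t req);
    seL4_MessageInfo_t do_work(seL4_MessageInfo_t req);
    seL4_MessageInfo_t error_reply(void);

    seL4_MessageInfo_t handle_request(seL4_MessageInfo_t req)
    {
        /* Refuse early rather than being cut off mid-request when the
         * caller's budget runs out. */
        if (remaining_budget() < wcet_estimate(req)) {
            return error_reply();
        }
        return do_work(req);
    }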
The server doesn’t do any checking: the kernel enforces the budget and the server uses the timeout exception to clean up. How that is handled is a system-defined policy; e.g., the time-fault handler can suspend a misbehaving client if the system is so configured. The server attempting to gate-keep and ensure that there’s enough time left for the request would be quite complex and probably expensive, as it would need a full model of its own execution. It would also be very pessimistic, given the pessimism of worst-case execution-time (WCET) analysis on pipelined processors with caches. Much better to make the client responsible for its own fate and kick it out if it misbehaves.
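(A minimal sketch of such a timeout-fault handler, assuming the MCS kernel routes the server's timeout faults to timeout_ep, e.g. via seL4_TCB_SetTimeoutEndpoint. client_tcb_for_fault() and restore_server_state() are hypothetical, and the exact fault-message contents and resume semantics depend on kernel version and configuration.)

    #include <sel4/sel4.h>

    /* Hypothetical helpers supplied by the system designer. */
    seL4_CPtr client_tcb_for_fault(seL4_MessageInfo_t fault, seL4_Word badge);
    void restore_server_state(void);

    void timeout_handler(seL4_CPtr timeout_ep, seL4_CPtr reply_cap)
    {
        seL4_Word badge;
        seL4_MessageInfo_t fault = seL4_Recv(timeout_ep, &badge, reply_cap);
        for (;;) {
            /* Policy decision: kick out the client whose budget expired. */
            seL4_TCB_Suspend(client_tcb_for_fault(fault, badge));
            /* Roll the server back to a clean state so other clients are unaffected. */
            restore_server_state();
            /* Reply to the fault to resume the server, then wait for the next fault. */
            fault = seL4_ReplyRecv(timeout_ep, seL4_MessageInfo_new(0, 0, 0, 0),
                                   &badge, reply_cap);
        }
    }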
Anyway, that made me think of where else limited server time is a problem. At 19:13 in your presentation “Mixed-criticality support in seL4” at https://www.youtube.com/watch?v=ijTTZgQ8cB4 you gave the example of network drivers needing a timely response to avoid packet loss. The solution at 30:47 was a scheduling context with high priority and a short period and budget.
But network traffic might itself be mixed-criticality: for example, real-time sensor data mixed with bulk file transfers. Suppose the most critical thread must be allocated 90% to cover its worst-case utilization, but its average is 20%. The network driver can then be allocated only 10%, but that's plenty to transceive the critical traffic. Suppose also that to saturate the network link, the bulk transfer thread (which runs as slack) would need 50% and the network driver would need 20% (twice its allocation) to transceive fast enough. In this case, total utilization will average less than 80%, and the bulk transfer will be unnecessarily throttled.
One solution would be to assign two contexts to the network driver: the standard 10% high-priority context and a 100% low-priority context, and run the driver whenever either context allows. Then average total utilization could rise to 90% (with 20% for the network driver) to avoid throttling the bulk transfer.
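To make the arithmetic explicit (assuming the bulk transfer's useful work scales with how fast the driver can transceive):

    driver capped at 10%:  20% (critical, average) + 10% (driver) + <50% (bulk, throttled by the driver)  =>  average total under 80%, link not saturated
    driver averaging 20%:  20% (critical, average) + 20% (driver) + 50% (bulk, link-limited)              =>  average total about 90%, link saturated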
Not sure how realistic this scenario is. You seem to assume that somehow there is a notion of packet priority on the network? In any case, there is no conceptual problem with the driver having two threads, running with different prios and SCs. While a single thread could get SCs donated from different sources, this would have to come with explicit lowering of its prio when receiving the low-crit budget. Moreover, in your scenario, the driver, handling low and high traffic, would have to be trusted and assured to high criticality. All doable, but I don’t think I’d want this sort of complexity and extra concurrency control in a critical system.
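(Roughly, the two-thread variant could be set up as below; this is only a sketch: the cap names, priorities and microsecond figures are illustrative, and the exact MCS invocation names and signatures differ between kernel versions, e.g. newer ones use seL4_SchedControl_ConfigureFlags.)

    #include <sel4/sel4.h>

    /* Two driver threads, each with its own scheduling context: a 10%
     * high-priority SC for the critical traffic, and a budget==period
     * ("100%") SC at low priority that only ever runs in slack time. */
    void setup_driver_scs(seL4_CPtr sched_ctrl,
                          seL4_CPtr sc_hi, seL4_CPtr drv_tcb_hi,
                          seL4_CPtr sc_lo, seL4_CPtr drv_tcb_lo,
                          seL4_CPtr auth_tcb)
    {
        /* 100 us budget every 1000 us period = 10%, short period, high prio. */
        seL4_SchedControl_Configure(sched_ctrl, sc_hi, 100, 1000, 0, 1);
        seL4_SchedContext_Bind(sc_hi, drv_tcb_hi);
        seL4_TCB_SetPriority(drv_tcb_hi, auth_tcb, 200);

        /* budget == period, so the thread is never budget-limited, but at
         * low prio it only runs when nothing more critical is runnable. */
        seL4_SchedControl_Configure(sched_ctrl, sc_lo, 10000, 10000, 0, 2);
        seL4_SchedContext_Bind(sc_lo, drv_tcb_lo);
        seL4_TCB_SetPriority(drv_tcb_lo, auth_tcb, 10);
    }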
But at 23:04 in your presentation, you explained that with traditional time slicing, one client can DoS another if it doesn't have to pay the time cost of running a high-priority server, and the solution at 32:03 is running the server on the client's budget. That gives me another idea: why give the network driver an independent budget at all? Even if the driver needs a short period, it still makes sense to run it on the client's budget. It's pointless to avoid dropping packets if the client is too slow to consume them anyway, which means the client must be guaranteed suitable utilization. Therefore, simply guarantee enough additional utilization to cover the cost of the network driver's service, and when the driver transceives data on the client's behalf, charge the client for the time.
Not that simple: the driver needs to execute on an interrupt (or are you assuming polled I/O?). The model supports passive drivers: a passive driver can wait both for clients IPCing it (with SC donation) and on a Notification (semaphore) that delivers the interrupt and also donates an SC. However, in all those scenarios you’re making the driver high-crit, and my example was all about it being low.
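(A minimal sketch of that passive-driver loop, assuming the MCS variants of seL4_Recv/seL4_ReplyRecv that take an explicit reply cap. driver_ep, reply_cap, irq_handler, IRQ_BADGE and the handle_* helpers are illustrative; client requests run on, and are charged to, the caller's donated SC, and the interrupt Notification is assumed to be bound to the driver's TCB with its own SC attached.)

    #include <sel4/sel4.h>

    #define IRQ_BADGE (1UL << 0)

    /* Hypothetical driver internals. */
    void handle_irq(void);
    seL4_MessageInfo_t handle_request(seL4_Word badge, seL4_MessageInfo_t req);

    void driver_loop(seL4_CPtr driver_ep, seL4_CPtr reply_cap, seL4_CPtr irq_handler)
    {
        seL4_Word badge;
        seL4_MessageInfo_t msg = seL4_Recv(driver_ep, &badge, reply_cap);
        for (;;) {
            if (badge & IRQ_BADGE) {
                /* Runs on the SC bound to the interrupt Notification. */
                handle_irq();
                seL4_IRQHandler_Ack(irq_handler);
                msg = seL4_Recv(driver_ep, &badge, reply_cap);
            } else {
                /* Runs on, and is charged to, the calling client's donated SC. */
                seL4_MessageInfo_t reply = handle_request(badge, msg);
                msg = seL4_ReplyRecv(driver_ep, reply, &badge, reply_cap);
            }
        }
    }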
That way, there's no need for a second driver context to fully serve the slack thread, nor even a need for one complete context (with budget) to serve the most critical thread.
I’m not sure I’m still following your scenario, or what it has to do with your initial idea that a server should gate-keep client time.
BTW, at 40:42 you described a UAV system with an A15-based mission board and an M3-based flight control board, which you said you would have designed more simply if it weren't a legacy system. In particular, would you eliminate the separate flight control MCU and run everything on one CPU?
Yes, that’s the point. With MCS support, the whole setup could run on the A15 with seL4. In fact we’re building this version (but it’s a background activity and we don’t have much in-house control know-how, which slows things down).

Gernot