With e.g. HBM, there might be an opportunity to state-save the entire cache
on a context switch and restore it on resume. I suspect much of the overhead
of flushing comes from the scattered writes needed to push dirty lines back
to their home locations; a state save could instead be a single contiguous
write. There is likely to be some complexity around cache coherency and
shared memory, but perhaps nothing insurmountable. In the worst case, shared
memory could be served by a separate cache, which would be flushed rather
than state-saved.
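To make the idea concrete, here is a rough sketch of what such a context-switch path might look like. Note that cache_save_state()/cache_restore_state() are purely hypothetical: no current hardware exposes anything like them, they just stand in for an imagined mechanism that streams the cache contents to/from a per-partition HBM buffer as one contiguous transfer.

/* Purely hypothetical sketch: cache_save_state()/cache_restore_state() do not
 * exist on any real hardware. They stand in for an imagined mechanism that
 * streams the whole cache state to/from a contiguous, HBM-backed buffer,
 * avoiding the scattered write-backs of a conventional flush. */
#include <stddef.h>

struct cache_image {
    void  *buf;     /* contiguous HBM-backed buffer, sized to hold the cache */
    size_t size;
};

/* Imagined hardware/firmware hooks. */
extern void cache_save_state(struct cache_image *img);
extern void cache_restore_state(const struct cache_image *img);
extern void cache_flush_shared(void);   /* worst-case fallback for shared memory */

static void switch_security_domain(struct cache_image *prev,
                                   const struct cache_image *next)
{
    /* Worst case from above: lines backing shared memory live in a separate
     * cache and are simply flushed, sidestepping the coherency questions. */
    cache_flush_shared();

    cache_save_state(prev);     /* one contiguous write on switch-out */
    cache_restore_state(next);  /* one contiguous read on resume */
}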
On Mon, 14 Aug 2023 at 17:31, Gernot Heiser wrote:
On 15 Aug 2023, at 01:59, Gernot Heiser wrote:

> Speculative taint tracking helps close some of the remaining holes.
I’m highly sceptical too. Taint tracking has to be highly pessimistic to be
feasible, and it adds a lot of complexity to the hardware (which gets back to
my earlier argument: the more complex you make it, the more likely something
goes wrong).
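To illustrate the pessimism, here is a toy software model only; real speculative taint tracking is a hardware mechanism and works quite differently, so treat this purely as a sketch of the conservative propagation rule:

/* Toy model, not how real speculative taint tracking hardware works.
 * The conservative rule below cannot tell which dependencies could actually
 * leak, so essentially everything downstream of a secret becomes tainted. */
#include <stdbool.h>

typedef struct {
    long value;
    bool tainted;   /* true if the value (transitively) depends on a secret */
} tval;

/* Conservative rule: the result is tainted if any operand is. */
static tval taint_add(tval a, tval b)
{
    return (tval){ .value = a.value + b.value,
                   .tainted = a.tainted || b.tainted };
}

static tval taint_select(tval cond, tval t, tval f)
{
    /* Conservatively, taint from the condition and from both arms flows
     * into the result, since control dependence can leak as well. */
    return (tval){ .value = cond.value ? t.value : f.value,
                   .tainted = cond.tainted || t.tainted || f.tainted };
}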
Forgot to add:
> It’s not perfect, but it is the best one can do without cache partitioning or heavyweight flushing, and I suspect both of those are often impractical from a performance perspective.
Cache partitioning (which really means static, software-controlled partitioning, as opposed to dynamic partitioning by the hardware) produces, on average, at worst a mild performance degradation. There are plenty of cases where it actually improves performance (workloads whose cache working set exceeds the cache size), and it is used for performance isolation and sometimes for improving average-case performance.
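For concreteness, static software-controlled partitioning is typically implemented by page colouring: the allocator only gives a partition physical frames whose addresses map to that partition's share of the cache sets. A minimal sketch, with made-up example cache parameters (a real implementation would derive them from the actual cache geometry):

#include <stdint.h>

/* Example parameters only: a 2 MiB, 16-way L2 with 64-byte lines, 4 KiB pages. */
#define CACHE_SIZE   (2u << 20)
#define CACHE_WAYS   16u
#define PAGE_SIZE    4096u

/* Number of page colours = (cache size / associativity) / page size.
 * Frames of the same colour compete for the same cache sets. */
#define NUM_COLOURS  ((CACHE_SIZE / CACHE_WAYS) / PAGE_SIZE)   /* 32 here */

static unsigned frame_colour(uint64_t paddr)
{
    return (unsigned)((paddr / PAGE_SIZE) % NUM_COLOURS);
}

/* A partition is assigned a contiguous range of colours; the allocator only
 * hands it frames in that range, so its working set can never evict another
 * partition's lines. */
static int frame_belongs_to(uint64_t paddr, unsigned first_colour, unsigned n_colours)
{
    unsigned c = frame_colour(paddr);
    return c >= first_colour && c < first_colour + n_colours;
}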
Similarly, flushing the L1 cache has little performance impact, except where context-switching rates are very high. Otherwise the L1 is too small to hold much hot data across a full round of context switches, so the cache is effectively flushed implicitly anyway.
On out-of-order (OoO) processors explicit flushes do have a cost, as the cost of the implicit replacement can be at least partially hidden by OoO execution. But I doubt there are many realistic use cases where the impact is significant. (Yes, we did evaluate this, although it has been studied in more detail by others.)
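As a point of reference, an explicit L1 D-cache flush is just a short set/way loop. The sketch below uses the AArch64 DC CISW instruction with example geometry (32 KiB, 4-way, 64-byte lines) hard-coded; real code would read the geometry from CCSIDR_EL1 instead.

/* Illustrative only: clean and invalidate a data cache by set/way on AArch64.
 * Geometry is hard-coded example values for a 32 KiB, 4-way, 64-byte-line L1. */
#include <stdint.h>

#define L1_SETS      128u   /* 32 KiB / (4 ways * 64 B lines) */
#define L1_WAYS      4u
#define LINE_SHIFT   6u     /* log2(64-byte line) */
#define WAY_SHIFT    30u    /* 32 - log2(4 ways) */
#define LEVEL        0u     /* cache level minus 1, i.e. L1 */

static void l1_dcache_clean_invalidate(void)
{
    for (uint32_t way = 0; way < L1_WAYS; way++) {
        for (uint32_t set = 0; set < L1_SETS; set++) {
            uint64_t sw = ((uint64_t)way << WAY_SHIFT)
                        | ((uint64_t)set << LINE_SHIFT)
                        | ((uint64_t)LEVEL << 1);
            /* DC CISW: clean and invalidate one line, selected by set/way */
            __asm__ volatile("dc cisw, %0" :: "r"(sw) : "memory");
        }
    }
    __asm__ volatile("dsb sy" ::: "memory");   /* wait for completion */
}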
Gernot

_______________________________________________
Devel mailing list -- devel@sel4.systems
To unsubscribe send an email to devel-leave@sel4.systems