"Heiser, Gernot (Data61, Kensington NSW)"
We benchmark our own system and publish the results in an easy-to-reproduce way. No-one else does, for good reason: no-one is keen to demonstrate how much slower they are. If we published our own measurements of other systems’ performance, people might not believe them, or we might indeed fail to tune the other systems to get the best out of them. So the best thing for us to do is to be open about our own performance, claim to be the fastest, and leave it to others to challenge this (with reproducible numbers, of course).
However, there is a recent peer-reviewed paper that does compare the performance of seL4, Fiasco.OC (aka L4Re) and Zircon. It shows that seL4 performance is within 10–20% of the hardware limit, while Fiasco.OC is about a factor of two slower (i.e. >100% above the hardware limit), and Zircon is way slower still: https://dl.acm.org/doi/pdf/10.1145/3302424.3303946
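(To make the comparison concrete with purely illustrative numbers, not figures taken from the paper: if the hardware-limited cost of a round-trip IPC on some platform were 1,000 cycles, "within 10–20% of the hardware limit" would mean seL4 completes it in roughly 1,100–1,200 cycles, whereas "a factor of two slower" would put Fiasco.OC above 2,000 cycles for the same operation.)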
Thank you for your reply! I must say I find this stance a little strange: sure, people could say you cheated if you published benchmarks of other systems, but… is that worse than them saying you make claims without providing any grounds for them? Anyway, I'm not saying these claims are unsubstantiated (the paper you linked does look like reasonable evidence), but if that paper were linked alongside the claim, and/or you published benchmarks of your competitors (which they could then challenge with results of their own), it would probably prevent such assertions from being made :) All that said, thanks again for your reply!
Leo