expecting the abstract model in Isabelle to concretely specify how the user programs
behave, perhaps according to x86/arm/riscv semantics extended with syscalls.
If it did that, the theorem would only apply to user programs with known behaviour, i.e.
these would be user theorems, not kernel theorems.
It just occurred to me what the confusion might be: by conservatively modelling the
effects a user program may have nondeterministically, we implicitly model the hardware
mechanisms of the platform at a fairly coarse-grained level. I.e. instead of describing
what precisely each user-level instruction in the ARM or x64 instruction set can do, we
just say a user execution (no matter how long or short) might change *any* aspect of
user-accessible state, in any order and to any degree, including any register or memory
it can access. However, these nondeterministic behaviours do not include any changes to
state that is only accessible in kernel mode or to memory that is not mapped by the MMU,
which are the hardware mechanisms the kernel relies on to protect its own state.
The proof shows that these are the only mechanisms the kernel needs for that protection,
provided the hardware actually enforces them (e.g. unmitigated Meltdown means that the
MMU effectively provides more readable mappings than the ISA specification says it does).
We do have an additional unpublished proof for ARM that the instruction set does not
include any instructions that go beyond what these nondeterministic effects model. Since
this is precisely how the ISA is designed, this proof is not really a kernel correctness
result; it is more a sanity check of the hardware platform and of the kernel's assumptions.
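A sanity check of that shape can be sketched in a toy setting (again with hypothetical
names, not the actual ARM proof): for every concrete "instruction", check that its
effect from every state is contained in the nondeterministic envelope the kernel
proof assumes.

```python
from dataclasses import dataclass
from itertools import product

# Toy state: two user-accessible registers, one kernel-only field.
@dataclass(frozen=True)
class State:
    r0: int
    r1: int
    kernel_data: int

VALS = range(2)

def envelope(s: State) -> set[State]:
    # The abstract nondeterministic user step: any user-visible change,
    # kernel-only state preserved.
    return {State(r0, r1, s.kernel_data) for r0, r1 in product(VALS, VALS)}

# Toy "instructions" as deterministic state transformers (illustrative only).
def mov_r0_r1(s: State) -> State: return State(s.r1, s.r1, s.kernel_data)
def xor_r0_r0(s: State) -> State: return State(0, s.r1, s.kernel_data)

INSTRUCTIONS = [mov_r0_r1, xor_r0_r0]

# The sanity check: every instruction's effect, from every state, stays
# within what the nondeterministic model allows.
all_states = [State(a, b, k) for a, b, k in product(VALS, VALS, VALS)]
for instr in INSTRUCTIONS:
    for s in all_states:
        assert instr(s) in envelope(s)
print("all instructions within the nondeterministic envelope")
```

An instruction that wrote to kernel_data from user mode would fail the check, which
is exactly the kind of ISA behaviour the kernel assumption rules out.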