I understand your frustration: I was at the same point some time ago... Documentation is still a weak point of the seL4 ecosystem (I hope nobody kills me for this statement). Anyway, I'm pretty sure this will improve over time; it's just a matter of survival.
Having said that, it looks to me that your problem is more about generic architecture than anything seL4-specific, yet it is very interesting to me, as it looks like a common use case.
So, if I understood correctly, your intention is to be sure your Linux virtual machine running on top of seL4 has not been tampered with. Correct? If so, I would suggest you do not perform this task from inside the VM: you cannot use the target of evaluation (TOE), i.e. the Linux VM, to evaluate itself, as you may end up dealing with an attacker who has modified the very tools you use to compute the hashes. (I personally did exactly that on a Linux system long ago, by tampering with a compiled kernel module that was responsible for integrity checks on a FIPS 140-2 platform.) So, any decent integrity check must be done from outside the TOE; the only reliable way is to do it from the seL4 host. Here you have two common scenarios:
1) VM with an immutable file system: easy, just compute a hash of the VM image before it boots.
2) VM with a mixed file system (immutable + writable parts): complex, as you need to set up a procedure to check integrity on the writable part. This is not easy, and all attacks will target this part of the TOE.
In any case, the tool/code doing the integrity check must be outside the TOE.
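Scenario 1 above is straightforward to do from the host side. Here is a minimal sketch in Python, assuming the VM image is an ordinary file visible to the host before boot (the function name is mine for illustration, not part of any seL4 or CAmkES API):

```python
import hashlib

def hash_vm_image(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a VM image file, streaming it
    in chunks so arbitrarily large images fit in constant memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

The recorded digest can then be compared against the same computation on later boots; the crucial property is that both the hashing code and the reference digest live outside the TOE.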
If you are thinking about "embedding" the integrity tools in the immutable part of the TOE (a common error of many vendors) to check the writable stuff, I strongly suggest not doing it. There are countless tricks to fool this kind of check at runtime in a compromised TOE.
Thus, and trying to answer your question: "exporting" integrity checks from inside the TOE is a very bad idea, as you will never be sure you can trust that data. In short: the TOE cannot be used to check its own integrity in any way.
If you still want to go ahead, I suggest going with seL4webserver, but remember it is a bad idea and makes no sense on a trusted-computing platform like seL4 (it gives a false sense of security).
Please excuse me in advance if I misunderstood something.
On Tue, Oct 19, 2021, 22:49, Michael Neises firstname.lastname@example.org wrote:
Hello seL4 developers,
I want to be able to retrieve data from seL4's virtual Linux machine in order to store it in a persistent way. Namely, I want to be able to simulate an seL4 kernel, boot its Linux virtual machine, compute some hash digests, and then export those hash digests. These digests are valuable because they represent the "clean room" runtime state of the Linux machine. Currently I can export these digests by way of hand-eye coordination, but I consider this unusable as a piece of software.
To date I've taken two main approaches: CAmkES FileServer or virtual networking. I'm under the impression that the FileServer changes are not persistent through reboot, and even if they were, to change the boot image after compile-time would seem to fly in the face of seL4's principles. Virtual networking seems to promise I can host my digests on a webpage that is visible to my "root host" machine; that is, the simulated seL4's linux instance hosts a site available on my 192.168.x.x network. I know there is a seL4webserver app as part of the seL4 repositories which claims to do this, but unfortunately its prose is unhelpful and it doesn't seem to work even when it compiles and simulates.
I've taken two distinct strategies to investigate the virtual network approach. First, I tried to get it to work on my normal stack: Windows 10, running WSL2, running a Docker container to simulate the seL4 image. The problem with this approach is that it appears I'm required to blindly thread 3 or 4 needles all at once, without getting feedback more descriptive than "you didn't do it." In other words, there does not appear to be any partial success available, and without ICMP ping, I honestly have no idea how to debug these "virtual" networks.
Next, I tried simplifying my stack by installing the dependencies natively on a Debian 10 machine, which should bypass several layers of the virtual network from my first strategy. Unfortunately, I met with the same "AttributeError: module 'yaml' has no attribute 'FullLoader'" error that inspired me to begin using Docker several years ago. I should note that "pip/pip2/pip3 install pyyaml" all report that pyyaml is already installed, so I would be indebted to anyone who has an idea about that error.
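(A quick diagnostic sketch for that error, under the assumption that the usual cause applies: yaml.FullLoader only exists in PyYAML >= 5.1, and an older distro copy of the module, which I believe Debian 10 ships as python3-yaml 3.13, can shadow the pip-installed one. The helper below is mine, not from any seL4 tooling:)

```python
import importlib
import importlib.util

def diagnose_module(name: str) -> dict:
    """Report whether a module imports, where it resolves from,
    and what version it claims, to detect a shadowed install."""
    spec = importlib.util.find_spec(name)
    if spec is None:
        return {"importable": False}
    mod = importlib.import_module(name)
    return {
        "importable": True,
        "origin": spec.origin,
        "version": getattr(mod, "__version__", "unknown"),
    }

# If this prints an origin under /usr/lib/python3/dist-packages with a
# version < 5.1, the system package is shadowing the pip install.
print(diagnose_module("yaml"))
```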
To conclude, I find virtual networks opaque, and I would be grateful for any guidance. If you have a different idea how I might achieve my goal, I would be similarly effusive in my thanks.
Cheers,
Michael Neises
_______________________________________________
Devel mailing list -- email@example.com
To unsubscribe send an email to firstname.lastname@example.org