Hello Andrew,

On 2024-04-19 10:25, Andrew Warkentin wrote:
> made little difference.

The difference is the overhead of doing 10k system calls in seL4.

> As Demi said, read()/write() aren't usually buffered. I haven't actually checked whether QNX is buffering it, but I'd be a little bit surprised if it were.

You're right that it's not buffered in user space, but it is buffered in kernel space.

> On Fri, Apr 19, 2024 at 3:12 AM Indan Zupancic wrote:
>> Yes, you are running it in a virtual machine, remember? Any timing is suspect.
>
> Wouldn't that affect timing relative to bare metal more than timing of kernels relative to each other? I guess seL4 might just somehow not get along with virtualization (the timings in QEMU and VirtualBox are fairly similar), although I'd be a little surprised if it were affected that much relative to QNX.

Yes. What I meant is that I wouldn't trust the numbers to measure the individual IPC timing for seL4, because of overheads and the chance of unexpected interruptions, like a different host task being scheduled. It's good enough for relative comparisons; just be mindful of the pitfalls.

>> This doesn't do any IPC and measures the time of 10k dummy write syscalls.
>> The POSIX equivalent would be to have two processes using pipes or fifos with the server sending a reply and the sender waiting for the reply before sending the next message.
>
> AFAIK writing to /dev/null does some IPC under QNX (/dev/null is exported by the process server) although I suspect it's not copying anything. In any case I wrote my own null server that specifically copies the data into its own buffer (that's what the "custom null server" timings are) in order to avoid any optimizations like that, and it's slower than real /dev/null, but still way faster than what I'm getting out of seL4.

Because it is asynchronous: you do 10k system calls which queue some data, and QNX can do one context switch to the server to handle all 10k of them in one go. The client needs to read a reply from the server for each message sent to make it equivalent to seL4's code.

If system call overhead is what you want to measure, fine, but then make the seL4 version asynchronous too, by using notifications and shared memory queues and giving the server and client tasks the same priority (rough sketches of both setups below).

IPC in seL4 should be seen as a protected procedure call into another process; it shouldn't be used for message passing in the classical sense, see:

https://microkerneldude.org/2019/03/07/how-to-and-how-not-to-use-sel4-ipc/

Greetings,

Indan
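
P.S. To make the two setups concrete, here are two rough, untested sketches, with names, message sizes and error handling simplified. The first is the synchronous POSIX round trip over a pair of pipes I described above: the client only sends the next message after it has read the server's reply, so every message pays a full round trip, roughly what seL4's call/reply does per message.

    /* Synchronous request/reply over two pipes: the client blocks on a
     * one-byte reply before sending the next message, so every message
     * costs a full round trip, much like a seL4_Call. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define MSG_SIZE 32
    #define NUM_MSGS 10000

    int main(void)
    {
        int to_srv[2], to_cli[2];

        if (pipe(to_srv) < 0 || pipe(to_cli) < 0) {
            perror("pipe");
            return 1;
        }

        if (fork() == 0) {
            /* Server: read a request, then send a one-byte reply. */
            char buf[MSG_SIZE], reply = 0;
            for (int i = 0; i < NUM_MSGS; i++) {
                if (read(to_srv[0], buf, MSG_SIZE) != MSG_SIZE)
                    _exit(1);
                if (write(to_cli[1], &reply, 1) != 1)
                    _exit(1);
            }
            _exit(0);
        }

        /* Client: time this loop to get the per-round-trip cost. */
        char msg[MSG_SIZE], reply;
        memset(msg, 'x', MSG_SIZE);
        for (int i = 0; i < NUM_MSGS; i++) {
            if (write(to_srv[1], msg, MSG_SIZE) != MSG_SIZE)
                return 1;
            if (read(to_cli[0], &reply, 1) != 1)
                return 1;
        }
        wait(NULL);
        return 0;
    }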
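The second is closer to what QNX's buffered write path does: a shared memory ring plus a notification, so the client still pays one cheap system call per message, but the server can drain a whole batch per context switch. The ring address and the notification capability are placeholders here; the actual frame mapping and capability handout would be done by your root task, and the memory ordering is kept to the bare minimum.

    /* Asynchronous single-producer/single-consumer queue in shared memory,
     * signalled with a seL4 notification. Setup of the shared frame and of
     * the notification capability is assumed to have been done elsewhere. */
    #include <sel4/sel4.h>
    #include <string.h>

    #define SLOTS    1024
    #define MSG_SIZE 32

    struct ring {
        seL4_Word head;                 /* only written by the client */
        seL4_Word tail;                 /* only written by the server */
        char slots[SLOTS][MSG_SIZE];
    };

    /* Client: queue the message and signal; no blocking, no reply. */
    static void client_send(struct ring *q, seL4_CPtr ntfn, const char *msg)
    {
        seL4_Word head = q->head;
        while (head - __atomic_load_n(&q->tail, __ATOMIC_ACQUIRE) == SLOTS)
            seL4_Yield();               /* ring full: let the server catch up */
        memcpy(q->slots[head % SLOTS], msg, MSG_SIZE);
        __atomic_store_n(&q->head, head + 1, __ATOMIC_RELEASE);
        seL4_Signal(ntfn);              /* one cheap kernel entry per message */
    }

    /* Server: one seL4_Wait can cover many queued messages. */
    static void server_loop(struct ring *q, seL4_CPtr ntfn)
    {
        char buf[MSG_SIZE];
        seL4_Word tail = q->tail;
        for (;;) {
            seL4_Wait(ntfn, NULL);
            while (tail != __atomic_load_n(&q->head, __ATOMIC_ACQUIRE)) {
                memcpy(buf, q->slots[tail % SLOTS], MSG_SIZE);
                /* a /dev/null style server simply discards the data */
                tail++;
                __atomic_store_n(&q->tail, tail, __ATOMIC_RELEASE);
            }
        }
    }

The client still does 10k seL4_Signal() calls, so kernel entry overhead stays in the picture, but the server only needs one context switch per batch, which is the part QNX gets for free from its in-kernel buffering.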