Hello,

I'm currently working with seL4 on the PolarFire (RISC-V), using a single core at the moment (doing some testing and getting to understand how seL4 works). The PolarFire has 4 cores and I'd like to start using all of them in a multikernel configuration (every hart running its own seL4 kernel). I've found the following RFC: https://sel4.atlassian.net/browse/RFC-17 where Kent McLeod has done some work (https://github.com/kent-mcleod/camkes-manifest/blob/kent/multicore2/master.x...), but only for ARM. Does anyone know whether the same approach will be taken for RISC-V? Or, even better, has someone already done it?

Thanks,
David.
On 4/11/24 12:12 AM, David Martin wrote:
This shouldn't be hard. I managed to run two instances of Linux on the JH7110 -- one Linux nommu on the S7 hart and regular Linux on the 4x U7 harts. That is roughly equivalent to running one Linux on your PolarFire hart 0 (E51) and another on harts 1-4 (U54).

As Stefan pointed out in the RFC (and I also confirmed this in the privileged spec), the ASIDs are per-hart. So from a pure RISC-V perspective, there's nothing stopping you. The 4 instances will run just fine without modification, and they don't have to know about each other's existence. You only have to make sure the resources on the platform, such as physical memory, devices, etc., are partitioned correctly -- possibly by supplying a different DTB to each one.

The challenging part is most likely designing the notification mechanism and tailoring it to your needs. As the RFC says, with the traditional CLINT/PLIC configuration you only get one interrupt ID for IPIs. So you probably want to maintain a bitmap (or similar) in shared memory to distinguish which source harts are signalling you, and use atomic instructions to maintain that bitmap. I haven't checked Kent's work, but I'd imagine most of the effort is on the user side rather than in the kernel. Sending an IPI on RISC-V is straightforward -- you just call sbi_send_ipi...

Bo
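As a rough illustration of the bitmap idea (a sketch, not code from this thread), assume every kernel instance maps the same physical page at an agreed virtual address; SHARED_DOORBELL_VADDR is a made-up placeholder, the GCC/Clang __atomic builtins are assumed, and the actual IPI delivery via sbi_send_ipi or an seL4 notification is left out:

    /* Sketch of a per-hart "doorbell" bitmap kept in a shared memory region.
     * SHARED_DOORBELL_VADDR is a hypothetical virtual address where each
     * seL4 instance has mapped the same physical page. */
    #include <stdint.h>

    #define SHARED_DOORBELL_VADDR 0xA0000000UL   /* assumption: agreed mapping */

    static volatile uint64_t *doorbell = (volatile uint64_t *)SHARED_DOORBELL_VADDR;

    /* Sender side: mark "my_hart has something for you", then raise the
     * single available IPI (not shown here). */
    static inline void doorbell_ring(unsigned my_hart)
    {
        __atomic_fetch_or(doorbell, 1UL << my_hart, __ATOMIC_RELEASE);
        /* ...send the IPI to the target hart here... */
    }

    /* Receiver side: called when the IPI arrives; returns a bitmap of all
     * harts that signalled us since the last check, and clears it. */
    static inline uint64_t doorbell_collect(void)
    {
        return __atomic_exchange_n(doorbell, 0UL, __ATOMIC_ACQUIRE);
    }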
Hello,

Indeed, I was able to get seL4 running on multiple harts. Now I want to add an extra layer of shared memory (I'm not going to dig into the notification mechanism for the moment). I just want to create a shared memory region between 2 harts and check that both can access it. If I'm not mistaken, that could be done using a chunk of memory defined as a device untyped and then used by each kernel (for my tests, I'll use 2 harts) to write/read.

--> Is that a good understanding of how shared memory is supposed to be used within seL4?

David.
Hello David, On 2024-06-05 14:56, David Martin wrote:
> If I'm not mistaken, that could be done using a chunk of memory defined as a device untyped and then used by each kernel (for my tests, I'll use 2 harts) to write/read.
> --> Is that a good understanding of how shared memory is supposed to be used within seL4?
Yes, adding reserved memory regions with "no-map" to the DTS is the way to tell seL4 not to use that memory for any kernel objects and not to map it. Then you can use it as shared memory between the two harts. Sadly, there is no more convenient way of defining shared memory. I recommend using the KernelCustomDTSOverlay option to add such shared memory regions instead of modifying the existing DTS files.

Greetings,

Indan
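For illustration only (not from the thread's authors): once the region is reserved this way, each root task should see it show up as a device untyped in its boot info. A sketch of how it could be located, assuming SHARED_MEM_PADDR matches the address used in the overlay:

    /* Sketch: scan the boot info for the device untyped that covers the
     * reserved shared-memory region.  SHARED_MEM_PADDR must match the
     * reserved-memory node added via the DTS overlay (assumption). */
    #include <sel4/sel4.h>

    #define SHARED_MEM_PADDR 0xB0000000UL   /* assumption: reserved in the DTS */

    static seL4_CPtr find_shared_untyped(seL4_BootInfo *bi)
    {
        for (seL4_Word i = 0; i < bi->untyped.end - bi->untyped.start; i++) {
            seL4_UntypedDesc *d = &bi->untypedList[i];
            seL4_Word size = (seL4_Word)1 << d->sizeBits;
            if (d->isDevice &&
                d->paddr <= SHARED_MEM_PADDR &&
                SHARED_MEM_PADDR < d->paddr + size) {
                return bi->untyped.start + i;   /* cap to the device untyped */
            }
        }
        return seL4_CapNull;
    }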
Thanks Indan, that sounds like a real plan for me to test tomorrow :)

And if I'm not mistaken, in each kernel instance the function "seL4_Untyped_Retype" should now be called twice (once on each core), pointing to the same "free memory" declared previously, so it can be used as shared memory by both harts.

Regards,
David.
Hello David, On 2024-06-05 15:40, David Martin wrote:
> that sounds like a real plan for me to test tomorrow :) and if I'm not mistaken, in each kernel instance the function "seL4_Untyped_Retype" should now be called twice (once on each core), pointing to the same "free memory" declared previously, so it can be used as shared memory by both harts.
Indeed. The multikernel setup is more cumbersome to work with because of things like this, compared to the SMP kernel. You can even do multithreading across the harts, if you're careful enough when launching things. One thing that is missing is a cross-core task wakeup mechanism.

Greetings,

Indan
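A rough sketch of the retype-and-map step discussed above, run identically by each instance's root task. The capability slots, the destination virtual address, and the assumption that an intermediate page table already covers SHARED_VADDR are placeholders, not taken from the thread:

    /* Sketch: each root task retypes the device untyped covering the shared
     * region into a 4K frame and maps it into its own VSpace.  Capability
     * slots and the destination vaddr are made-up placeholders. */
    #include <sel4/sel4.h>

    #define SHARED_VADDR 0x10000000UL   /* assumption: free vaddr in the root task */

    int map_shared_frame(seL4_CPtr device_untyped, seL4_CNode root_cnode,
                         seL4_CPtr vspace, seL4_CPtr free_slot)
    {
        /* Retype (part of) the device untyped into one 4K frame.  Device
         * untypeds can only be retyped into frames, which is all we need. */
        int err = seL4_Untyped_Retype(device_untyped, seL4_RISCV_4K_Page, 0,
                                      root_cnode, 0, 0, free_slot, 1);
        if (err != seL4_NoError) {
            return err;
        }

        /* Map the frame read/write into this instance's VSpace.  The other
         * hart's root task does exactly the same with its own copies. */
        return seL4_RISCV_Page_Map(free_slot, vspace, SHARED_VADDR,
                                   seL4_ReadWrite, seL4_RISCV_Default_VMAttributes);
    }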
participants (3):
- Bo Gan
- David Martin
- Indan Zupancic