Why send-only IPC doesn't and shouldn't return a success indicator
Gernot.Heiser wrote:
We have just received this comment on GitHub, which I think merits some explanation:
There are times when IPC needs to happen, but a sending process cannot wait around for another process to be receiving. seL4_NBSend is basically the system call for this, but it never provides any feedback on whether or not the IPC took place.
...
This gets me to (2): send-only IPC is the wrong way to solve the problem.
The above use case is clearly one where info flow is inherently two-way: you send something and need an acknowledgment of receipt. This means call-type IPC is the right mechanism; it does exactly that. On top of that, it is more efficient than a separate send and receive, and (thanks to reply caps) avoids the need to set up a separate reply channel.
New to seL4 and the list (so sorry for stuffing up the threading). Couldn't you, at the expense of an extra notification send, use a notification where the receiver of the notification then does a non-blocking reply? I didn't see reply being non-blocking described in the manual, so please correct me if I am wrong. This only works if one party is willing to block, correct? And there is no way to short-circuit the call if an incoming message could be delivered to the calling process before the call completes? Although it would seem to take some amount of gymnastics to balance sending and receiving... so if really neither party is willing to wait, there is always notifications + shared memory?
On 24 Sep 2016, at 19:30 , Matt Rice
wrote: Couldn't you, at the expense of an extra notification send, use a notification where the receiver of the notification then does a non-blocking reply? I didn't see reply being non-blocking described in the manual, so please correct me if I am wrong.
This only works if one party is willing to block, correct? And there is no way to short-circuit the call if an incoming message could be delivered to the calling process before the call completes? Although it would seem to take some amount of gymnastics to balance sending and receiving...
so if really neither party is willing to wait, there is always notifications + shared memory?
Typically it makes sense to block on the receive:
- a client (using Call()) typically needs the server's response to continue, and thus blocking makes sense;
- a server (using ReplyRecv()) does so because it has nothing else to do after serving one request than wait for the next. Any I/O completion would be signalled via a Notification that is bound to the server and delivered by the request endpoint.

Also, the server doesn't care whether the client got the message; if it didn't, then it's the client's fault. Hence using NBReplyRecv() is generally the most appropriate operation for the server.

If you want to be completely asynchronous, then a Notification-based protocol is the way to go. This would be the approach of choice if client and server are running on different cores.

Gernot
A consideration is buffer space and garbage collection. A reliable message needs to sit and sit until it is acknowledged. Unreliable messages can be made more unreliable with increased traffic. https://en.wikipedia.org/wiki/Sorcerer%27s_Apprentice_Syndrome

Networking lets the OS toss the message onto the wire or air. The application or the OS could hold the message until the communication is acknowledged or a timer triggers. The application may know more than the OS, so it is common to allow the application to manage the protocol. The need for speed pushes stuff into the kernel.

IPC and communication protocols are difficult to get correct. My bias is a ring buffer (queue) in shared memory that uses dedicated cache lines so the sender and receiver can see and set messages and hints without conflicts.

Volumes have been written that describe the good, the bad and the ugly bits. http://www.dauniv.ac.in/downloads/EmbsysRevEd_PPTs/Chap_7Lesson12EmsysNewIPC... https://en.wikibooks.org/wiki/Operating_System_Design/Process

Cleanup, exit, error handling, memory leaks, concurrency and security...

-- T o m M i t c h e l l
On 26 Sep 2016, at 10:06 , Tom Mitchell
Thank you.
On 26 Sep 2016, at 10:06 , Tom Mitchell
wrote: A consideration is buffer space and garbage collection. A reliable message needs to sit and sit until it is acknowledged. Unreliable messages can be made more unreliable with increased traffic.
seL4 (by design, and in line with all L4 kernels) doesn't buffer messages, so this is a moot point.
Gernot
_______________________________________________ Devel mailing list Devel@sel4.systems https://sel4.systems/lists/listinfo/devel
-- T o m M i t c h e l l
participants (3)
- Gernot.Heiser@data61.csiro.au
- Matt Rice
- Tom Mitchell