Getting Badge Value of a badged EP capability
Hi seL4 devs and users,

I have a question about implementing a server and client using badged endpoints. Below I have laid out the scenario, the issue, and some questions.

*Scenario*

I have a userspace server that implements the functionality for an object (say, type obj_T1). The server hands out badged endpoints to clients so they can interact with this object. The badge is the virtual address of the object's struct in the server's virtual address space. A client interacts with the object by sending messages on this badged endpoint; the client never learns the badge value. The kernel extracts the badge value when the server receives a message on this endpoint, and from the badge the server knows which object the client is trying to manipulate.

The same PD, which implements this first server, also implements another server (in a separate thread, but the same address space) for another object type (say, type obj_T2). That server follows the same paradigm of handing out badged endpoints and using the badge values to keep track of the objects.

*Issue*

I have a scenario where the client wants to invoke the functionality of obj_T1, and this functionality needs access to obj_T2 as well. The client has badged EP caps to both objects. The client can invoke the badged EP handed out by server-1 for obj_T1 and pass the badged EP of obj_T2 as an extraCap in the IPC message (or vice versa). Since the server has no way to look up the badge of the second cap, it cannot look up the underlying object for obj_T2.

So my questions are:

1. Is there a way for a PD to look up the badge value of a badged EP? I think the answer is no, but I thought I would still ask.
2. Is my idea of using the badge to keep track of the underlying object on the right track? Is there a better way of going about it?
3. As a potential solution, I can extend my server with a new function, say GETID, which returns the badge value of a given object (i.e., EP). Since I know that the server always badges the EP with the virtual address of the struct, it is a trivial call to implement. But I somehow feel this is not a neat idea, and I'm not quite sure why.

Thanks for the help, everyone!

Sid
CS Graduate Student @ UBC
sid-agrawal.ca
Hello Sid,

I am an seL4 enthusiast. I'm not sure what the best practice is for this situation, so hopefully someone more knowledgeable will chime in. I'm certain the underlying issue here has been researched before, as previous capability systems have used the badging mechanism that seL4 uses; for example, in Coyotos this was called a "protected payload".

I'm not sure what "PD" means; I'm assuming from context that you mean "process".
1.
Is there a way for a PD to look up the badge value of badged EP? I think the answer is no, but I thought I would still ask.
By design, there is no way to ask the kernel what the badge value is for a capability. The creator of the endpoint is assured that this badge value can never be seen or modified by anyone who receives a badged cap to the endpoint. In this way, the creator of the endpoint can use the badge to store secrets, such as permission bits or object type.
2.
Is my idea of using the badge to keep track of the underlying object on the right track? Is there a better way of going about doing it?
Yes. This mechanism allows you to use a single endpoint for multiple purposes, so a server only needs to use one. This is convenient since seL4 does not have a "select" mechanism to wait on multiple endpoints. You might consider storing an array index instead of a pointer to get more room for other bits you may want to store here. Alternatively, you might use any unused low bits of the pointer (due to object size and alignment) to store the object type.
3.
As a potential solution, I can extend my server to add a new function, say GETID, which returns the badge value of a given object(i.e., EP). Since I know that the server always badges the EP with the virtual address of the struct, it is a trivial call to implement. But I somehow feel like this is not a neat idea, not quite sure why.
I also don't think this is a good idea. I think this friction may be an indication that a different design might work better. Can you expand on what motivated you to use multiple threads in a single process to wait on multiple endpoints, instead of just using a single endpoint? -JB
I'm not sure what "PD" means, I'm assuming from context that you mean "process".
PD just means Protection Domain. So, in the context of seL4, a CSpace.
You might consider storing an array index instead of a pointer to get more room for other bits you may want to store here. Alternatively, you might use any unused low bits of the pointer (due to object size and alignment) to store the object type.

Agreed.
Can you expand on what motivated you to use multiple threads in a single process to wait on multiple endpoints, instead of just using a single endpoint?
In my setup, the root task implements multiple servers (memory allocator, vspace manager, CPU allocator). Each server runs in a separate thread, so all the servers share the address space.

Each server thread (which implements only one server) waits on only one endpoint. Multiple clients have been handed separate badged copies of this endpoint during some previous handshake. I wanted to keep one EP per service for two reasons:

- Future-proofing: at some point, I might move servers to separate processes.
- To restrict clients. For instance, if I want a client to talk only to the CPU server and not the others, it has only the CPU server EP. I realize that this can still be multiplexed with one EP on the server side, with the server type encoded in the badge too, so this in itself is not a great reason.
_______________________________________________
Devel mailing list -- devel@sel4.systems
To unsubscribe send an email to devel-leave@sel4.systems
Why did you decide for all these servers to share the same address space?

If your intent is for the servers to be independent and factorable into separate processes, but you have two servers that need to know about each other's internal state, should they be merged into a single server?

- JB
Hi Jimmy,
You are right to point out that my intent to refactor servers into independent processes is at odds with them directly accessing each other's data. I have not thought this through yet.

Additionally, I think I finally understand how unwrapping works, and this dovetails nicely with why you might have been encouraging me to use the same endpoint for all the servers (i.e., merge the servers). If I merge them into a single server, I can hand out badged EPs of the same original endpoint to all the server's clients. Unwrapping will then reveal the badge of any badged EP cap sent in a message, since they are all badged versions of the EP via which the message arrives. If I still want to encode type information (memory allocator or vspace manager), I can encode that into the badge instead of using a different unbadged endpoint for each type.
-Sid
On Thu, Apr 28, 2022 at 11:26 AM Jimmy Brush via Devel wrote:

If your intent is for the servers to be independent and factorable into separate processes, but you have two servers that need to know about the internal state of each other, should they be merged into a single server?
Hello Sid,

On 2022-04-30 20:23, Sid Agrawal wrote:
I can hand out badged EP of the same original endpoint to all the server's clients. Now unwrapping will reveal the badge of any badged EP cap since they are badged versions of the EP via which the message arrives.
How is this different than the sender information that seL4_Recv() provides? Quoting from the manual:

"The sender information is the badge of the endpoint capability that was invoked by the sender"

About unwrapping it says:

"If the n-th capability in the message refers to the endpoint through which the message is sent, the capability is unwrapped: its badge is placed into the n-th position of the receiver's badges array"

For practical purposes, unwrapping only seems useful when more than one badge is needed, or when a cap transfer happened, e.g. because of delegation: A passes a cap to B, B passes it to C, and there it gets unwrapped because it's the same EP, and C wants to know A's badge, not B's.

Greetings,

Indan
Hi Indan,
How is this different than the sender information that seL4_Recv() provides?
IIUC, Sid is describing exactly that functionality. Is it incorrect to refer to that as unwrapping? I've always thought of unwrapping as a general term that describes both scenarios.

- JB
Hello Sid,
Additionally, I think I finally understood how unwrapping works - and this yields quite nicely into why you might have been encouraging me to use the same endpoint for all the servers(i.e., merge the servers). If I merge them into a single server, I can hand out badged EP of the same original endpoint to all the server's clients. Now unwrapping will reveal the badge of any badged EP cap since they are badged versions of the EP via which the message arrives. If I still want to encode type information(memory allocator or vspace manager), I can encode that into the badge instead of using a different un-badged endpoint for each type.
To restate, I would encourage you to design your servers as if they were in different address spaces and cspaces, even if they are not. If you encounter a situation where you can't separate two servers cleanly, that may be an indication of a design problem; for example, you may be trying to separate functionality into different servers that would make more sense living in one server.

IMHO, in general, each server should use exactly one endpoint, and issue badged endpoint caps as you have described in order to multiplex over the same endpoint. I don't recommend putting everything in the same server, though -- I'm just pointing out that there are tradeoffs to consider in how you separate things, and I don't think sharing address spaces or badge secrets is a good solution/workaround to these server design issues.

- JB
participants (3): Indan Zupancic, Jimmy Brush, Sid Agrawal