Hello,

while moving Genode's seL4 support to seL4 version 2, I stumbled over the dynamic allocation within untyped memory regions.

In the old experimental branch that I used previously, the retype-untyped operation allowed me to manually specify an offset where the new kernel object should be placed within the untyped memory region, so I was able to manage the allocation of untyped memory myself. In short, I added each initial untyped memory region to a roottask-local "phys-mem" allocator. Each time I created a kernel object, I asked this allocator for a region of the required size and alignment, determined the untyped-memory capability that the found region belongs to, and used this capability to create the kernel object(s) at the calculated offset within the untyped-memory region. This worked very well.

After switching to version 2, I found that the retype-untyped operation lacks the offset argument (which admittedly was never present in the non-experimental version). Instead of accepting an offset parameter, the kernel maintains a built-in allocator per untyped-memory region (please correct me if my understanding is wrong). The allocator is basically a simple offset value ("FreeIndex") that is increased with each allocation. Since I no longer have the chance to specify the offset explicitly, I have to rely on the allocation policy of the kernel and hit the following problem.

There exists an untyped memory region of size 0x2000. Initially, the FreeIndex is 0. Now, I create the following kernel objects within the untyped-memory region:

1. CNode with totalObjectSize 0x400:
   -> The allocation increases FreeIndex to 0x400.

2. CNode with totalObjectSize 0x800:
   -> The FreeIndex gets aligned to 0x800 (the natural alignment of the to-be-created CNode). This creates a gap between 0x400 and 0x800.
   -> The allocation increases FreeIndex to 0x1000.

3. seL4_IA32_4K with totalObjectSize 0x1000:
   -> The allocation increases FreeIndex to 0x2000.

4. CNode with totalObjectSize 0x400:
   <<seL4 [decodeUntypedInvocation/209 T0xe3fc9900 "rootserver" @2013228]: Untyped Retype: Insufficient memory (1 * 1024 bytes needed, 0 bytes available).>>

Since FreeIndex equals the size of the untyped memory region, the kernel concludes that untypedFreeBytes is 0. Even though there exists a gap of 0x400 within the untyped memory region that satisfies the alignment of the to-be-created kernel object, the kernel won't allow me to use it. In contrast, with the retype-untyped-at-offset operation, such a situation never occurred.

Right now, I am a bit puzzled about how to proceed and have the following questions in the back of my head:

* What is the rationale behind fixing the allocation policy within untyped memory regions in the kernel? Doesn't this design contradict the principle of avoiding policy in the kernel? In my specific case, this policy seemingly makes my life much more difficult. To use the kernel in a deterministic way, I would need to model the kernel's allocation behavior in user land.

* From what I gather from the kernel's source code, the built-in allocator would not allow me to reuse parts of untyped memory regions because 'FreeIndex' is always increasing. E.g., after revoking the CNodes of steps 1 and 2, I still cannot create any new kernel objects within the untyped memory region because FreeIndex remains equal to the size of the untyped memory region. Is this correct?

* Is it possible to give me the retype-untyped-at-offset operation back? ;-) E.g., in the form of a patch? Or alternatively, are there any best practices that I could follow for managing untyped memory? Maybe I'm just approaching the problem from the wrong angle?

Best regards
Norman

--
Dr.-Ing. Norman Feske
Genode Labs

http://www.genode-labs.com · http://genode.org

Genode Labs GmbH · Amtsgericht Dresden · HRB 28424 · Sitz Dresden
Geschäftsführer: Dr.-Ing. Norman Feske, Christian Helmuth
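To make the failure scenario above easy to reproduce on paper, here is a minimal user-side model of the monotonic FreeIndex bookkeeping as described in this thread. It is an illustrative sketch, not the kernel code; it assumes the free index is rounded up to the new object's natural alignment before being advanced.

```c
#include <stdio.h>

typedef unsigned long word_t;

struct untyped_model {
    word_t size;       /* total size of the untyped region in bytes  */
    word_t free_index; /* monotonically increasing allocation offset */
};

/* Returns the offset of the new object, or (word_t)-1 on failure.
 * obj_size is assumed to be a power of two (its natural alignment). */
static word_t model_retype(struct untyped_model *ut, word_t obj_size)
{
    word_t aligned = (ut->free_index + obj_size - 1) & ~(obj_size - 1);
    if (aligned + obj_size > ut->size)
        return (word_t)-1; /* "Untyped Retype: Insufficient memory" */
    ut->free_index = aligned + obj_size;
    return aligned;
}

int main(void)
{
    struct untyped_model ut = { .size = 0x2000, .free_index = 0 };

    printf("1. CNode 0x400  -> offset 0x%lx\n", model_retype(&ut, 0x400));  /* 0x0               */
    printf("2. CNode 0x800  -> offset 0x%lx\n", model_retype(&ut, 0x800));  /* 0x800, gap at 0x400 */
    printf("3. Frame 0x1000 -> offset 0x%lx\n", model_retype(&ut, 0x1000)); /* 0x1000            */

    if (model_retype(&ut, 0x400) == (word_t)-1)                             /* step 4 fails      */
        printf("4. CNode 0x400  -> insufficient memory (FreeIndex == size)\n");
    return 0;
}
```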
Hi Norman!

As I understand it, one way to deal with this kind of problem is to use retyping in combination with a user-level buddy allocator. The idea is to maintain a list of untyped memory areas for each possible object size. In the scenario you've described, we start with one untyped memory object of size 0x2000. This means that there is no immediate way to handle a request for an object of size 0x400. But instead, we can find the next-largest object size that is available---the 0x2000 object---and use retype to break it down into two smaller untyped memory objects of size 0x1000. Of course, this is still not enough, so we repeat the process with two more retype calls, at which point we have split the original untyped memory into two untyped memory objects of size 0x400, one of size 0x800, and one of size 0x1000. Now one of those 0x400 objects can be used (via another call to retype) for the CNode that you wanted, and the remaining objects can be allocated in an appropriate manner to provide storage for the 0x800 CNode, the 0x1000 4K page, and the final 0x400 CNode on each of the subsequent allocator calls in your scenario.

I don't believe there was any support at all for the FreeIndex scheme in early versions of seL4. But at some point, somebody noticed that there was unused space to store a FreeIndex in the capability object, and that this could be used to provide a lightweight mechanism for object allocation in simple programs that don't need a more sophisticated allocator. A simple server that requires only a static configuration of objects, for example, might be able to get by with a capability to just one small untyped memory area. At initialization time, it would perform a sequence of retype calls on that same untyped memory object to carve out the space that it needs for some appropriate collection of TCBs, endpoints, and other objects. Internally, the FreeIndex would be incremented after each retype, but once initialization is complete, the server would be able to run without the need for any further allocation or allocator overheads.

In other words, my *impression* (I'm not an authority here!) is that the "pure" seL4 way to handle problems like this is via repeated halving/retyping of larger objects. But I suspect that support for allocation from a single untyped memory via a monotonically increasing FreeIndex was added as a pragmatic step to simplify the task of constructing small applications.

Hope this helps!

All the best,
Mark
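A sketch of the kind of user-level buddy allocator Mark describes, under some simplifying assumptions: the helpers alloc_two_slots() and root_cnode are hypothetical stand-ins for a CSpace slot allocator and the root CNode, the free lists are fixed-size arrays, and error handling is reduced to returning seL4_CapNull.

```c
#include <sel4/sel4.h>

#define MAX_SIZE_BITS 32   /* largest untyped size we track (2^32 bytes) */
#define POOL_DEPTH    64   /* max free untypeds remembered per size      */

/* One free list of untyped capabilities per object size of 2^size_bits bytes. */
static seL4_CPtr free_untyped[MAX_SIZE_BITS + 1][POOL_DEPTH];
static int       free_count[MAX_SIZE_BITS + 1];

extern seL4_CPtr  alloc_two_slots(void); /* hypothetical: first of two consecutive free slots */
extern seL4_CNode root_cnode;            /* hypothetical: CSpace root of the allocator task   */

/* Seed the free lists with the initial untypeds reported in the boot info. */
static void buddy_add(seL4_CPtr ut, int size_bits)
{
    free_untyped[size_bits][free_count[size_bits]++] = ut;
}

/* Return an untyped cap of exactly 2^size_bits bytes, halving the
 * next-larger free untyped via retype until one of that size exists. */
static seL4_CPtr buddy_alloc(int size_bits)
{
    if (size_bits > MAX_SIZE_BITS)
        return seL4_CapNull;

    if (free_count[size_bits] > 0)
        return free_untyped[size_bits][--free_count[size_bits]];

    /* Split the next-larger untyped into two buddies of half its size. */
    seL4_CPtr parent = buddy_alloc(size_bits + 1);
    if (parent == seL4_CapNull)
        return seL4_CapNull;

    seL4_CPtr slot = alloc_two_slots();
    int err = seL4_Untyped_Retype(parent, seL4_UntypedObject, size_bits,
                                  root_cnode, 0, 0, slot, 2);
    if (err != seL4_NoError)
        return seL4_CapNull;

    buddy_add(slot + 1, size_bits); /* keep one buddy, hand out the other */
    return slot;
}
```

The untyped returned by buddy_alloc() would then be retyped once more into the actual object (CNode, frame, etc.) of the same size.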
Hi Norman,

First an explanation of the kernel branches/versions. The master branch has never had the retype-at-offset functionality; this has always existed solely on the experimental branch. As the master branch is what is verified, this is what we release and are versioning. You could continue to use the experimental branch and retain the use of retype at offset, but you would have to leave behind the nice versioning.

Now for a slightly more detailed exploration of retype strategies.

Mark had things mostly correct here. Originally the kernel required you to retype an entire untyped at once. That is, you could take a single untyped and split it into N objects of equal size. This resulted in the common paradigm of splitting untypeds in half, as this provided an easy way to bookkeep and implement a user-level allocator. But it is also inefficient, as you end up with many intermediate untyped capabilities.

To make efficient allocation practical, untypeds were changed for so-called 'incremental retype'. This is what you see currently in the verified kernel, with the freeIndex. The reason the freeIndex is monotonically increasing is that the kernel has no way to know whether any previous memory is now free or not. The kernel only knows two things:

* Initially none of the memory of the untyped is used, so freeIndex is set to 0.
* After the untyped is revoked, all retyped objects are destroyed, so we know none of the memory is used and freeIndex can be set back to 0.

Any other question of 'is there an object at this address?' cannot be efficiently answered. I say 'efficiently' because it is not impossible; rather, the operation would be potentially very long running and a malicious user could exploit this. In particular, a user could force an O(n) operation, where n is the amount of memory under control of the user.

But being able to retype many objects directly out of a single untyped does not help if you want to potentially free them and allocate other objects. In that case, if you want to create some object of size X, you instead create an untyped of size X and then your object from it. Now if you want to delete that object you can, and you can create a new object of size X (or multiple objects of size < X) using that intermediate untyped.

On the experimental kernel there is a different in-kernel data structure for managing capabilities (specifically a heap instead of a doubly linked list). With it, the same question of 'is there an object at this address?' can be answered in O(log(n)) time, which is considered far more reasonable, and so retype at offset can be offered.

For writing actual user-level allocators, what Mark said is basically what we tend to do: have a buddy allocator, manage pools of different untyped sizes, etc. You can see an example of an untyped manager that we wrote for use on the master branch here: https://github.com/seL4/seL4_libs/blob/master/libsel4allocman/src/utspace/sp... Note that it is part of a larger allocator but should still give some of the logic.

Adrian
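To illustrate the "intermediate untyped" pattern Adrian describes, here is a hedged sketch. The caps root_cnode, pool_untyped, and the two destination slots are assumed to be managed elsewhere; the example creates and later reclaims a 0x1000-byte CNode.

```c
#include <sel4/sel4.h>

extern seL4_CNode root_cnode;   /* hypothetical: CSpace root                          */
extern seL4_CPtr  pool_untyped; /* hypothetical: larger untyped we carve from         */
extern seL4_CPtr  ut_slot;      /* hypothetical: free slot for the intermediate untyped */
extern seL4_CPtr  obj_slot;     /* hypothetical: free slot for the object itself       */

int alloc_cnode_4k(void)
{
    /* 1. Carve a 0x1000-byte untyped (size_bits = 12) out of the pool.
     *    This advances the pool's freeIndex by 0x1000. */
    int err = seL4_Untyped_Retype(pool_untyped, seL4_UntypedObject, 12,
                                  root_cnode, 0, 0, ut_slot, 1);
    if (err != seL4_NoError)
        return err;

    /* 2. Create the actual object from the intermediate untyped: a CNode
     *    with 2^8 slots of 2^4 bytes each, i.e. 0x1000 bytes in total. */
    return seL4_Untyped_Retype(ut_slot, seL4_CapTableObject, 8,
                               root_cnode, 0, 0, obj_slot, 1);
}

int free_cnode_4k(void)
{
    /* Revoking the intermediate untyped destroys the CNode derived from it
     * and resets that untyped's freeIndex to 0, so ut_slot can later be
     * retyped into a different object of up to 0x1000 bytes. */
    return seL4_CNode_Revoke(root_cnode, ut_slot, seL4_WordBits);
}
```

The price of reclaimability is one extra untyped capability (and CSpace slot) per reusable allocation, which is the intermediate-capability overhead mentioned above.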
Thanks Mark and Adrian for your elaborate and helpful replies!

On 05.02.2016 01:46, Adrian Danis wrote:
> First an explanation of the kernel branches/versions. The master branch has never had the retype at offset functionality, this has always been solely on the experimental branch. As the master branch is what is verified, this is what we release and are versioning. You could continue to use the experimental branch, and retain the use of retype at offset, but you will have to leave behind the nice versioning.
I am currently trying to find the right balance between effort and being future-proof. I wonder: will the master version eventually follow the experimental version? If so, I would certainly like to continue using the retype-at-offset mechanism. If not, I would rather sooner than later adapt Genode's allocation scheme to the seL4 way. What would you recommend?
> Now for a slightly more detailed exploration of retype strategies.
>
> Mark had things mostly correct here. Originally the kernel required you to retype an entire untyped at once. That is, you could take a single untyped and split it into N objects of equal size. Thus resulting in the common paradigm of splitting them in half as this provided an easy way to book keep and implement a user level allocator. But this is also inefficient, as you end up with many intermediate untyped capabilities.
This was the reason I hesitated to follow this route initially. I was afraid of the bookkeeping needed for all those capability selectors. Now, after reading Mark's email, I may consider the buddy-allocator approach. But right now, I still cannot clearly see where to store / to whom to account the metadata of this allocator, so I need some further thinking. Thanks Mark for this direction!

Another (rather extreme) approach would be to split the initial untyped memory regions into (let's say) 4K untyped memory regions where each 4K region would be named after its corresponding physical page number. This would be like a giant slab allocator with no further metadata needed. The CNode containing the capability selectors for the 4K untyped memory regions could be preallocated depending on the amount of physical memory. For 4 GiB of physical memory, I would need 16 MiB for the CNode. This way I would be able to rededicate untyped memory at the granularity of 4K and keep Genode's existing memory management. The obvious downside would be that I could never create kernel objects larger than 4K, i.e., super pages. But the latter raises a bunch of other problems anyway.
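As a quick sanity check of the 16 MiB figure, assuming the 32-bit seL4 capability-slot size of 2^4 = 16 bytes (an assumption, illustrative only):

```c
#include <stdio.h>

int main(void)
{
    unsigned long long phys_mem  = 4ULL << 30; /* 4 GiB of physical memory        */
    unsigned long long page_size = 4ULL << 10; /* one 4 KiB untyped per phys page */
    unsigned long long slot_size = 1ULL << 4;  /* 16 bytes per cap slot (32-bit)  */

    unsigned long long num_caps   = phys_mem / page_size; /* 2^20 = 1048576 caps  */
    unsigned long long cnode_size = num_caps * slot_size; /* 2^24 bytes           */

    printf("%llu caps, CNode size %llu MiB\n", num_caps, cnode_size >> 20); /* 16 MiB */
    return 0;
}
```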
> To make efficient allocation practical untypeds were changed for so called 'incremental retype'. This is what you see currently in the verified kernel, with the freeIndex. The reason the freeIndex is monotonic increasing is because the kernel has no way to know if any previous memory is now free or not. The kernel only knows two things:
> * Initially none of the memory of the untyped is used, so freeIndex is set to 0
> * After the untyped is revoked all retyped objects are destroyed, so we know none of the memory is used, so freeIndex can be set to 0
> But any other question of 'is there an object at this address?' cannot be efficiently answered. I say 'efficiently' because it is not impossible, rather the operation would be potentially very long running and a malicious user could exploit this. In particular a user could force an O(n) operation, where n is the amount of memory under control of the user.
Thank you for providing this rationale.
> But being able to retype many objects directly out of a single untyped does not help if you want to potentially free them and allocate other objects. For example, if you want to create some object of size X, you instead need to create an untyped of size X and then your object. Now if you want to delete that object you can, and create a new object of size X (or multiple objects of size < X) using that intermediate untyped.
This is actually my use case.
> On the experimental kernel there is a different in-kernel data structure for managing capabilities (specifically a heap instead of a doubly linked list). Now the same question of 'is there an object at this address?' can be answered in O(log(n)) time, which is considered far more reasonable, and so retype at offset can be written.
From my perspective, this represents a significant difference between both branches. Each approach calls for a different design of the user land. With Genode, I'd like to implement only one way. Which way would be the best to go, master or experimental?
> For writing actual user level allocators what Mark said is basically what we tend to do. Have a buddy allocator, manage different untyped pool sizes etc. You can see examples of an untyped manager that we wrote for using on the master branch here https://github.com/seL4/seL4_libs/blob/master/libsel4allocman/src/utspace/sp..., note that it is part of a larger allocator but should still give some of the logic.
Thank you very much for the pointer and for covering my questions so thoroughly.

Cheers
Norman

--
Dr.-Ing. Norman Feske
Genode Labs

http://www.genode-labs.com · http://genode.org

Genode Labs GmbH · Amtsgericht Dresden · HRB 28424 · Sitz Dresden
Geschäftsführer: Dr.-Ing. Norman Feske, Christian Helmuth
Hi Norman,

My suggestion at this point in time would be to implement an allocation strategy that supports plain old incremental retype, remembering that any strategy that works for incremental retype can trivially be made to work with retype at offset.

Currently there are no plans to bring retype at offset to the verified kernel. By my understanding (note that I'm not an actual verification person), the two main reasons for this are:

* The heap used is both a more complicated data structure and has very complicated invariants due to the nature of the searches performed on it.
* Changing the implementation of the cap derivation tree would invalidate large parts of the proof, which would simply require large amounts of verification people power to fix.

Whilst we will always have an experimental branch, it is never clear which experiments we will continue and which we will abandon for various reasons. This uncertainty is of course mostly applicable to items that do not have a roadmap for verification and life on the master branch. Whilst the searchable CDT (and the associated retype at offset) is clearly useful, unfortunately it is not on the roadmap (https://sel4.systems/Info/Roadmap/) and so its continued existence is forever uncertain.

Adrian
Hi Adrian,

On 06.02.2016 00:40, Adrian Danis wrote:
> My suggestion at this point in time would be to implement an allocation strategy that supports plain old incremental retype. Remembering that any strategy that works for incremental retype can be trivially made to work with retype at offset.
I will go for it then. ;-)
> Currently there are no plans to bring retype at offset to the verified kernel. By my understanding (note that I'm not an actual verification person) the main two reasons for this are:
> * The heap used is both a more complicated data structure, and has very complicated invariants due to the nature of the searches performed on it
> * Changing the implementation of the cap derivation tree will invalidate large parts of the proof, which will simply require large amounts of verification people power to fix
>
> Whilst we will always have an experimental branch, it is never clear what experiments we will continue and which we will abandon for various reasons. This uncertainty is of course mostly applicable to items that do not have a roadmap for verification and life on the master branch. Whilst the searchable CDT (and the associated retype at offset) is clearly useful, unfortunately it is not on the roadmap (https://sel4.systems/Info/Roadmap/) and so its continued existence is forever uncertain.
These insights are very valuable. Thank you for sharing them.

Cheers
Norman

--
Dr.-Ing. Norman Feske
Genode Labs

http://www.genode-labs.com · http://genode.org

Genode Labs GmbH · Amtsgericht Dresden · HRB 28424 · Sitz Dresden
Geschäftsführer: Dr.-Ing. Norman Feske, Christian Helmuth
participants (3)
- Adrian Danis
- Mark Jones
- Norman Feske