seL4 kernel memory footprint
I'm trying to figure out the physical memory used by the kernel. This is for riscv if it matters. -Sam
On 9 Jul 2022, at 05:15, Sam Leffler via Devel
I'm trying to figure out the physical memory used by the kernel. This is for riscv if it matters.
Hi Sam,

The RAM usage of the kernel is dominated by application specifics. The kernel, at boot time, allocates enough RAM to boot up and start the init task. That's its own text segment, its own page tables, a small amount of global data, and a kernel stack per core, plus whatever's needed for init. (No heap!) It's been a while since we've done an audit, but it should fit into 64KiB.

This is normally dominated by what you need to run an actual system: for each user process you'll need
- page tables
- TCB(s)
- CSpace
- other objects, such as endpoints and Notifications

These are provided to the kernel by re-typing Untypeds, and as such are the responsibility of usermode.

Gernot
On Fri, Jul 8, 2022 at 6:38 PM Gernot Heiser
On 9 Jul 2022, at 05:15, Sam Leffler via Devel
wrote: I'm trying to figure out the physical memory used by the kernel. This is for riscv if it matters.
Hi Sam,
The RAM usage of the kernel is dominated by application specifics.
I don't think so, but perhaps I misunderstand what "application specifics" means. The memory footprint of the _kernel_ (not anything running in user space) appears to be fixed at the point where it launches the rootserver (+idle thread). Or does the kernel dynamically allocate memory _after_ starting the rootserver?
The kernel, at boot time, allocates enough RAM to boot up and start the init task. That's its own text segment, its own page tables, a small amount of global data, and a kernel stack per core, plus whatever's needed for init. (No heap!) It's been a while since we've done an audit, but it should fit into 64KiB.
Is 64KB what you expect for a release build up to the point where the rootserver setup happens? What target cpu + config? Does this include the idle thread? This is substantially less than my release build for riscv w/ MCS, but I enable CONFIG_PRINTING and we have a few drivers (timer, uart), though they should be small. IIRC the ELF headers for our release kernel.elf have a load segment ~110KB.
This is normally dominated by what you need to run an actual system: for each user process you'll need
- page tables
- TCB(s)
- CSpace
- other objects, such as endpoints and Notifications
These are provided to the kernel by re-typing Untypeds, and as such the responsibility of usermode.
Sure. I believe I'm reclaiming all rootserver resources so once it terminates I expect to see all of physical memory accounted for by the kernel, idle thread, rootserver-allocated resources (to construct CAmkES components), and unallocated memory held by untyped objects. But I don't know for sure what the kernel portion is and the total "reserved" memory seems high. Hence my ask. I don't suppose there's some kernel variable I can print that will help answer my q?
Gernot

_______________________________________________
Devel mailing list -- devel@sel4.systems
To unsubscribe send an email to devel-leave@sel4.systems
On Sun, Jul 10, 2022 at 3:04 PM Sam Leffler
On Fri, Jul 8, 2022 at 6:38 PM Gernot Heiser
wrote: On 9 Jul 2022, at 05:15, Sam Leffler via Devel
wrote: I'm trying to figure out the physical memory used by the kernel. This is for riscv if it matters.
Hi Sam,
The RAM usage of the kernel is dominated by application specifics.
I don't think so, but perhaps I misunderstand what "application specifics" means. The memory footprint of the _kernel_ (not anything running in user space) appears to be fixed at the point where it launches the rootserver (+idle thread). Or does the kernel dynamically allocate memory _after_ starting the rootserver?
The kernel, at boot time, allocates enough RAM to boot up and start the init task. That's its own text segment, its own page tables, a small amount of global data, and a kernel stack per core, plus whatever's needed for init. (No heap!) It's been a while since we've done an audit, but it should fit into 64KiB.
Is 64KB what you expect for a release build up to the point where the rootserver setup happens? What target cpu + config? Does this include the idle thread? This is substantially less than my release build for riscv w/ MCS, but I enable CONFIG_PRINTING and we have a few drivers (timer, uart), though they should be small. IIRC the ELF headers for our release kernel.elf have a load segment ~110KB.
The ELF load segment number is misleading because it includes the rootserver image, which in turn includes the CAmkES components. So 110KB might make sense relative to 64KB for the kernel proper. Will check.
This is normally dominated by what you need to run an actual system: for each user process you'll need
- page tables
- TCB(s)
- CSpace
- other objects, such as endpoints and Notifications
These are provided to the kernel by re-typing Untypeds, and as such the responsibility of usermode.
Sure. I believe I'm reclaiming all rootserver resources so once it terminates I expect to see all of physical memory accounted for by the kernel, idle thread, rootserver-allocated resources (to construct CAmkES components), and unallocated memory held by untyped objects. But I don't know for sure what the kernel portion is and the total "reserved" memory seems high. Hence my ask.
I don't suppose there's some kernel variable I can print that will help answer my q?
Gernot
I'm trying to figure out the physical memory used by the kernel. This is for riscv if it matters.
Starting from the ram region defined by the device tree, I recall the following happens for RISC-V:

- Some memory is subtracted from the start for the SBI program, which I think is 2MiB for both RV32 and RV64. This might be where most of your memory is being lost to. There's an unmerged PR for making this value configurable in the platform device tree: https://github.com/seL4/seL4/pull/759
- Memory used by the kernel is for code + data and should be just what is reported by the ELF segment headers.
- During boot the kernel does allocate memory dynamically, but this should be only for the rootserver and should entirely happen in the function create_rootserver_objects().
- The remaining memory is handed to the rootserver as untypeds. The kernel also frees code and data that was only required during startup and releases that as untypeds too. These untypeds can be inspected in the boot info.

What I've done in the past is to print out the untyped list and then audit how it corresponds to the memory usage rules listed above.
I don't suppose there's some kernel variable I can print that will help answer my q?
The untypeds handed to user level are the kernel's record of memory available for allocation. During boot, calculate_rootserver_size() can be called to return the amount of memory the kernel dynamically allocates for the rootserver; while booting, the range of memory used is kept in the global region_t rootserver_mem, but it is freed once the kernel jumps to user level. The kernel should already print the ranges of memory it is given to use at the start of boot.

Kent.
On Sun, Jul 10, 2022 at 5:33 PM Kent Mcleod
I'm trying to figure out the physical memory used by the kernel.
This is
for riscv if it matters.
Starting from the ram region defined by the device tree, I recall the following happens for RISC-V:
- Some memory is subtracted from the start for the SBI program, which I think is 2MiB for both RV32 and RV64. This might be where most of your memory is being lost to. There's an unmerged PR for making this value configurable in the platform device tree: https://github.com/seL4/seL4/pull/759
Aha, thank you (that's very hidden)! I do see the 2MB reserve in the generated DTS (gen_headers/plat/machine/devices_gen.h). And we don't have/use SBI. Any reason why this PR has yet to be merged?
- Memory used by the kernel is for code + data and should be just what is reported by the ELF segment headers.
- During boot the kernel does allocate memory dynamically, but this should be only for the rootserver and should entirely happen in the function create_rootserver_objects().
- The remaining memory is handed to the rootserver as untypeds. The kernel also frees code and data that was only required during startup and releases that as untypeds too. These untypeds can be inspected in the boot info.
What I've done in the past is to print out the untyped list and then audit how it corresponds to the memory usage rules listed above.
I don't suppose there's some kernel variable I can print that will help answer my q?
The untypeds handed to user level are the kernel's record of memory available for allocation. During boot, calculate_rootserver_size() can be called to return the amount of memory the kernel dynamically allocates for the rootserver; while booting, the range of memory used is kept in the global region_t rootserver_mem, but it is freed once the kernel jumps to user level. The kernel should already print the ranges of memory it is given to use at the start of boot.
Kent.
Sam
Aha, thank you (that's very hidden)! I do see the 2MB reserve in the generated DTS (gen_headers/plat/machine/devices_gen.h). And we don't have/use SBI. Any reason why this PR has yet to be merged?
Basically lack of time, and nobody was really pushing for it. It's still on my list of pending PRs, but it dropped a bit in priority as the current solution also works. It seems all RISC-V boards have the same physical memory layout, so the hard-coded values are not too painful. What's left is maybe a bit more testing with the parameters and another round of comparing with what is done on ARM, to see if the behavior is similar or if there are pitfalls we forgot. If you can confirm this PR is working for you, that feedback is really appreciated.

Axel
On Mon, Jul 11, 2022 at 1:24 PM Axel Heider
Sam
Aha, thank you (that's very hidden)! I do see the 2MB reserve in the generated DTS (gen_headers/plat/machine/devices_gen.h). And we don't have/use SBI. Any reason why this PR has yet to be merged?
Basically lack of time, and nobody was really pushing for it. It's still on my list of pending PRs, but it dropped a bit in priority as the current solution also works. It seems all RISC-V boards have the same physical memory layout, so the hard-coded values are not too painful. What's left is maybe a bit more testing with the parameters and another round of comparing with what is done on ARM, to see if the behavior is similar or if there are pitfalls we forgot. If you can confirm this PR is working for you, that feedback is really appreciated.
Mixed answer. The PR seems to DTRT but I had to do some mangling for it to apply on my old tree. As to whether this resolved my issue, the answer is no. Our target platform uses opentitan to boot and that was already assuming no memory was reserved for SBI. So the end result was that I got back a fraction of the 2MB, not all of it as I hoped.

-Sam
On Wed, Jul 13, 2022 at 7:36 AM Sam Leffler via Devel
Mixed answer. The PR seems to DTRT but I had to do some mangling for it to apply on my old tree.
As to whether this resolved my issue, the answer is no. Our target platform uses opentitan to boot and that was already assuming no memory was reserved for SBI. So the end result was that I got back a fraction of the 2MB, not all of it as I hoped.
With the change applied, is the new load address of the kernel.elf at the start of the provided memory region rather than a 2MiB offset? What is the size of the kernel image footprint that you're observing?
-Sam
On Tue, Jul 12, 2022 at 3:40 PM Kent Mcleod
With the change applied, is the new load address of the kernel.elf at the start of the provided memory region rather than a 2MiB offset?
It's unchanged because opentitan uses the kernel.elf headers to specify where to load the kernel (prior to applying the PR the SBI reservation was just ignored) -- which took me a while to understand :(.
What is the size of the kernel image footprint that you're observing?
Looks like ~130KB, likely because we have CONFIG_PRINTING + some drivers. I haven't looked too closely at that # because the immediate goal is to reclaim all the rootserver resources which will allow us to meet our target platform constraints--and that looks doable. We likely have lots of places we can trim fat (e.g. kernel config, user space optimizations, removing devel facilities).
-Sam
Oh, and thanks again Kent, as always you've been very helpful!
On Tue, Jul 12, 2022 at 4:02 PM Sam Leffler
It's unchanged because opentitan uses the kernel.elf headers to specify where to load the kernel (prior to applying the PR the SBI reservation was just ignored) -- which took me a while to understand :(.
What is the size of the kernel image footprint that you're observing?
Looks like ~130KB, likely because we have CONFIG_PRINTING + some drivers. I haven't looked too closely at that # because the immediate goal is to reclaim all the rootserver resources which will allow us to meet our target platform constraints--and that looks doable. We likely have lots of places we can trim fat (e.g. kernel config, user space optimizations, removing devel facilities).
Are you able to send a dump of the kernel.elf's section and program headers? When I do a release build of the riscv32 kernel I get a kernel footprint of 56KiB. (One interesting thing of note is the .boot.bss section is in the wrong place and so won't be recovered when switching to user level):

Program Header:
    LOAD off  0x00001000 vaddr 0xff800000 paddr 0x84000000 align 2**12
         filesz 0x000098e8 memsz 0x0000e000 flags rwx

Sections:
Idx Name               Size      VMA       LMA       File off  Algn
  0 .boot.text         00001558  ff800000  84000000  00001000  2**1
                       CONTENTS, ALLOC, LOAD, READONLY, CODE
  1 .boot.rodata       00000008  ff801558  84001558  00002558  2**2
                       CONTENTS, ALLOC, LOAD, READONLY, DATA
  2 .text              00006b6a  ff802000  84002000  00003000  2**6
                       CONTENTS, ALLOC, LOAD, READONLY, CODE
  3 .small             000001f4  ff808b80  84008b80  00009b80  2**6
                       CONTENTS, ALLOC, LOAD, DATA
  4 .rodata            000006b0  ff808d74  84008d74  00009d74  2**2
                       CONTENTS, ALLOC, LOAD, READONLY, DATA
  5 ._idle_thread      00000200  ff809500  84009500  0000a500  2**8
                       CONTENTS, ALLOC, LOAD, DATA
  6 .boot.bss          000001e8  ff809700  84009700  0000a700  2**2
                       CONTENTS, ALLOC, LOAD, DATA
  7 .bss               00004718  ff8098e8  840098e8  0000a8e8  2**12
                       ALLOC
  8 .riscv.attributes  0000002b  00000000  00000000  0000a8e8  2**0
                       CONTENTS, READONLY
  9 .comment           0000000e  00000000  00000000  0000a913  2**0
                       CONTENTS, READONLY
140KB was high (not sure where I got it as a debug build is 160KB):
$ readelf -lS kernel.elf
There are 15 section headers, starting at offset 0x1a158:
Section Headers:
  [Nr] Name               Type             Addr     Off    Size   ES Flg Lk Inf Al
  [ 0]                    NULL             00000000 000000 000000 00      0   0  0
  [ 1] .boot.text         PROGBITS         ff800000 001000 001dfe 00  AX  0   0  2
  [ 2] .boot.rodata       PROGBITS         ff801dfe 002dfe 0000ca 00   A  0   0  2
  [ 3] .text              PROGBITS         ff802000 003000 00d19a 00  AX  0   0 64
  [ 4] .sdata             PROGBITS         ff80f19c 01019c 000004 00  WA  0   0  4
  [ 5] .srodata           PROGBITS         ff80f1a0 0101a0 000010 00   A  0   0  4
  [ 6] .rodata            PROGBITS         ff80f1b0 0101b0 0063d6 00   A  0   0  4
  [ 7] ._idle_thread      PROGBITS         ff815600 016600 000200 00  WA  0   0 256
  [ 8] .boot.bss          PROGBITS         ff815800 016800 00025c 00  WA  0   0 64
  [ 9] .bss               NOBITS           ff815a5c 016a5c 0075ac 00  WA  0   0  4
  [10] .riscv.attributes  RISCV_ATTRIBUTE  00000000 016a5c 00002f 00      0   0  1
  [11] .comment           PROGBITS         00000000 016a8b 000012 01  MS  0   0  1
  [12] .symtab            SYMTAB           00000000 016aa0 001b90 10     13  95  4
  [13] .strtab            STRTAB           00000000 018630 001ab2 00      0   0  1
  [14] .shstrtab          STRTAB           00000000 01a0e2 000076 00      0   0  1
Key to Flags:
W (write), A (alloc), X (execute), M (merge), S (strings), I (info),
L (link order), O (extra OS processing required), G (group), T (TLS),
C (compressed), x (unknown), o (OS specific), E (exclude),
D (mbind), p (processor specific)
Elf file type is EXEC (Executable file)
Entry point 0xff800000
There is 1 program header, starting at offset 52
Program Headers:
Type Offset VirtAddr PhysAddr FileSiz MemSiz Flg Align
LOAD 0x001000 0xff800000 0x28000000 0x15a5c 0x1d008 RWE 0x1000
Section to Segment mapping:
Segment Sections...
   00     .boot.text .boot.rodata .text .sdata .srodata .rodata ._idle_thread .boot.bss .bss
So a release build is 116KB at start and 108KB after boot completes (I wasn't considering the reclaiming of .boot sections). I built an image w/o CONFIG_PRINTING and got 84KB/76KB. As I noted before, we have multiple kernel changes so this is likely in line with your 56KB number. Regardless, these numbers are small compared to the rootserver. But thanks for making me look :)
Why isn't .boot.bss reclaimed? Can you point me in the direction of the .boot section reclaiming? (didn't immediately see it)
On Thu, Jul 14, 2022 at 2:44 AM Sam Leffler
Why isn't .boot.bss reclaimed? Can you point me in the direction of the .boot section reclaiming? (didn't immediately see it)
I think it was just an oversight in the linker script, as the section is not explicitly listed in there. Likely because the .boot.bss symbols are newer than the others. I added a change to include it: https://github.com/seL4/seL4/pull/882
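[Editor's note: a hypothetical linker-script fragment, for illustration only (this is not the actual diff from the PR): if the boot output section only collects .boot.text/.boot.rodata/.boot.data, then .boot.bss input sections fall through to the generic rules and land outside the reclaimed boot region. Listing them explicitly keeps them inside it:]

```
.boot . : {
    *(.boot.text)
    *(.boot.rodata)
    *(.boot.data)
    *(.boot.bss)   /* previously missing, so it was never reclaimed */
}
```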
participants (4)
- Axel Heider
- Gernot Heiser
- Kent Mcleod
- Sam Leffler