On 2 Dec 2021, at 23:23, Jack Chen via Devel wrote:
Quick questions regarding the return value of getKernelWcetUs():
1. For ARM the value is assigned during the build process from the config.cmake files, where the KERNEL_WCET values are all 10us (except for TK1, where it's 100us; typo?). For x86 the function returns 10 directly, and for RISC-V a comment suggests the value was copied from x86 in the hope that it's an overestimate. Where does this "10us" come from? Is it based on the static estimate done previously on Armv6 (imx31)? The AOS lecture mentioned a "378us" kernel WCET estimate and a 99.5us observed WCET for Armv6.
All of the current WCET config values are placeholders, because none of the current boards
has undergone a real WCET analysis (the one board that did was Armv6). The 100 for TK1
is not a typo AFAIR, but I don't remember the reason it is higher.
The value is configurable because it depends on how you set up the system. That is, if you
make certain operations impossible, which is entirely realistic for a hard real-time
system, you will get a much lower WCET than in a fully general, fully dynamic system.
2. Is this 10us WCET value the one used as padding in
the temporal partitioning routine (the "worst-case flushing time")?
I'll have to leave that to someone who knows more about how the computation there works.