On Mon, 02 Jan 2017 21:40:35 -0500, Corey Richardson wrote:
On 01/02/2017 06:30 PM, Adrian.Danis@data61.csiro.au wrote:
<snip>
It is for similar reasons that the RT kernel moves away from tick-based timing to tickless scheduling, although now precision does come into play. Clearly you now have a trade-off between tick rate and deadline precision. Taking 'unnecessary' timer interrupts when operating at higher degrees of precision will again increase the pessimism of any scheduling analysis over what it would be with tickless scheduling. Nothing stops you from setting up a timer and counting ticks at user level, though.
Sure, tickless is great, +1 to tickless. The biggest thing I'm concerned with is accumulated error in the TSC value. I guess that's not really relevant at the timescales these measurements are used for.
One issue is that I think the word "ticks" is being aliased here. Corey's concern was that TSC-increment-units ("ticks") are being converted (lossily) into microseconds before being exposed in the API. Meanwhile, your response is discussing whether rescheduling-interrupt intervals ("ticks") should be exposed in the API.

Userspace on Robigalia will _also_ be using TSC-increment-units as its fundamental concept of time, and will have to find its own conversion factors to wall-clock units (nanoseconds, for the APIs we're considering). As a result, there's a high risk of _skew_ between the two conversions, making it very difficult to use the RT APIs with any confidence in the result. Since the kernel and userspace will independently maintain distinct conversion factors from TSC increments to seconds, they cannot reliably communicate when both _measure_ the TSC but _interact_ using seconds.
<snip>
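
To make the skew concrete, here's a rough Rust sketch of the failure mode. The frequencies, the 10 ppm calibration error, and the naive multiply/divide conversion are made-up numbers for illustration, not what the kernel (or anyone's calibration code) actually does:

    fn tsc_to_ns(tsc: u64, tsc_khz: u64) -> u64 {
        // ns = ticks * 1_000_000 / (ticks per millisecond); widen to u128
        // so the intermediate product can't overflow.
        ((tsc as u128) * 1_000_000 / (tsc_khz as u128)) as u64
    }

    fn main() {
        // Hypothetical calibration results: the kernel and a userspace task
        // each measured the TSC frequency independently and ended up 10 ppm apart.
        let kernel_tsc_khz: u64 = 2_400_000; // 2.400000 GHz
        let user_tsc_khz: u64 = 2_400_024;   // 2.400024 GHz

        // The same raw TSC reading, taken roughly one hour after boot.
        let tsc_now: u64 = 2_400_000_000u64 * 3_600;

        let kernel_ns = tsc_to_ns(tsc_now, kernel_tsc_khz);
        let user_ns = tsc_to_ns(tsc_now, user_tsc_khz);

        println!("kernel thinks: {} ns", kernel_ns);
        println!("user   thinks: {} ns", user_ns);
        println!("skew:          {} ns", kernel_ns.abs_diff(user_ns)); // ~36 ms
    }

Even at only 10 ppm of disagreement, the two notions of "now" drift apart by tens of milliseconds per hour despite both sides reading the identical hardware counter, which is exactly the kind of error that makes a seconds-denominated deadline API hard to trust.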