Edit: found the patchset. It includes benchmarks for several architectures as well: https://lore.kernel.org/linux-arm-kernel/20190621095252.3230...
clock_monotonic has a much larger failure surface for intra- (and inter-) machine timings than clock_monotonic_raw: a misconfigured NTP daemon can cause bad slew in clock_monotonic. For clock_monotonic_raw, the main source of failure should be the oscillator driving your CPU. If that fails, you have bigger problems.
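For illustration, both clocks can be read on Linux via clock_gettime; a minimal Python sketch (CLOCK_MONOTONIC_RAW is Linux-specific, and the drift between the two readings is only visible while NTP is actually slewing the clock):

```python
import time

def elapsed_both(work):
    """Time work() against both clocks. CLOCK_MONOTONIC is subject to
    NTP rate adjustment (slew); CLOCK_MONOTONIC_RAW ticks at the raw
    hardware rate. The two results can drift apart under a bad NTP config."""
    m0 = time.clock_gettime(time.CLOCK_MONOTONIC)
    r0 = time.clock_gettime(time.CLOCK_MONOTONIC_RAW)
    work()
    m1 = time.clock_gettime(time.CLOCK_MONOTONIC)
    r1 = time.clock_gettime(time.CLOCK_MONOTONIC_RAW)
    return m1 - m0, r1 - r0
```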
Is this the shared object that gets mapped to the address space of each process?
How Golang[1] implements monotonic clocks: time.Now() always retrieves both the wall and the monotonic clock, subtraction uses the monotonic reading, and printing uses the wall time. Pretty neat. Details in the proposal by Russ Cox [2].
[1] https://golang.org/pkg/time/#hdr-Monotonic_Clocks [2] https://go.googlesource.com/proposal/+/master/design/12914-m...
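The same two-clock idea can be sketched in Python (a hypothetical Hybrid class, not part of any library — Go bakes this into time.Time itself):

```python
import time

class Hybrid:
    """A timestamp carrying both a wall and a monotonic reading,
    loosely mimicking Go's time.Time (illustrative sketch only)."""
    def __init__(self):
        self.wall = time.time()        # for display
        self.mono = time.monotonic()   # for arithmetic

    def __sub__(self, other):
        # Durations use the monotonic clock, immune to wall-clock jumps.
        return self.mono - other.mono

    def __str__(self):
        # Display uses the wall clock.
        return time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(self.wall))
```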
import time
time.get_clock_info(name)
where name can be one of those: 'monotonic': time.monotonic()
'perf_counter': time.perf_counter()
'process_time': time.process_time()
'thread_time': time.thread_time()
'time': time.time()

A reasonable first approximation of a solution would be to just check every second (or minute, or hour, depending on requirements) whether the current system time is later than the scheduled time of any pending events. Then you probably want to make sure events are marked as completed so they don't fire again if the clock moves backwards.
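That polling approach can be sketched in Python (the event structure and run_scheduler are hypothetical, just for illustration; the poll interval depends on your requirements):

```python
import time

def run_scheduler(events, poll_interval=1.0, now=time.time):
    """events: list of dicts with 'when' (unix timestamp), 'action',
    and a 'done' flag, so an event cannot fire twice even if the
    system clock later jumps backwards past its scheduled time."""
    while any(not e["done"] for e in events):
        current = now()
        for e in events:
            if not e["done"] and current >= e["when"]:
                e["action"]()
                e["done"] = True   # never re-fire on clock moves
        time.sleep(poll_interval)
```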
Trying to predict how many seconds to sleep between now and 3pm on Saturday is a difficult task, but you can probably use a time library to do it if it's important enough... but what happens if the government declares a sudden change to the time zone offset between now and then? The predictive solution would wake up at the wrong time.
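Computing the sleep target with a time library might look like this in Python (a sketch; pass e.g. zoneinfo.ZoneInfo("America/New_York") for a real zone — the offset gets resolved from today's tzdata, which is exactly why a later decree changing the zone breaks a sleep-until-then):

```python
from datetime import datetime, timedelta, timezone

def next_saturday_3pm(now, tz=timezone.utc):
    """Next Saturday 15:00 in tz, strictly after `now` (an aware datetime).
    The offset is baked in from today's tz rules, so a government decree
    changing the zone before Saturday would make a precomputed sleep wrong."""
    local = now.astimezone(tz)
    days_ahead = (5 - local.weekday()) % 7   # Monday=0 ... Saturday=5
    candidate = (local + timedelta(days=days_ahead)).replace(
        hour=15, minute=0, second=0, microsecond=0)
    if candidate <= local:                   # already past 3pm this Saturday
        candidate += timedelta(days=7)
    return candidate
```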
A Duration represents the elapsed time between two instants as an int64 nanosecond count. The representation limits the largest representable duration to approximately 290 years.
Which is a problem here, because we have to calculate the time between now() and Saturday: if the wall time changes, the timer will not fire when expected.

In that case I would not rely on Timer for cron-like scheduling; instead, trigger a Ticker every second, check the _current_ wall time, and decide whether anything needs to be done.
You can read more about Timers, Tickers and Sleeps in this (pretty interesting) article[1]
lfence works as an execution barrier and costs only a few cycles. You can accurately time a region with something like:
lfence
rdtsc
lfence
// timed region
lfence
rdtsc
This will give you accurate timing with some fixed offset (i.e. even with an empty region you get a result on the order of 25-40 cycles), which you can mostly subtract out. Done carefully, you can get results down to a nanosecond or so.
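The subtract-out-the-offset technique applies to any timer, not just rdtsc; a Python sketch with time.perf_counter_ns (far coarser than the fenced rdtsc sequence above, but the calibration idea is the same):

```python
import time

def measurement_offset(samples=1000):
    """Estimate the fixed overhead of an empty timed region by taking
    the minimum over many back-to-back clock reads."""
    best = float("inf")
    for _ in range(samples):
        t0 = time.perf_counter_ns()
        t1 = time.perf_counter_ns()
        best = min(best, t1 - t0)
    return best

def timed_ns(work, offset):
    """Time work() and subtract the calibrated empty-region offset."""
    t0 = time.perf_counter_ns()
    work()
    t1 = time.perf_counter_ns()
    return (t1 - t0) - offset
```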
rdtscp has few advantages over lfence + rdtsc, and arguably some disadvantages: with lfence + rdtsc you can control where the fence goes, while rdtscp's implied fence placement is fixed.
* If software requires RDTSC to be executed only after all previous instructions have executed and all previous loads are globally visible, it can execute LFENCE immediately before RDTSC.
* If software requires RDTSC to be executed only after all previous instructions have executed and all previous loads and stores are globally visible, it can execute the sequence MFENCE;LFENCE immediately before RDTSC.
* If software requires RDTSC to be executed prior to execution of any subsequent instruction (including any memory accesses), it can execute the sequence LFENCE immediately after RDTSC. This instruction was introduced by the Pentium processor.
rdtscp is usually a bit more disruptive, and cpuid is probably 100 or 1000 times more disruptive.
How does KVM affect this?
How does Docker on KVM affect this?
How does a hypervisor affect this?
Add "... for a given network driver, e2e, measured RTT.."