In reality, physical time elapses at a slightly different “rate” depending on where you’re located on the surface of the earth (or off it), in part due to nonuniformities in the earth’s mantle affecting the local gravitational field.
TAI is computed as a weighted average of the 400-or-so contributing atomic clocks, adjusted for their height above sea level (and possibly other factors), and taking into account the signal propagation time between the clocks.
Compared to the time in the reference frame of the sun, for example (which could be taken as a solar-system-wide timekeeping standard), TAI wiggles around that solar time with the earth’s yearly cycle around the sun.
Of course, those variations are much smaller than DUT1 (at least close to earth’s reference frame).
On the other hand, the physical clocks in the laboratory are not immune to these effects, which is why they have to be corrected for a dozen factors, the largest of which is usually gravitational redshift. To appreciate the complexity and precision of this correction, see this NIST paper:
This is not quite correct. The definition of the SI second does not make any requirements or assumptions about the state of motion or location of the cesium atoms. The only requirement is that the device that measures the frequency of the cesium atom hyperfine transition is at rest relative to the atoms themselves and spatially co-located with them. That is what ensures that no relativistic effects are involved in the measurement.
The definitions of TAI and UTC are not simply based on the SI second, but on the SI second as recorded by clocks on the geoid that are at rest relative to the rotating Earth. That extra qualifier is why measurements recorded by clocks not on the geoid have to be adjusted.
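To get a feel for the size of that geoid adjustment: in the weak-field approximation, a clock a height h above the geoid runs fast by a fractional amount of roughly g·h/c², about 1.1 × 10⁻¹⁶ per metre. A quick back-of-the-envelope check (plain Python; the numbers are textbook constants, not from the TAI definition itself):

```python
# Approximate gravitational blueshift of a clock at height h above the geoid.
# df/f ≈ g*h / c^2  (weak-field approximation)
g = 9.81          # m/s^2, surface gravity
c = 299_792_458   # m/s, speed of light

def fractional_shift(height_m: float) -> float:
    """Fractional rate difference vs. a clock on the geoid."""
    return g * height_m / c**2

# A clock 1 km above the geoid runs fast by about 1.1e-13,
# i.e. roughly 9 nanoseconds per day.
shift = fractional_shift(1000.0)
ns_per_day = shift * 86400 * 1e9
```

That per-day drift is far larger than the stability of a good cesium standard, which is why labs well above sea level must apply this correction before their clocks contribute to TAI.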
Because TAI's frame of reference is the geoid, it will experience relativistic effects based on the earth's motion as viewed from any other frame of reference (in OP's example, the reference frame of the sun).
Clocks cannot exist without a frame of reference. Accurate timekeeping necessarily involves tracking spatial information as well.
[0]: https://www.bipm.org/documents/20126/41483022/SI-Brochure-9-... page 16
I think the author is wrong about astronomers, for whom UTC is an unwanted complication. Astronomers use sidereal time, which is unrelated to the apparent motion of the sun. For short intervals, physicists and engineers may as well use atomic time.
The WP article on UTC has a section titled "Rationale", but doesn't explain what problems/compromises UTC was supposed to address. It's worthy of note, however, that of the three bodies involved in the first version of UTC, two were national naval observatories.
Also, UT1 is actually measured using distant celestial objects, because it is easier to measure those at very high precision than the precise transit time of the Sun.
If you'd like greater accuracy, UTC isn't a great starting point, because UTC's leap seconds complicate getting an accurate estimate of UT1. It's hard to tell whether your local idea of UTC has had the latest leap second applied or has had false leap seconds applied (or worse, accidentally received a leap-smeared source, which can't be backed out to convert to UT1).
If you're going to produce a more accurate estimate of UT1 either using interpolations of IERS bulletin B predictions or a model based on the daily USNO ultra-rapid UT1 VLBI measurements the leap seconds don't do anything to help you, but they can mess up your calculations because you need to correctly and consistently back them out.
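The "interpolation of Bulletin B" approach can be sketched as follows. This is only an illustration of the idea: the DUT1 values in the table are made-up placeholders, not real IERS Bulletin B data, and a real implementation would use many more tabulated points and a higher-order interpolant.

```python
# Sketch: estimate UT1 from UTC by linearly interpolating tabulated DUT1
# (UT1 - UTC) values such as those published in IERS Bulletin B.
# The table below contains hypothetical placeholder values, NOT real data.
from datetime import datetime, timezone

DUT1_TABLE = [
    (datetime(2022, 8, 1, tzinfo=timezone.utc), -0.060),  # placeholder
    (datetime(2022, 9, 1, tzinfo=timezone.utc), -0.045),  # placeholder
]

def estimate_dut1(t: datetime) -> float:
    """Linearly interpolate DUT1 between the two tabulated epochs."""
    (t0, d0), (t1, d1) = DUT1_TABLE
    frac = (t - t0).total_seconds() / (t1 - t0).total_seconds()
    return d0 + frac * (d1 - d0)

def ut1_from_utc(t: datetime) -> float:
    """UT1 as a POSIX-style timestamp: UTC timestamp plus DUT1."""
    return t.timestamp() + estimate_dut1(t)
```

Note that nothing here uses leap seconds; they only matter if your input "UTC" clock has silently applied (or smeared) one, in which case the interpolation is fed a wrong UTC and the result is off by up to a second.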
So essentially the practice of leap seconds helps applications that need UT1 to better than an hour or two (otherwise just using TAI with an offset, or TAI plus a static linear correction, is good for thousands of years), but hurts applications that need subsecond UT1 accuracy, hurts anyone that needs consistent or accurate durations, and creates a lot of software bugs, including ones that can show up when there hasn't actually been a leap second.
I would argue that today the number of applications that need UT1 at all is much smaller and less significant than the number that need subsecond consistent durations or times. And of the applications that want UT1, most either don't need leap seconds at all (e.g. TAI, or TAI plus the simplest static linear correction, is enough) or would prefer better than 1 second accuracy, where at best leap seconds don't help and in practice they add a lot of complexity and still sometimes cause failure.
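The "good for thousands of years" claim is easy to sanity-check. Assuming a mean excess length-of-day of about 1 ms per day (an assumption for illustration; the real value wanders and is currently near zero), a static TAI offset drifts away from UT1 at roughly 0.365 s/year:

```python
# How long does a *static* TAI offset stay within "an hour or two" of UT1?
# Assumes a mean excess length-of-day of 1 ms/day -- an illustrative figure,
# not a fitted value; the real excess LOD varies over decades.
excess_lod_s = 0.001     # assumed mean excess length of day, seconds per day
tolerance_s = 3600.0     # "an hour or two"

days = tolerance_s / excess_lod_s
years = days / 365.25
# years ≈ 9,856: under this assumption, a fixed offset keeps you within an
# hour of UT1 for roughly ten millennia.
```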
Besides, when UTC was standardized and adopted in the sixties, software wasn't really much of a concern.
Personally, in my non-expert opinion, we should stop adding or subtracting leap seconds from UTC (that is, UTC would thereafter be at a constant offset from TAI), and leap seconds would instead be added to the timezone DB. That way local time (like Aug 14 2022 xx:53) would still retain the familiarity with 12:00 being noon, at least at some point in each timezone, but calculations with UTC seconds would not need to bother with leap seconds.
It's actually quite tricky to back out leap seconds accurately, because the underlying time will be discontinuous and you can't reliably tell when leap seconds have or haven't been applied... you might even, without knowing it, be fed leap-smeared time, which could make your sidereal tracking wrong and can't be backed out because different smearers do it differently. Because leap seconds are infrequent, you also don't get many live-fire tests.
I'm aware of multiple observatories that just shut down operations across leap seconds. The significant amount of work to handle leap seconds correctly, and to be confident you're correct, isn't justified vs. taking some downtime.
Especially if one only wants to communicate times between civilizations, such as "meet here at 13:00 in exactly three years from now". This is much easier to communicate based on rotation angles of the Earth than on an atomic time scale that would need to be transferred very precisely to be useful at all.
This may seem pedantic, and most of the time it's not important to understand the difference. But when this misunderstanding bites, it will take your leg off at the knee.
During a positive leap second, the POSIX second repeats itself (an alternative mental model could be that the given POSIX-second lasts 2 UTC-seconds). During a negative leap second, the given POSIX-second disappears (or alternatively, the given POSIX-second lasts 0 UTC-seconds).
On my Ubuntu box, the `date +%s` command prints the number of POSIX seconds since 1970-01-01, not the number of UTC seconds since 1970-01-01. To get the number of UTC seconds, we must add the number of intervening leap seconds since 1970-01-01.
I get a headache every time I have to think about this.
When you want to know the number of SI seconds between two dates in either posix or UTC you need an accurate table of past leapseconds. If you want to compute TAI from posix or UTC you need an accurate table of leap seconds that your UTC clock has applied (which may not be the same as the table of leap seconds, since you might have missed the most recent one!).
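A sketch of that table-driven conversion, in Python. The three table entries are real published leap-second events; the lookup logic itself is a simplified illustration that only handles timestamps after mid-2012 and assumes your clock actually applied every leap second listed:

```python
# Table of leap-second insertions: (POSIX timestamp when the new offset
# takes effect, cumulative TAI-UTC offset in seconds after it).
# Only the last few published entries are shown here.
LEAP_TABLE = [
    (1341100800, 35),  # 2012-07-01
    (1435708800, 36),  # 2015-07-01
    (1483228800, 37),  # 2017-01-01
]

def tai_minus_utc(posix_ts: float) -> int:
    """TAI - UTC offset in effect at the given POSIX time (post-2012 only)."""
    offset = 34  # offset in effect before the first table entry
    for when, off in LEAP_TABLE:
        if posix_ts >= when:
            offset = off
    return offset

def si_seconds_between(t0: float, t1: float) -> float:
    """True elapsed SI seconds between two POSIX timestamps, re-adding
    any leap seconds that POSIX time silently absorbed."""
    return (t1 - t0) + (tai_minus_utc(t1) - tai_minus_utc(t0))
```

For example, the two days spanning the 2016-12-31 leap second contain 172,801 SI seconds, one more than the POSIX difference suggests.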
That's also why the leap second correction is not a big deal: it just happens when computers sync. A correction of a few seconds is routine. Happens all the time.
That may be how things work for someone's word processor appliance, but on Linux/etc. NTPD or chrony continually estimate your local clock's drift and compensate for it, changing their polling rate as a function of uncertainty (e.g. from temperature effects). They save the estimated drift rate in a persistent file on disk (chrony can optionally compensate for temperature effects; see tempcomp in the manpage).
On some random supermicro server here, the chrony drift file says that the clock is off by -33.208515 parts per million (not atypical for a non-temp-compensated oscillator), with an uncertainty of 0.031571 in that measurement.
As a result, other than leap seconds there is no discontinuity in your local time, no few-seconds correction like you suggest. Just a continuous clock that is steered smoothly to agree with UTC.
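The steering idea can be sketched in a few lines. This is not chrony's actual algorithm (which also models drift and jitter); it just shows the principle of slewing, where the clock's rate is nudged by a bounded amount each interval instead of the time being stepped:

```python
# Sketch of slewing vs. stepping: rather than jumping the clock by the full
# offset at once, correct it gradually at a bounded rate so time stays
# monotonic and continuous.  The 500 ppm cap is an illustrative figure.
def slew(offset_s: float, max_rate_ppm: float = 500.0, dt: float = 1.0):
    """Yield the remaining offset after each dt-second slewing interval."""
    max_step = max_rate_ppm * 1e-6 * dt   # max correction per interval
    while abs(offset_s) > 1e-9:
        step = max(-max_step, min(max_step, offset_s))
        offset_s -= step
        yield offset_s
```

With these numbers, a 10 ms error is slewed away in about 20 seconds, and no application ever observes time going backwards.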
This is really important when you're dealing with subsecond events that must be synchronized among distributed systems.
Hosts on my network here that have no special handling of their time keeping will have time that agrees with each other within +/- 100 microseconds (and with UTC, for that matter, thanks to the local gps disciplined clock). Without a local GPS clock you'll get extra uncertainty from network path asymmetry, but that uncertainty is limited to the round trip time... which is a lot less than 1s if your network is at all usable interactively. :)
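The round-trip bound comes straight from the classic NTP offset/delay calculation. A minimal sketch with made-up example timestamps (t0/t3 on the client's clock, t1/t2 on the server's):

```python
# Classic NTP clock-offset and round-trip-delay computation.  The offset
# error from path asymmetry is bounded by delay/2, which is why a usable
# interactive network implies sub-second (usually sub-millisecond) sync.
def ntp_offset_delay(t0, t1, t2, t3):
    """t0: client send, t1: server receive, t2: server send, t3: client receive."""
    offset = ((t1 - t0) + (t2 - t3)) / 2   # estimated server-minus-client offset
    delay = (t3 - t0) - (t2 - t1)          # round-trip network delay
    return offset, delay

# Example: 10 ms each way, 1 ms server processing, server clock 5 ms ahead.
offset, delay = ntp_offset_delay(0.000, 0.015, 0.016, 0.021)
```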
A 1 second jump would be gargantuan.
For many applications +/- 1 second is fine. But for others it isn't. Modern computers with network time available should have no problem being accurate to much better than 1 second.