https://stackoverflow.com/questions/6841333/why-is-subtracti...
And Computerphile explains it well in their 10-minute video:
https://www.youtube.com/watch?v=-5wpm-gesOY
---
I long ago learned to never build my own Date/Time or Encryption libraries. There are endless edge cases that can bite you hard.
(Which is also why I'm skeptical when I encounter new such libraries)
I do think the documentation could be a little clearer. The author talks about “time zones” but the library only deals with time zone offsets. (A time zone is something like America/New_York, while a time zone offset is the difference to UTC time, which is -14400 seconds for New York today, but will be -18000 in a few months due to daylight saving time changes.)
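The distinction is easy to see with Python's stdlib zoneinfo: the named zone America/New_York yields different offsets depending on the date (the seconds values match the parent comment).

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

ny = ZoneInfo("America/New_York")

# Same named time zone, two different offsets from UTC:
summer = datetime(2024, 7, 1, tzinfo=ny).utcoffset().total_seconds()
winter = datetime(2024, 12, 1, tzinfo=ny).utcoffset().total_seconds()

print(summer)  # -14400.0 (EDT, UTC-4)
print(winter)  # -18000.0 (EST, UTC-5)
```

A library that only stores an offset has no way to know the winter value from the summer one; that knowledge lives in the tz database behind the zone name.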
Not even that. UTC has leap seconds, which this code doesn’t handle (FTA: “The calendrical calculations always assume a Gregorian calendar, with no leap seconds”)
It copies that from the golang time package, which makes the same claim (https://pkg.go.dev/time)
That makes life a lot simpler for the implementer, but doesn't that mean you can only reliably use these two libraries for computing with durations, not with moments in time (or vice versa)? The moment you start mapping these times to real-world clocks and adding durations to them, you run the risk of small inconsistencies (up to about half a minute, at the moment).
It's US/Eastern. Paul Eggert can call this a "deprecated compatibility time" all he wants, but "Eastern Time Zone" is the official name of the time zone as maintained by the civil time keeping authority.
Once you've decided you're using nanosecond precision, a 64-bit representation can only cover 584 years which ain't enough. You really want at least 2 more bits, so you can represent 2024 years.
But once you're adding on 2 bits, why not just add on 16 or even 32? Then your library can cover the needs of everyone from people calculating how long it takes light to travel 30cm, to people calculating the age of the universe.
That's how I imagine the design decisions went, anyway :)
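The back-of-the-envelope numbers check out (using a 365.25-day year):

```python
# Nanoseconds in an average (Julian) year of 365.25 days.
NS_PER_YEAR = 1_000_000_000 * 86_400 * 365.25

print(2**64 / NS_PER_YEAR)  # ~584.5 years for the full 64-bit range
print(2**66 / NS_PER_YEAR)  # ~2338 years with 2 extra bits
print(2**96 / NS_PER_YEAR)  # ~2.5e12 years with 32 extra bits, far beyond the age of the universe
```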
Of course you can't really provide sub-second accuracy without leap-second support, and what does pre-human-civilisation leap-second support even mean?
SELECT * FROM my_table WHERE duration_s >= 2h
and have the database DWIM, converting "2h" to 7200.0 seconds and comparing like-for-like during the table scan.

Years ago, I wrote a special-purpose SQL database that had this kind of native unit handling, but I've seen nothing before or since, and it seems like a gap in the UI ecosystem.
And it shouldn't be for time. We should have the whole inventory of units --- mass, volume, information, temperature, and so on. Why not? We can also teach the database to reject mathematical nonsense, e.g.
SELECT 2h + 15kg -- type error!
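Outside the database, the same check is just dimensional analysis. A minimal sketch with a hypothetical Quantity type (not a feature of any real DB):

```python
class Quantity:
    def __init__(self, value, unit):
        # unit is a dimension label like "s" or "kg"
        self.value, self.unit = value, unit

    def __add__(self, other):
        # Reject mathematically meaningless additions at "query" time.
        if self.unit != other.unit:
            raise TypeError(f"cannot add {self.unit} and {other.unit}")
        return Quantity(self.value + other.value, self.unit)

def hours(h): return Quantity(h * 3600.0, "s")
def kg(m):    return Quantity(m, "kg")

print((hours(2) + hours(1)).value)  # 10800.0 seconds

try:
    hours(2) + kg(15)  # the SELECT 2h + 15kg case
except TypeError as e:
    print(e)  # cannot add s and kg
```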
Doing so would go a long way towards catching analysis errors early.

But bit pattern is an issue internal to the library. If you can find a bug in the code, certainly point it out and offer a fix if it's in your skillset.
However, at the same time it seems to indicate that it stores data using SQLite's built-in number type, which to my understanding does not support unsigned values. Secondly, the docs mention you can store a range of 290 years at nanosecond precision, which if you calculate it out works out to about 63 bits of information, suggesting a signed implementation.
(uint8)(-3) is 253, for example, and (uint8)5-(uint8)253 = (uint8)8, corresponding to 5 - (-3)
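That wraparound is just mod-256 arithmetic, which is how uint8 behaves; easy to check:

```python
M = 256  # uint8 wraps modulo 2**8

print((-3) % M)       # 253: the uint8 representation of -3
print((5 - 253) % M)  # 8: matches 5 - (-3)
```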
How so?
Here is an example for signed integers.
These represent zero time but have different representations in memory:
Seconds: 2, Nanoseconds: -2,000,000,000 (fits in a 32-bit number), Time: zero seconds

Seconds: -2, Nanoseconds: 2,000,000,000, Time: zero seconds

Here is an example for unsigned. These represent one second but have different representations:

Seconds: 1, Nanoseconds: 0, Time: 1 second

Seconds: 0, Nanoseconds: 1,000,000,000, Time: 1 second
An extensible type system is the worst thing that could happen to database end-user performance. You can no longer shortcut a single thing in query parsing and optimization: you must check the type of every operand, find the correct operator implementation, find the correct index operator family/class, and much more, all by querying the system catalog. Input and output of values also go through functions stored in the system catalog. You can't even answer "select 1" without consulting the system catalog.

There should be a sane set of built-in types plus a struct/JSON-like way of composition. That is what most DBs do, except PostgreSQL. And I strongly believe it is the right way.
Not with static typing.
The problem with PG is that it's not fully statically typed internally. SQLite3 is worse still, naturally. But a statically typed SQL RDBMS should be possible.
I don't do "proper" science, so the value of nanoseconds seems limited to very clever experiments (or some financial trade tracking that is probably even more limited in scope).
But being able to represent historical dates seems more likely to come up?
Thoughts?
Simply reducing the precision to 10ns will provide enough range in practice.
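A quick sanity check on that claim (assuming a signed 64-bit count of 10ns ticks):

```python
TICK_NS = 10  # hypothetical 10-nanosecond tick
NS_PER_YEAR = 1_000_000_000 * 86_400 * 365.25

# Range on either side of the epoch with a signed 64-bit tick count.
print(2**63 * TICK_NS / NS_PER_YEAR)  # ~2923 years
```

That comfortably covers recorded history in both directions, at the cost of one decimal digit of precision.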
Overall, this means if you're representing historical dates I would question whether a seconds-since-epoch timestamp representation is what you want at all, regardless of range and precision.
Edit: yes, you can kinda handle this as part of handling timezones, but still, it's complicated enough that you may want to retain more or different information if you're displaying or letting users enter historical dates.
With nanosecond precision? Just decide what you want to do beforehand; I bet even a datetime doesn't make much sense for that time period, a bare date would suffice. Also, you'll likely need location, calendar system, etc., since real dates were not that standardized back then.
select time_to_nano(time_now());
-- 1722979335431295000

I wonder what does
select time_sub(time_date(2011, 11, 19), time_date(1311, 11, 18));
return?

I can think of a few plausible reasons, but the only one that is really significant is "what epoch"? In the case of UNIX-based systems and systems that try to mimic that behaviour, that is well defined. But as you haven't said what your complaints are, it's hard to provide any counterpoint or justification for why things are as they are.
> time_date(1311, 11, 18)
That isn't defined in the epoch used by most computer systems, so all bets are off. Perhaps it'll return MAX_INT, MIN_INT, 0, something that's plausible but doesn't take into account calendar reforms that have no bearing on the epoch being used; or perhaps it translates into a different epoch and calculates the exact number of seconds, or anything else. One could even argue that there are no valid epochs before GMT/UTC, because it was all just local time before then.
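For comparison, Python's stdlib datetime uses the proleptic Gregorian calendar throughout, so it answers the 1311 question without any notion of the 1582 calendar reform (which may or may not be what a caller wants):

```python
from datetime import date

# Proleptic Gregorian: the Gregorian rules extended backward before 1582.
delta = date(2011, 11, 19) - date(1311, 11, 18)
print(delta.days)  # 255671: 700 proleptic-Gregorian years (170 leap days) plus one day
```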
But of course, you can argue either way whether negative values should be supported. Exactly 24 hours before 1970-01-01 00:00:00 UTC could reasonably be expected to be -86400; on the other hand, "since" strongly implies positive only.
Other people might have entirely different epochs for different reasons, again within the domain it's being used, that's fine as long as everyone agrees.
Or did you have some other objection?