Every production database I've ever seen goes through some version of the same evolutionary lifecycle:
1. "We only need the current state". The database acts as a snapshot of the current world. Updates cause the loss of historical data. The database is like a state machine.
2. "Oops, actually, we need point-in-time reports". The database is hacked with date_from and date_until fields (which introduce interesting anomalies and impose programming overhead on every query written).
2a. "This is a mess, let's clean it up". The database schema is refactored so that the central model is logs of transactions. Point-in-time snapshots are derived at query time. Note that this both replicates the underlying logic of database design (much as network protocol layers are fractal). Note also that it recreates the way basic accounting works, which was the inspiration for database transactions.
3. "Oh crap, the regulations/laws/reporting standards changed". Now you need yet another layer to represent changes in the domain, not just in the data. Your point-in-time reports become even hairier as you must now write different queries depending on the time period being accounted for; and sometimes you must write queries that span both periods and include logic to combine them.
The concept of a temporal database is to make points-in-time a universal, cross-cutting part of everything that happens to the database, whether in the schema or the data. Rich has correctly identified the correspondence with functional immutability: instead of modelling things as having mutable state, you model change as a series of successor values, each of which is itself immutable.
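
This is, roughly, what eventually did get bolted onto SQL: the SQL:2011 standard added system-versioned tables, where the engine keeps every superseded row version and exposes point-in-time reads directly. The sketch below uses SQL Server's spelling of the feature; the table and column names are made up.

    -- System-versioned table (SQL Server syntax for the SQL:2011 feature):
    -- the engine retains superseded row versions automatically.
    CREATE TABLE Account (
        AccountId INT           NOT NULL PRIMARY KEY,
        Balance   DECIMAL(12,2) NOT NULL,
        SysStart  DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
        SysEnd    DATETIME2 GENERATED ALWAYS AS ROW END   NOT NULL,
        PERIOD FOR SYSTEM_TIME (SysStart, SysEnd)
    )
    WITH (SYSTEM_VERSIONING = ON);

    -- Point-in-time read: the whole table as it stood at a past instant.
    SELECT AccountId, Balance
    FROM   Account
    FOR SYSTEM_TIME AS OF '2024-01-01T00:00:00';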
I think it's a good idea. The world would be very different if proper temporal logic had been baked into SQL in the first place.