http://www.cs.arizona.edu/people/rts/tdbbook.pdf
More generally, you can usually build a model that handles (2), the changing state, by generalising your original model. I've done it both of the classical ways: either with validity-period fields (thus pushing complexity into every query), or with audit tables (thus relying on triggers).
But building a changing model is hard, because at the schema level, SQL only supports (1). That is, it provides primitives to change the current state of the model.
So you wind up having to build a meta-model. The deficiencies of SQL (and I'm an RDBMS bigot) mean that we wind up building inner platforms. Or outer platforms -- witness migration toolkits.
Yet support for changing models is essential. No model is ever constant. Most of my work is for a public-sector client whose legal reporting requirements are constantly changing. You cannot throw away old reports which were made under old regulations; you must be able to recreate the world at any point in time. Which requires either a lot of bookkeeping code or taking periodic snapshots.
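The audit-table flavour of that bookkeeping can be sketched the same way: a trigger journals every write, and "recreating the world" becomes a query over the journal. The schema here is made up, and I've used an autoincrementing sequence as a logical clock instead of wall-clock timestamps to keep the example deterministic.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE regulation (id INTEGER PRIMARY KEY, body TEXT);
    CREATE TABLE regulation_audit (
        seq  INTEGER PRIMARY KEY AUTOINCREMENT,  -- logical clock
        id   INTEGER,
        body TEXT
    );

    -- The triggers are the bookkeeping: every write is journalled.
    CREATE TRIGGER reg_ins AFTER INSERT ON regulation BEGIN
        INSERT INTO regulation_audit VALUES (NULL, NEW.id, NEW.body);
    END;
    CREATE TRIGGER reg_upd AFTER UPDATE ON regulation BEGIN
        INSERT INTO regulation_audit VALUES (NULL, NEW.id, NEW.body);
    END;
""")

conn.execute("INSERT INTO regulation VALUES (1, 'old rule')")   # seq 1
conn.execute("UPDATE regulation SET body = 'new rule' WHERE id = 1")  # seq 2

def body_as_of(seq):
    # Reconstructing state as of a point in time: for each id, take the
    # latest audit row at or before that point -- a query the base
    # schema never had to express.
    row = conn.execute(
        "SELECT body FROM regulation_audit "
        "WHERE id = 1 AND seq <= ? ORDER BY seq DESC LIMIT 1",
        (seq,)).fetchone()
    return row[0] if row else None

print(body_as_of(1))  # 'old rule'
print(body_as_of(2))  # 'new rule'
```

Note what's missing: deletes need their own trigger, the audit table itself can't survive a schema migration without more machinery, and nothing here versions the *model*, only the data. That's the meta-model problem again.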
Datomic looks neat. I see that I'm not the only one who made the leap from transactional memory to transactional models.
Edit: I just found notes I wrote in 2009 on database languages while looking for something else. Spooky: http://chester.id.au/2013/08/28/notes-towards-a-set-objectiv...