In the database world, we call it temporal databases, because it introduces the concept of time as a first-class part of the conceptual/logical model.
Every production database I've ever seen goes through some version of the same evolutionary lifecycle:
1. "We only need the current state". The database acts as a snapshot of the current world. Updates cause the loss of historical data. The database is like a state machine.
2. "Oops, actually, we need point-in-time reports". The database is hacked with date_from and date_until fields (which introduce interesting anomalies and impose programming overhead on every query written).
2a. "This is a mess, let's clean it up". The database schema is refactored so that the central model is a log of transactions. Point-in-time snapshots are derived at query time. Note that this replicates the underlying logic of database design itself (much as network protocol layers are fractal). Note also that it recreates the way basic accounting works, which was the inspiration for database transactions.
3. "Oh crap, the regulations/laws/reporting standards changed". Now you need yet another layer to represent changes in the domain, not just in the data. Your point-in-time reports become even hairier as you must now write different queries depending on the time period being accounted for; and sometimes you must write queries that span both periods and include logic to combine them.
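To make step 2a concrete, here is a minimal sketch in Python with sqlite3. The table and column names are made up for illustration: the central model is an append-only log of changes, and the "current" (or as-of) state is derived at query time rather than stored.

```python
import sqlite3

# Illustrative schema: every price change is a new log row; nothing is
# ever updated or deleted, so history is preserved by construction.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE price_log (
        item     TEXT NOT NULL,
        price    INTEGER NOT NULL,
        recorded TEXT NOT NULL   -- ISO-8601 timestamp of the change
    )
""")
db.executemany(
    "INSERT INTO price_log VALUES (?, ?, ?)",
    [("widget", 100, "2023-01-01"),
     ("widget", 120, "2023-06-01"),
     ("widget",  90, "2024-01-01")],
)

def price_as_of(item, when):
    # The point-in-time "snapshot" is just the newest log entry
    # at or before `when` -- derived, not stored.
    row = db.execute(
        "SELECT price FROM price_log WHERE item = ? AND recorded <= ? "
        "ORDER BY recorded DESC LIMIT 1",
        (item, when),
    ).fetchone()
    return row[0] if row else None

print(price_as_of("widget", "2023-07-15"))  # 120
print(price_as_of("widget", "2022-12-31"))  # None -- before any record
```

The point of the design is that "what was the price on date X" stops being a hack bolted onto every table and becomes a one-line derivation from the log.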
The concept of a temporal database is to make points-in-time a universal, cross-cutting part of everything that happens to the database, either in the schema or the data. Rich has correctly identified the correspondence with functional immutability, where instead of modelling things as having mutable state, you model changes as a series of successor models, each of which is by itself immutable.
I think it's a good idea. The world would be very different if proper temporal logic had been baked into SQL in the first place.
Right, the term "functional" is just confusing. Temporal database theory is relatively mature and even part of the SQL standard, which IBM implements in DB2.
Any database that uses an append-only log can potentially be temporal, so long as it retains the log and exposes the information in it (which most don't).
Even PostgreSQL, say, uses a form of short-term temporal querying internally to provide MVCC; too bad they haven't taken it further to support the full temporal standard.
I spent years making all kinds of databases, and I approve your message.
I recently noticed that CouchDB with its notion of map-reduce views can be used as a temporal database - make each change a "document", then make a map-reduce job to roll up the changes to the present state (or "as of" state, if you need it). Nice side effect is that you get free multi-master replication with automatic conflict resolution.
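A sketch of that rollup idea in plain Python (in CouchDB itself the map/reduce functions would be JavaScript views; the document shapes here are invented for illustration): each change is a small document, and a fold over the change log produces the present state, or the state "as of" any timestamp.

```python
# Each change is a "document"; nothing is ever overwritten.
changes = [
    {"id": "acct-1", "field": "balance", "value": 100,   "ts": 1},
    {"id": "acct-1", "field": "owner",   "value": "bob", "ts": 2},
    {"id": "acct-1", "field": "balance", "value": 250,   "ts": 3},
]

def roll_up(docs, as_of=None):
    """Fold change documents into a state dict, optionally 'as of' a timestamp."""
    relevant = [d for d in docs if as_of is None or d["ts"] <= as_of]
    state = {}
    for d in sorted(relevant, key=lambda d: d["ts"]):
        state[d["field"]] = d["value"]  # later changes win, like a reduce step
    return state

print(roll_up(changes))           # {'balance': 250, 'owner': 'bob'}
print(roll_up(changes, as_of=2))  # {'balance': 100, 'owner': 'bob'}
```

The "as of" query falls out for free: filtering the log by timestamp before folding is all a point-in-time snapshot is.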
Do you have any other suggestions for a temporal database?
To my very great shame I am aware that there is an entire literature on the topic and I have barely even skimmed the surface of it.
Snodgrass, a leading researcher on temporal databases, wrote a book about it in the 90s which is available for free from his website[1].
It's good because it was written before the temporal extensions were added to the SQL standard (SQL:2011, I believe). The problem is that almost nobody has implemented those extensions (I believe some versions of DB2 have them), so you are left with doing things by hand. The Snodgrass book goes into amazing detail as to why you'd do such a thing and how to do it.
Temporal databases and immutability are independent concepts. A 'time series database' only solves the simple temporal database problems. Things become really hard when validity comes into play, i.e. when valid time and transaction time diverge (something was entered yesterday but will only be valid from tomorrow).
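That "entered yesterday, valid from tomorrow" case is the bitemporal one: each row carries two independent times, when we recorded it (transaction time) and when it takes effect (valid time). A toy sketch, with all field names invented for illustration:

```python
from datetime import date

# Bitemporal rows: `recorded` is transaction time, `valid_from` is valid time.
rows = [
    # Entered on Jan 1, but only valid from Jan 3:
    {"rate": 0.05, "recorded": date(2024, 1, 1), "valid_from": date(2024, 1, 3)},
    {"rate": 0.04, "recorded": date(2023, 6, 1), "valid_from": date(2023, 6, 1)},
]

def rate_as_of(valid_day, known_by):
    """What rate applies on `valid_day`, using only rows recorded by `known_by`?"""
    known = [r for r in rows
             if r["recorded"] <= known_by and r["valid_from"] <= valid_day]
    return max(known, key=lambda r: r["valid_from"])["rate"] if known else None

# On Jan 2 we already know about the new rate, but it is not yet in effect:
print(rate_as_of(date(2024, 1, 2), known_by=date(2024, 1, 2)))  # 0.04
print(rate_as_of(date(2024, 1, 5), known_by=date(2024, 1, 5)))  # 0.05
```

A plain time-series store collapses these two axes into one timestamp, which is exactly why it can't answer "what did we believe on day X about day Y".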
Temporal databases rely on the immutability of each record (whether by convention or as a system guarantee). Changes are no longer expressed with updates on a given row; instead you add new rows to represent the change.
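A toy sketch of that convention (all names hypothetical): an "update" appends a new immutable version, old rows are never touched, and the current value is derived from the log.

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen=True makes each record version immutable
class Version:
    key: str
    value: str
    version: int

log = []

def put(key, value):
    # An "update" is really an insert of the next version; history is kept.
    n = max((v.version for v in log if v.key == key), default=0) + 1
    log.append(Version(key, value, n))

def current(key):
    versions = [v for v in log if v.key == key]
    return versions[-1].value if versions else None

put("status", "open")
put("status", "closed")
print(current("status"))  # closed
print(len(log))           # 2 -- both versions survive
```

Here immutability is only a convention of the access functions; a real temporal database would enforce it as a system guarantee.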
It's more than that. It's actually a lot like distributed version control (a la git). When you ask the connection for the current value of the database, you get back a real value that will never change out from under you. This is very much akin to making a clone of a repository in git. This gives you the enormous power of speculation and experimentation within your application. You can do whatever you want with this database value and it will never affect anyone else nor be affected by anyone else.