For example, if I rely on "updated-at" and infer that whatever record has the latest updated-at is the "current" record, then I may create the illusion that there are no gaps in my facts, when in reality some updates may never have arrived at all.
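To make that concrete, here's a tiny sketch (with hypothetical data) of the pitfall: the max updated-at record looks "current," but nothing about it tells you whether intermediate updates are missing.

```python
# Two updates for the same entity; an update at t=20 was lost in transit.
records = [
    {"id": 1, "updated_at": 10, "value": "a"},
    {"id": 1, "updated_at": 30, "value": "c"},  # the t=20 update never arrived
]

# Picking the latest updated_at silently papers over the gap.
current = max(records, key=lambda r: r["updated_at"])
print(current["value"])  # "c" -- looks current, but the t=20 fact is missing
```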
A reference system to look at is Crux: https://opencrux.com/
I believe that notion is captured by timely's capabilities [0]. A capability has a current time, and you can only produce records at or after that time. So with a capability at time t you could produce a record at, say, t + 3, then t + 5, and then t + 1. But not until you downgrade the capability to t + 6 is the record at t + 5 considered final; downgrading a capability is how you declare that you have produced the correct and final set of facts for all times less than the new time.
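Here's a toy model of that contract (this is an illustration of the idea, not timely's actual API): producing is allowed at or after the capability's time, in any order, and downgrading is the promise that the past is now final.

```python
class Capability:
    """Toy sketch of a capability with a current time (not timely's real API)."""

    def __init__(self, time):
        self.time = time
        self.records = []

    def produce(self, time, value):
        # Records may be produced at or after the current time, in any order.
        if time < self.time:
            raise ValueError("cannot produce a record in the finalized past")
        self.records.append((time, value))

    def downgrade(self, new_time):
        # Downgrading promises no further records at times < new_time,
        # so every time before new_time is now final.
        if new_time < self.time:
            raise ValueError("capabilities only move forward")
        self.time = new_time

cap = Capability(time=0)
cap.produce(3, "x")   # t + 3
cap.produce(5, "y")   # t + 5
cap.produce(1, "z")   # t + 1 -- out of order, but still allowed
cap.downgrade(6)      # now everything before t + 6 is final
```

After the downgrade, `cap.produce(5, ...)` would raise, which is exactly the "finality" guarantee.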
If your events can arrive out of order forever, then you have a problem: you'll never be willing to mark any time as "final," and so you'll never be able to downgrade your capability. That's where bitemporalism (as mentioned in that issue I linked previously) comes into play. You can mark a result as final as of some processing time, and then issue corrections at some later processing time if out-of-order data arrives. Materialize will (likely) gain support for bitemporalism eventually, and the underlying dataflow engine already supports arbitrary-dimension timestamps.
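A minimal sketch of that idea, with a hypothetical schema: each fact carries both the time it is about (valid time) and the time you learned it (processing time). A query "as of" processing time p is final for that p, and late data shows up as a correction visible only at later processing times.

```python
facts = [
    {"valid": 1, "processed": 10, "value": 100},
    {"valid": 2, "processed": 10, "value": 200},
    # late-arriving correction for valid time 1, issued at processing time 20
    {"valid": 1, "processed": 20, "value": 150},
]

def as_of(facts, processing_time):
    """Best-known value per valid time, using only facts processed by then."""
    best = {}
    for f in facts:
        if f["processed"] <= processing_time:
            prev = best.get(f["valid"])
            if prev is None or f["processed"] > prev["processed"]:
                best[f["valid"]] = f
    return {v: f["value"] for v, f in best.items()}

print(as_of(facts, 10))  # {1: 100, 2: 200} -- final *as of* processing time 10
print(as_of(facts, 25))  # {1: 150, 2: 200} -- the correction is now visible
```

The answer as of processing time 10 never changes; the correction simply becomes part of the answer at later processing times.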
Would be happy to chat about this more, if you're curious, but I feel like this discussion is getting a bit unwieldy for an HN thread! (At the very least I might need to put you in touch with Frank.) Feel free to reach out on GitHub [1] or our Gitter [2], or shoot me an email at benesch@materialize.io.
[0]: https://docs.rs/timely/0.11.1/timely/dataflow/operators/stru...