Window functions are a particular favorite of mine, but we haven’t seen much customer demand for them yet, so they haven’t been officially scheduled on the roadmap. They require some finesse to support in a streaming system, as you have to reconstruct the potentially large window whenever you receive new data. Probably some interesting research to be done here, or at least some interesting blog posts from Frank.
Please feel free to file issues about any of these functions that you’d like to see support for! We especially love seeing sample queries from real pipelines.
I wrote shambolic stream-of-consciousness notes on it several years ago: https://docs.google.com/document/d/1ZlPp099_fV1lyYWACSyuWY_j...
The gist is that the mechanisms of windowing, triggering, and retraction à la Beam are actually workarounds for a lack of bitemporalism.
For example, if I rely on "updated-at" and infer that whatever record has the latest updated-at is the "current" record, then I may create the illusion that there are no gaps in my facts. That may not be so.
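To make that illusion concrete, here's a small self-contained sketch (the data and names are mine, purely illustrative): picking "the row with the latest updated-at" gives the right answer about *now*, but once a retraction row is compacted away — as upsert-style systems routinely do — the surviving rows imply the fact held continuously, with no visible gap.

```rust
// Hypothetical history of one fact. `value: None` models a retraction.
#[derive(Debug, Clone)]
struct Row {
    value: Option<&'static str>,
    updated_at: u64,
}

// The common inference: whichever row has the max updated_at is "current".
fn current(history: &[Row]) -> Option<&Row> {
    history.iter().max_by_key(|r| r.updated_at)
}

fn main() {
    // What actually happened: set, retracted, set again.
    let history = vec![
        Row { value: Some("active"), updated_at: 1 },
        Row { value: None,           updated_at: 2 }, // retraction
        Row { value: Some("active"), updated_at: 3 },
    ];

    // "Latest updated_at wins" is right about the present...
    assert_eq!(current(&history).unwrap().value, Some("active"));

    // ...but if the retraction is compacted away (as upserts often do),
    // the surviving rows suggest the fact held over [1, 3] with no gap.
    let compacted: Vec<Row> = history
        .into_iter()
        .filter(|r| r.value.is_some())
        .collect();
    assert_eq!(compacted.len(), 2); // the gap at t = 2 is now invisible
    println!("gap erased by compaction");
}
```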
A reference system to look at is Crux: https://opencrux.com/
I believe that notion is captured by timely's capabilities [0]. Your capability has a current time, and you can only produce records at or after that time. So you could produce a record at, say, t + 3, then t + 5, and then produce a record at t + 1. But not until you downgrade your capability to t + 6 will the record at t + 5 be considered final; downgrading your capability to some time t' is how you indicate that you have produced the correct and final set of facts for all times less than t'.
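Here's a toy model of those semantics — note this is a standalone sketch of the *idea*, not the real timely API: a capability at time t permits emitting records at times >= t, and downgrading it forward seals everything earlier as final.

```rust
/// Toy capability: holds a current time; emission is only legal at or
/// after that time. (Real timely capabilities live inside operators.)
#[derive(Debug)]
struct Capability {
    time: u64,
}

#[derive(Debug)]
enum EmitError {
    TimeSealed, // attempted to emit at a time already marked final
}

impl Capability {
    fn new(time: u64) -> Self {
        Capability { time }
    }

    /// Emit a record at `time`; allowed only at or after the capability's time.
    fn emit(&self, time: u64) -> Result<(), EmitError> {
        if time >= self.time { Ok(()) } else { Err(EmitError::TimeSealed) }
    }

    /// Promise never to emit at times earlier than `time` again; this is
    /// what tells downstream operators those earlier times are final.
    fn downgrade(&mut self, time: u64) {
        assert!(time >= self.time, "capabilities only move forward");
        self.time = time;
    }
}

fn main() {
    let mut cap = Capability::new(0); // capability at t = 0
    assert!(cap.emit(3).is_ok());     // record at t + 3: fine
    assert!(cap.emit(5).is_ok());     // record at t + 5: fine
    assert!(cap.emit(1).is_ok());     // out of order, but still >= 0: fine
    cap.downgrade(6);                 // everything below t + 6 is now final
    assert!(cap.emit(5).is_err());    // too late: t + 5 is sealed
    println!("capability now at t = {}", cap.time);
}
```

The key property is the one the comment describes: nothing stops you from emitting out of order *above* your capability's time; finality only arrives when you voluntarily downgrade.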
If your events can arrive out of order forever, then you have a problem, as you'll never be able to downgrade your capability because you'll never be willing to mark a time as "final." That's where bitemporalism (as mentioned in that issue I linked previously) comes into play. You can mark a result as final as of some processing time, and then issue corrections as of some processing time in the future if some out-of-order data arrives. Materialize will (likely) gain support for bitemporalism eventually, and the underlying dataflow engine supports arbitrary-dimension timestamps already.
Would be happy to chat about this more, if you're curious, but I feel like this discussion is getting a bit unwieldy for an HN thread! (At the very least I might need to put you in touch with Frank.) Feel free to reach out on GitHub [1] or our Gitter [2], or shoot me an email at benesch@materialize.io.
[0]: https://docs.rs/timely/0.11.1/timely/dataflow/operators/stru...
"What do I believe I believed yesterday?" is slightly harder and needs additional information to be stored. You can rewind a stream and replay it up to the point of interest, but that can be quite slow.
"What did I believe today would be, last week?", "What is the history of my belief about X?", "Have I ever believed Y about X?", etc. are much harder to answer quickly without full bitemporalism. The same goes for the problem of implicit intervals that are untrue, which is where "updated-at" can be so misleading.
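Those queries become index lookups once every fact carries both timestamps. A minimal sketch (all names here are mine, not Materialize's or Crux's): each assertion is stored under (event time, processing time), late corrections append rather than overwrite, and "what did I believe at processing time p about event time e?" is two range scans.

```rust
use std::collections::BTreeMap;

type EventTime = u64;
type ProcTime = u64;

/// Toy bitemporal store: key -> event_time -> processing_time -> value.
/// Nothing is ever overwritten; corrections are new (event, proc) entries.
#[derive(Default)]
struct Bitemporal {
    facts: BTreeMap<String, BTreeMap<EventTime, BTreeMap<ProcTime, i64>>>,
}

impl Bitemporal {
    fn record(&mut self, key: &str, event: EventTime, proc_t: ProcTime, value: i64) {
        self.facts
            .entry(key.to_string())
            .or_default()
            .entry(event)
            .or_default()
            .insert(proc_t, value);
    }

    /// "What did I believe, as of processing time `proc_t`, about `key`
    /// at event time `event`?"
    fn belief(&self, key: &str, event: EventTime, proc_t: ProcTime) -> Option<i64> {
        self.facts
            .get(key)?
            .range(..=event)      // latest event time at or before `event`...
            .next_back()?
            .1
            .range(..=proc_t)     // ...as known at or before `proc_t`
            .next_back()
            .map(|(_, v)| *v)
    }
}

fn main() {
    let mut db = Bitemporal::default();
    db.record("temp", 10, 100, 20); // at proc time 100, learn the value at event time 10
    db.record("temp", 10, 200, 25); // at proc time 200, a late correction arrives

    assert_eq!(db.belief("temp", 10, 150), Some(20)); // what we believed then
    assert_eq!(db.belief("temp", 10, 250), Some(25)); // what we believe now
    assert_eq!(db.belief("temp", 10, 50), None);      // before we knew anything
    println!("bitemporal lookups ok");
}
```

This is exactly the append-only "issue corrections as of a later processing time" pattern from the earlier comment; the history of beliefs never has to be reconstructed by replay.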
If you're trying to wholesale replace a "traditional" batch-oriented data warehouse like the ones I mentioned above, I think building support for window functions would be essential.