> Before Streams, we needed to create a sorted set scored by time: each sorted set element would be the ID of the match, with the match itself living in a different key as a Hash value.
I think the sorted set would be a much better choice, because then you can still insert items in the past, like when an admin remembers there was a tennis match last week that he never recorded. The same goes for modifying or deleting past values. These operations are trivial using a sorted set & hash, not so with streams.
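To make the point concrete, here's a rough sketch of that sorted set + hash layout, modeled in plain Python so it runs without a server (the comments name the real Redis commands; key names and timestamps are made up):

```python
import bisect

# Model of the pre-Streams pattern: a sorted set scored by Unix time
# (ZADD matches <ts> <match_id>) plus one hash per match (HSET match:<id> ...).
timeline = []   # sorted list of (score, match_id), standing in for the ZSET
matches = {}    # match_id -> dict of fields, standing in for the hashes

def record_match(ts, match_id, **fields):
    bisect.insort(timeline, (ts, match_id))  # ZADD matches <ts> <match_id>
    matches[match_id] = fields               # HSET match:<id> field value ...

# Record today's match, then back-fill last week's game the admin forgot.
record_match(1700000000, "m:2", winner="alice")
record_match(1699395200, "m:1", winner="bob")   # a week in the past: no problem

# Amending or deleting a past entry is just as easy with this layout.
matches["m:1"]["winner"] = "carol"              # HSET match:m:1 winner carol

# ZRANGEBYSCORE matches -inf +inf -> chronological, whatever the insert order.
print([mid for _, mid in timeline])             # ['m:1', 'm:2']
```

A stream's entry IDs are append-ordered, which is exactly why back-dated inserts don't fit there.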
I'm excited for streams and I'm glad Antirez is taking the time to blog and evangelize, but this article didn't convince me there's a compelling use case for streams aside from the Kafka-like one.
We use sorted sets as queues heavily, and this capability would be a prerequisite for us to consider giving streams a go, which would indeed be interesting for the memory savings (we sometimes have millions of items in our queues for a short time). Sometimes, say on error conditions, you want to put something back at the start of the queue (because the order of processing matters) instead of at the end, as one example; priority being another.
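The sorted-set-as-queue trick being described works because the score doubles as position/priority, so re-queuing at the head is just a lower score. A minimal plain-Python model of the ZSET (the real commands would be ZADD and ZPOPMIN; scores here are arbitrary):

```python
import bisect

job_queue = []  # sorted (score, item) pairs, like ZADD jobs <score> <item>

def push(score, item):
    bisect.insort(job_queue, (score, item))   # ZADD jobs <score> <item>

def pop():
    return job_queue.pop(0)                   # ZPOPMIN jobs

push(1.0, "job-a")
push(2.0, "job-b")
score, job = pop()           # worker takes job-a ...

# ... and it fails: requeue it ahead of everything else with a lower score.
push(score - 1.0, job)       # back at the start, processing order preserved
print([item for _, item in job_queue])   # ['job-a', 'job-b']
```

With a stream you can only append, so "stuff it back at the front" has no direct equivalent.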
Does anyone have good patterns for joining across entries from two or more Redis streams? This is one of the most interesting aspects of Kafka/Flink/Spark/Storm/etc. Would be useful to be able to develop with streaming joins in Redis playgrounds.
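There's no built-in join in Redis, but since stream entry IDs are timestamp-prefixed, one client-side approach is to read both streams (e.g. via XRANGE) and pair entries whose timestamps fall within a window. A toy sketch of that idea; the stream names, fields, and window rule are all illustrative, not an established pattern:

```python
# Two streams' entries as (ms_timestamp, fields) pairs, as XRANGE would return.
clicks = [(100, {"user": "u1"}), (250, {"user": "u2"})]
buys   = [(120, {"user": "u1"}), (900, {"user": "u3"})]

def window_join(a, b, window_ms=50):
    """Naive windowed join: pair entries whose timestamps are close enough."""
    out = []
    for ts_a, fa in a:
        for ts_b, fb in b:
            if abs(ts_a - ts_b) <= window_ms:
                out.append((ts_a, fa, fb))
    return out

print(window_join(clicks, buys))   # [(100, {'user': 'u1'}, {'user': 'u1'})]
```

Anything stateful (watermarks, late data) you'd have to build yourself, which is exactly what Flink/Spark give you for free.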
Let's say tennis games are recorded on a piece of paper and entered into the computer later. What is different?
Aside: would an embeddable redis be a useful thing for apps and other isolated devices?
1. Embedded systems are often used in environments where you need very resilient software. Crashing the DB because there is a bug in your app is usually a bad idea.
2. As a variation of "1", it's good to have different modules as different processes, with Redis working as the glue (message bus) between them. So again, everything should talk to Redis via a unix socket or the like.
3. Latency is usually very acceptable even for the most demanding applications; when it is not, a common pattern is to write to a buffer from within the embedded process, which a different thread drains to Redis. In any case, if you have Redis latencies of any kind, you don't want to block your embedded app's main thread.
4. Redis persistence is not compatible with that approach.
5. Many tried such projects (embedded Redis forks or reimplementations) and nobody cared. There must be a reason.
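The buffering pattern from point 3 can be sketched like this: the main thread never touches Redis directly, it just enqueues and moves on, while a background thread does the slow shipping. `send_to_redis` is a placeholder for the real client call (e.g. an XADD or LPUSH):

```python
import queue
import threading

buf = queue.Queue()   # in-process buffer between main thread and shipper
shipped = []

def send_to_redis(event):
    # Stand-in for the actual Redis call, e.g. r.xadd("events", event).
    shipped.append(event)

def drain():
    while True:
        event = buf.get()
        if event is None:      # sentinel: shut down cleanly
            break
        send_to_redis(event)   # only this thread ever blocks on "Redis"

t = threading.Thread(target=drain, daemon=True)
t.start()

# Main (embedded) thread: enqueue and continue, no network latency here.
for i in range(3):
    buf.put({"reading": i})
buf.put(None)
t.join()
print(len(shipped))   # 3
```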
Hydrating/deserializing data from SQLite into types/objects and doing whatever processing those need, then using Redis to make "updating the database" super fast (it's in memory, after all), and letting Redis write it back to SQLite when there's an I/O lull in traffic.
Kinda like how Epic Cache does its transaction journal flushing every X minutes?
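That write-behind idea looks roughly like this in plain Python, with an in-memory SQLite standing in for the on-disk database and a dict standing in for Redis (the dirty-key tracking and flush trigger are assumptions, not any specific product's journaling scheme):

```python
import sqlite3

db = sqlite3.connect(":memory:")   # stand-in for the real SQLite file
db.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")

cache, dirty = {}, set()           # "Redis": fast in-memory state + dirty set

def set_key(k, v):
    # Fast path: memory only, no disk I/O on the hot path.
    cache[k] = v
    dirty.add(k)

def flush():
    # Slow path: run every X minutes, or when traffic lulls.
    for k in dirty:
        db.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)", (k, cache[k]))
    db.commit()
    dirty.clear()

set_key("score", "42")
set_key("score", "43")   # two fast writes collapse into one eventual row
flush()
print(db.execute("SELECT v FROM kv WHERE k='score'").fetchone()[0])   # 43
```

The win is write coalescing: many hot updates to the same key become one disk write per flush interval.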
I understand that Elasticsearch is a common place to put logs, not least because I assume searching through them is a common use case, but I wonder whether Redis has particular benefits here. The stream data structure seems tailored to storing logs (but not so much to searching them, I guess).
For logs without full indexing, Loki (https://github.com/grafana/loki) is a recent entry into the space, and is probably a good option to look at. It indexes metadata (labels), so it allows searching by labels but not full text. It is also supposed to be horizontally scalable, which is probably something you want in a log storage solution.
> A key difference I observed was that if a Kafka consumer crashes, a rebalance is triggered by Kafka, after which the remaining consumers seamlessly start consuming the messages from the last committed offset of the failed consumer. Whereas with Redis streams I had to write code in my application to periodically poll and claim unacked messages pending for more than some threshold time.
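That poll-and-claim loop is essentially: scan the pending entries list for messages idle past a threshold, then take them over for a live consumer. A plain-Python model of the PEL (the real commands are XPENDING and XCLAIM; Redis 6.2 later folded this scan-and-claim into a single XAUTOCLAIM command; the threshold is made up):

```python
import time

# Pending entries list (PEL) model: message id -> (consumer, last_delivery_time)
pel = {
    "1-0": ("dead-consumer", time.time() - 120),   # delivered 2 min ago, unacked
    "2-0": ("alive-consumer", time.time()),        # freshly delivered
}

IDLE_THRESHOLD = 60  # seconds; tune to your redelivery tolerance

def claim_stale(me):
    claimed = []
    for msg_id, (owner, delivered_at) in list(pel.items()):
        if time.time() - delivered_at > IDLE_THRESHOLD:   # XPENDING idle check
            pel[msg_id] = (me, time.time())               # XCLAIM <id> to me
            claimed.append(msg_id)
    return claimed

claimed = claim_stale("alive-consumer")
print(claimed)   # ['1-0']
```

So the recovery logic Kafka's rebalance gives you for free has to live in your application (or in a periodic task) with streams.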
So far I haven't used it outside of hobby projects for WebGL games and such, but it's worked brilliantly, and no Kafka is required for hobby async-streaming infrastructure!
Hopefully it's useful to someone out there! https://github.com/erulabs/redis-streams-aggregator