People always think "theory is overrated" or "hacking is better than having a school education"
And then proceed to shoot themselves in the foot with "workarounds" that break in well-known, well-documented, well-traversed problem spaces.
- ACKed messages can be silently lost due to minority-node corruption.
- A single-bit corruption can cause some replicas to lose up to 78% of stored messages.
- Snapshot corruption can propagate and lead to entire stream deletion across the cluster.
- The default lazy-fsync mode can drop minutes of acknowledged writes on a crash.
- A crash combined with network delay can cause persistent split-brain and divergent logs.
- Data loss even with “sync_interval = always” in the presence of membership changes or partitions.
- Self-healing and replica convergence did not always work reliably after corruption.
…was not downvoted, but flagged... That is telling. Documented failure modes are apparently controversial. Also raises the question: What level of technical due diligence was performed by organizations like Mastercard, Volvo, PayPal, Baidu, Alibaba, or AT&T before adopting this system?
So what is next? Nominate NATS for the Silent Failure Peace Prize?
One or two of the comments on GitHub by the NATS team in response to Issues opened by Kyle are also more than a bit cringeworthy.
Such as this one:
"Most of our production setups, and in fact Synadia Cloud as well is that each replica is in a separate AZ. These have separate power, networking etc. So the possibility of a loss here is extremely low in terms of due to power outages."
Which Kyle had to call them out on:
"Ah, I have some bad news here--placing nodes in separate AZs does not mean that NATS' strategy of not syncing things to disk is safe. See #7567 for an example of a single node failure causing data loss (and split-brain!)."
https://github.com/nats-io/nats-server/issues/7564#issuecomm...
I have to note the following as a NATS fan:
- I am horrified at Jepsen's reliability findings; however, they do vindicate certain design decisions I made in the past
- 'Core NATS' is really mostly 'Redis pub/sub, but better', and Core NATS is honestly awesome, low-friction middleware. I've used it as part of eventing systems in the past and it works great.
- FWIW, there's an MQTT bridge that requires JetStream, but if you're just doing QoS 0 you can work around the other warts.
- If you use JetStream KV as a cache layer without real persistence (i.e. closer to how one uses Redis KV, where it's just memory-backed), you don't care about any of this. And again, JetStream KV IMO is better than Redis KV since they added TTL (rough sketch below).
All of that is a way to say, I'd bet a lot of them are using Core NATS or other specific features versus something like JetStream.
tl;dr - JetStream's reliability is horrifying, apparently, but I stand by the statement that Core NATS and ephemeral KV are amazing.
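For anyone curious what that cache-style usage looks like, here's a rough sketch with the nats.go client. The bucket name and TTL are made up, and the KV API has moved around a bit between client versions, so treat it as a shape rather than a recipe:

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Drain()

	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}

	// Memory-backed bucket with a TTL: entries expire on their own,
	// so none of the disk-durability concerns above apply.
	kv, err := js.CreateKeyValue(&nats.KeyValueConfig{
		Bucket:  "session-cache", // hypothetical bucket name
		Storage: nats.MemoryStorage,
		TTL:     5 * time.Minute, // illustrative expiry
	})
	if err != nil {
		log.Fatal(err)
	}

	if _, err := kv.PutString("user:42", "logged-in"); err != nil {
		log.Fatal(err)
	}
	entry, err := kv.Get("user:42")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s = %s\n", entry.Key(), entry.Value())
}
```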
> 2. Delayed Sync Mode (Default)
> In the default mode, writes are batched and marked with needSync = true for later synchronization filestore.go:7093-7097. The actual sync happens during the next syncBlocks() execution.
However, if you read DeepWiki's conclusion, it is far more optimistic than what Aphyr uncovered in real-world testing.
> Durability Guarantees
> Even with delayed fsyncs, NATS provides protection against data loss through:
> 1. Write-Ahead Logging: Messages are written to log files before being acknowledged
> 2. Periodic Sync: The sync timer ensures data is eventually flushed to disk
> 3. State Snapshots: Full state is periodically written to index.db files filestore.go:9834-9850
> 4. Error Handling: If sync operations fail, NATS attempts to rebuild state from existing data filestore.go:7066-7072
https://deepwiki.com/search/will-nats-lose-uncommitted-wri_b...
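For what it's worth, the generic pattern being described looks roughly like this. It's a minimal sketch of the shape of it, not NATS's actual filestore code; the point is that everything acknowledged between ticks exists only in the OS page cache:

```go
package main

import (
	"log"
	"os"
	"sync"
	"time"
)

// lazyStore acknowledges writes immediately and fsyncs on a timer,
// mirroring the needSync / syncBlocks() pattern described above.
type lazyStore struct {
	mu       sync.Mutex
	f        *os.File
	needSync bool
}

func newLazyStore(path string, interval time.Duration) (*lazyStore, error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o644)
	if err != nil {
		return nil, err
	}
	s := &lazyStore{f: f}
	go func() {
		for range time.Tick(interval) {
			s.mu.Lock()
			if s.needSync {
				s.f.Sync() // the only point where data actually becomes durable
				s.needSync = false
			}
			s.mu.Unlock()
		}
	}()
	return s, nil
}

// Append returns (and the caller acks) before any fsync has happened.
// A power loss before the next tick loses every write since the last sync.
func (s *lazyStore) Append(msg []byte) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	if _, err := s.f.Write(msg); err != nil {
		return err
	}
	s.needSync = true
	return nil
}

func main() {
	s, err := newLazyStore("stream.dat", 2*time.Minute) // the two-minute default discussed in the thread
	if err != nil {
		log.Fatal(err)
	}
	_ = s.Append([]byte("acknowledged but not yet durable\n"))
}
```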
Well, it's an LLM ... of course it's going to be optimistic. ;-)
It provides a dictionary of terms that we can use to have educated discussions, rather than throwing around terms like "ACID".
There is also this [1], which Aphyr collabed on; you might find it interesting if you haven’t seen it yet.
https://github.com/nats-io/nats-server/discussions/3312#disc...
(I opened this discussion 2.5 years ago and have gotten an email from GitHub every once in a while ever since. I had given up hope, TBH.)
Why? Why do some databases do that? To have better performance in benchmarks? It would be one thing if the default were safer, or if the behavior were at least documented loudly. But especially when you run stuff in a small cluster, you get bitten by things like that.
Many applications do not require true durability and it is likely that many applications benefit from lazy fsync. Whether it should be the default is a lot more questionable though.
Pretty much no application requires true durability.
The kind of failure that a system can tolerate with strict fsync but can't tolerate with lazy fsync (i.e. the software 'confirms' a write to its caller but then crashes) is probably not the kind of failure you'd expect to encounter on a majority of your nodes all at the same time.
Yes, exactly.
> They will refuse, of course, and ever so ashamed, cite a lack of culture fit. Alight upon your cloud-pine, and exit through the window. This place could never contain you.
https://aphyr.com/posts/340-reversing-the-technical-intervie...
Just use Redpanda.
Is the performance issue behind the NATS warning possible to improve on? Couldn't you still run fsync on an interval and queue up a certain number of writes to be flushed at once? I could imagine latency suffering, but batch throughput could be preserved to some extent?
Yes, and you shouldn't even need a fixed interval. Just queue up any writes while an `fsync` is pending; then do all those in the next batch. This is the same approach you'd use for rounds of Paxos, particularly between availability zones or regions where latency is expected to be high. You wouldn't say "oh, I'll ack and then put it in the next round of Paxos", or "I'll wait until the next round in 2 seconds then ack"; you'd start the next batch as soon as the current one is done.
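Something like this, as a rough sketch (hypothetical types, nothing from NATS): writers hand their record and an ack channel to a single flusher, whatever accumulates while one fsync is in flight becomes the next batch, and acks only go out after that batch's fsync returns.

```go
package main

import (
	"log"
	"os"
)

type write struct {
	data []byte
	done chan error // signalled after the batch containing this write is fsynced
}

// groupCommit drains whatever is queued, writes it, does one fsync for the
// whole batch, then acks everyone in the batch. No fixed interval needed:
// the next batch starts as soon as the current fsync returns.
func groupCommit(f *os.File, in <-chan write) {
	for w := range in {
		batch := []write{w}
		// Grab everything that queued up while we were busy with the last batch.
	drain:
		for {
			select {
			case more := <-in:
				batch = append(batch, more)
			default:
				break drain
			}
		}
		var err error
		for _, b := range batch {
			if _, werr := f.Write(b.data); werr != nil {
				err = werr
				break
			}
		}
		if err == nil {
			err = f.Sync() // one fsync amortized over the whole batch
		}
		for _, b := range batch {
			b.done <- err // ack only after durable (or report the failure)
		}
	}
}

func main() {
	f, err := os.OpenFile("log.dat", os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o644)
	if err != nil {
		log.Fatal(err)
	}
	in := make(chan write, 1024)
	go groupCommit(f, in)

	// A caller only reports success once its batch has hit the disk.
	done := make(chan error, 1)
	in <- write{data: []byte("payload\n"), done: done}
	if err := <-done; err != nil {
		log.Fatal(err)
	}
}
```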
I am getting strong early MongoDB vibes. "Look how fast it is, it's web-scale!". Well, if you don't fsync, you'll go fast, but you'll go even faster piping customer data to /dev/null, too.
Coordinated failures shouldn't be a novelty or a surprise any longer these days.
I wouldn't trust a product that doesn't default to safest options. It's fine to provide relaxed modes of consistency and durability but just don't make them default. Let the user configure those themselves.
One of the most used DBs in the world is Redis, and by default it fsyncs every second, not after every operation.
The problem is it has terrible defaults for performance (in the context of web servers). Just bad legacy options, not ones that make it less robust: i.e. a ridiculously small cache size, temp tables not kept in memory, WAL off so no concurrent reads/writes, etc.
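Assuming that's about SQLite (those defaults are SQLite's), the usual fix is a handful of PRAGMAs at connection time. A rough sketch with Go's database/sql and the mattn/go-sqlite3 driver, with illustrative values:

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

func main() {
	db, err := sql.Open("sqlite3", "app.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Override the legacy defaults mentioned above; values are illustrative.
	pragmas := []string{
		"PRAGMA journal_mode=WAL",  // default rollback journal: readers and writers block each other
		"PRAGMA cache_size=-64000", // default is ~2 MB; negative means KiB, so ~64 MB here
		"PRAGMA temp_store=MEMORY", // default keeps temp tables on disk
		"PRAGMA busy_timeout=5000", // wait instead of failing immediately on lock contention
	}
	for _, p := range pragmas {
		if _, err := db.Exec(p); err != nil {
			log.Fatal(err)
		}
	}
}
```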
The docs explicitly state that clusters do not provide strong consistency and can lose acknowledged data.
I like that, and it allows me to build things around it.
For us when we used it back in 2018, it performed well and was easy to administer. The multi-language APIs were also good.
Not so fast.
Their docs make some pretty bold claims about JetStream...
They talk about JetStream addressing the "fragility" of other streaming technology.
And "This functionality enables a different quality of service for your NATS messages, and enables fault-tolerant and high-availability configurations."
And one of their big selling points for JetStream is the whole "store and replay" thing. Which implies the storage bit should be trustworthy, no?
The trouble is that you need to specifically optimize for fsyncs, because usually it is either no brakes or the hand-brake.
The middle ground of multi-transaction group-commit fsync seems to no longer exist because of SSDs and the massive IOPS you can pull off in general; now it is about syscall context switches.
Two minutes is a bit too much (also fdatasync vs fsync).
tl;dr "multi-transaction group-commit fsync" is alive and well
> I wouldn't trust a product that doesn't default to safest options
This would make most products suck, and require a crap-ton of manual fixes and tuning that most people would hate, if they even got the tuning right. You have to actually do some work yourself to make a system behave the way you require.
For example, Postgres' default isolation level is weak, leaving room for race conditions. You have to explicitly enable serializable isolation to avoid them, which comes with a performance penalty. (https://martin.kleppmann.com/2014/11/25/hermitage-testing-th...)
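Concretely, opting in is per transaction (or cluster-wide via default_transaction_isolation). A rough sketch with database/sql and the pgx stdlib driver; the DSN and accounts table are made up:

```go
package main

import (
	"context"
	"database/sql"
	"log"

	_ "github.com/jackc/pgx/v5/stdlib"
)

func main() {
	db, err := sql.Open("pgx", "postgres://localhost/app?sslmode=disable") // hypothetical DSN
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	ctx := context.Background()

	// Opt in to the strongest isolation level for this transaction only.
	// Serializable transactions can fail with serialization errors (SQLSTATE 40001)
	// and must be retried, which is part of the performance cost mentioned above.
	tx, err := db.BeginTx(ctx, &sql.TxOptions{Isolation: sql.LevelSerializable})
	if err != nil {
		log.Fatal(err)
	}
	defer tx.Rollback()

	if _, err := tx.ExecContext(ctx, "UPDATE accounts SET balance = balance - 10 WHERE id = 1"); err != nil {
		log.Fatal(err)
	}
	if err := tx.Commit(); err != nil {
		log.Fatal(err) // on 40001, a real app would retry the whole transaction
	}
}
```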
Woah, those are _really_ strong claims. "Lost writes are accepted"? Assuming we are talking about "acknowledged writes", which the article is discussing, I don't think it's true that this is a common default for databases and filesystems. Perhaps databases or K/V stores that are marketed as in-memory caches might have defaults like this, but I'm not familiar with other systems that do.
I'm also getting MongoDB vibes from deciding not to flush except once every two minutes. Even deciding to wait a second would be pretty long, but two minutes? A lot happens in a busy system in 120 seconds...
Even if most users do turn out to want “fast_and_dangerous = true”, that’s not a particularly onerous burden to place on users: flip one setting, and hopefully learn from the setting name or the documentation consulted when learning about it that it poses operational risk.
- Read Committed default with MVCC (Oracle, Postgres, Firebird versions with MVCC, I -think- SQLite with WAL falls under this)
- Read committed with write locks one way or another (MSSQL default, SQLite default, Firebird pre MVCC, probably Sybase given MSSQL's lineage...)
I'm not aware of any RDBMS that treats 'serializable' as the default transaction level OOTB (I'd love to learn though!)
....
All of that said, 'Inconsistent read because you don't know RDBMS and did not pay attention to the transaction model' has a very different blame direction than 'We YOLO fsync on a timer to improve throughput'.
If anything, it scares me that there are no other tuning options involved, such as number of bytes or number of events.
If I get a write-ack from a middleware I expect it to be written one way or another. Not 'It is written within X seconds'.
AFAIK there's no RDBMS that will just 'lose a write' unless the disk happens to be corrupted (or, IDK, maybe someone YOLOing with chaos mode on DB2?)
Wait, isn't that the whole point of acknowledgments? This is not acknowledgment, it's I'm a teapot.
Dude ... the guy was testing JetStream.
Which, quoting the first sentence of the first paragraph on the NATS website:
NATS has a built-in persistence engine called JetStream which enables messages to be stored and replayed at a later time.
I got very deep into using NATS last year, and then realized the choices it makes for persistence are really surprising. Another horrible example is that server startup time is O(number of streams), with a big constant; this is extremely painful to hit in production.
I ended up implementing from scratch something with the same functionality (for me, as NATS server + JetStream), but based on socket.io and SQLite. It works vastly better for my use cases, since socket.io and SQLite are so mature.
https://github.com/nats-io/nats.rs/issues/1253#issuecomment-...
[0]: https://www.redpanda.com/blog/why-fsync-is-needed-for-data-s...
[1]: https://jack-vanlightly.com/blog/2023/4/24/why-apache-kafka-...
Pros: unlimited streams with the durability of object storage – JetStream can only do a few K topics
Cons: no consumer groups yet, it's on the agenda
https://s2.dev/blog/dst https://s2.dev/blog/linearizability
We have also adopted Antithesis for a more thorough DST environment, and plan to do more with it.
One day we will engage Kyle to Jepsen, too. I'm not sure when though.
I understand that you need to make money. But you'll have to have a proper self-hosting offering with paid support as well before you're considered, at least by me.
I'm not looking to have even more stuff in the cloud.
To implement backpressure without relying on out-of-band signals (distributed systems beware), you need a deep understanding of the entire Redis streams architecture and how the pending entries list, consumer groups, consumers, etc. work and interact, so that you don't lose data by overwriting yourself (rough sketch below).
Unbounded would have been fine if we could spill to disk and periodically clean up the data, but this is Redis.
Not sure if that has improved.
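For reference, the in-band juggling looks roughly like this (a sketch with go-redis v9; the stream/group names and the cap are made up): the producer checks the stream length itself instead of letting XADD MAXLEN silently trim unacked entries, consumers ack promptly, and trimming only goes up to the oldest still-pending ID.

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/redis/go-redis/v9"
)

const (
	stream   = "events"   // hypothetical stream name
	group    = "workers"  // hypothetical consumer group
	consumer = "worker-1" // hypothetical consumer name
	maxLen   = 10_000     // made-up cap; above this the producer backs off
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	// Create the group (errors if it already exists; fine for a sketch).
	_ = rdb.XGroupCreateMkStream(ctx, stream, group, "$").Err()

	// Producer side: in-band backpressure by checking the stream length ourselves,
	// instead of XADD MAXLEN, which would trim entries nobody has acked yet.
	for rdb.XLen(ctx, stream).Val() >= maxLen {
		time.Sleep(100 * time.Millisecond) // wait for consumers to catch up
	}
	if err := rdb.XAdd(ctx, &redis.XAddArgs{
		Stream: stream,
		Values: map[string]interface{}{"payload": "hello"},
	}).Err(); err != nil {
		log.Fatal(err)
	}

	// Consumer side: read, process, ack. Unacked messages stay in the
	// pending entries list (PEL) for this consumer.
	res, err := rdb.XReadGroup(ctx, &redis.XReadGroupArgs{
		Group:    group,
		Consumer: consumer,
		Streams:  []string{stream, ">"},
		Count:    10,
		Block:    time.Second,
	}).Result()
	if err != nil && err != redis.Nil {
		log.Fatal(err)
	}
	for _, s := range res {
		for _, msg := range s.Messages {
			// ... process msg.Values ...
			rdb.XAck(ctx, stream, group, msg.ID)
		}
	}

	// Cleanup: only trim up to the oldest still-pending entry, so trimming
	// never deletes something a consumer hasn't acknowledged yet.
	if pending, err := rdb.XPending(ctx, stream, group).Result(); err == nil && pending.Count > 0 {
		rdb.XTrimMinID(ctx, stream, pending.Lower)
	}
}
```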