RethinkDB was Master-Slave (strongly consistent) with an amazing developer community and actually survived Aphyr's tests better than most systems.
CockroachDB is also in the Master-Slave (strongly consistent) camp, but more enterprise focused, and therefore will probably not fail. Their Aphyr report came back clean... but performance was unfortunately very slow. But hey, strong consistency is hard, and correctness is a necessary tradeoff against performance.
Other systems, like Cassandra, us (https://github.com/amark/gun), Couch/Pouch, etc., are all in the Master-Master camp, and thus AP rather than CP. Our argument is that while P holds, realtime sync with Strong Eventual Consistency is good enough and has all the performance benefits, for everything except banking. Sure, use Rethink/Cockroach for banking (heck, better yet, Postgres!), but for fun-and-games you can do banking on top of AP systems if you layer a CRDT or a blockchain on top (the blockchain kills performance; CRDTs don't).
So yeah, I agree with you about CAP Theorem and stuff, disagree with Cockroach's particular choice - but they do have some pretty great detailed explainers/documentation on their view, and therefore should be treated seriously and not written off.
CRDB is also master-master, all the nodes are the same and can serve reads and writes. That has no bearing on AP or CP.
It seems like a good solution for data that doesn't change too quickly?
there's a great talk on how it's done with a custom gossip protocol implementation: https://www.youtube.com/watch?v=HfO_6bKsy_g
CockroachDB: trade some performance for geographic redundancy. The trade-off may work in your favor, e.g. for read-heavy workloads (or not).
I plugged in CDB in place of Postgres for some testing this week and was surprised how well it worked.
This is almost never the case, and CDNs are no exception. A CDN like Cloudflare that reuses your domain(s) just becomes a useless hop that your dynamic requests have to travel through on the way to and from the origin server. A CDN that uses its own domains requires extra DNS queries, extra TCP and TLS connection setup, etc., and it only starts loading once the browser has begun processing the HTML.
There are also HTTP headers that can instruct the browser to fetch a CDN resource before the HTML is delivered.
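As a minimal sketch of that idea (the URL and helper name are made up for illustration, not tied to any framework): a `Link` header with `rel=preload` lets the browser start fetching a CDN asset before it has parsed the HTML body.

```python
# Sketch: build a Link preload header so the browser can start
# fetching a CDN-hosted asset before parsing the HTML body.
# The asset URL is a made-up example.

def preload_header(url, as_type):
    """Return an (header-name, header-value) pair hinting an early fetch."""
    return ("Link", f"<{url}>; rel=preload; as={as_type}")

name, value = preload_header("https://cdn.example.com/app.js", "script")
print(f"{name}: {value}")
```

Served alongside the initial response (or even in an HTTP 103 Early Hints response), this gets the CDN fetch started without waiting for the parser to reach the script tag.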
This was a huge win for QoS on cache misses, which were a significant portion of our traffic. There are tons of tradeoffs to make this happen which is why we couldn't find an off-the-shelf solution to deliver the same results.
When partitions heal you simply merge all versions through conflict-free replicated data types. No ugly decisions, no sacrificing either latency or consistency. We call it strong eventual consistency [1] nowadays. And it's exactly like CDNs, except more reliable.
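A minimal sketch of what "merge all versions" means, using a grow-only counter (one of the simplest state-based CRDTs; real systems use richer types). The replica states here are invented for illustration:

```python
# Grow-only counter: each node tracks its own increment count,
# and merging two replica states takes the element-wise max.

def merge(a, b):
    """Merge two replica states; commutative, associative, idempotent."""
    return {node: max(a.get(node, 0), b.get(node, 0))
            for node in a.keys() | b.keys()}

def value(state):
    """The counter's value is the sum over all nodes."""
    return sum(state.values())

# Two replicas increment independently during a partition...
replica_a = {"a": 3, "b": 1}
replica_b = {"a": 2, "b": 4}

# ...and merging in either order yields the same state.
# That order-independence is what makes the outcome deterministic.
assert merge(replica_a, replica_b) == merge(replica_b, replica_a)
print(value(merge(replica_a, replica_b)))  # 7
```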
I'm wondering, since CockroachDB keeps lying about and attacking eventual consistency in these PR posts, whether the whole "consistency is worth sacrificing latency" mantra even works in practice. People just don't buy it; they want low latency, they want something like CDNs, something fast and reliable, something that just works. Something that CockroachDB can never deliver.
[1] https://en.wikipedia.org/wiki/Eventual_consistency#Strong_ev...
What if two users want to change e.g. the telephone number of an existing record during a network partition.
There just is no obvious way to merge a telephone number. One of them is correct, the other is incorrect.
Can CRDTs solve my simple problem?
In your example, that might mean changing the single "phone number" field to many "phone numbers", so that merging the two writes results in a customer record with two phone numbers. This preserves the data, but pushes conflict resolution (which number should be used?) out into the consumers.
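A tiny sketch of that resolution, assuming we model the field as a set (the names and numbers are illustrative, not any particular CRDT library): two replicas edit the phone number during a partition, and the merge keeps both values instead of silently dropping one.

```python
# "Phone number" becomes "phone numbers": merging unions the
# concurrent values, and the consumer decides which one to use.

def merge_record(a, b):
    """Merge two replica copies of the same customer record."""
    return {"name": a["name"], "phones": a["phones"] | b["phones"]}

# Concurrent writes on either side of a partition:
left  = {"name": "Alice", "phones": {"555-0100"}}
right = {"name": "Alice", "phones": {"555-0199"}}

merged = merge_record(left, right)
print(sorted(merged["phones"]))  # both writes survive the merge
```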
i.e. this is also solved with CQRS and Event Sourcing, and that works in pretty much any database. It's quite complex but pretty reliable; I'm pretty sure everybody has already built at least an extremely simple append-only event log.
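In the spirit of that "extremely simple append-only event log", here is a deliberately tiny sketch (event names and IDs are made up): state is never updated in place, only derived by replaying appended events.

```python
# Minimal append-only event log with replay-derived state.

log = []

def append(event):
    """Events are only ever appended, never updated or deleted."""
    log.append(event)

def current_phone(customer_id):
    """Derive current state by replaying the log; last write wins here."""
    phone = None
    for e in log:
        if e["type"] == "phone_changed" and e["customer"] == customer_id:
            phone = e["phone"]
    return phone

append({"type": "phone_changed", "customer": "c1", "phone": "555-0100"})
append({"type": "phone_changed", "customer": "c1", "phone": "555-0199"})
print(current_phone("c1"))  # the latest appended value
```

The full history stays in the log, so a conflicting concurrent change is visible as two events rather than a silently lost write.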
Any CRDT will give you a deterministic outcome and provide enough information to let a user decide whether that outcome is acceptable.
There are other DB systems with such properties "they" can use, e.g. Cassandra. But CockroachDB also gives ACID transactions, which may be important for others.
If the write has to be consistent and available across multiple regions, it will need to synchronously replicate that write to all the regions, thus incurring the same performance penalty as RDS or any other consistent database.
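A back-of-the-envelope sketch of that penalty, with made-up round-trip times: a synchronous commit cannot complete faster than the slowest replica it must wait on, or the slowest member of the quorum if a majority suffices.

```python
# Hypothetical RTTs (ms) from the coordinating region to each
# replica region; the numbers are illustrative, not measured.
rtts = {"us-east": 2, "eu-west": 75, "ap-south": 180}

# Waiting for every region: bounded by the slowest one.
all_regions = max(rtts.values())

# Waiting for a majority quorum (2 of 3): the median RTT.
quorum = sorted(rtts.values())[len(rtts) // 2]

print(all_regions, quorum)  # 180 75
```

Quorum replication softens the penalty but doesn't remove it: the commit still pays at least one cross-region round trip.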
[1]: https://www.cockroachlabs.com/blog
[2]: https://www.cockroachlabs.com/docs/stable/frequently-asked-q...
While some of the technical underpinnings could be improved, the state of the internet is determined by social and legal considerations, not technical ones.
S3?
It would also be useful to me, in some cases, to be able to perform a read-only query in an "inconsistent" mode to avoid that cross-region latency, at the expense of potentially receiving stale data.
It might be helpful if you already know the CouchDB answer for each feature.
Or more like federated SPARQL on RDF? https://www.w3.org/TR/sparql11-federated-query/
They never mention the lower performance (even single-node). Add to that AWS VPSes with pseudo-cores and the Spectre mitigations, and good luck with your TPS reports.