Ok, so F=2, so 2F+1 is 5. Client 1 creates a txn that writes to object x. x has replicas on A-E, but not F and G, so client 1's txn gets sent to F+1 nodes drawn from A-E. In your scenario A and B are down, which only leaves C, D and E. They vote on the txn and vote to Commit. This vote forms a Paxos round, as per Gray and Lamport's paper "Consensus on Transaction Commit". Assuming that all these votes get received and written to disk by at least F+1 acceptors (which will almost certainly also be C, D and E), and that C, D and E all form the exact same overall outcome, consensus has been achieved: the outcome is fixed and stable.
A and B come up at the same time that C and D go down. C, D and E all knew (in their role as acceptors, not voters) that the result of that txn has to go to _all_ of A-E. So when A and B come up, E will see this and send them the txn outcome. A and B can't progress at this point because they've only seen the outcome from one acceptor (E), not the minimum F+1. So, as they spot that C and D are down, they will start additional Paxos rounds to try to get the txn aborted. Those rounds will _have_ to contact E, and they will find that the txn votes actually led to a commit. A and B will then also play the role of acceptor, taking over from the failed C and D. Thus we'll finally have A, B and E as acceptors, all with the original commit votes; they will form the same outcome, and all the voters and learners will get the same result. I realise this is probably not the cleanest explanation, but this is the chief purpose of consensus algorithms in general: once a majority (F+1) of nodes forms consensus about something, that consensus can never change; it can only be propagated further.
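To make the quorum-intersection argument above concrete, here's a minimal sketch (illustrative only, not GoshawkDB's actual code): once F+1 of the 2F+1 acceptors have durably recorded the same outcome, any later quorum of F+1 must overlap the original one, so it is guaranteed to encounter the fixed outcome.

```python
F = 2
ACCEPTORS = {"A", "B", "C", "D", "E"}  # the 2F+1 = 5 replicas of x
QUORUM = F + 1                         # 3

def outcome_fixed(recorded):
    """The outcome is fixed once >= F+1 acceptors hold the same value."""
    return len(recorded) >= QUORUM

# C, D and E record the Commit outcome while A and B are down.
recorded_commit = {"C", "D", "E"}
assert outcome_fixed(recorded_commit)

# Any later quorum (e.g. A, B and E after C and D fail) must intersect
# the original quorum, so it cannot miss the Commit.
later_quorum = {"A", "B", "E"}
assert recorded_commit & later_quorum  # non-empty intersection
```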
> Client 2 tries to read, contacting nodes A, B, C, F, G. 4 out of the 5 respond successfully, so it calls it good. But it can't possibly read the value that client 1 wrote.
Client 2 is trying to read object x, and object x only exists on A-E, not F and G, so client 2 cannot ask F and G to take part. I assume you mistyped and meant A, B and E rather than C, as I thought C and D were down at this point.
Client 2 will get A, B and E to vote on this next transaction, and that will proceed as normal as we only have 2 failures currently (C and D).
> Whatever way you do it, the number of nodes contacted for a write + number of nodes contacted for a read has to add up to more than the cluster size.
No, that's not true. It has to add up to more than the number of replicas of an object. That can be much less than the cluster size in GoshawkDB. This property is not true of several other data stores.
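The arithmetic here is small enough to write down (a sketch, using the numbers from this thread rather than anything GoshawkDB-specific): the write and read quorums only need to overlap within an object's replica set, not across the whole cluster.

```python
cluster_size = 7         # nodes A-G
replicas_per_object = 5  # x lives on A-E only
F = 2
write_quorum = F + 1     # 3
read_quorum = F + 1      # 3

# Overlap is guaranteed within the object's replica set...
assert write_quorum + read_quorum > replicas_per_object

# ...even though the two quorums together don't exceed the cluster size.
assert write_quorum + read_quorum <= cluster_size
```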
> If your cluster is somehow sharded then you can work around this with e.g. consistent hashing (so that client 2 knows which 5 of the 7 nodes a particular key must be stored on, and contacts the correct subset), but that only works if client 2 can know what the key was
Yes, this is exactly what GoshawkDB does (though my consistent hashing algorithms are quite unlike the norm).
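For anyone unfamiliar with the idea: the textbook version (explicitly _not_ GoshawkDB's own algorithm, which the author says is unusual) has the client derive an object's replica set from its key alone, so every client independently knows which nodes to contact. A hypothetical sketch:

```python
import hashlib

NODES = ["A", "B", "C", "D", "E", "F", "G"]
REPLICATION = 5  # 2F+1 with F=2

def replicas_for(key):
    """Deterministically pick REPLICATION nodes from the key's hash."""
    h = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    start = h % len(NODES)
    return [NODES[(start + i) % len(NODES)] for i in range(REPLICATION)]

# Every client computes the same replica set for the same key,
# so reads and writes for a key always target the same subset.
assert replicas_for("x") == replicas_for("x")
assert len(replicas_for("x")) == REPLICATION
```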
> and I think Cassandra already supports that
Maybe - I'm certainly not a Cassandra expert. I'm pretty sure Cassandra does not support full transactions. For example: https://docs.datastax.com/en/cassandra/2.0/cassandra/dml/dml...
"Cassandra supports atomicity and isolation at the row-level, but trades transactional isolation and atomicity for high availability and fast write performance".
Which is fair enough - there's definitely a use case for the properties that Cassandra offers. As I've written on the GoshawkDB website, GoshawkDB is certainly not a competitor to Cassandra: GoshawkDB does support full transactions.