1) Failover of a MongoDB replica set is fully automated and requires no manual intervention. The replica set remains available for writes as long as a quorum (a majority of voting members) can be established among the remaining members. See http://www.mongodb.org/display/DOCS/Replica+Sets for more info.
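To make the quorum point concrete, here's a minimal Python sketch (illustrative only, not MongoDB's actual election code) of the majority rule that decides whether a primary can be elected:

```python
# Illustrative sketch (not MongoDB code): the replica-set quorum math.
# A primary can only be elected if a strict majority of voting members
# is reachable, which is why even-sized sets are usually avoided.

def majority(total_members: int) -> int:
    """Votes needed for a strict majority of the set."""
    return total_members // 2 + 1

def can_elect_primary(total_members: int, reachable_members: int) -> bool:
    """True if the reachable members form a quorum and can elect a primary."""
    return reachable_members >= majority(total_members)

# A 3-member set survives one failure...
print(can_elect_primary(3, 2))  # True
# ...but a 4-member set split 2/2 cannot elect a primary on either side.
print(can_elect_primary(4, 2))  # False
```

This is also why an arbiter is often added to even-sized sets: it votes, breaking the tie, without holding data.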
2) MongoDB does support different consistency models through write concerns and safe mode. The client can choose to wait for a write to be replicated to multiple members before it is acknowledged. See http://www.mongodb.org/display/DOCS/Verifying+Propagation+of... for more info
Disclaimer: I work for 10gen
http://www.mongodb.org/display/DOCS/Resyncing+a+Very+Stale+R...
"1. Delete all data. If you stop the failed mongod, delete all its data, and restart it, it will automatically resynchronize itself. Of course this may be slow if the database is huge or the network slow.
2. Copy data from another member. You can copy all the data files from another member of the set IF you have a snapshot of that member's data files. This can be done in a number of ways. The simplest is to stop mongod on the source member, copy all its files, and then restart mongod on both nodes. The Mongo fsync and lock feature is another way to achieve this. On a slow network, snapshotting all the datafiles from another (inactive) member to a gzipped tarball is a good solution. Also, similar strategies work well when using SANs and services such as Amazon Elastic Block Store snapshots."
http://www.mongodb.org/display/DOCS/fsync+Command "Lock, Snapshot and Unlock
The fsync command supports a lock option that allows one to safely snapshot the database's datafiles. While locked, all write operations are blocked, although read operations are still allowed. After snapshotting, use the unlock command to unlock the database and allow writes again."
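Putting the two quoted pages together, the lock → snapshot → unlock cycle looks roughly like this (a sketch only: the data path `/data/db` and the tarball name are assumptions, and these commands must be run against a live mongod, typically an inactive secondary):

```shell
# Flush to disk and block writes (run against the admin database).
mongo admin --eval 'db.runCommand({fsync: 1, lock: 1})'

# Copy the datafiles while writes are blocked; reads still work.
tar czf mongo-snapshot.tar.gz /data/db

# Unlock so writes resume.
mongo admin --eval 'db.$cmd.sys.unlock.findOne()'
```

The resulting tarball can then be dropped into the data directory of the stale member before restarting it, which is strategy 2 from the resync page above.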
2. Really? Is this wrong then?
http://www.mongodb.org/display/DOCS/Replica+Set+Design+Conce...
"Writes which are committed at the primary of the set may be visible before the true cluster-wide commit has occurred. Thus we have "READ UNCOMMITTED" read semantics. These more relaxed read semantics make theoretically achievable performance and availability higher (for example we never have an object locked in the server where the locking is dependent on network performance)."
2. If you're reading from both primary and secondary nodes, then the view may not be consistent. In most cases you simply read from the primary for fully-consistent reads. You get to decide whether reads from secondaries are consistent or not by setting the write concern (i.e., the minimum number of nodes to replicate to before returning each write).
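The staleness in question is easy to picture with a toy sketch (plain Python, not MongoDB code): a secondary that hasn't replicated the latest write yet will serve the old value.

```python
# Illustrative sketch: why reads from a lagging secondary can be stale,
# and why reading from the primary is always consistent.

primary = {"x": 2}    # primary has applied the latest write (x = 2)
secondary = {"x": 1}  # secondary has not replicated that write yet

def read(node, key):
    """Read a value from whichever member the client is pointed at."""
    return node[key]

print(read(primary, "x"))    # 2 -> consistent
print(read(secondary, "x"))  # 1 -> stale until replication catches up
```

With a write concern of w=2 on a 2-data-node set, the client's ack would not arrive until the secondary also had x = 2, which is the sense in which write concern lets you make secondary reads consistent.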
This one does seem to be more informed (and I agree with a lot of his criticism of MongoDB here), but it's almost like comparing apples to oranges. Things are done a certain way in MongoDB for a number of reasons (e.g., the query interface doesn't allow certain things in a distributed context that you could probably do with a SQL database). But I think anyone who's done large-scale MongoDB deployments can (or at least should) attest that it works well, though perhaps not as well as other solutions (or as well as it could or eventually will).
If you are using MongoDB the same way you would use a SQL database, you are probably doing something wrong. Specifically, you are trying to store normalized data in a database designed for denormalized documents.
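A quick sketch of the difference (hypothetical blog-post data, shown as plain Python dicts): the document model embeds related data so one read fetches everything, where the normalized model splits it across tables that must be joined back together.

```python
# Denormalized document model: the post and its comments live in one
# document, so a single read returns the whole thing.
post_document = {
    "_id": 1,
    "title": "Replica sets",
    "comments": [
        {"author": "alice", "text": "Nice post"},
        {"author": "bob", "text": "Thanks"},
    ],
}

# Normalized SQL-style model: the same data split across two "tables",
# requiring a join (or a second query) to reassemble.
posts = [{"id": 1, "title": "Replica sets"}]
comments = [
    {"post_id": 1, "author": "alice", "text": "Nice post"},
    {"post_id": 1, "author": "bob", "text": "Thanks"},
]
```

Port the second shape into MongoDB unchanged and you end up doing joins by hand in application code, which is the mismatch the comment is describing.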