Huh.
EVERY database call should be wrapped in exception handling to make sure that any errors (e.g. connection errors) are handled appropriately. MongoDB is no different in this case.
You can only handle the errors that you know how to handle; in this case, retrying the operation may have created a bigger problem.
Perhaps a better option would be to have an 'unsafe_write' option. But then, of course, benchmarks that didn't use a function with 'unsafe' in the name would look less impressive.
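The "only handle the errors you know how to handle" point can be sketched like this. This is a pure-Python sketch: the exception names and the `insert` callable are placeholders for whatever your actual driver raises and exposes, not a real API.

```python
class TransientConnectionError(Exception):
    """Stand-in for a driver's 'connection dropped' error."""

class DuplicateKeyError(Exception):
    """Stand-in for a driver's 'logic error' class of failure."""

def safe_insert(insert, doc, retries=3):
    """Wrap a database write, retrying only clearly transient failures."""
    for attempt in range(retries):
        try:
            return insert(doc)
        except TransientConnectionError:
            # Careful: if the server received the write just before the
            # connection dropped, a blind retry can apply it twice --
            # the "bigger problem" mentioned above.
            if attempt == retries - 1:
                raise
        except DuplicateKeyError:
            # A logic error; retrying cannot fix it, so surface it.
            raise
```

Note that even this cautious wrapper is only safe if the write itself is idempotent; otherwise the retry on a connection error is exactly the kind of guess that can make things worse.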
[Ed: The following is an unusual default requirement]

Me: "MongoDB, please store this: ..."
MongoDB: "Okay, I've accepted your request. I'll get around to it eventually. Go about your business, there's no sense in you hanging around here waiting on me."
Me: "MongoDB, did you store what I asked?"
MongoDB: "Nope! Good thing you checked!"

Or, if you really want to be sure it's done:

Me: "MongoDB, please store this. It's important, so let me know when it's done."
MongoDB: "Sure boss. This'll take me a little bit, but you said it's important, so I'm sure you don't mind waiting. I'll let you know when it's done."
MongoDB: "Done!"
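The difference between the two dialogues is fire-and-forget versus acknowledged writes. Here is a toy pure-Python illustration of that trade-off; it is a sketch of the concept, not MongoDB's actual implementation (in real MongoDB you would choose between these behaviors via the driver's write concern setting).

```python
import queue
import threading

class TinyStore:
    """Toy store with fire-and-forget and acknowledged writes."""

    def __init__(self):
        self._q = queue.Queue()
        self.data = []
        threading.Thread(target=self._worker, daemon=True).start()

    def _worker(self):
        while True:
            doc, done = self._q.get()
            self.data.append(doc)   # the actual (slow) write
            if done is not None:
                done.set()          # "I'll let you know when it's done."

    def insert_unacknowledged(self, doc):
        # "Okay, I've accepted your request." Returns immediately;
        # if the write later fails, the caller never hears about it.
        self._q.put((doc, None))

    def insert_acknowledged(self, doc, timeout=5.0):
        # Blocks until the write is actually applied -- slower, but
        # the caller knows whether it happened.
        done = threading.Event()
        self._q.put((doc, done))
        if not done.wait(timeout):
            raise TimeoutError("write was not acknowledged in time")
```

The unacknowledged path is what makes the benchmarks look fast; the acknowledged path is what you want when "did you store what I asked?" must have a trustworthy answer.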
To me, the choice of performance over reliability is the hallmark of MongoDB, for better or worse.