That's why you didn't hear about the great email collapse of 2006.
That's the problem with federated protocols. Without someone who owns the system and has the resources and central authority to police it, a protocol that becomes popular will be destroyed by spam and other abuse (effectively a Sybil attack). Making a protocol self-policing (without costly proof of work) is an "AI-hard" problem, since your adversary is the human intelligence of the protocol's exploiters.
Niche federated protocols avoid this fate by never becoming popular. The other way to avoid it is to impose a severe work function like Bitcoin and other blockchains, but this is too expensive (figuratively and literally) for most applications. Could you imagine forum software that requires a minimum of several hundred watts of power to participate in the network?
All others fall to the tragedy of the commons.
In fact, if total downtime is constant, I would prefer the outages overlap.
Having said that, Google certainly runs many, many mail servers, so an outage that impacts delivery for one group of people does not necessarily impact everyone. This is the difference between robust systems and those with critical linchpins that create system-wide outages.
So the people who are supposed to do work for you also can't work?
Federation is a given for this hypothetical sweet spot of course, but how do you find the spot? Are there any HN readers who can point me to research in this area?
Briefly: you send a message from your user agent (mail program) to your own mail server. Your mail server sends a notification to all the destination mail servers, a notification basically consisting of the headers.
The destination mail server lets the recipient know that the notification has arrived, possibly doing filtering and sorting and prioritization and stuff.
The recipient fetches the mail body from the originating mail server, and then does whatever.
The big change here is that the notifications are store-and-forward but the mail itself is not. The originating mail server needs to be up and functional in order to get a message body delivered.
Spammers are severely impeded: the message body can't be sent unless they have a reliable, traceable machine up when people get around to reading mail. Botnets won't work. Yet anybody who can run a reliable server can run their own mail server.
Mailing lists only send the full body to people who request it. Unsubscribe is actually worthwhile for any legitimate company to implement. Mailing list servers can easily implement archives by just keeping mail available.
And finally, the holy grail of Outlook users is actually implementable: you can cancel an email after you sent it and have that actually work, as long as people haven't pulled the body down yet.
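The notification-push / body-pull flow described above might be sketched like this (a toy model, not a real protocol; all class and method names here are invented for illustration):

```python
class OriginServer:
    """Sender's mail server: stores message bodies, pushes only notifications."""

    def __init__(self):
        self.bodies = {}   # message_id -> body text
        self.next_id = 0

    def send(self, headers, body, dest_server):
        msg_id = self.next_id
        self.next_id += 1
        self.bodies[msg_id] = body
        # Store-and-forward applies only to this small header notification.
        dest_server.notify(self, msg_id, headers)
        return msg_id

    def fetch(self, msg_id):
        # Body delivery requires this server to be up and reachable,
        # which is what impedes botnet spam.
        return self.bodies[msg_id]

    def cancel(self, msg_id):
        # "Unsend" works until the recipient has pulled the body.
        self.bodies.pop(msg_id, None)


class DestServer:
    """Recipient's mail server: holds only notifications (headers)."""

    def __init__(self):
        self.inbox = []    # list of (origin_server, msg_id, headers)

    def notify(self, origin, msg_id, headers):
        # Filtering, sorting, and prioritization would happen here,
        # operating on headers alone.
        self.inbox.append((origin, msg_id, headers))

    def read(self, index):
        origin, msg_id, headers = self.inbox[index]
        # Body is pulled from the originating server on demand.
        return headers, origin.fetch(msg_id)


origin, dest = OriginServer(), DestServer()
mid = origin.send({"Subject": "hi"}, "full message body", dest)
headers, body = dest.read(0)   # pulls the body from origin
```

A cancel in this model is just deleting the body before anyone reads it: `origin.cancel(mid)` makes a later `dest.read(0)` fail, which is the "unsend actually works" property.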
People will use what most of their friends use. If their friends and the people they want to follow all use Twitter, why would they use GNUSocial? The answer is: they won't.
Wait, is this a joke or was there really a great email collapse in 2006?
There have been other Gmail outages in the past though. My two favorites are:
1. The multi-hour outage in 2009: http://www.cnn.com/2009/TECH/09/01/gmail.outage/index.html?e....
2. And one where they had to restore data from magnetic tape backups: https://gmail.googleblog.com/2011/02/gmail-back-soon-for-eve....
Of course, there are other email providers too and their own outages. But I like Gmail and that's what I follow.
The top-level routing is held together with duct tape and lots of carefully trained eyes looking for problems with it.
And that's to say nothing of the amazing amount of curiosity and interest that any downtime in a large public system generates. From the PR side, I would think some kind of post-mortem is almost necessary to prevent that curiosity and interest from turning into distrust and negative perception.
The chatter can burn a lot of time though. You're absolutely right there.
I just mute the gif-sharing channel.
This is starting to become a common theme...