That’ll do it.
Email and the internet don't have "downtime." Certain key infra providers do of course. ISPs can go down. DNS providers can go down. But the internet and email itself can't go down absent a global electricity outage.
You haven't built a decentralized network until you reach that standard imo. Otherwise it's just "distributed protocol" cosplay. Nice costume. Kind of like how everybody has been amnesia'd into thinking Obsidian is open source when it really isn't.
AOL never even got to that level of dominance in the internet 1.0 era.
The point is it's not a distributed network if one node is 99.9% of all traffic.
The simple answer is that atproto works like the web & search engines, where the apps aggregate from the distributed accounts. So the proper analogy here would be Yahoo going down in 1999.
For example, right now in my URL bar I read "news.ycombinator.com", not "google.com/profile/news.ycombinator.com".
If Google goes down now I can keep browsing this website and all the other websites I have in all my other tabs as if nothing had happened.
Didn't Google's AMP project do exactly that?
I expect this is common.
If tens of servers go down, then some people may start noticing a bit of inconvenience. If hundreds of servers go down, then some people may need to coordinate out of band on what relays to use, but generally speaking it still works ok.
It's like saying "English never burns". Sure, you can't burn English but you can burn specific books, newspapers and so on.
When I tried it a long time ago, the idea was just a transposed Mastodon model: the client would automatically multi-post to a dozen different servers (relays) in the hope that the post would be available in at least one relay shared between the user and their followers. That didn't seem to scale well.
nostr.com nostr.how nostr.net nostrich.love nostrhub.io usenostr.org And of course https://github.com/nostr-protocol/nostr
Okay, nuff trolling for today
And then normally there's a nice discussion about how production is very different to the test environment.
Goroutines are actually better AFAIK because the runtime multiplexes them onto a thread pool that can be much smaller than the number of active goroutines.
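To illustrate the point, here's a minimal sketch (the 20k count and GOMAXPROCS value are arbitrary choices for the example): tens of thousands of goroutines all complete fine while the scheduler runs them on only a handful of OS threads.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

func main() {
	// Cap the scheduler at 4 OS threads; goroutines are multiplexed onto them.
	runtime.GOMAXPROCS(4)

	var wg sync.WaitGroup
	var mu sync.Mutex
	total := 0

	// 20,000 goroutines, far more than the underlying thread pool.
	for i := 0; i < 20000; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			mu.Lock()
			total += n
			mu.Unlock()
		}(i)
	}
	wg.Wait()
	fmt.Println(total) // sum of 0..19999 = 199990000
}
```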
If my quick skim created a correct understanding, then the problem here looks more like architecture. Put simply: does the memcached client really require a new TCP connection for every lookup? I would think you would pool those connections just like you would a typical database and keep them around for approximately forever. Then they wouldn't have spammed memcache with so many connections in the first place...
(edit: ah, it looks like they do use a pool, but perhaps the pool does not have a bounded upper size, which is its own kind of fail.)
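A bounded pool is easy to sketch in Go with a buffered channel: the channel capacity is the hard upper limit, and callers block instead of dialing new connections. The `conn` type here is a hypothetical stand-in for a real memcached TCP connection.

```go
package main

import "fmt"

// conn stands in for a real memcached TCP connection (hypothetical type).
type conn struct{ id int }

// pool is a bounded connection pool: at most cap(p.ch) connections ever exist.
type pool struct {
	ch chan *conn
}

func newPool(size int) *pool {
	p := &pool{ch: make(chan *conn, size)}
	// Pre-fill up to the bound; real code would dial lazily on first use.
	for i := 0; i < size; i++ {
		p.ch <- &conn{id: i}
	}
	return p
}

// get blocks until a connection is free, so callers can never
// hold more than `size` connections at once, no matter how many
// goroutines are doing lookups.
func (p *pool) get() *conn  { return <-p.ch }
func (p *pool) put(c *conn) { p.ch <- c }

func main() {
	p := newPool(8) // hard upper bound on concurrent connections
	c := p.get()
	fmt.Println(c.id >= 0 && c.id < 8)
	p.put(c)
}
```

With this shape, a spike in concurrent lookups shows up as queueing on `get` rather than as thousands of new TCP connections.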
15-20 thousand futures would be trivial. 15-20 thousand goroutines, definitely not.
The problem is not resource usage in go. The problem is that they created umpteen thousand TCP connections, which is going to kill things regardless of the language.
There are certainly plenty of projects where garbage collection is too slow, but I don't know that they're the majority, and more people would likely prefer memory safety by default.
[1] https://www.lifeatspotify.com/jobs/senior-backend-engineer-a...
Off-topic, but "real" feels like the new "delve". Is there such a thing as "fake" or "virtual" downtime, or why do people feel the need to specify that all manner of things are "real" nowadays?
The article does work in lynx, at least I can read it.