The problem is that you’re computing the probability of one service going down, when what I actually care about is the union of the downtimes of all of the services I need (or, equivalently, the intersection of their uptimes).
If every Arch package hosted its own source code, then even if each one of them has better uptime than GitHub, at least one of them is probably going to go down every single day (assuming their uptime is uncorrelated).
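A quick back-of-the-envelope sketch of that claim, assuming independence. The per-host daily uptime (99.9%) and the package count (10,000) are illustrative numbers, not measured figures:

```python
def p_any_down(daily_uptime: float, n_services: int) -> float:
    """P(at least one service is down today) = 1 - P(all are up),
    assuming each service's availability is independent."""
    return 1.0 - daily_uptime ** n_services

# Even if every individual host beats GitHub's availability, the union
# of thousands of independent failure events is nearly certain:
print(p_any_down(0.999, 10_000))  # ~0.99995: some host is down almost every day
```

With 10,000 hosts at 99.9% daily uptime each, the chance that they are *all* up on a given day is about 0.005%.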
If you can personally centralize all of your own work on a server that you host, like a company-run Mattermost or something, and you have someone who can be on call to keep it up, do it. The result will, as you’ve described, be simpler and better than GitHub. But expecting every individual to run their own Git server is just stupid, because then my project is going to wind up depending on dozens of separately-hosted Git servers, and I’m back to taking the intersection of their uptimes to compute the uptime of the whole system. There’s a sweet spot here; GitHub is above it, but a lot of “FOSS projects” (the kind that have a single maintainer) are below it.
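The "intersection of their uptimes" works out to a product, since the project is only usable when every dependency's server is up at once. A sketch with hypothetical numbers (30 dependencies, 99.5% uptime each, independence assumed):

```python
import math

def system_uptime(uptimes: list[float]) -> float:
    """Uptime of a system that needs ALL of its dependencies up at once:
    the intersection of independent uptime events is their product."""
    return math.prod(uptimes)

# 30 single-maintainer Git servers at a respectable 99.5% each still
# drag the whole project down to roughly 86% availability:
print(system_uptime([0.995] * 30))  # ~0.86
```

That gap between 99.5% per server and ~86% for the system is the whole argument: modest individual flakiness compounds across every dependency.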