> Becoming part of Project Galileo is quick. On average, participants are up and running within a couple of hours; however, setup time ranges from 15 minutes to a couple of days.
> CloudFlare does not cap its DDoS mitigation service. CloudFlare has experience defending against some of the largest DDoS attacks on record. We will keep your website online.
The web is fragile in so many ways... and it's worse than that: the perpetrators of online attacks are (as good as) anonymous -- so this charitable initiative should be lauded for the load it's carrying.
Pun intended.
Is there any progress on infrastructure improvements that could actually improve this state of affairs? Or is our only option to rely on benevolent companies like Cloudflare offering their blanket of protection? I guess what I'm asking is: who will guard the guards?
Services like Cloudflare, Black Lotus, etc. act like insurance companies [e.g. you have a pool of X protected services and only Y are under attack at any one time]. This gives them an economy of scale others can't match on their own. I'd like to see a non-profit public internet security service, tbh, but I don't think it'd raise the capital it would need to get to the level Cloudflare is at.
Provisioning something like this yourself is probably going to cost you around $450 per Gbps of mitigation per month. HE is selling transit for $0.45/Mbps/month, for instance. Then you'd still need to clean the traffic. HE can't provision this instantly or on demand, so you'd need to have it built out and semi-permanent [e.g. a long-term contract for hundreds of Gbps].
You can spread this across multiple targets, too, but the costs are still roughly the same as one big target [e.g. 10 x 10 Gbps is pretty much as effective as 1 x 100 Gbps, at similar cost].
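The transit arithmetic above, as a quick sketch (using the $0.45/Mbps/month HE price quoted; scrubbing/cleaning costs are not included):

```python
# Back-of-the-envelope cost of self-provisioned DDoS absorption capacity.
# Assumes transit is priced per Mbps/month (the quoted HE figure);
# cleaning the traffic once you've absorbed it costs extra.

TRANSIT_PER_MBPS_MONTH = 0.45  # USD, quoted HE transit price

def monthly_transit_cost(gbps: float) -> float:
    """Raw transit cost to absorb `gbps` of attack traffic for a month."""
    return gbps * 1000 * TRANSIT_PER_MBPS_MONTH

print(monthly_transit_cost(1))    # 450.0  -> ~$450 per Gbps/month
print(monthly_transit_cost(100))  # 45000.0 -> a semi-permanent 100 Gbps build-out
```

Which is why the long-term-contract build-out, not the per-Gbps price, is the real barrier: you pay for the full 100 Gbps every month whether or not you're under attack.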
So, a site which has a lot of outbound traffic (most web servers) essentially has an equal amount of "free" inbound capacity available, since transit links are symmetric. You could sell this to someone doing web crawls, or online backups, or something, but DDoS (if you end up paying for it) is essentially all inbound, too.
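The symmetric-link point in numbers (port size and traffic figures here are hypothetical, just to show the shape of it):

```python
# Transit ports are symmetric: you get the same capacity in as out.
# A web-heavy site is mostly outbound (responses), barely inbound
# (requests), so nearly the whole inbound side sits idle -- which is
# exactly what a volumetric DDoS consumes.

LINK_CAPACITY_GBPS = 100  # hypothetical symmetric port size
outbound_gbps = 40        # serving traffic: responses going out
inbound_gbps = 2          # requests coming in are comparatively tiny

spare_inbound = LINK_CAPACITY_GBPS - inbound_gbps
print(spare_inbound)  # 98 Gbps of "free" inbound absorption headroom
```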
The best position to absorb DDoS, if you're not a specialty firm, is to have a huge amount of outgoing web server traffic, huge systems built for that, and really great cooperation with your upstreams to push out filters as quickly as possible. The problem is that this only really works against pure resource-consumption DDoS; if people realize a 50 Gbps SYN flood doesn't affect you much, they'll move up the stack to layer 7, and then to custom-tailored layer 7 attacks.
For a site which is huge and constantly being attacked, I could see this becoming a core competency (USG?) -- for anyone else, it's probably something you could outsource.
There are drawbacks to outsourcing your network, but if you're already hosted in the cloud, those drawbacks mainly come down to the incremental reliability risk of your outsourced edge provider -- pick a good one.

If you're not in the cloud, you need to be very clear about what your security model is. I definitely wouldn't trust bitcoins to any outsourced service provider operating above the atoms level (i.e. a cage in a colo, with no security dependency on anyone running anything above that), but DDoS mitigation is critical for that kind of business. The optimal setup is to have "untrusted" frontend nodes handling all your incoming traffic, with DDoS mitigation as a service, WAF, etc. probably outsourced, and then application-specific security on your own infrastructure. The DDoS layer can, if it fails, DoS you, but you can switch away from it; it can't actually subvert your application beyond that.
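One practical detail in that untrusted-frontend model: the mitigation layer only helps if attackers can't bypass it and hit your origin directly, so the origin should accept traffic only from the provider's published edge ranges (usually enforced at the firewall, not in application code). A minimal sketch -- the ranges below are documentation placeholders, not any real provider's list:

```python
import ipaddress

# Hypothetical published ranges for your mitigation provider. In practice
# you'd fetch the provider's real, current list and enforce this at the
# network/firewall layer; this is just the shape of the check.
MITIGATION_RANGES = [
    ipaddress.ip_network("198.51.100.0/24"),
    ipaddress.ip_network("203.0.113.0/24"),
]

def arrived_via_mitigation(src_ip: str) -> bool:
    """True if the packet's source is one of the provider's edge ranges."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in MITIGATION_RANGES)

print(arrived_via_mitigation("198.51.100.7"))  # True: came through the edge
print(arrived_via_mitigation("192.0.2.10"))    # False: direct hit on origin
```

This is also why "the DDoS layer can, if it fails, DoS you": if the allowlist is enforced and the provider goes down, nothing reaches you until you switch the origin back to accepting direct traffic.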
I'm more curious why we don't start large-scale investigations in response to each DDoS attack: each one gives you a list of machines likely participating in a botnet.
It is a pretty pessimal situation. I think you might see critical services run over clean-pipes networks rather than the public internet, which is also a return to scale.