Other services that went GA today, at the time of writing this post:
- Cloudflare One [1]
- Cloudflare WAF [2]
- Domain Scoped Roles [3]
[0] https://blog.cloudflare.com/welcome-to-ga-week/
[1] https://blog.cloudflare.com/cloudforce-one-is-now-ga/
[1] https://cloud.google.com/armor/docs/adaptive-protection-over...
[2] https://cloud.google.com/blog/products/identity-security/how...
Ouch. I mean, congrats on allowing attack traffic to reach the servers, but you know what... once a request is recognized as "bad", there's a high chance that other requests on the same HTTP connection will also be bad, right?
This is exactly why a more appropriate mitigation is to issue Connection: close (in the HTTP/1.1 world) or a GOAWAY frame (in the HTTP/2 world).
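The idea above can be sketched in a few lines. This is a minimal illustration, not anyone's production WAF: `looks_malicious` is a hypothetical stand-in for a real rule engine, and the responses are hand-built HTTP/1.1 bytes.

```python
# Sketch: once a request on a connection is flagged as bad, answer it
# with "Connection: close" so the client gets no further requests on
# that connection and must pay the TCP+TLS setup cost again to retry.

def looks_malicious(request_line: str) -> bool:
    # Hypothetical check; a real WAF rule engine would go here.
    return "/wp-login.php" in request_line

def respond(request_line: str) -> bytes:
    if looks_malicious(request_line):
        # Refuse and tear the connection down.
        return (b"HTTP/1.1 403 Forbidden\r\n"
                b"Connection: close\r\n"
                b"Content-Length: 0\r\n\r\n")
    return (b"HTTP/1.1 200 OK\r\n"
            b"Connection: keep-alive\r\n"
            b"Content-Length: 0\r\n\r\n")
```

In HTTP/2 the equivalent move is sending a GOAWAY frame, which tells the peer the last stream ID that will be processed and lets the server drain the connection.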
With that in hand, it's possible to bounce the connection back to the bot. Sure, the bot will retry, which will put pressure on TLS. But then, if a source IP is trying very hard to establish many connections, that's another strong sign of malicious traffic. In TCP you can rate-limit SYN packets and _not_ allow more than X new connections per second, quite easily. Another mitigation is to let the bot send HTTP requests but just not answer them: let them hang on the connection, burning the bot's resources (memory, concurrent connections).
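The per-IP SYN rate limiting mentioned above is essentially a token bucket keyed by source address. Here is a userspace sketch with assumed parameters (`rate`, `burst`); in practice this lives in the kernel (iptables/nftables hashlimit, or XDP), not in application code.

```python
# Sketch: allow at most `rate` new connections per second per source
# IP, with a `burst` allowance, token-bucket style.
import time

class SynRateLimiter:
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.buckets = {}  # ip -> (tokens, last_timestamp)

    def allow(self, ip, now=None):
        now = time.monotonic() if now is None else now
        tokens, last = self.buckets.get(ip, (self.burst, now))
        # Refill tokens proportionally to elapsed time, capped at burst.
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens >= 1:
            self.buckets[ip] = (tokens - 1, now)
            return True
        self.buckets[ip] = (tokens, now)
        return False
```

With `SynRateLimiter(rate=5, burst=10)`, an IP gets 10 connections immediately, then at most 5 new ones per second; everything beyond that is dropped before it ever reaches the application.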
Anyway, the point is: requests-per-second is not a perfect metric for L7 / application attacks. A smart application should be able to push mitigation down to lower levels (like SYN rate limiting, or TLS ClientHello signature rate limiting), and _not_ intake large amounts of traffic. A better metric is bot count (or the number of unique IPs seen), which was shared in the article and is in the mid-range.
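The "unique IPs seen" metric can be computed over a sliding window. This is an exact-set sketch for illustration; at attack scale you would use a probabilistic counter like HyperLogLog instead, since storing every (timestamp, IP) pair does not scale.

```python
# Sketch: count distinct source IPs observed in the last N seconds,
# the metric the comment prefers over raw requests-per-second.
from collections import deque

class UniqueIPWindow:
    def __init__(self, window_seconds):
        self.window = window_seconds
        self.events = deque()  # (timestamp, ip), oldest first

    def observe(self, ip, now):
        self.events.append((now, ip))

    def unique_ips(self, now):
        # Drop events older than the window, then count distinct IPs.
        while self.events and self.events[0][0] < now - self.window:
            self.events.popleft()
        return len({ip for _, ip in self.events})
```

One chatty IP sending thousands of requests barely moves this metric, while a botnet of many sources each sending a little moves it a lot, which is exactly why it separates attacks from legitimate spikes better than RPS.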
To recap, a well-designed system should _never_ need to handle 47M RPS. It should be able to backpressure it.
Edit: For others, our problem was probably an edge case, and our use is probably not common. MT seems good compared to the alternative solutions I know, so I'm looking forward to an opportunity to try it if I ever have the need.
Services like domain name providers and ddos protection are so essential for putting content online that they should be regulated as utilities. It would be absurd for a power or water company to cut service because they decided something legal but undesirable was happening at a building just because enough people tweeted at them.
There are some services that should be allowed to pick and choose what to provide. A forum like HN or Reddit, for example, is perfectly justified in removing whatever it wants. If you don't like it, go somewhere else. But once you get to the point where your domain name and phone number are getting canceled, that's no longer possible.
It doesn't matter how quickly the site's moderators take your post down. Small site owners are expected to react within seconds, while big tech is allowed days.
Why is it like that? Is it like this in other industries? Do people who use Coach bags complain when Coach introduces a wallet that they won't use?
I simply do not understand it.
I am using Firefox with multi-account containers and uBlock Origin enabled, and I also have an OpenVPN client running. The amount of captchas and distrust I receive from Cloudflare (and their customers, unknowingly I suppose) feels disproportionate.
It's funny to me that these companies most likely spend a lot of time and energy on optimizing their websites and purchase flows, yet they might not know how often Cloudflare puts up barriers and destroys their hard work.