On a separate note, have tcpdump captures been done on these excessive connections? Minus the IP, what do their SYN packets look like? Minus the IP, what do the corresponding log entries look like in the web server? Are they using HTTP/1.1 or HTTP/2.0? Are they missing any headers a real browser would send, such as Sec-Fetch-Mode (cors, no-cors, navigate) or Accept-Language?
tcpdump -p --dont-verify-checksums -i any -NNnnvvv -B32768 -c32 -s0 'port 443 and tcp[13] == 2'
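The log-entry side of that question can be answered with a quick pass over the access logs. A rough sketch, assuming an nginx-style combined log format and an example log path that will differ per setup:

# Count requests per protocol version; in combined format the request line is the 2nd quoted field
awk -F'"' '{ split($2, req, " "); print req[3] }' /var/log/nginx/access.log | sort | uniq -c | sort -rn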
Is there someone at OpenStreetMap that can answer these questions?
Technically we are able to block and restrict the scrapers after the initial request from an IP. We've seen 400,000 IPs in the last 24 hours. Each IP only does a few requests. Most are not very good at faking browsers, but they are getting better (HTTP/1.1 vs HTTP/2, obviously faked headers, etc.).
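One cheap heuristic for the "obviously faked headers" tell (a sketch only, assuming the same combined log format, an HTTP/2-enabled server, and an example log path) is to list IPs that advertise a Chrome User-Agent yet speak HTTP/1.1, since real Chrome negotiates HTTP/2 when the server offers it:

# IPs claiming to be Chrome but connecting over HTTP/1.1 -- suspicious when h2 is enabled
awk -F'"' '$2 ~ /HTTP\/1\.1/ && $6 ~ /Chrome\// { split($1, pre, " "); print pre[1] }' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head -20

That is a heuristic, not proof: corporate proxies and older clients also downgrade to HTTP/1.1.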
The problem has been going on for over a year now. It isn't going away. We need journalists and others to help us push back.
And there are so many such idiots that it's overwhelming their servers?
Something doesn't math here.
As an SRE, I'd say the only legitimate concern here is the bandwidth cost. But QoS tuning should solve that too.
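For what it's worth, a very rough sketch of the kind of shaping meant here, with the interface name and rate as placeholder assumptions rather than anyone's actual config:

# Cap outbound traffic on eth0 with a token bucket filter (values are placeholders)
tc qdisc add dev eth0 root tbf rate 200mbit burst 64kb latency 50ms
# Remove the shaper again
tc qdisc del dev eth0 root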
Supposedly technical people crying out for a journalist to help them is super lame. Everything about this looks super lame.
Every bot is doing something on behalf of a human. Now that LLMs can churn out half-assed bot scripts, every "look I installed Arch Linux and ohmyzsh" script kiddie has bots too.
Bots aren't going anywhere.
"Use the web the way it was over 10 years ago plox" isn't going to do it.
The scrapers try hard to make themselves look like valid browsers, sending requests via residential IP addresses (400,000+ IPs at last count).
I reached out to journalists because despite strong technical measures, the abuse will not go away on its own.