The article doesn't say, and I constantly get the most difficult Google captchas, Cloudflare block pages saying "having trouble?" (a link to submit a ticket that seems to land in /dev/null), IP blocks because of user-agent spoofing, and "unsupported browser" errors when I don't spoof the user agent... The only anti-bot tool that reliably works on all my clients is Anubis. I'm really wondering what kinds of false positives you think Anubis has, since (as far as I can tell) it's a completely open and deterministic algorithm that simply lets you in if you solve the challenge, and as the author of the article demonstrated with some C code (if you don't want to run the included JavaScript that does it for you), that works even if you are a bot. And afaik that's the point: no heuristics and no false positives, just a straight game of costs, making bad scraping behavior cost more than implementing caching correctly or using Common Crawl.
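For the curious: the "deterministic challenge" mentioned above is a SHA-256 proof-of-work. A minimal sketch in Python (the exact payload format and difficulty are assumptions here, not Anubis's actual wire format; the article's C program does the same search):

```python
import hashlib

def solve_pow(challenge: str, difficulty: int) -> int:
    """Brute-force a nonce so that sha256(challenge + nonce) starts with
    `difficulty` leading zero hex digits. Payload layout is an assumption
    for illustration; the real server defines the exact format."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# A low difficulty finishes near-instantly; real deployments set it
# high enough that mass scraping becomes expensive.
nonce = solve_pow("example-challenge", 3)
print(nonce)
```

The point of the cost argument: any client, human or bot, can pay the CPU cost once and get in; there's no classifier to misfire.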
As a legitimate open source developer and contributor to buildroot, I've had no recourse besides trying other browsers, networks, and machines, and it's triggered on several combinations.
[1]: https://anubis.techaro.lol/blog/release/v1.20.0/#chrome-wont...
I'm curious how, though, since the submitted article doesn't mention that and demonstrates curl working (which is about as low as you can go on the browser-emulation front), but I have no time to look into it at the moment. Maybe it's because of an option or module that the author didn't have enabled.
But that's not what Cloudflare does. Cloudflare guesses whether you are a bot and then either blocks you or doesn't. If it currently likes you, bless your luck.
Until the moment someone figures out how to generate realistic enough 3D faces.