"Why do things suck?" Because parasites ruined it for the rest of us.
> We have to accept a certain amount of abuse. It is a far better use of our time to spend it improving Geocodio for legitimate users rather than trying to squash everyone who might create a handful of accounts
Reminds me of Patrick McKenzie's "The optimal amount of fraud is non-zero" [1] (wrt banking systems)
Also, your abuse-scoring system sounds a bit like Bayesian spam filtering, where you have a bunch of signals (Disposable Email, IP from Risky Source, Rate of signup...) that you correlate, no?
[1] https://www.bitsaboutmoney.com/archive/optimal-amount-of-fra...
I suppose you could call it inspired by Bayesian inference, since we're using multiple pieces of independent evidence to calculate a score, though that makes it sound a bit fancier than it is, and we aren't actually applying Bayes' theorem. But it's possible I had that in the back of my head from a game theory class I took long ago.
But for the fun of it, let's model it that way:
P(spam | disposable email domain, IP address, ...) = [P(disposable email domain, IP address, ... | spam) × P(spam)] / P(disposable email domain, IP address, ...)
Or something like that.
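For fun, here's a minimal sketch of that kind of signal combination as a naive-Bayes-style score. Everything here is invented for illustration (the signal names, likelihoods, and prior are made up, not Geocodio's actual values); a real system would estimate these from labeled signup data:

```python
# Toy naive-Bayes-style abuse score. All likelihoods below are
# invented for illustration, not taken from any real system.

SIGNALS = {
    # signal: (P(signal | spam), P(signal | legit)) -- assumed values
    "disposable_email": (0.60, 0.02),
    "risky_ip":         (0.40, 0.05),
    "rapid_signups":    (0.50, 0.01),
}

PRIOR_SPAM = 0.05  # assumed base rate of abusive signups

def spam_probability(observed: set[str]) -> float:
    """P(spam | observed signals), assuming the signals are independent."""
    p_spam, p_legit = PRIOR_SPAM, 1 - PRIOR_SPAM
    for name, (p_if_spam, p_if_legit) in SIGNALS.items():
        if name in observed:
            p_spam *= p_if_spam
            p_legit *= p_if_legit
        else:
            p_spam *= 1 - p_if_spam
            p_legit *= 1 - p_if_legit
    # Bayes' theorem: normalize by the total evidence probability.
    return p_spam / (p_spam + p_legit)

print(spam_probability({"disposable_email", "rapid_signups"}))  # ~0.98
print(spam_probability(set()))                                  # ~0.007
```

The practical upshot matches the comment above: you don't need the full machinery; multiplying a handful of independent likelihood ratios already pushes clear abusers toward 1.0 and clean signups toward 0.0.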
Also — it's a delight to have one of Patrick's articles mentioned in connection with this!
It's a bit like how each 9 of runtime is an order of magnitude (ish) more expensive to achieve, and most use cases don't care if it's 99.999% or 99.9999%.
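The back-of-the-envelope arithmetic behind that: each extra nine shrinks the allowed downtime budget by a factor of ten, while the cost to hit it grows roughly as fast. A quick sketch (standard availability math, nothing specific to any vendor):

```python
# Allowed downtime per year at a given availability percentage.
# Each additional nine cuts the budget ~10x.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(availability_pct: float) -> float:
    """Minutes of permitted downtime per year at this availability."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.9, 99.99, 99.999, 99.9999):
    print(f"{pct}% -> {downtime_minutes_per_year(pct):.1f} min/year")
# 99.9%    -> ~525.6 min/year (about 8.8 hours)
# 99.999%  -> ~5.3 min/year
# 99.9999% -> ~0.5 min/year
```

Going from five nines to six buys you under five minutes a year, which is why most use cases stop caring well before that.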
We have seen customers where free-tier abusers created 80k+ accounts in a single day and cost millions of dollars. We have also seen businesses, like OddsJam, add significant revenue by prompting abusers to pay.
The psychology of abuse is also quite interesting: even those who appear to be serious abusers (think fake credit cards, new email accounts, etc.) will refuse a discount and pay full price if they feel they 'got caught'.
We have seen individuals trying to get free accounts week after week who, when nudged just once, immediately pay thousands of dollars, even after having used fake, stolen, or empty cards.
These individuals think they are being cheeky and when they are 'caught' they revert to doing the right thing.
This pattern is everywhere. It was foreign to me for a long time because I'm the type of person who likes to play within the rules. There are a lot of people who get a kick out of gaming the system to their advantage, even to the point of breaking the law.
Many people have zero qualms about stealing things when they imagine it's a faceless corporation on the other side. They might even rationalize it with mental gymnastics until they think they're doing the right thing. You see it most often when the topic of media piracy or sharing Netflix logins comes up.
This mindset is very common in startup communities. I've heard so many stories from founders gloating about how they abused some system or used a loophole to avoid paying for something they could clearly afford. It's like a badge of honor to some people. I know one guy who bought an EV but hasn't installed a charger because he drives it to a business down the street and uses the EV charger they installed for their employees every night. Another guy used to brag about sneaking into a cafeteria for another organization and stealing lunch every day. A while ago I talked to a guy who liked to "dine and dash" without paying his tab, even though he could easily afford it. For them, it's all about getting away with it and winning a game.
As soon as you make it obvious that someone is watching them, they cave. They don't want to be the type of person who abuses actual people. They only like to abuse what they see as faceless systems.
For paying customers, it probably doesn't make a lot of sense to use an anonymous email address, since we ask for your name and billing address either way (have to stay compliant with sales taxes!)
https://www.geocod.io/code-and-coordinates/2025-01-13-how-ge...
We have more about our data sources here: https://www.geocod.io/data-sources/
I.e., send the email, IP, browser user agent, and perhaps a few other data points to a service, and then get back a "fraudulent" rating?
The other versions of reCAPTCHA show the annoying captchas, but v3 just monitors various signals and gives a score indicating the likelihood that it's a bot.
We use this to reduce spam in some parts of our app, and I think there's an opportunity to make a better version, but it'd be tough for it to be better enough that people would pay for it since Google's solution is decent and free.
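For anyone curious what that looks like server-side: the client sends a token along with the form, and your backend posts it to Google's siteverify endpoint, which returns a 0.0–1.0 score. The endpoint and response fields below are from reCAPTCHA's documented API; the 0.5 threshold is my own assumption and should be tuned per form:

```python
# Server-side check of a reCAPTCHA v3 token via Google's siteverify API.
# The 0.5 cutoff is an illustrative default, not a recommendation.
import json
import urllib.parse
import urllib.request

VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"

def recaptcha_score(secret_key: str, token: str) -> float:
    """Return the 0.0-1.0 score for a client-supplied v3 token
    (1.0 = very likely human, 0.0 = very likely bot)."""
    data = urllib.parse.urlencode(
        {"secret": secret_key, "response": token}
    ).encode()
    with urllib.request.urlopen(VERIFY_URL, data=data) as resp:
        result = json.load(resp)
    return result.get("score", 0.0) if result.get("success") else 0.0

def looks_human(score: float, threshold: float = 0.5) -> bool:
    """Decision rule: allow the action only if the score clears the bar."""
    return score >= threshold
```

Because it's just a score, you can also feed it into a broader abuse-scoring system as one signal among many instead of using it as a hard gate.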
American stores could prevent most shoplifting by banning people of a certain skin color from entering. The US doesn't let them do this, even though it would most definitely work. They're not allowed to do it for a very good reason, but those reasons seem to be lost to internet companies, who seemingly push so hard for diversity, equity and inclusion.
If your setup makes you look like a bot, that's YOUR problem. Stop doing things that make you look like a bot.
I get that you want privacy, but so do bots.
Or perhaps a really big whitelist of good ones? That would be extremely helpful!
I would probably not recommend implementing a whitelist for blocking purposes. But perhaps domains on a whitelist could get a slight scoring bump.
[1] https://github.com/disposable-email-domains/disposable-email... [2] https://github.com/disposable/disposable [3] https://github.com/unkn0w/disposable-email-domain-list
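Putting the two ideas together, a domain check like this is usually one weighted input into an overall score rather than a hard block. A minimal sketch, assuming a blocklist sourced from lists like those linked above (the sample domains and score weights here are illustrative only):

```python
# Score an email's domain against a disposable-domain blocklist plus
# a small trusted-domain bump. Domains and weights are made up for
# illustration; real lists come from the linked GitHub repos.

DISPOSABLE = {"mailinator.com", "guerrillamail.com", "10minutemail.com"}
TRUSTED = {"gmail.com", "outlook.com", "fastmail.com"}

def domain_risk_score(email: str) -> int:
    """Higher = riskier. One signal among many, never a verdict on its own."""
    domain = email.rsplit("@", 1)[-1].lower()
    if domain in DISPOSABLE:
        return 10   # strong abuse signal
    if domain in TRUSTED:
        return -1   # slight positive bump, not an automatic pass
    return 0        # unknown domains stay neutral
```

This matches the point above: the whitelist only nudges the score down slightly, so a trusted domain with other red flags can still trip the overall threshold.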
I use this to sign up for a service with a unique email that is basically my junk box, but the email is its own unique entry in my password manager.
(This isn’t protecting the SaaS vendor against abusive signups, it is a feature of the SaaS product to help its customers detect fraud committed against themselves within the SaaS product’s scope.)
I realized the machine learning project was a "solution in search of a problem," and left.