He sounds genuinely distressed, and I feel for him, but I don't get why it causes him pain in this specific way. It's not your fault!
And for what it's worth, it's not really YouTube's "fault" either. Remember, they're victims here too. They didn't invite this, they didn't ask for this, and they're asking for ideas because they don't know themselves what to do.
Computers are sometimes likened to "magic": we cast a spell by writing code, and something technically beautiful happens. But where magic can interpret intent and act according to the caster's desire, computers can't, and there's no way to "Wrath of God" the spammers.
I hope this guy can find a way to live with this kind of thing, as I really don't see scammers getting wiped out in any conclusive sense, on YouTube or any platform. It sucks, it's unfair, but it's not realistically possible to deal with conclusively.
I somewhat agree with this, but I feel like there's some very low-hanging fruit that should be available.
When posting in comments on a video there should be:
- An avatar similarity check. If you have an avatar that is too similar to the channel of the video you're commenting on, your post automatically goes into a moderation queue (or just remove avatars from comments, except from the channel).
- A name similarity check. If you have a name that is too similar to the channel name, your post automatically goes into a moderation queue.
- A huge indication that a comment comes from the channel author. They have some indications of this now, but it should be very prominent, so it's *obvious* when comments don't come from the channel author.
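As a minimal sketch of how the name-similarity check above might work: compare a normalized commenter name against the channel name and flag near-matches for the moderation queue. The threshold and function names here are illustrative assumptions, not anything YouTube actually does; a real system would also normalize Unicode homoglyphs.

```python
from difflib import SequenceMatcher

def needs_moderation(commenter_name: str, channel_name: str,
                     threshold: float = 0.8) -> bool:
    """Flag a comment for the moderation queue when the commenter's
    name is suspiciously close to the channel name. The 0.8 threshold
    is a guess for illustration."""
    a = commenter_name.casefold().strip()
    b = channel_name.casefold().strip()
    if a == b:
        return True
    # Ratio of matching characters: 1.0 means identical strings.
    return SequenceMatcher(None, a, b).ratio() >= threshold

# Scammers often just append a stray dot or swap a look-alike character:
print(needs_moderation("Tolarian Community College.", "Tolarian Community College"))  # → True
```

Even this crude check catches the "channel name plus one character" impersonations, which seem to be the bulk of what's described in the video.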
None of these things are going to be trivial to implement for a company like YouTube, but this has been a problem for years at this point. These things could have been done by now.
I think the indicator that a comment comes from the author is pretty noticeable, but it could be better. It's a tough trade-off between UX and fighting abuse.
The reality is that there are criminal groups who spend great resources to scam people online and sometimes they figure out a clever way around enough mitigations so they can completely hose a platform. There’s not a lot a platform can do against this kind of attack except detection and reacting.
I see it as an international crime issue where certain countries are indifferent to Americans, or even their own citizens, being scammed. This would be a very different problem if Google could simply pass its info on the scammer over to a competent law enforcement agency. It would be a lot riskier for scammers, and they'd have to put a lot of effort into evading detection. Definitely a pipe dream, though.
> A huge indication that a comment comes from the channel author.
This pattern requires that users have seen the indicator before. While many users will have, the ones actually falling for this scam probably won't have, so they wouldn't know it exists.
e.g. a user who rarely checks comments, but then checks them one day and sees the scammer, might not realize there would be an indicator if it really were the video's author.
“Let’s see if we can host all the world’s video content and all its commentary only using automated tools!”
That was the idea behind the social media boom and it was a foolhardy one. The moderation problems that social media platforms are having now are 100% of their own invitation. They just managed to hold them off for a while.
If every time you had a party, one of your guests would be hit by lightning and die, you might feel guilty about hosting parties. Would you feel comfortable inviting your friends and family over?
I'm familiar with this personality, and part of his brand is being distressed about many other things. I think he's just expressing feelings for his audience or something. He's also apologized for lots of other things that aren't his fault: rude magic fans, sexists, racists, people who spend too much money, people who don't spend enough, etc.
He tends to spend a lot of time apologizing for things that aren’t his fault or that he has no control over. It’s not unique to him, and seems kind of common. I’m not sure what to call these over-empathizers.
He is upset that his posts are acting like bait, that his likeness is getting used to scam people who wanted nothing more than to connect with someone they admire.
For a lot of people, it would be impossible not to take this personally.
The bit where it gets complicated is that the scammers are using his name and likeness. The way these scams work is by convincing people that the youtuber in question wants to connect with them and then they steal money from them.
How would you feel if I went around on HN telling people that I'm Zetice and stole money from them? How would you feel if the way you heard about this was confused people messaging you and angrily demanding that you pay them back? You, Zetice, worked hard to be a person with integrity, and someone is squandering your accumulated goodwill, hurting the very people who look up to you, and you can't do anything about it.
I think I would feel terrible, and I would try to use my platform to shine a light on this problem. Worst case, a few of my followers wise up and develop resistance to this kind of scam. Best case, some manager at YouTube shovels more money into solving the problem.
I think the creator has clearly spent a lot of time and effort building his community, and he is watching members of that community get hurt while the scammers attack his reputation with those very people.
Probably because he's not a callous bastard.
Don't worry, I'm not saying you're one, either. I suspect you just don't have his responsibilities. Like, hundreds of people who somehow hold you up in great esteem because of the stuff you create. That's a heavy burden and most people who aren't assholes will take it seriously.
Guy just shot up in my esteem five hundred levels.
(yeah yeah, I know M:tG doesn't have levels... :P)
If you insist on seeing it through a lens of self-interested sociopathy: he could lose his most ardent fans & contributors and suffer damage to his brand and/or reputation via bad interactions with those scammers.
There is no way to automate this away, because the economics favor the scammer: if they hook even 1 out of 10,000 people, it immediately pays their bills. This is the big issue with email spam. We can fight and fight, but at the end of the day, there is essentially no monetary cost to sending out a stupidly absurd number of emails.
I would rather have email be free and open, but I see the issues that arise from bad actors in that environment who have no real cost to abusing it and have major potential for gain if they are successful.
That being said, the easiest answer, in my head, would be to make it so that sending email to people in an unsolicited fashion has some cost. Yet, even that is problematic, because I want some people to send me unsolicited email from time to time....
I'm persistently getting emails telling me I won a Yeti cooler. About daily...
I mark them as spam, and this kind of email still doesn't go away.
I have given up reporting these. The social media companies usually employ some automated review method which, upon reviewing the post, determines it's just fine.
I also went all the way to finding some prolific scammers in Canada, handing over their details to the FBI and state police... but two years later, their domains are still active in scam campaigns.
The big thing I think I would do in this situation is to find whoever is handling this problem the best and direct that community discussion over there, disabling comments on the video platform. If the video platform is unable to address the scammer bots issue, then that community traffic can just go somewhere else.
I don't know enough about the community discussion platforms/forums to say who's the best, but I've not seen this level of spam on self-hosted forums or even reddit.
This is definitely not the future I had anticipated as a kid on dialup internet in the 90s.
The number of scammers I get in my private messages on Instagram is also insane. And how clumsy they act.

Method A: Take a random picture that you commented on a while ago. Claim to be that person and say that you, the fan, have been chosen for special treatment. Yada yada, they want to send you a picture of themselves, but sigh, coincidence has it that it requires an iTunes gift card to work. Fucking really? The profile picture is that of a blonde woman, but all of the profile's friends are black, including the women.

Method B: A more popular porn actress has an Instagram account, and you comment on it, any comment. They copy pictures from that account and use AI to generate similar pictures of that person. Again you're addressed as "the fan". You get the rest. Those accounts have over 200k followers each. Maybe they're also part of that person's network monetization strategy. I know a guy who worked in SMS sex chat, pretending to be a woman and sexing male chatters up. Something like this may be happening here.
The internet, especially social networks, remain a dangerous place.
Sometimes I want to go along with it just to see where it ends and what they do to make it work, but then again who has the time for that?
- there isn't a lot of $$$ in squashing them, since it lowers engagement numbers (engagement inflated by bots is still engagement)
- the cost of hammering down on a real user is high in terms of PR, moreso than the cost of letting a bot continue running
- no one's making them and they have no real competitors in this space, so what does it matter? where are YouTube's customers gonna go?
Remember this in the days to come.
And this bit also:
People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.
https://www.washington.edu/news/2015/09/17/a-q-a-with-pedro-...
Yes, it requires manual labor, but Alphabet has the money to pay a group of people to combat this. They will in turn probably reduce the percentage they pay out to creators.

If I had the resources Alphabet does, that problem would be solved quickly. Kill the messenger.
Depending on the implementation of a pay-to-comment scenario, it could still be profitable for the scammers to pay. They would definitely be tracking CTR or whatever their equivalent is on scam campaigns.
In this case, the scammers are all using similar profile photos, so that part of the whack-a-mole seems like easy pickings. At least at creation time, they could fuzzy-compare the profile photo of the commenter against the channel's, and against other commenters'.
The scammers would definitely move on to something else, but it seems like the scammers are scoring a lot of really easy wins right now.
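The fuzzy photo comparison mentioned above is often done with a perceptual hash. Here's a toy sketch of the idea, assuming the avatar has already been decoded and downscaled to an 8x8 grayscale grid (a real pipeline would use an image library for that step; the 6-bit threshold is an illustrative guess):

```python
def average_hash(pixels: list[list[int]]) -> int:
    """Tiny perceptual hash of an 8x8 grayscale grid: each bit is 1
    if that pixel is brighter than the mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def looks_like_channel_avatar(commenter_px, channel_px, max_dist: int = 6) -> bool:
    # A small Hamming distance means the two avatars are near-duplicates,
    # even after recompression or a slight crop.
    return hamming(average_hash(commenter_px), average_hash(channel_px)) <= max_dist
```

The point of hashing instead of comparing pixels directly is that scammers re-upload slightly mangled copies of the creator's avatar, and a perceptual hash survives those small changes.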
I hate scammers and I love ruining their day, but this is one of many areas where I can't give more attention to it than the platform can.
Could grandfather in existing accounts, and have existing accounts give references for new accounts to continue getting "free" accounts. The cost being a loss of the ability to "mint" new accounts if any of your existing referenced accounts start spamming.
So you want to comment with your new account, you either need a "mint" from an existing account, or you need to fork over... let's say $50 to activate comments. (Amount could be whatever).
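A toy model of that minting scheme, with names and rules made up for illustration: existing accounts vouch for new ones, and a spam strike revokes the sponsor's minting ability, so vouching carries a real cost.

```python
class AccountRegistry:
    """Sketch of the referral/mint idea described above (hypothetical
    design, not any real platform's system)."""

    def __init__(self):
        self.sponsor_of = {}  # new account -> the account that vouched for it
        self.can_mint = {}    # account -> whether it may mint new accounts

    def add_founding_account(self, name):
        # Grandfathered-in accounts start with minting rights.
        self.can_mint[name] = True

    def mint(self, sponsor, new_account):
        if not self.can_mint.get(sponsor, False):
            raise PermissionError(f"{sponsor} may not mint new accounts")
        self.sponsor_of[new_account] = sponsor
        self.can_mint[new_account] = True

    def report_spam(self, account):
        # Punish the sponsor as well as the spammer, so handing out
        # "free" accounts to strangers is risky.
        self.can_mint[account] = False
        sponsor = self.sponsor_of.get(account)
        if sponsor is not None:
            self.can_mint[sponsor] = False
```

The interesting property is that a spam wave costs the scammer their whole referral chain, while a legitimate user who vouches carefully never notices the system exists.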
Viewing it through an economic lens the other end of the problem to attack is naïve users - if they're identified and don't see the comments the scammers stop making money.
Another creators-centric model is to disable comments on YouTube and only have comments on another platform that embeds the videos.
Your parents & grandparents should NOT be on social media. They're just going to get ripped off.
If you have elderly family who you care about, spend time helping prune services and educate them on how to spot a fake post.
Also, they could disallow external links in comments.
Bots will then just post a link to another YouTube video (internal) while pretending to be the creator. The linked channel will look identical to the creator's. The linked private video will say "You've been selected as a winner! Please check the description for details on how to redeem your gift." Heck, they could use AI to fake the creator's voice and play that as a voiceover.
Etc.
The fundamentals are hard to fight here. Automating these scams is low-effort and high-enough-reward for the scammers, and stopping them is plugging holes in a leaky boat.
The number could be scaled up. They could select out populations that seem 'with it' and bias it toward anyone who interacts with bot comments other than to report them. I'm sure there are other useful signals.
Without getting into the 'how to block bots' side of the problem, this is one way YouTube could help with the user education without individual creators having to make videos like this or recurring community posts. As noted in other comments, a purely technical solution to ban bots probably isn't going to work.
That's why I bought the domains LearnComputersFast.com, EasyComputerGuide.com, and BestComputerAdvice.com. Instead of pointing grandma towards sponsored content, dark patterns, and scammers wanting her credit card number, they would point to FOSS/OSHW/Linux content. Maybe that's not what people are really searching for, but maybe we should also live in a world where FOSS/OSHW/Linux is positioned as the mainstream.
Only problem is I don't know how to make websites.
Without having to watch the video: this YouTuber does a lot of reviews and content on the game. During the course of reviews, he gets a lot of free merchandise, which he then gives away freely to his subscribers.
The problem: there's a deluge of bots that near-instantly post comments and threads on YouTube redirecting unsuspecting users to scammer Telegram channels, Discords, etc. The common scam is "pay for shipping and you get free stuff" (this YouTuber pays the shipping himself when sending free product).

To compound this, it is clearly an automated, bot-driven attack on not only current videos but also all his historical videos. And YouTube/Google/Alphabet doesn't provide anywhere near the tools needed to counter these botstorms.
He pleads for anyone working at YouTube to get in touch and/or make tools available to shut down these scammers.

It's a well-done video explaining the problem, and really the frustration of creators who are fighting these bots and haven't given up.
I suppose they share that ad revenue with TCC.
Secondly, his audience is tabletop card gamers. He does set the scene as to why he's making a video. Most of his audience is not of a technical nature.
I also made a synopsis as a comment so you didn't have to watch the video, since some (including myself!) despise video-only content. But I do follow him on YT. And his problem applies to a LOT of areas on YT, Twitter, Facebook, Reddit, and elsewhere.