Given that this is a case about addiction, that feels like a shockingly bad thing to say in defense of your product. Can you imagine saying the same thing about oxycodone or cigarettes?
[0] https://www.npr.org/2026/03/25/nx-s1-5746125/meta-youtube-so...
I also hope the reasons are obvious.
It should be no surprise that children can be manipulated by highly intelligent adults.
For example see the glossary in https://en.wikipedia.org/wiki/Substance_dependence
Based on the fact that many people here disagree about fundamental things, as well as the fact that “liberal” is a highly overloaded term, I think it should be obvious that it’s not obvious what you mean.
It's one thing if an adult smokes and gambles, it's another thing if a child does. It seems to me that stuff you do in youth tends to stick around for life.
The problem is that this runs directly into the evidence that is mounting from GLP-1 agonists.
A lot more things are tied to the pathways we associate with "addiction" than we thought.
Not careful enough apparently: Nicotine isn't that addictive on its own, tobacco is.
I wish we'd delete that word from the English language.
To be sure. But still an obviously dumb thing for a CEO to say.
This just comes off as poorly obfuscated self-selection. You own a bunch of Meta, Alphabet, and other media stocks?
No, but unfortunately I can very easily imagine people saying it, just like the people who made loads of money from pushing those products did. Also just like the people who are profiting from the spread of gambling are saying now.
Why would someone choose to do a thing if it harms them? There are good arguments against laws that restrict personal freedoms, but this isn't one of them.
Though to be fair, I was mostly pointing out the fact that this was a pretty dumb thing to say for a case like this, especially in a jury trial.
A statement that's been brought up even by HN commenters.
Facebook is not a free market where you can choose. You're compelled to use it for several different reasons (and before some wiseass comments "you're not forced to. you can delete it" yes I know)
- They captured the early market. There was a small window of time in which to get users
- They ruthlessly bought up the competition
- They've deleted links to competitors
- They outright hijacked people's email addresses. It makes it hard to transfer users to another service or to email them outside the walled garden
- Even while they change privacy settings for users to make things more public, they wall off public pages. Your local neighborhood has a place where they post information? Even if everyone selects "Public" in the audience you can't see it without an account
Edit: Oh, and shadow profiles. And making it nigh-impossible to delete an account permanently
In other words, it's not the posts by the influencers, but techniques such as infinite scrolling and so on.
This is why Meta and Google could not rely on the user-generated-content safe harbor (Section 230) part of the law.
https://www.reddit.com/r/nosurf/comments/k3vzaa/how_to_break... - used the reddit link because the existence of r/nosurf is another example of people who want to stop but find it difficult.
-- Billionaires
Edit to include: I mean, this is coming the same day as the Supreme Court throwing out the piracy case against Cox Communications 9-0. Remember that this case originated with a $1 billion jury verdict against them! It was reversed by an appeals court five years later and completely invalidated today. Juries should not handle complex civil litigation, I'm sorry.
Suing Facebook for systematically behaving badly is one thing, if you can prove it and prove it harmed you.
Suing _everybody_ is one random person getting rich for… being mad at the world she was born into?
Whenever the McDonald's coffee case comes up, I always see caveats about how the actual case was a lot less sensational than the "woman sues McDonald's for coffee being too hot" headline implies.
I strongly disagree. I'm very familiar with the details of the actual case, and the Wikipedia article gives a good overview: https://en.wikipedia.org/wiki/Liebeck_v._McDonald%27s_Restau... . Yes, the plaintiff received horrific third degree burns when she spilled the coffee on herself, but lots of products can cause horrible harm if used incorrectly - people cut fingers off all the time with kitchen knives, for example.
I find the headline "Woman sues McDonald's for their coffee being too hot" a completely accurate description of what happened, with no hyperbole and no "ridiculousness" at all.
Nothing wrong with getting mad at the world when the world is complete and utter garbage to you.
You might be blaming the wrong people. Looking at a lot of those "shockingly large verdicts": they would have bankrupted the company and forced it to be dissolved and reformed as perhaps a less objectionable version of itself. Cool, shoulda done that. Sad we didn't.
Are we conflating matters of merit with matters of judgment, here?
It could be perhaps as simple as allowing third-party websites and apps for watching Youtube on your phone. And it's okay if this would be a premium paid feature, so there's no counter argument that "it costs them money to host videos".
This is not an entirely new idea either. Before Spotify became popular, people would integrate Last.FM into their media players to get music recommendation based on their listening history, and you could listen to music via YouTube directly on the last.fm website.
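A sliver of that interoperability still survives, incidentally: YouTube exposes a public Atom feed per channel, so a bare-bones "subscriptions only" client needs nothing but HTTP. A minimal sketch in TypeScript (Node 18+, where fetch is global); the feed endpoint is real and long-standing, but the channel ID is a placeholder, and a real client would use a proper XML parser rather than this regex scrape:

    // Self-curated client: pull the latest uploads straight from each
    // channel's public Atom feed. No API key, no recommendations.
    const CHANNELS = ["UC_placeholder_channel_id"]; // channels you chose to follow

    async function latestTitles(channelId: string): Promise<string[]> {
      const res = await fetch(
        `https://www.youtube.com/feeds/videos.xml?channel_id=${channelId}`,
      );
      const xml = await res.text();
      // Each <entry> is one upload; grab its <title>. (Sketch only: parse
      // the XML properly in anything beyond a demo.)
      return [...xml.matchAll(/<entry>[\s\S]*?<title>([^<]+)<\/title>/g)].map(
        (m) => m[1],
      );
    }

    for (const id of CHANNELS) {
      latestTitles(id).then((titles) => console.log(id, titles));
    }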
Cory Doctorow wrote a great article on it:
"Interoperability Can Save the Open Web" https://spectrum.ieee.org/doctorow-interoperability
> While the dominance of Internet platforms like Twitter, Facebook, Instagram, or Amazon is often taken for granted, Doctorow argues that these walled gardens are fenced in by legal structures, not feats of engineering. Doctorow proposes forcing interoperability—any given platform’s ability to interact with another—as a way to break down those walls and to make the Internet freer and more democratic.
Most notably, he retells how early Facebook used to siphon data from its competitor MySpace and act on users' behalf there (e.g. replying to MySpace messages via Facebook), and then, once the Zuck(er) was top dog, moved to make these basic interoperability actions illegal, to prevent anyone from doing to him what he did to others.
We need platforms to offer that interoperability and simply connect to these “marketplaces.” Take Shopify for example, sellers use that platform to list on Amazon, Google Shopping, TikTok shop, etc. We need open source alternatives to those where the sellers own the platform and these marketplaces are forced to be interoperable or left behind by those that are.
For Facebook, Instagram, Twitter, each person having their own website where they post and that post being pushed to these platforms is also another way to force interoperability on them or be left behind.
It’s a tall task, but achievable and it will happen given enough time.
Among social media, Mastodon (and anything Fediverse) has it the worst, obviously, but Telegram and WhatsApp are rife with spam and scams, and Twitter, back when it still had third-party apps, was rife with credential and token compromises (mostly used to shill cryptocurrencies).
As for the price tag reference - we've seen that with SMS. It used to be the case that sending SMS cost real money, something like 20 ct/message. It was prohibitively expensive to run SMS campaigns. But nowadays? It's effectively free at scale if you go the legit route and practically free if you manage to get someone's account at one of the tons of bulk SMS providers compromised. Apple's iMessage similarly makes bad actors pay a lot, because access to it is tied to a legitimate or stolen Apple product serial.
While the thing that gives you quick dopamine might win in the very short term, you can still step back and recognize when it's not satisfying in the long term and you're not even enjoying it that much.
And people aren't stupid. Junk food exists, yet lots of people choose to eat more wholesome food as the majority of their diet.
The problem with instagram or youtube is that you can't separate the good from the bad.
It's like if every time you went to store Y to buy milk, you would be exposed to highly manipulative marketing trying to get you to buy junk food. You would probably want to go to a different store instead.
What I'm suggesting is the possibilities of different stores, with different philosophies and standards, so that people can choose where they go. Corner stores (where almost everything is junk food) exist, yet people still choose to go to real supermarkets.
I realize “less addictive algo” is a different thing to pay for than removing ads - but it’s, if anything, an even harder sell - I think the layperson wouldn’t even acknowledge that they are vulnerable to being psychologically manipulated. They think they spend so much time on these apps because it’s so enjoyable.
From most parents’ point of view, paying a monthly bill for their children to have a less toxic experience on TikTok, or YouTube will be considered an extravagance instead of a responsible safety expense.
I still scrobble to Last.fm from Spotify (and other media players). I rarely use it for discovery anymore, but it's occasionally interesting to look at my historical listening trends.
However, I've always thought that it's pretty bizarre for Section 230 protections to apply when the social media company has extremely sophisticated algorithms that determine how much reach every user-generated piece of content gets. To me there's really no distinction between the "opinion" or "editorial" section of a traditional media publication and the algorithms which determine the reach of a piece of user-generated content on Twitter, YouTube, etc.
I’d be strongly in favor of interoperability laws to pry open the monopolies.
(One dynamic you do need to be careful about especially at first - interoperability also means IG can pull your friend graph from Snapchat, so it can also make it easier for big companies to smother smaller ones that are getting momentum based on their own social graph growth due to their USP. I don’t think this is insurmountable, just something to be careful of when implementing.)
Drop the algorithm altogether? I subscribe to channels for a reason.
And how does this prevent addictive algorithms which will win through social selection?
The winning third party algorithm will be the one that gives people the same rush the first party algorithms currently do, because people will use it for the same reasons; they get to see cute AI animals do crazy things forever.
That would make it very hard, nigh impossible, for a platform like YouTube or TikTok to exist as it does today, and would instead favor people self-curating mechanisms like RSS readers etc.
That isn't what would happen.
What would happen is that only the platforms which can afford legal teams - in other words, the big platforms - would host user posted content under strict arbitration only terms, and every other platform (including Hacker News, which uses an algorithmic feed) would simply not. Removing one of the cornerstones of free speech on the web in favor of regulation will only centralize the web more.
And you wouldn't see mass adoption of "self curating mechanisms" because most people aren't like Hacker News people and would find the premise of having to manually curate data feeds from every site they visit to be a tedious waste of their time.
I also think that platforms like Youtube and Tiktok shouldn't be illegal. I don't even think that personalized algorithms should be illegal - it's surprising that one has to point this out on a forum of programmers - but algorithms have no inherent moral dimension and the ability to use an algorithm to find and classify relevant content can be useful. The same algorithm that surfaces extremist content surfaces non-extremist content. The algorithm isn't the problem, rather the content and the policies of these platforms are the problem. And I don't think the solution to either is de facto making math illegal and free speech more difficult.
There is no solution for this kind of verdict beyond appeal, or changes to the law to rule such suits out, because it's not rooted in any logical or legal principle beyond the idea that people should not be responsible for their own actions (or their children's actions). But there's no limiting factor to that belief. You can't fix it with RSS or federation or making people select who they follow or chronological feeds. Those would just get blamed for "addiction" instead.
Anecdote, but it does seem like a lot of younger folks I speak with are exhausted by the dark patterns and dopamine extraction that top-k social media platforms create.
If agents/AI/bots inadvertently destroy the current incarnation of social media through noise, I think we'll be better for it.
This sounds like the original internet.
Before adtech took over.
Getting back to community is key.
To me this statement reads as both inaccurate and ignorant of human nature. Social media was actually better when it was about individual ego (Myspace/LiveJournal); as obnoxious as that can be, today everything is worse because of petty tribalism. Most conflicts on social media are inter-tribal, whether it’s racial, political, national, or feuding “stan” culture groups. The worst problems come from groups who organize on platforms like Discord or Kiwi Farms to direct harassment campaigns against perceived enemies (or random “lolcow” victims).
Simple observation of the present world and history will tell you that a platform focused on “collective improvement” will only appeal to a small subset of potential users. Of course such a platform would not be a bad thing. Places like this (such as The WELL) used to be common when the internet was dominated by academics, futurists, and tech enthusiasts. But average people are not interested in this kind of platform, and will not participate in good faith in such an environment.
> But average people are not interested in this kind of platform, and will not participate in good faith in such an environment.
I'm not ignorant of human nature and tribalistic tendencies. The undercurrent of my comment is of an optimistic hope (or cope) that we can move past competitive individual validation programming. I'm aware that it's due to our nature, but also aware that it's exploited by dark patterns and extraction at scale through software.
Do you have a mechanism for this in mind, incentives-wise? I can't see this making money.
We've tied our incentives to a structure which is not in alignment with continued survival. The real question is how can we incentivize ourselves to continue to exist?
The "the incentive structure says we should all destroy our brains" thing is just a small aspect of that.
The incentives would be those which have motivated people throughout history: to create something which benefits humanity.
They are going to be (and AI slop already is) so much worse. Once they get ads to work well / seem natural, the dark patterns will pop right back up and the money spigot will keep flowing upwards.
Look at the plaintiff in this case: it's a mentally unstable person who blames her life problems on social media. Never mind the fact that she had been diagnosed with mental illnesses as an early teen, or that an overwhelming majority of people who use social media don't develop eating disorders or other mental illnesses as a result of it (and in fact the incidence of say bulimia peaked 30 years ago in spite of almost universal social media adoption among young people). This is not at all like smoking where 15% of smokers will get lung cancer.
And due to some absurd legal reasoning the plaintiff was allowed to pseudonymously extort $3 million out of tech companies. Worst of all I see people on a technology forum applauding this out of some sort of resentment towards large companies!
Actively ignoring harm caused by your product. TV/radio has sold attention, but there were pretty strict rules on what you can/can't broadcast, and to whom (ignoring cable for the moment). It's the same for services: things that knowingly encourage damaging behaviours are liable for prosecution.
Wouldn't it be better if apps/websites targeting kids didn't use A/B testing to be more addictive?
Pokemon is addictive; computer games are addictive. It's whether they are knowingly causing harm, and/or avoiding attempts to stop that harm.
I don't have an answer to fix this whole mess, but it starts with our attitude towards addiction. We've built a system that rewards addiction in all sorts of places. Granted, every addiction is different, and I'm of the opinion that it's not (drug = bad), it's how you use it and react to it. We can control the latter, but we choose to ignore it because we're too busy with anything else. This is a tale as old as time...
Not enough to diffuse liability. 15 years ago when recommender algorithms were the new hotness, I saw every single group of students introduced to the idea immediately grasp the implication that the endgame would involve pandering to base instincts. If someone didn't understand this, it's because
> It is difficult to get a man to understand something, when his salary depends on his not understanding it. - Upton Sinclair
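The implication really is mechanical, and a toy simulation shows it. Sketch below, with made-up click probabilities and no resemblance to any platform's actual code: an epsilon-greedy recommender optimizing clicks alone converges on whichever arm pays out most, i.e. the base-instinct content.

    // Two kinds of content; the click probabilities are invented for the demo.
    type Arm = { name: string; clickProb: number; shows: number; clicks: number };

    const arms: Arm[] = [
      { name: "long-form educational", clickProb: 0.04, shows: 0, clicks: 0 },
      { name: "outrage bait", clickProb: 0.09, shows: 0, clicks: 0 },
    ];

    const EPSILON = 0.1; // explore 10% of the time, exploit the rest

    function pick(): Arm {
      if (Math.random() < EPSILON) {
        return arms[Math.floor(Math.random() * arms.length)];
      }
      // Exploit: serve the arm with the best observed click-through rate.
      return arms.reduce((a, b) =>
        a.clicks / (a.shows || 1) >= b.clicks / (b.shows || 1) ? a : b,
      );
    }

    for (let i = 0; i < 100_000; i++) {
      const arm = pick();
      arm.shows++;
      if (Math.random() < arm.clickProb) arm.clicks++;
    }

    // "Outrage bait" ends up with roughly 90%+ of impressions. Nobody chose
    // that outcome per se; the reward function did.
    for (const a of arms) {
      console.log(a.name, a.shows, (a.clicks / a.shows).toFixed(3));
    }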
I watched 80s horror movies when I was in elementary school and had nightmares for years. Should I sue now?
How about parents be held responsible for how they care for their kids or not? Maybe a culture that judged parents more strongly for how they let their kids spend their time would be an improvement.
0: https://en.wikipedia.org/wiki/Regulations_on_children's_tele...
Those ads didn't adjust themselves on a per-child basis to their exact interests.
Does any of that obviate the need for safe urban design, anti-CSAM and anti-molestation laws, or laws prohibiting the local dive from serving a cold one to my 11 year old? Will simple appeals for "parental responsibility" suffice as an argument for undoing those child safety systems we put in place, or will they be met with derisive dismissal? Why should your "solution" be treated any differently? In fact you offer none. Yours is the non-solution solution, the not-my-problem solution, the go-away solution. Not good enough on its own, sorry.
It's not; that's illegal as well. You cannot target kids with TV advertising.
I homeschool our youngest because the school system here sucks, based on the experiences of our older two. I'm always exhausted. I solved this (the "parents must be more involved") by watching my kid play Roblox and arguing with them about spending their money on gift cards instead of Lego, posters, or whatever that isn't so fleeting; I also don't let them have a cellphone. They turn 10 in June. We don't have TV or CATV; I have downloaded most of the old TV programs that kids liked, and grandma doesn't watch kids' shows, so he really doesn't have a perspective on what everyone else's viewing habits are. He watches YT on his Switch about fireworks, cars, and then also some of the idiots with too much money acting goofy, plus what I would call "vines compilations" of just noises and moving pictures; I don't get it, but it seems harmless. For the record, Pi-hole no longer blocks YouTube ads, so I was just told there are ads on the Switch now.
But anything beyond that, I can't watch, nor do I want to watch, their every interaction on a computer. I gotta cook, and the weather isn't always conducive to sending them outside to play. When I was growing up and was bored, there wasn't too much I could do about it. Today, my youngest has virtually anything on the planet just peeking around the corner. America's Funniest Home Videos and a blue square shooting red squares at orange squares? Yeah, OK.
===========
It's getting to the point where I think people who have really strong opinions on topics like this need to disclose any positions they might have that influence their opinion. My disclosure is that I have no positions in any company or entity.
Everyone in the US has been fed a lie that if we just work hard and don't interfere with the billionaire class, then someday we, too, can be rich like them. It's a bum steer, folks. For each one billionaire that "came up from the slums" or whatever, there's 100 that are billionaires because their families did some messed-up stuff, probably globally, sometime in the last 200 years. And offhand, knowing the stories of a bunch of billionaires: 10 in the US that were honestly self-made and didn't defraud, cheat, or skirt regulations to become that way seems almost a magnitude too high.
I bring the above two paragraphs up because if one has a position in Facebook, of course they're going to rail against Facebook losing 230 protection for any part of their operation: Instagram, the FB feed, whatever. The same goes if a person has a position in GOOG, or Apple, or Tesla. What's that Upton Sinclair quote that's been mentioned twice? If someone believes that, given luck and grit, they too could make a "facebook"-sized corp, but not if the government says "you can't addict children to sell ads", then I consider them a creep.
For the record: my oldest two are in their early 20s now.
A really good designer could make a highly engaging app, or an editor could write clickbait headlines, all without testing.
*Except for your time and mental health of course
It was really annoying turning on a show for 30 minutes then for the next week hearing about that new toy they just have to get. It was exhausting.
> Jurors were charged with determining whether the companies acted negligently in designing their products and failed to warn her of the dangers.
So if you do so while providing warnings and controls for people, that might make it OK in the eyes of the law?
But then again, I manage to get myself addicted to a video game usually once a winter for a few weeks, and don’t play games for the rest of the year. There’s really no solution to this, but I don’t want to live in a world where everyone is hopelessly addicted to shallow digital experiences.
At least legal experts are critical of the decision: '“I don’t think it should have ever gotten to a jury trial,” said Erwin Chemerinsky, dean of the UC Berkeley School of Law'
We had 10+ years of products like Facebook, Twitter, YouTube, hell even LinkedIn, with a basic content model of "you build your own graph of people who you pull content from", and their job was to show it to you and put ads in there to fund the whole enterprise. If I decided to follow harmful content? That was a pact between me and the content creator, and YouTube was nothing more than a pipe the content flowed through. They were able to build multi-billion dollar businesses off of this. That's really important: this was enormously profitable. But then the problem happened that people's graphs weren't interesting enough, and sometimes they'd go on the thing and there were no new posts from people they followed, and this was leaving money on the table. So they took care of that problem by handing over control of the feed to the reward function.
More accurately, especially for Meta products: they completely took control away from you. You didn't even have the option to retain the old, chronological social graph feed anymore. And it was ludicrously profitable. So now the laws of capitalism dictate that everyone else has to follow suit. I now have extensions on my browser for Instagram and YouTube to disable content from anything I don't follow - because I still find these apps useful for that one original purpose they had when they blew up and became mainstream. Why are these browser extensions? Why can't I choose to not see this stuff in their apps? That's the major regulation hole that led to this lawsuit, imo.
It's the same thing you see with people blaming smartphones for brainrot. We've had 15 to 20 years of smartphones with more or less the same capabilities as they have today and for the vast majority of that time my phone didn't make books less interesting or make me struggle to do chores or manage my time. For a full decade or more I saw my phone as a net positive in my life, was proud to work for Twitter and generally saw technology like the Louis CK bit about the miracle of using a smartphone connected to WiFI on an airplane. But in the last five years or so, things have noticeably and increasingly gone to shit. Brainrot is a thing. All my real life friends who are the opposite of terminally online or technical are talking about it. I don't use TikTok but it seems like that is absolutely annihilating attention spans. The topic of conversation over drinks is how we've collectively self-diagnosed with ADHD and struggle with all kinds of executive function.. but also are old enough to remember a time when none of this existed. Complete normies are reading Dopamine Nation and listening to Andrew Huberman trying to free themselves.
I don't know what the exact solution is, but there's at least a simpler time we can point to when we all had smartphones and we were all connected via platforms and we all posted and consumed stupid pictures of each other and it wasn't.... _this_.
I'd add one additional layer: it's not just that the algorithm picks what you see, it's that the entire UX is built around keeping you in the loop. On YouTube Kids, even with autoplay off, the end-of-episode screen shows a grid of recommended videos. My toddler doesn't care about "the algorithm" in any abstract sense. He just sees more fire truck videos and wants the next one. The transition out of the app is designed to fail.
Your point about smartphones not being the problem is key. I was at Google during the era you're describing, when the phone was a net positive. The hardware didn't change. The business model did.
I don't recall a lot of complaints about Facebook or Instagram when it was actually your friends' content. But now it's force-feeding everybody their own "guilty pleasure" viewing material 24 hours a day. It's fucking sick.
ublock origin for blocking them on desktop. If you're on an iphone... uninstall youtube?
my quality of life has increased substantially... although sometimes the app bugs out and shorts still make it on my home page. I spend like 10 minutes scrolling through shorts and get a weird shock "how the fuck did I end up here?", restart the app and boom shorts gone again.
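For anyone curious what the desktop-extension route amounts to, the core is tiny. A rough content-script sketch; the element names below are assumptions about YouTube's markup at one point in time, and they rename these freely, which is exactly why such blockers need constant upkeep:

    // Hide Shorts modules outright instead of asking to "show fewer".
    const SHORTS_SELECTORS = [
      "ytd-reel-shelf-renderer", // assumed: Shorts shelf on watch pages
      "ytd-rich-shelf-renderer", // assumed: Shorts shelf on the home feed
      'a[title="Shorts"]', // assumed: the sidebar link
    ];

    function hideShorts(): void {
      for (const sel of SHORTS_SELECTORS) {
        document.querySelectorAll<HTMLElement>(sel).forEach((el) => {
          el.style.display = "none";
        });
      }
    }

    // YouTube is a single-page app, so re-apply on every DOM change.
    new MutationObserver(hideShorts).observe(document.documentElement, {
      childList: true,
      subtree: true,
    });
    hideShorts();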
The guy who made the drugs is guilty. The guy who sold the drugs to kids is guilty. But parents who failed to warn kids about drugs and to oversee them properly are also guilty...
Now if we're in a discussion around the cartels, plenty of people do bring up (and there's also those that get annoyed by it) that the drug users are actually the ones funding the cartels via their drug use.
Along these lines, I think another fun comparison might be opioid use and Purdue.
Maybe you don't do this. Certainly I don't. But looking around, it's much less rosy, and... let's say in blue-collar families it's too common to drug kids with screens so parents have off time. Heck, some are even proud of how modern they are as parents. Any good advice is successfully ignored, and ideas of spending some proper time with kids instead are skillfully avoided. People got lazy and generally expect miracles from life without putting in any miracle-worthy effort.
Companies just maximize their profits as far as the law allows (and then some), and expecting nice moral behavior by default is dangerously naive and never true.
But sure, "Parents often give too little fucks for long term welfare of their children", that's definitely it. Parents just hate their kids! What a useful perspective you've brought to the discussion.
It's also funny how they "discovered" they were influencing elections after they influenced the 2008 and 2012 elections.
How did the author not know this when she sought out and joined the company in, like, 2013?
The parts about playing Settlers of Catan with Zuckerberg were funny. I wonder what his side of the story is and if people were really letting him win.
- She was trying to work to change things
- She was pregnant and otherwise had young children and needed the money
Besides a general 'don't be too good' I'm really not sure what companies should do about it. It just seems like it'll lead to some judges allowing rulings against companies they don't like.
Television's goal was always viewer retention as well, they were just never able to target as well as you can on the internet.
The subsequent effects - namely being easier to consume and more addictive - eventually resulted in legislation catching up, and restrictions on what Juul could do. It being "too good" of a product parallels what we're seeing in social media seven years later.
Like most [all?] public health problems, we see individualization of responsibility touted as a solution. If individualization worked, it would have already succeeded. Nothing prevents individualization except its failure of efficacy.
What does work is systems-level thinking and considering it an epidemiological problem rather than a problem of responsibility. Responsibility didn't work with the AIDS crisis, it didn't work on Juul, and it's not going to work on social media.
It is ripe for public health strategies. The biggest impediment to this is people who mistakenly believe that negative effects represent a personal moral failure.
Well, a drug addict wants to consume his drug, because the drug is good at keeping abstinence syndrome at bay, and the tolerance probably hasn't built up to the level where the addict can't feel its "positive" effects anymore.
The user feels an impulse to consume the content, but whether they want it we can know only by questioning them. They can lie consciously or unconsciously, but there are no better ways to measure a desire to consume it. When talking about doom scrolling I never met a person who said they want to do it, but there are people who do it nevertheless.
> This just seems ripe for selective enforcement if not codified in law.
I agree. I'm not sure how they define "addiction" and how they measure "addictiveness". It is the most important detail in this story.
Unless you hurt children; then it's mostly legal and a slap on the wrist.
disassemble the intentionally addictive properties they built into their platforms to maximise engagement and revenue at the cost of the mental health of their users.
Broadly speaking, Section 230 differentiates between publishers and platforms. A platform is like Geocities (back in the day), where the platform provider isn't liable for the content as long as they satisfy certain requirements about having processes for taking down content when required. A bit like the Cox decision today: you're broadly not responsible for the actions of people using your service unless your service is explicitly designed for such things.
A publisher (in the Section 230 sense) is like any media outlet. The publisher is liable for their content but they can say what they want, basically. It's why publishers tend to have strict processes around not making defamatory or false statements, etc.
I believe that any site that uses an algorithmic news feed is, legally speaking, a publisher acting like a platform.
Example: let's just say that you, as Twitter, FB, IG or Youtube were suddenly pro-Russian in the Ukraine conflict. You change your algorithm to surface and distribute pro-Russian content and suppress pro-Ukraine content. Or you're pro-Ukrainian and you do the reverse.
How is this different from being a publisher? IMHO it isn't. You've designed your algorithm knowingly to produce a certain result.
I believe that all these platforms will end up being treated like publishers for this reason.
So, with today's ruling about platforms creating addiction, it's (IMHO) no different to surfacing content. You are choosing content to produce a certain outcome. Intentionally getting someone addicted is functionally no different to changing their views on something.
I actually blame Google for all this because they very successfully sold the idea that "the algorithm" ranks search results like it's some neutral black box but every behavior by an algorithm represents a choice made by humans who created that algorithm.
> (c) Protection for “Good Samaritan” blocking and screening of offensive material
> (2) Civil liability
> (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
"in good faith" is key here. Here's another opinion [2]:
> One argument advanced by those who want to limit immunity for platforms is that these algorithms are a form of content creation, and should therefore be outside the scope of Section 230 immunity. Under this theory, social media companies could potentially be held liable for harmful consequences related to content otherwise created by a third party.
So far the Supreme Court has sidestepped this issue despite cases making it to the Appeals Court. Until the Supreme Court addresses, none of us can say with any certainty what is and isn't protected.
[1]: https://www.law.cornell.edu/uscode/text/47/230
[2]: https://www.naag.org/attorney-general-journal/the-future-of-...
> (c) Protection for “Good Samaritan” blocking and screening of offensive material
> (1) Treatment of publisher or speaker
> No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
This is a protection for being a platform for third-party (including user-generated) content.
Some more discussion on this distinction [2]:
> Section 230’s legal protections were created to encourage the innovation of the internet by preventing an influx of lawsuits for user content.
It goes on to talk about publishers, distributors and Internet Service Providers, the last of which I characterize as "platforms".
By the way, my view here isn't a fringe view [3]:
> One argument advanced by those who want to limit immunity for platforms is that these algorithms are a form of content creation, and should therefore be outside the scope of Section 230 immunity. Under this theory, social media companies could potentially be held liable for harmful consequences related to content otherwise created by a third party.
This is exactly my view.
[1]: https://www.law.cornell.edu/uscode/text/47/230
[2]: https://bipartisanpolicy.org/article/section-230-online-plat...
[3]: https://www.naag.org/attorney-general-journal/the-future-of-...
I don't have time right now to provide a full/quality answer with more examples; you can do a bit of searching online to learn more.
Also from personal experience (from family and friends): when their kids come over, they have TikTok on their phone and Roblox on their laptop.
Jury finds Meta liable in case over child sexual exploitation on its platforms
Even if they do what you're saying, lots of people who've used any Meta property in the last 15 years have a potentially viable case, and no future work can swat those away.
The exact same can happen to Big Tech. The goal is to get them to stop the bad behavior now.
YouTube allows you to "show fewer shorts" but what if you don't want them popping up at all?
AI Slop is the best thing to happen to these platforms - because it will lower trust and engagement as people (hopefully) become tired of inauthenticity. Rage bait is potent when the event in the video _actually_ happened, but when you realize it was AI generated, the manipulation feels even more obvious (though it was always there).
These platforms should also allow users to understand how the algorithm has categorized them, and be able to configure it. YouTube, Instagram, et al. would be safer places for viewers if they allowed users to tell them what they want to be exposed to, and what they don't. Big tech is dodgy about this currently, because the more control the user has the lower the engagement (good for the user, bad for profit).
I'm a former Google engineer, now running a children's mental health startup (Emora Health), and my toddler is already on YouTube Kids.
So this verdict hits on every axis for me. I wrote up my full take here [1], but the short version: I don't think the "Big Tobacco moment" framing that the NYT is pushing actually holds up.
Litigation is negative reinforcement, and if you've ever tried telling a toddler "no", you know how well that works long-term. The families in this case absolutely deserve to be heard. The harm is real. But courts can only punish; they can't redesign a recommendation algorithm.
The change has to come from people who understand these systems building better ones.
Haidt has been saying for years what this verdict just confirmed. The evidence was never the bottleneck. The will to design differently was.
I will give you a simple experiment: try blocking Blippi from YouTube Kids. Man, it's crazy. Even if you block the main Blippi and Moonbug channels, hundreds of channels have Blippi content cross-posted, and it keeps popping up. I know it's easy to build a Blippi block feature using AI that blocks across channels.
That's the kind of solution we need. I know we have the tools; we just need intent and purpose.
[1] https://www.emorahealth.com/clinical-insights/social-media-v...
Parent here. Acting like it’s impossible and you have no choice but to let them have their way is a cop-out. Telling kids “no” and enforcing boundaries is part of the job.
> my toddler is already on YouTube Kids.
> I will give you a simple experiment. Try blocking Blippi from YouTube Kids, man, it's crazy, even if you block the main Blippi and Moonbug channels. 100s of channels have Blippi content cross-posted
I have a better solution that I use: If I can’t stay involved enough to monitor what the kids are choosing to watch, I don’t let them loose watching YouTube. They get to go play outside or with LEGOs or do puzzles or any of the other countless activities that are fun for kids.
This isn't a problem that is solved by advanced filtering. Blocking anything related to Blippi (whoever that is) isn't going to solve the problems of letting your kids loose on YouTube. They're going to find another cartoon you dislike. The solution is to parent, set boundaries, enforce them, and find other activities for them.
I believe you're conflating two things: parenting discipline and product design. The question isn't whether I can physically take the TV away. I do.
When I say "block Blippi," I don't mean I dislike the content. I mean I'm done with screen time and the UX makes that transition harder than it needs to be. Autoplay is off, but the end-of-episode screen still shows a grid of next videos. Of course he wants the next one.
So I block Blippi. Except Blippi's main channel cross-posts through Moonbug into hundreds of other channels. It's a hydra.
YouTube already does content fingerprinting for music industry DRM. The technology to let a parent say "block this creator everywhere, and let me turn it back on when I choose" exists today. They just haven't built it for parents. Because the system isn't designed for children. It's designed for engagement.
So yes, parental responsibility matters. But "just don't use it" isn't a scalable answer when the product is specifically engineered to undermine your choices. That's the design problem I'm talking about.
My issue is with YouTube's UX. I watch an episode with my son, we're singing along, he's excited about putting out the fire. Episode ends. Even with autoplay off, the next recommended videos show up — and of course he wants to watch the next one.
So I block Blippi. Except Blippi's main channel cross-posts into Moonbug, which cross-posts into hundreds of other channels. It's like trying to kill a hydra. Here's what gets me: YouTube already does content fingerprinting for DRM enforcement in the music industry.
The technology to let me block Blippi across every channel, and turn it back on when I want, exists. They just haven't built it for parents. My point is that we could build systems designed for children if we had the intent.
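To be concrete about how small the missing feature is, here's a toy sketch of that parent-side control: a per-child blocklist applied to metadata the platform already has, with a toggle to turn it back on. The types, matching rule, and sample data are illustrative assumptions, not YouTube's actual schema:

    interface Video { id: string; title: string; channel: string; }
    interface ParentalBlock { pattern: RegExp; enabled: boolean; }

    const blocks: ParentalBlock[] = [
      // The parent flips `enabled` off when they decide Blippi is OK again.
      { pattern: /blippi/i, enabled: true },
    ];

    // A video survives only if no enabled block matches its title or channel,
    // which is what catches cross-posts on channels you've never heard of.
    function allowed(v: Video): boolean {
      return !blocks.some(
        (b) => b.enabled && (b.pattern.test(v.title) || b.pattern.test(v.channel)),
      );
    }

    const feed: Video[] = [
      { id: "a", title: "Blippi Visits the Fire Station", channel: "Moonbug Kids" },
      { id: "b", title: "How Excavators Work", channel: "Science Club" },
    ];
    console.log(feed.filter(allowed)); // only the excavator video remains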
Kind of like how tobacco companies now pay out billions every year, and it's a major source of funding for states.
Hopefully this means more health services become available, but it will probably just serve as an ongoing tax.
The similar case about child predators was brought by NM’s attorney general.
How is it that these days social media can circumvent all these safeguards and then somehow blame the parents if a kid is watching something inappropriate on an app designed for kids (like YouTube kids)?
The issue is that politicians are beholden to social media companies because they can literally get them or their opponent elected. After reading Careless People, I was amazed at how leaders of so many countries wanted to meet Zuck because he wields so much power.
I really hope this ruling is the beginning of the end of the free rein they've had.
In a lot of countries there are specific laws banning the deliberate targeting of advertising to children (and in contexts where you would reach children, it's heavily regulated), but for over a decade Meta would allow you to target ages in the 13-to-18 range.
That's to say nothing of the scams and deepfake celebrity ads they let run. Imagine if a deepfake ad of Warren Buffet promoting an investment opportunity ran on TV, the network would get sued into oblivion. On Meta though, there's no repercussions.
> When presented with internal research and documents showing that Meta knew young children were in fact using its platforms, Zuckerberg said he "always wished" for faster progress to identify users under 13. He insisted the company had reached the "right place over time".
Soon there will be government IDs required to use social media sites because parents can't take phones away from their kids.
Maybe the social media companies could do more to combat all these. They certainly have a level of profit compared to what they provide to the average person that makes people squirm.
But does anyone believe for a second that YouTube is responsible for a person's internet / video watching addiction? It's like saying cable television is responsible for people who binge watch TV.
It's hard to square this circle while sports gambling apps and Polymarket / Kalshi are tearing through the landscape right now with no real pushback
Yes? Is there an algorithm or not?
This stop-bot thing can be annoying at times.
It ends up feeling much closer to “what’s interesting in my corner of the web right now?” and much less like a system trying to keep you trapped inside it.
Small scope, obviously, but I think more social tools should feel like utilities, not casinos.
Trial courts will decide pretty much anything. Then the case gets appealed over whether the trial court correctly interpreted things you probably perceive as uncomplicated, like the 1st Amendment.
> It comes on the heels of a Delaware court decision clearing Meta’s insurers of responsibility for damages incurred from “several thousand lawsuits regarding the harm its platforms allegedly cause children” — a ruling that could leave it and other tech titans on the hook for untold future millions.
I've argued in the past that the right way to create the change in corporations we want is to change the laws, and people have made valid points that Congress has basically given up on doing that. But even so, civil cases with fines don't seem like the way to make lasting change. In the analogues to the tobacco fights, there are LAWS that regulate tobacco company behaviors as a result. The civil case here isn't going to result in any law. So what are companies supposed to do? Tiptoe around some ill-defined social boundary and hope they don't get sued? Because apparently the defense of "no, I didn't target that person and I didn't break any laws" is still going to get you fined.

What happens when a company from a conservative location gets sued in a liberal location for causing a social ill? Oh, we're cool with that. But what if a company from a liberal location gets sued in a conservative location for the same thing? Oh, maybe we don't like that as much.

I'm taking the libertarian side here. I know plenty of people who don't watch TV, don't use Facebook, and I know plenty of people that recognized they were spending too much time on digital platforms and decided to quit or cut back. So a healthy person can self-regulate on these apps; I've seen it and done it. I'm just not sure how much responsibility Meta and YouTube bear in my mind. If they're getting fined $3M plus some TBD punitive amount, are we saying that this 20-year-old person lost out on earning that much money in their life, or would need to spend $3M on therapy, because of Meta or YouTube? It feels a little steep for a fine for one person.
If Meta and YouTube really were/are making addictive products, wouldn’t a lot more people be harmed? Shouldn’t this be a class action suit where anyone with mental trauma or depression be included?
I don’t know the details of the case, but I highly doubt that this one plaintiff was targeted specifically, and I doubt their case is that unique. I read tons of news articles about cyber bullying, depression, suicide attempts, and tech addiction. Does every one get to sue Meta and YouTube for $3M now?
If I sell you gizmo, and I know, or should know, that using the gizmo could seriously harm you, and I don't tell you or do anything about it, I am liable for damages you incur.
Should Apple or Samsung be held liable for making the phone that the plaintiff probably used to use these apps? How much responsibility do they bear?
Further, Facebook/Instagram and YouTube are free products from the perspective of the plaintiff. These corporations didn’t sell anything to the plaintiff, so can they even be held liable? They did sell the plaintiff’s data to advertisers, which I think you might be able to hold them responsible if they misused that data, but this isn’t what the case was about.
I’m not rooting for depression or suicidal thoughts or anything, but this doesn’t feel like the right direction we need to be moving in as society. We can’t simultaneously argue for free speech and freedom of choice and also claim that we aren’t capable of making our own choices to live our lives responsibly.
I agree that a big part of this is educating children about these hazards, but that also doesn't mean we should allow these companies to data science the shit out of our attention and will power. Many adults have concerning relationships with social media too -- exposure, pressure, and manipulation are key ingredients that are difficult for anyone to deal with.
Cocaine is illegal because it is addictive.
But it's not absolute. Some drugs are illegal for adults as well, for example. Why? Because they're too addicting.
So are Instagram and Youtube just nicotine, or are they heroin?
Cigarettes directly cause physical harm and even death. Social media can sometimes, under certain circumstances, depending on who exactly you're interacting with on social media, indirectly contribute to emotional harm.
Cigarettes are also physically addictive. Your body actually becomes dependent on them and will throw a fit if you try to stop using them. Social media is only "addictive" in the loose sense that all fun, mentally engaging activities are.
I'm not saying social media is fine for kids and we shouldn't do anything to reduce their use of it (TV and video games can be equally unhealthy IMO). I'm not even necessarily against legislation on the subject. But there's a huge difference between fining a company for breaking a law, and fining them for making a perfectly legal product "too fun" because you let your kids spend all their time on it and that turned out to be unhealthy.
This type of civil litigation where the courts effectively create and enforce ex post facto laws based on their opinion about whether perfectly reasonable, 100% legal actions indirectly contribute to bad outcomes is not a great aspect of our legal system IMO.
The best example of this is heroin, which has both a severe physical and mental addiction component, and it's the mental addiction that makes relapse so common.
Mental addictions rewire the brain's chemistry, causing the user to seek and only find joy in the substance. This is a better comparison for social media (albeit not as destructive and instantaneously harmful as narcotics)
There is a difference between creating a food that tastes good and creating a food that tastes good but instantly makes you want to eat the whole bag.
Although to some extent they're correlated, sometimes the things that are most enjoyable you wouldn't describe as "addicting" and vice-versa.
Eating a nice full meal is more enjoyable than eating doritos on your couch, but you wouldn't describe it as addicting.
If anything, I find my experience of YouTube today to be less enjoyable than in the past.
Well, that's laughable.
The result, in these corner cases where eating people is profitable? Shelob.
It's a spectrum of risk between the user and the creator. My opinion is that there's enough scientific evidence to show that social media has a negative impact on kids and teenagers, as their brains are still developing. I think a social media ban for kids is a good thing (similar to a driver's license or a drinking age).
I feel, and it's obvious to most, that the only way a society can truly reform is by a shared consensus over its value system. This verdict could be thrown out by the appellate court (I feel it will be), so this is not the culmination of values resulting in what many hoped for.
It does not seem to me that this is a country where consensus on what, if anything, to put above capital will come about any time soon and with capital it's always been ask for forgiveness rather than permission.
The only time true justice happens is when the harm becomes obvious beyond the shadow of a doubt (e.g. smoking), so obvious that even a monkey can tell the game is up.
Perhaps only when we can one day look into people's brains with the clarity of glass and the precision of electrons will the time come when we all recognize how bad an idea social media was.
Parenting is rough! Good for you, for sticking to your guns.
> The plaintiff, Kaley, started using YouTube at age 6 and Instagram at 11.
Who was at the wheel here? If we called up all of Kaley's teachers from this time frame and asked them "were Kaley's parents checked out?", what do you think the answer would be? For as bad as education has gotten, I sympathize with teachers, because parents have gotten FAR worse.
It's not like we don't know these things about people's behavior on devices... maybe it's something that should be talked about in school, along with how credit works and how to file taxes.
Do we need to tell parents "it's 10am, have your kids touched grass yet?"... "It's 10pm did you take the tablet and phone away so they go the fuck to sleep?" --
"Touch grass" as a meme/slang is literally people poking fun at the constantly online. It's "hazing" and "bullying" to drive social correction.
There are plenty of things in life that can be addicting; drugs, sex, money, power, adrenaline, entertainment, technology... The list goes on. If we remove everything addicting from life, you better believe something else will rise up to take its place.
The solution therefore isn't to remove everything addicting from life, but rather to raise everyone with the forethought to know what might be addictive, the self-awareness to realize when you are addicted to something, and the self-control (and support systems if and when necessary) to stop.
I don't know what the answer is, but it feels wrong to lean _entirely_ on personal responsibility. We live in a world we simply did not evolve to live in. People literally make a good living by engineering and exploiting our weaknesses for profit.
> raise everyone with the forethought to know what might be addictive, the self-awareness to realize when you are addicted to something, and the self-control (and support systems if and when necessary) to stop
If only it were that easy. If you've ever known somebody who struggles with a serious addiction you'll know that even when they know it's destroying their life they still can't stop.
They weren't just consciously creating an attractive platform, they were consciously creating a manipulative platform.
The question we should be asking: are these technologies a net-positive to society?
It seems though, increasingly, that the ability to avoid addiction is less about pulling one up by one’s own bootstraps, and in many ways determined more by genetics. That is to say, what might have been possible for you is much harder for others.
Look no further than GLP-1. People who have struggled for years - decades - with overeating are almost immediately able to cut back on addictive eating. It’s not that they suddenly discovered willpower. It’s a biochemical effect.
It's no wonder, then, that kids are more susceptible to addiction-forming behaviors. Their minds are pliable and teachable.
Why would we not legislate things that take advantage of that?
It is not, like, a moral thing to become addicted to something. And the ability to pull yourself out of it is determined, whether you are conscious of it or not, by your broader circumstances and by the same predispositions that brought you there in the first place. At the end of the day we are all fucked up animals reeling from the ongoing consequences of prematurational helplessness..
We should feel together in our problems like this, not distinguish ourselves by how we might individually overcome them. You are not "better" finding yourself standing over a beggar addict, you are lucky, never forget that. If for no other reason that it's not a sustainable world view otherwise, it leads to insecurity, anger, and relapse.
The dark truth of the world is that everyone is doing the best they can. How could they not? Why would they not? What is this thing that separates you from the addict or murderer? Unless you have maybe some spiritual convictions, I can't imagine what it is..
Just really, I know you had a powerful personal journey, but don't let it establish to you that we are all fundamentally alone, because we are not, and it's good to help people who maybe need more help.
On the other, it's very different when companies explicitly design their products to be as addictive as possible.
We've been through this with Big Tobacco already. Nicotine and other tobacco substances are addictive on their own, but tobacco companies were prosecuted for deliberately making cigarettes as addictive as possible, besides also marketing to children. The parallels with Big Tech and social media are undeniable.