It's in our interest to safeguard and preserve rights that protect our speech.
In this context, 'our' includes people who (I fervently believe) use their speech to make the world a worse place. Because that's what protecting speech means.
It also means that I do not have to carry, parrot, amplify or provide a space for their speech.
You don't, if you're a private individual. But if you're a huge company providing a space that's become a de facto public square, then you do. If you don't, you're affecting freedom of speech in a meaningful way: you're handing everyone except a certain few individuals a giant megaphone and pushing up the background noise level.
To clarify: Twitter and FB are de facto public squares; this is fact.
Free speech includes the freedom to be heard, and handing a vuvuzela to everyone except a few prevents those few from having the freedom to be heard.
Of course, if you're such a company, you also have the freedom not to do business at all. Which would probably be better.
So the unintended consequence I expect is that censoring people for their political views will be the only strongly protected moderation action.
I'd say where the protection starts is that the board is yours. You can make it as open or restricted or curated or nonsensical as you wish. Other individuals can put up their own boards and they can display whatever they wish.
This is what the 1A protects.
In a reality where an opinion can be displayed on millions of boards, I suggest that one individual removing it from their own board is a fairly poor use of the term censorship.
As mentioned in the article, the Pruneyard Shopping Center v. Robins ruling rejected that logic. The government can regulate your conduct. The speech on your bulletin board is that of those who wrote it, not yours. Thus it's not a 1A issue.
The underlying issue is if social media is a platform (like a bulletin board) or a publisher (like a newspaper).
That's entirely intended. Look at what happened during COVID, the race riots after George Floyd's death, the elections in 2020 and 2024, etc. Social media platforms are de facto content curation websites and not "free speech zones" in the sense of a soapbox in a public park. It was especially egregious during COVID and the race riots, because the platforms made certain that you could not so much as criticize the political zeitgeist even when it was deserved.
The law really should make it clear that, illegal activity excluded, if you engage in any form of censorship you are not given Section 230 protections. Places like Twitter are left-wing content curation websites. It's no problem to have these, except for the fact that they are still given Section 230 protections. Under no circumstance should social media sites that engage in one-sided censorship be given any protection under the law for the content they curate. The same would go for a similarly censored right-wing content curation website. It just so happens those are extremely small in reach by comparison, and thus significantly less important.
As always, the squeaky wheel gets the grease. The chronically offended, terminally online losers tend to get what they want.
Say I run a forum for pet fish discussion. Would my removal of content derailing the discussion into a flat-earther one cost me my Section 230 protection? What if only logged-in members can see the content?
It seems odd that we don't let private property operate as it wants.
I disagree that governments should punish people who fail to amplify the messages of powerful political parties.
Yep, that means a bakery would be required by law to provide wedding cakes to people they don't like.
Anything less than this is straight up hypocrisy. If you think that a business should have the right to refuse business to anyone... this is what that is lol.
The government wants to regulate private social media because that's where the people are, and thus that's where the influence is.
Social media companies have speech? If they have speech then why aren't they liable for that speech?
> I don't think they should be able to have it both ways.
> Social media companies have speech? If they have speech then why aren't they liable for that speech?
The state governments trying to hold social media sites liable for users' harmful speech want to "have it both ways": hold social media sites liable only for certain speech, yet pretend that the proposed laws instituting that liability regulate conduct without regulating speech.
Most of the speech targeted by proposed third-party liability laws such as KOSA is protected by the First Amendment (LGBTQ information, eating disorder discussions, body image posts, etc.). Without some evidence of causal harm on a case-by-case basis, not even first-party liability (liability on the users who posted the speech on social media) would survive the First Amendment, and third-party liability would not apply in such cases. But assume that first-party liability would apply. What about third-party liability for social media sites' moderation of user speech?
The First Amendment can prohibit both direct and indirect restrictions on protected speech. Strict scrutiny [1] requires laws impacting constitutional rights to use the methods least restrictive of those rights. Moderation of user posts by social media sites is a First Amendment right, just as posting by users of social media sites is a First Amendment right. Holding the social media site liable for First-Amendment-protected user speech fails strict scrutiny, because a method less restrictive of protected speech exists: holding the respective users liable for posting that speech.
[1] https://en.wikipedia.org/wiki/Strict_scrutiny
And before anyone brings up republication liability on newspapers: defamation is unprotected speech. Defamation does not have First Amendment protection. But holding a party liable for defamation in the US requires that the party distributing the allegedly defamatory speech know that the speech exists. Newspapers tend to check even their op-eds before deciding whether to run them. But the humans managing social media sites don't necessarily know that any particular post exists, especially when the moderation system includes automatic processes. Section 230 declares that social media sites are not liable for defamation even if they know about it. Without Section 230 in place, social media sites would have an incentive to stop manually moderating posts, because you can't know about specific defamatory posts if you don't look at them.
I read an article [1] that cleared up my confusion about which speech is third-party and which speech is first-party. The grandparent comment by 2OEH8eoCRo0 said this:
> I don't think they should be able to have it both ways.
> Social media companies have speech? If they have speech then why aren't they liable for that speech?
There are "both ways" because there are two different kinds of speech at issue. The first kind of speech is the posts themselves, the content of which were written by users. The second kind of speech is what the social media website does with the post (boosts, downranks, deletes, marks with tags, bans the user of, etc.). (The second kind also includes whatever else someone representing the company writes, but that's not relevant to the confusion about Section 230.) Section 230 declares that the social media site cannot be held liable for the first kind of speech. Social media sites can still be held liable for the second kind of speech. Any harm caused by the very content of the post is actually harm caused by the first kind of speech, even if the social media site boosted the post; holding the social media company liable for boosting such a post would violate the social media company's First Amendment right to moderate. The right to moderate comes from the First Amendment, not from Section 230.
Suppose that I make a post about eating disorders on social media. The social media site boosts my post. Some kid sees it and later develops an eating disorder (correlation, with the question of causation to be decided in court). The parent sues the social media site and argues that the social media site should be liable because the social media site boosted the post.
Scenario 1. If Section 230 didn't exist, the social media company would have to go through the entire court process. The social media site argues: "First, social media websites have a First Amendment right to moderate. Second, our moderators could not be expected to foresee that a mere discussion of eating disorders would cause more harm than help. Third, the liability should fall on the user who posted the speech." The social media company loses a lot of money, but the court rules that the social media company was not liable for the post.
Scenario 2. Since Section 230 does exist, the social media company can say, "This lawsuit attempts to hold someone online liable for distributing speech made by someone else. Section 230 says that this liability will not exist." The court declares that the social media company cannot be held liable for the post and dismisses the case early.
Either way, the social media company would not be liable. But Section 230 is still necessary to prevent social media companies from being overwhelmed by having to litigate entire court cases. The parent could sue me. There's still no guarantee that I would be held liable for the specific example post I came up with. (And obviously, there can be no third-party liability on the social media website if the court in Scenario 1 decides that there would be no first-party liability on me.)
[1] https://www.techdirt.com/2024/03/01/we-cant-have-serious-dis...