For most of its history, Reddit didn't have an algorithm that promoted stories beyond upvotes and time since posting; that might even still be the case. Despite this, we have still seen a steady trend toward extreme views on the platform. To be fair, it has the reputation (at least in some circles) of being the most redeemable of the major social media platforms, probably thanks to the simplicity of its algorithm. Unfortunately, that's not saying much; it's a low bar to clear. What explains the polarization of Reddit in the absence of a bouncer amplifying extremism?
I think there is a significant percentage of users that do not initiate extreme content but participate in amplifying it. They may even find it problematic, but they really don't like the extreme views they hear on the other side. Or maybe it is the content they came to see out of morbid curiosity, something I am guilty of sometimes. The bar is so crowded because people find it preferable to the empty one down the street that has the expectation that people behave respectfully.
Incredible presentation, but I think the awareness we need to spread is a movement away from social media in general. As a social outlet it is generally incompatible with healthy social functioning and individual wellbeing. Face-to-face interaction has inherent guardrails for avoiding these problems and supporting the kind of social experience that we are really looking for.
Reddit is heavily botted, including by capital interests, and has been for a long time. This includes basic up/down vote activity.
> I think there is a significant percentage of users that do not initiate extreme content but participate in amplifying it.
Yes, it's probably initiated by bots, and then real users are easily persuaded to follow the manufactured herd.
These issues are not exclusive to reddit, either.
Is that why /r/all is consistently anti-capitalism and anti-business?
There were three conditions that were working, and they were removed very, very quickly:
1) It was a web application only, which enforced a more conscientious style of interaction.
2) It skewed older. Compared to other sites like iFunny or Instagram, the age profile was closer to 30 than 12.
3) The upvote/downvote mechanic was used to upvote relevant content, not something you agreed with, and to downvote overused jokes, posts lacking nuance, etc.
But in 2023 Reddit killed third-party APIs and went all-in on the official app.
The average age plummeted, app usage is more casual than laptop usage, the length of posts went full brainrot, and lastly there was no enforcement to teach people what upvotes meant. So it became thumbs up or down, and the jokes went from heavily downvoted to always the top comment.
150 million users in 6 months is the death of any conversation, and Reddit did it on purpose in the run-up to its IPO.
I'd be segmenting social media by era for any related analysis, as two major milestones demarcate distinct usage patterns (first algorithms, then LLMs). IMO those factors would influence the discussion about as much as a platform's inherent construction.
Toxicity has evolved over time: we have progressed from mere keyboard warriors, to nation states delivering propaganda campaigns with a click, and now half the internet is bot activity.
Also, someone once told me: "To get your voice heard in a loud room, you either need to be very tall or extremely loud." Tall in this context means rich and influential. Basically, money and influence buy you height in a loud room.
The whole debate could be summarized in a paragraph or two, but the social media environment is unfortunately curated towards diluted opinions (as you said) instead of nuanced ones.
All that to say I'm happy HN is still holding strong in terms of quality as compared to other platforms.
This is a double-edged sword. Lots of "thought leaders" on twitter have outed themselves as lacking in both thought and leadership.
The only point I'd add is that it's not handling time evolution in wicked problems quite right. Agree that the noisy room is distorting the world in exactly the ways described. But what if we've been in there so long, and the world has become so distorted.. that reality itself slides towards the once-extreme positions? Easiest to see this with climate-change controversy since that is the way that sort of thing happens, regardless of whether you think it's happened yet. Cascade, phase change, and collapse don't just call a truce.
So you have to anticipate that, acknowledging the pessimist is actually right, and that systems are a real bitch. Then you point out that if we're already doomed, we have nothing to lose by trying. Systems are complex after all, that's the whole problem.. so if we miscalculated on the doom, then bothering to try actually saves us. Checkmate pessimists.
> The Majority Goes Silent - When the majority of people looks at the feed and assumes they're outnumbered, people will often self-censor.
That's not the same thing, is it? Here the majority is, say, anti-, but they are being frightened by a noisy pro- minority. They're moderates in the sense that anti- is the conventional position to take. But they have opinions. (They could also be in the minority, and this fear of speaking up would still be a bad thing.)
Otherwise, if they're truly moderate, but are frightened into silence supposedly, what would they be saying if they dared? "Everybody listen to me, I have no strong opinion on this matter"?
Even when people do have strong opinions on a topic (and a moderate opinion can also be strong), most people have better things to do with their lives than to go around blasting their opinions to the world as a hobby. And the few in this camp that do are not very likely to be amplified by the engagement algorithms.
> The Majority Goes Silent - When the majority of people looks at the feed and assumes they're outnumbered, people will often self-censor.
> That's not the same thing, is it? Here the majority is, say, anti-, but they are being frightened by a noisy pro- minority. They're moderates in the sense that anti- is the conventional position to take. But they have opinions.
I don't follow your argument (which is different to the one in the article):
There's a small noisy pro-side, a small noisy anti-side and a majority, but not necessarily a moderate majority!
The article doesn't say anything about the majority being moderates, does it?
> Otherwise, if they're truly moderate, but are frightened into silence supposedly, what would they be saying if they dared? "Everybody listen to me, I have no strong opinion on this matter"?
Not necessarily true; there's a noisy pro- minority, a noisy anti- minority and a silent majority. Who knows if they are pro or anti or equally split?
And even if they were actually moderate, they could see opinions like "everyone should have guns" and "no one should have guns", and keep their majority moderate opinion of "people should be allowed guns depending on whether they cross some objective line into dangerous or neglectful behaviour".
That's both a moderate and a majority position, and yet you won't see it expressed in a forum because all the noise is being made by the two extremes.
The argument you're making is that the silent majority must necessarily be moderates, but that's not a requirement.
Take immigration or refugees - the obvious thing is that you're either for or against it. But there's so many things in between, so much nuance, etc. And that takes reasonable adults to think and talk about.
- how does this handle the fact that a lot of accounts on social media platforms are bots that may be controlled by a small number of people?
- how do we actually get this implemented?
(then they started having shorts, so I cancelled youtube premium)
Hackers might be interested to know that there's an "open questions" section at the end of TFA. Some of it probably wants simulation, some wants theorems.
Camel-ai pubs/frameworks might be related and useful, for example: https://github.com/camel-ai/agent-trust
Several model checkers also have primitives for working with common-knowledge. TFA puts it like this:
> Learning a fact changes what you know. Seeing it displayed publicly — where everyone else can see it too — where you know others can also see it, changes what everyone knows, and subsequently how they act.
An important piece of technical vocabulary, it really seems we need this to talk about a lot of problems lately. Here is Terence Tao talking about some related math for disinformation and politics ( https://mathstodon.xyz/@tao/114866548969775485 ) and summing it up this way:
> we barely even have the vocabulary to discuss, let alone analyze, games in which control of information is a major battleground.
He kinda means in general though I think.. probably we can find heuristics and crunch a case or two
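For reference, the primitive those model checkers expose has a standard fixed-point formulation in epistemic logic (my summary, not from TFA): "everyone knows" is the conjunction of individual knowledge, and common knowledge is what you get by iterating it forever.

```latex
% E_G \varphi: everyone in group G knows \varphi
E_G \varphi \;\equiv\; \bigwedge_{i \in G} K_i \varphi
% Common knowledge C_G \varphi is the greatest fixed point of
% "everyone knows both \varphi and the common knowledge itself":
C_G \varphi \;\equiv\; E_G\!\left(\varphi \wedge C_G \varphi\right)
\;=\; \bigwedge_{k \ge 1} E_G^{\,k} \varphi
```

That unbounded iteration is exactly why "displayed publicly where you know others can also see it" is strictly stronger than everyone merely learning the fact privately.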
Instead of bandaid-hack solutions leading to perpetual cat-and-mouse, why not build a citizen-owned platform from the ground up, as detailed here:
https://www.noemamag.com/the-last-days-of-social-media/
You would barely even need to advertise for it if it was obviously better than any of the existing corporate slop. It would sell itself, and the "profit" would be the end result that everybody can enjoy.
The people who run existing social media didn't start out evil, being in a powerful position made them that way.
It's the most disturbing thing I have ever worked on; there is much more out there than most people realize, and a lot of it uses deceptive dark patterns.
If somebody is interested in talking more about this or is working on similar things, always welcome!
1. Cheating or being lazy with the sampling.
2. Being a weasel with the phrasing to get the desired result.
3. Being a push poll.
Still, a "trusted" poll is slightly better than a freeform "community note", especially if it sticks solely to how prevalent an opinion is.
Slashdot used random sampling in moderation 30-ish years ago. It worked OK, except that scores were used for very little (crucially they didn't even sort by them), and they had a more gameable non-randomized system to moderate the random system. And of course it was probably vulnerable to Sybil attacks.
(By the way, I guessed 4% for the number of toxic users)
And the money decides how to run the circus. Not for the benefit of all.
So it is a really hard problem.
It's an interesting initiative though. One that I also think could have unintended consequences that would additionally seed greater distrust in the media—which isn't necessarily a bad thing. But I imagine that the people who already sense this distrust and distaste toward the impression of polarization that the media gives are becoming less and less likely to subject themselves to the nude opinions of anonymous strangers online.
Which ones?
It's obvious from the hyperbole around the discourse alone that this moral panic has reached levels of derangement that far outclass any rational basis for judgement.
Does social media have negative consequences? Sure. Are people assholes on the internet? Always have been. Is social media the greatest and most existentially perilous evil ever conceived by humankind? No.
I think in ten years people will look back at this (on whatever strictly censored and regulated internet replaces this one) with the same bemused confusion as we do the Satanic Panic. And honestly in forty years, if technological civilization still exists, we'll find out how much of that was stoked by the CIA or other interests.
Is traveling to Tokyo just to sprint across the Shibuya Scramble for a slightly less-crowded Instagram selfie really a model of the good life? Should someone like Zuckerberg have this level of control over the activities and minds of the human race? Is Mr. Beast a role model for children by industrializing the exploitation of human virtue?
Human social pressure and follower mindsets are part of the human experience but systematically gaming those instincts in real-time so money flows to a social media company at all costs in some strange digital sharecropping scheme is what’s new and the hierarchy of others trying to capture a small piece of that pie creates these distortions.
> Human social pressure and follower mindsets are part of the human experience but systematically gaming those instincts in real-time so money flows to a social media company at all costs in some strange digital sharecropping scheme is what’s new and the hierarchy of others trying to capture a small piece of that pie creates these distortions.
To what I think @krapp's point is: these dynamics are not exclusive to social media. At their core they're led by something far more primal than what social media only exacerbates. Governments are not as naive as the general public. Regulations effected in 2026 to "regulate social media" could have consequences on how information is spread among people in 2040.
1. Insanely low-effort to post.
2. Requires NO discernment, proof, credibility, or peer review to post.
3. 'Viral' in that opinions circulate because other people have interacted with them, not because they are right or meaningful. So bad news, good news, real news and fake news all travel at the same speed, lowering discernment even further.
4. Echo chambers are baked into the form. People are more likely to interact with content they agree with vs. content that is true or impactful. This creates circles of people agreeing with each other on increasingly niched-down topics.
it is extremely different from newspapers and television.
You aren't listing problems intrinsic to social media per se, so much as how people choose to use it and how specific platforms choose to operate. The latter of which is a problem when Twitter, Facebook and the like optimize for engagement through controversy, but I think when we focus on social media as a whole we risk throwing the baby out with the bathwater in restricting human rights and the ability of people to network and communicate freely without interference by state interlocutors.
I mean wtf. Is this your parody account?
Every bit of hyperbole I mentioned is practically quoted verbatim from some thread or another here, it is what people believe, and you can't even bring yourself to approach me in good faith because I've committed wrongthink by defending the existence of social media even implicitly.
The CIA and other governments are running influence campaigns across social media. The links between the major social media platforms and intelligence agencies are well known and well documented. And civilization is threatened by numerous factors, such as our over-investment in AI and the mass deskilling and destabilization that will create, creeping fascism and increasing political violence in a multipolar world, climate change leading to mass famine, pandemics in a post-scientific age, etc.
But people want to destroy social media (and by extension, want to destroy the freedom of communication it allows) rather than bother to consider that the real problem is the same problem we've always had - government and corporate interests trying to control our lives and manufacture consent through fear and panic.
They ran the same playbook prior to social media but the process was so normalized because they controlled so much of the media and culture that no one really even noticed it. Now people notice but they can't distinguish between the symptom and the disease.
I feel like the real problem is the people. Many of us just want to be told what to think to blend in with society, some of us demonstrate Dunning-Kruger publicly and a few of us really want to drive the polarization for clout and attention.
Every day I see people promote increasingly stupid ideas on both sides, further pushing my belief that the only solution is to severely limit what government can do, therefore making all this discussion pointless.
The part that annoys me about the toxicity, or the repetitive and annoying topics on reddit, HN, etc., is not that I am unaware that the content is produced by a small fraction. (I underestimated the count! I guessed 2%)
It's that people espouse it: They upvote and retweet it.
> Both sides develop wildly inaccurate beliefs about who the other side actually is.
That was a guess I had for a while. People have a strawman version of their out-groups in mind and quickly map people to that if an unknown person says something that indicates they might be part of the out-group.
> What percentage of the other side supports political violence?
It would be interesting to see the in-group statistic as well: "What percentage of your own side supports political violence?" In my experience, people also justify very shitty behavior as long as it's from their in-group. (This plays heavily into the first point of espousing all kinds of shit)
---
It would be interesting to see if the community check actually changes anything. But the actual data seems to be available only for very generic topics - those we have the data on already. It would not be available for daily-fresh topics.
For my personal sanity I simply left reddit and stopped opening comments on certain HN posts - of course that does not help with the societal problems. Unfortunately.
I think something that is not calibrated in the post, and also missing in this reply, is that beliefs and actions do not need to be aligned.
Both groups say around 10% of members support political violence; however, no Democratic president is pardoning domestic terrorists wholesale. And the 90% of Republicans who condemn political violence are not repudiating, removing themselves from, or condemning the fact that far-right groups are the most dangerous demographic according to the FBI, or that most political violence occurs in Republican states, or the direct correlation between NRA infiltration of Republican campaigning and mass shootings...
Like if you say you dislike violence but defend the system that creates the violence and pardon the people who commit the violence and share the table and take the money from the violent people... your "beliefs" are not worth much.
The whole conversation about out-groups is less relevant when discussing left-wing policy, because it is not orchestrated AROUND in- and out-groups. Right-wing ideology is de facto an in-group political theory where some people must be excluded. When you add morality being justified by group membership, you end up with some very concerning politics where actions are judged on belonging to the group and not on the morality of the action or its consequences.
See the blue-collar, protect-the-children, anti-abortion crew voting for a New York millionaire owner of a beauty pageant who was best friends with the world's best-known human child trafficker...
The belief system collapses the second you put the right tee shirt on, and that is what makes polling those people irrelevant. They will simply support whatever is in front of them as long as they belong to the in-group. War bad in Ukraine, war good in Iran. Taxes bad in 2018, tariff taxes good now. Silicon Valley tech people were all left-wing Indian soy boys in 2016; now they're all alpha podcast AI cool guys who fund our president.
nothing matters as long as you wear the tee shirt
These people are unwittingly working for the platforms to drive engagement, often to the exclusion of any goal they might've had before the addictive aspects of social media kicked in.
I think we get less of this kind of behavior here on HN because each username is not bedazzled with metrics. You can see up vote counts for your own comments, but you can only infer those counts for others. The scoreboard is hidden, so it isn't triggering as much bad behavior from people who can't handle such things.
I think we could get even better behavior out of people if we never showed them raw counts of updoots, but instead only showed them metrics relating to their explicitly stated social graph, plus maybe one hop out:
> Alice and two of her friends like this
> Charlie likes this
It gives a sort of directionality to the feedback. Instead of seeking the high score as granted, likely, by a bot army, you learn something about Alice's corner of the social network. Maybe you should get to know Alice's friends better.
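A minimal sketch of that one-hop rendering, with a made-up dict-based follow graph and invented usernames (all assumptions, not any platform's actual API). Raw counts never surface; likes only become visible through the viewer's explicit social graph, so a bot army of strangers contributes nothing:

```python
def describe_likes(viewer, likers, follows):
    """Render likes relative to the viewer's explicit social graph.

    viewer: username; likers: set of usernames who liked the post;
    follows: dict mapping username -> set of usernames they follow.
    Only named friends and friends-of-friends are ever shown.
    """
    friends = follows.get(viewer, set())
    lines = []
    for name in sorted(likers & friends):  # likers the viewer follows directly
        # one hop out: likers this friend follows, excluding people
        # the viewer already follows (and the friend/viewer themselves)
        second = (likers & follows.get(name, set())) - friends - {name, viewer}
        if second:
            lines.append(f"{name} and {len(second)} of their friends like this")
        else:
            lines.append(f"{name} likes this")
    return lines

follows = {
    "you": {"alice", "charlie"},
    "alice": {"bob", "dana"},
    "charlie": set(),
}
# "mallory" (a stranger, possibly a bot) liked too, but never appears
likers = {"alice", "bob", "dana", "charlie", "mallory"}
print(describe_likes("you", likers, follows))
```

The stranger's like simply vanishes from the rendering, which is the point: the feedback tells you something about Alice's corner of the network instead of handing you a gameable high score.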
Because that loud 3% that are being harnessed by the platforms to drive engagement via content we all hate... Their primary sin is just that they fell for it. They're like alcoholics, if we want to help them into a mode where they're less problematic, we should hang out someplace besides the bar.
The tiny minority dominates the feeds because that's how the incentives for algorithm-driven social media are structured. Do we really expect Meta, X, or TikTok to do anything that could reduce engagement?
Good luck having any of the mainstream social media apps add the banner they propose.
>We Could Do This Now - Platforms already have a lot of these capabilities. They already survey users. They even know how to run sophisticated polls. There are a few technical details to work out (spec here), but this is not a hard problem to solve.
Why do you think something like this is not already implemented? Platforms literally profit from this division, so why would they be incentivised to do anything? What's needed is not a goodwill gesture from the overly powerful platforms; it's fast, hard and deep regulation.
This shows how the dynamics play out in the social media system.
Both Democrats and Republicans estimated 30% but actually.. only 10% of both sides supported political violence
That number is crazy in so many ways and the post is overly nonchalant about it. The "distortion" isn't what's worrying here
I just had an issue with the way that number was completely overlooked
Lobste.rs is better in these regards.
Anyway, social media is dead, has been dead for quite a few years now; the majority of us are out there touching grass, and it's only the fringes (on both the political left and right) who are still obsessed with it.