It hasn't been easy. We ban fake AI accounts daily and remove around 600 AI content creator accounts monthly.
It's a lot of work, extra work that wasn't needed before AI content came around, and of course, that is an extra cost.
I fear losing the battle.
We had no problems with people using it and posting elsewhere; it was the demands that we must allow it that were problematic and made us question whether we were doing the right thing.
No regrets now, though, as we see competitors being flooded with AI slop and they are too invested in it to change now.
Now I see it as the perfect tool for impostors.
I was in a small niche creative writing community for a while, circa 2021/22. AI wasn't why I was there, but I demo'd a few LLMs to a lot of the users in the Off Topic section because people were curious. Even with an explanation of how they operated, almost everyone was at least interested. One author told me how he operated similarly, rote-learning how to write like his favorite authors by copying out their texts, handwritten, word for word. Their concern was largely that the models were too hard to use from a technical perspective.
These people knew I was there to learn, and that I was unlikely to ever try and publish LLM derived content. I said as much often.
Sometime in late 2022, a switch was flipped, and almost all of them started talking about how AI and those who used it were unambiguously evil. They didn't say my name, but they stopped engaging with me. Gradually, they started reposting Twitter content from extremely anti-AI people. Complained about AI submissions to various publications. Eventually, someone reposted a tweet calling for the death of anyone who used an LLM, with not even a single disagreement (and lots of encouragement).
I just bailed. I had only ever engaged positively, answered questions for the curious and tried to help people out. I posted one AI-assisted story, and that was to demonstrate how my contributions were tracked vs AI contributions automatically in the editor, to satisfy someone's curiosity. Clearly highlighting the bits I had written. Just a technical demo. No one was asked to enjoy or positively engage with it as if it were human-written.
A while later, most of their submission rules were updated with a new clause: if content was judged to be AI-written, they would blacklist that person from all submissions across their entire community. Considering I had demo'd LLMs, and the uselessness of AI detectors, it was clear to me that these people would be able to justify blacklisting me if I poked my head up at all. I had been developing my own story for submission (myself, no LLM content), but I just dropped it. I didn't feel like sticking my neck out for the witch hunt.
I also used to be quite engaged with blockchain. It went through a similar process: most people ignored it until that paper about the power usage (claiming it would spike to some level it never reached), and then suddenly being associated with it was an outrageous moral crime. But after a while, when it turned out that the power use claims were largely a nothingburger, people gave up on the hate parade.
I don't think you will "lose the battle" (at least in terms of keeping AI users out). And it's always OK for small communities to be selective about their membership. I just don't think it's possible to maintain such artificial rage for more than a few years. The AI datacenter water/power claims are a clear London Horse Manure problem that looks set to resolve itself, and the copyright issues will get sorted to some degree. Eventually I think you just won't care enough to ban anyone except low-effort spammers (of which there are a huge number, granted).
YMMV
Perhaps it will even see a (small) resurgence when AI providers start charging for the actual costs.
That ship sailed a long time ago, with zealot admins and verbal harassment.
While there are certainly strong examples of this, a lot of people mistake enforcing the rules for zealotry. Part of the point of SO was that if things don't change, then there is a completed state for SO too: no need to ask duplicate questions like on platforms where a post is less long-lived. Unfortunately people take things like “this is a dup”, “provide more information as we can't help”, “this isn't a complete answer”, and so forth, as deeply personal attacks…
One of the good things about LLMs is that they've drawn off all the simple already-answered questions! Unfortunately the more complex ones, or the ones for new solutions, are also going there so SO and its family of sites is ceasing to grow even in the ways it wants to.
> and verbal harassment.
Again, that did/does happen, but a lot less often than some people report. The most abusive people I've seen on there are those who have been given one of the responses I listed above.
I don't imagine AI is going to go away, especially given that there are now more open-source models like Qwen that you can run locally. Even if those American behemoths go bankrupt, it will persist.
Depends on how you're looking at it (using speculated numbers for easy math):
1. Having operating costs of $100m on revenue of $10b puts them very deep in the black, regardless of training costs.
2. Counting $9.9b in training costs against that same $10b revenue means they're just breaking even.
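A toy calculation makes the flip concrete (the numbers below are mine, made up for easy math, not anyone's real figures):

```python
# Illustrative only: the verdict flips depending on whether training
# spend is counted against the same period's revenue.
revenue = 10_000_000_000         # $10b revenue
operating_costs = 100_000_000    # $100m to serve inference
training_costs = 9_900_000_000   # $9.9b to train the models

opex_only_margin = revenue - operating_costs                 # ignores training
all_in_margin = revenue - operating_costs - training_costs   # counts training

print(f"opex-only view: ${opex_only_margin / 1e9:.1f}b")  # $9.9b in the black
print(f"all-in view:    ${all_in_margin / 1e9:.1f}b")     # $0.0b, break-even
```

Both statements are true of the same company at the same time; which one you quote depends on the story you want to tell.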
Problem is, we don't know their financials or how they break down (they could, of course, clear up the confusion and release some numbers, but they aren't doing that now); all we know is when they need a new raise to continue operating.
From the raises we can estimate their operating costs (for example, raising $30m in 2024 and then $300m in 2025 suggests a 10x increase in operating costs, because they aren't spending on capex; the training is done as opex).
From their subscriptions (which are all only estimated), we can sorta tell what the revenue is, but that's for subscriptions only which are almost guaranteed to be running at a loss (until recently, anyway). We don't even have estimates on revenue from the PAYG API users. Common sentiment is you'd be a fool to use the PAYG options for anything but trialing the service, but the world is filled with fools, so you never know!
What is interesting is comparing PAYG prices from the providers supplying open models vs PAYG on the closed models. The suppliers of open models aren't spending on training, so the price they charge for tokens on open models is pretty close to the actual cost of running them. This is partially confounded by the fact that many of these providers have VC money backing them (they are not bootstrapped), and so will also try to perform landgrabs via subsidised tokens, because their goal is an exit via buyout; without an eventual acquisition they will simply fail.
I can't think of many open source model suppliers providing subscriptions, not ones that subsidise the subscription, at any rate.
The first IPO of these SOTA providers is going to be the eye-opener; we'll finally see their financials and we'll see just how much the PAYG was subsidised, and how much the subscriptions were subsidised.
Until then, with a collective industry investment of $800b (last I checked) and a collective revenue of $20b (last I checked), they are most definitely operating in the red for the most common definitions of operating in the red.
At some point an Instagram/TikTok/etc. user could see nothing made by real people and not even know what is promoted vs ad vs organic post.
I loved maps and geography as a child and still do. I've never met anyone in real life who likes them as much as me. But on the internet there are places where I can discuss them and other people share fascinating articles, pictures, etc.
I don't want to be limited to only the friends I can make who live near me.
"Popular" reddit posts and subreddits are a good example of this.
Maybe it's hard to get across what I mean, so here's a more concrete example: there will be SO MUCH clickbait out there that serious outfits, instead of being forced to do it, will be able to successfully differentiate themselves by NOT doing it (and many similar things in different arenas).
I'm trying to say that LLMs raising the noise floor will drown out a lot of the toxic noise that's been plaguing us.
I can hope.
I really want to believe this will be true. However, I also suspect there's some external driving force, that I cannot readily name, which is making people incapable of consuming anything except this low-effort content. I mean, obviously it's working to some extent. Perhaps AI will be the thing that accelerates its death, but part of me thinks something else needs to happen beyond just an increase in useless content.
It's the economy of everything being free but supported with advertising. That mechanic is what leads to the race-to-the-bottom, lowest-common-denominator, motivation-hacking attention toxicity (yes, that's a bit of a ramble).
If people weren't getting paid for the smallest increment of attention they could grab, it wouldn't be promoted the way it is. I don't have a high opinion of the things which grab my attention, but they still manage to do it sometimes. I think many people are in that boat. If there were other mechanisms with which we rewarded people for doing things, something different would be optimized.
And people just wouldn't reward the 10-second-gratification in anywhere near the same way if it weren't for the advertising.
Now there's more pressure to have a stronger signal and hopefully rewards to match.
I don’t think they crave it enough to make a difference. Even before AI slop, Reddit had made successive changes that led to much less of a feeling of interaction with real, authentic humans who could become your buddies. The UI de-emphasized usernames and hid the sidebars where subreddits could have their own distinct community atmosphere. I hear that now on comment threads, Reddit will even hide a decent number of posts from other users, so that a poster may well be talking into the void.
It is on old-school fora that one can get a sense of actual interaction: with avatars and other personalized touches it’s easy to gradually learn who is who, and there is a culture of longform text where you can actually get a sense of other people’s personalities. But how many people under the age of 35 or 40 are joining those fora that survive? Give people a choice, and it turns out they prefer the dopamine hits of engagement-maximizing commercial platforms, and the smartphone as the default (or sole) interface to the internet with all the death of nuance that spells.
I am not quite there with Hacker News but I do know for a fact that many "users" here are LLMs.
Online communities are definitely dying. I guess I hope that maybe IRL communities have a resurgence in this wake.
I assume they thought they'd be teaching people a lesson by making them feel foolish for responding to AI stories, most of which were too fake to be believable.
However, it did not matter. The posts remained popular and continued to bring in comments even after the admission that they were fake. In advice subreddits, commenters continued to give advice on the situation. Some would say they saw the notice that it was fake but continued arguing about it anyway.
This makes a feature of Reddit very clear: The truthiness of a post doesn't matter. The active commenter base on popular subreddits just wants something to discuss and, usually, be angry about.
In retrospect it's obvious given that misinfo posts were the easiest way to karma farm for years even before AI.
https://news.ycombinator.com/item?id=47913650
It had 639 comments and 866 upvotes. And that's not a one-off.
That's 90% of current Facebook pages and groups.
I'm active in a number of online communities that are doing just fine, but the difference is those all involve ongoing relationships, built over time and with engagement across multiple platforms. I've no doubt this clock is ticking too, but it's still harder to fake a user across a mix of text chat, voice and video calls, playing an online game, etc., especially when much of the web of relationships extends back into real-life activity.
But I agree the golden age of easy anonymous connections online has ended.
If my PGP public key has 6 signatures and they’re all members of the East Manitoba Arch Linux User Group, you can probably work out pretty easily which Michael T I am.
Are there successful newer designs, which avoid this problem?
It's probably better to call this something like vouching and reserve "attestation" for the contemptible power grab by megacorps, delenda est. Using the same word for a useful thing and a completely unrelated vile thing only advantages the villain.
I want to create a community for immigrants. How would I make it welcoming to recent immigrants for whom no one can vouch?
A web of trust is a wonderful tool, but it's exclusive by design. This is a problem for some communities, even though it makes others much better.
It still happens more informally today, of course, but it used to be a pretty big (if unspoken) part of how a lot of WASPy organizations operated, to a greater or lesser degree.
This seems self evident to me too.
It's another factor in why I think the tech community needs to get ahead of governments on the whole "prove your ID on the Internet" thing by having some sort of standard way to do it that doesn't necessarily involve madness in the loop.
Leave them on the device; authorize the device to validate before age-inappropriate content appears.
Website wants to know your age? Your face and fingerprint support your attestation signed by a trusted party.
Can it be tricked potentially? Sure, but then you’re probably a super genius kid and not the reason that these laws were created (as if).
Don’t let anyone tell you anonymity must die for safety to exist.
https://eudi.dev/2.8.0/discussion-topics/g-zero-knowledge-pr...
I have a strong preference for remaining anonymous, or at least making it a reasonably high bar to tie my online identity to my personal identity.
I would love to be involved in helping design a sort of "human verified" badge that doesn't make it possible, or at least not easy, for everyone to find your real identity.
I've been thinking about it a bunch and it seems like a really interesting problem. Difficult, though.
I suspect there is too much political and corporate will that wants to force everyone online to use their real identity in the open, however.
The problem here is that the premise is the error. "Prove your ID" is the thing to be prevented. It's the privacy invasion. What people actually want are a disjoint set of only marginally related things:
1) They want a way to rate limit something. IDs do this poorly anyway; everyone has one, so criminal organizations with a botnet just compromise the IDs of innocent people, and then the innocent are the ones who get banned. The best way to do this would be an anonymous way for ordinary people to pay a nominal fee. A $5 one-time fee to create an account is nothing to most ordinary people but a major expense to spammers who have 10,000 of their accounts banned every day. The ugly hack for not having this is proof of work, which kinda sorta works but not as well, and then you're back to botnets being useful: with fees, $50,000/day in losses is cash money out of the attacker's pocket, which in turn funds the service's anti-spam team, but with proof of work, burning up some compromised victim's electricity costs the attacker at most the opportunity cost of not mining cryptocurrency or similar, which isn't nearly as much. It would be great to solve this one (properly anonymous, easy-to-use small payments), but the state of the law is a significant impediment, so you either need to get some reform through or come up with a creative way to do it under the existing rules.
2) You want to know if someone is e.g. over 18. This is the one where people keep pointing back to government IDs, but you only need one bit of information for it. You don't need their name or their picture; you don't even need their exact birthdate. Since people get older over time rather than younger, all you need to know is whether they've ever been over 18, because in that case they always will be. Which means you can just issue an "over 18" digital signature -- the same signature for everyone, so it's provably impossible to tie it to a specific person -- and give a copy to anyone who is over 18. Maybe you change the signature once a day and unconditionally (whether they need it that day or not) email all the adults a new copy; again, they all get the same indistinguishable current signature. Then there are no timing attacks, because the new signature arrives as an unconditional push and is waiting in everyone's inbox rather than being requested at the moment of use, and kids only have it if an adult is giving it to them every day. The latter is true of basically any age verification system -- if an adult with an ID wants to lend it to you, you can get in.
3) You want to know if the person accessing some account is the same person who created it or is otherwise authorized to use it. This is the traditional use of IDs: e.g. you go to the bank to withdraw some cash, so you need a bank card or government ID to prove you're the account holder. But this is the problem that is already long-solved on the internet. The user has a username and password, TOTP, etc., and then the service can tell if they're authorized to use the account. It's why you don't need government ID on the internet -- user accounts do the thing it used to do, only they don't force you to tie all your accounts together under a single name, which is a feature. The only people who want to prevent this are the surveillance apparatchiks who are trying to take that feature away.
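The "same signature for everyone" idea in (2) can be sketched in a few lines. This is a hypothetical toy, not any real scheme: it uses HMAC to stay stdlib-only, which would mean verifying sites share the issuer's key; a real deployment would use a public-key signature so sites can verify without it.

```python
import hmac, hashlib, datetime

# Held only by the attestation authority (toy value for the sketch).
ISSUER_SECRET = b"issuer-only-secret"

def daily_over18_token(day: datetime.date) -> bytes:
    """One shared token per day. It is identical for every adult,
    so possessing it can't be tied back to any individual."""
    msg = b"over-18:" + day.isoformat().encode()
    return hmac.new(ISSUER_SECRET, msg, hashlib.sha256).digest()

def site_accepts(token: bytes, day: datetime.date) -> bool:
    # In this HMAC simplification the site recomputes the token;
    # with real signatures it would only hold a public key.
    return hmac.compare_digest(token, daily_over18_token(day))

today = datetime.date(2026, 1, 1)
token = daily_over18_token(today)   # pushed to every adult that morning
assert site_accepts(token, today)                                   # valid today
assert not site_accepts(token, today + datetime.timedelta(days=1))  # stale tomorrow
```

The daily rotation is what makes a leaked token decay: a kid needs an adult handing them a fresh copy every single day, which is the same failure mode every age-verification scheme has.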
I'm happy to verify my identity as an honest-to-god sack of meat if it's done in a privacy-protecting way.
That probably is where things are gonna go, in the long run. Too hard to stop bots otherwise.
I’m not sure if that would work for account deletions though.
Putting aside whether it will be the end of all privacy as we know it (I'm not sure I personally think it's a good idea), isn't Sam Altman's World eye ID thing supposed to do that? (https://world.org)
How does it work (like OpenID)? Do I have an orb on my desk, or some sort of phone app? I still want to use my desktop to log in to HN.
Would it stop this sort of thing: get a human ID, paste it into .env, so agents can use it?
Even worse, many of them are just plain vocal about their disdain for people in general.
At least from what I'm seeing, people are starting to walk away from online life at an increasing rate, so I definitely don't see widespread adoption of his creepy eye thing.
https://eudi.dev/2.8.0/discussion-topics/g-zero-knowledge-pr...
I personally can't wait for a mechanism to kill 99% of bot traffic.
Those sorts of places were always the only places with reliably good communities.
How? I have an identity. A state driver's license, birth certificate, social security number. I've even considered getting a federal license before, never bit the bullet. If I wanted to run a bot, what stops me from giving it my identity? How do I prove I'm really me (a "me" exists, that's provable), and not something I'm letting pretend to be me? You can't even demand that I do that, because it's essentially impossible.
Is there even some totalitarian scheme that, if brutal and homicidal enough, could manage to prevent this from happening (even partially)?
I'm limited to a single identity only as a resource constraint. Others more wealthy than I am (corporations or ad hoc criminal enterprises) could harvest thousands of real identities and use those, consensually or through identity theft. The only thing slowing it down at the moment is quickly eroding social norms (and, as you point out, maybe they're not doing that and it's not even slow at the moment).
Not too dissimilar to people bot-leveling in MMOs to sell the accounts.
That, and probably political astroturfing. Before every election my local subreddit sees a surge of crime stories. Go figure.
It's actively encouraged by some of the platforms too. In Gmail and Google Docs, you have incessant AI prompts along the lines of "help me write this". I think LinkedIn does the same.
Plain advertising, governments' propaganda, political propaganda for one group or another to shift public opinion (it's done on TV networks, why would they not do online campaigns?), astroturfing by corporations promoting acceptance or fighting negative news (e.g. rideshare, AI, whatever certain wealthy personalities are doing) ... the list goes on.
HN has always been relatively influential in the tech industry and therefore worth influencing, and now the cost is very cheap - you don't even need to hire many people, so less-resourced operators will find it worthwhile (and they will also attack lower-value forums).
They aren't going to care about any of the advice in the article about not posting slop -- finding a job is (of course?) more important to them.
Can't really say they're doing anything wrong; maybe I would have done the same? It's just that at large scale, it doesn't work.
There are obvious benefits to controlling public discourse, right? Even if it's just to support some project you're working on.
Would be super fascinating to watch play out. I grew up before the internet so, historically, I know how to seek out external communities, but by early high school I was deeply entrenched in online life - so I'm very rusty with finding new IRL clubs, cliques, etc. Fortunately my life is full of many friends and I go out frequently, regardless. For those younger people that never had life without the internet, I wish them luck on their search but at the same time I'm very curious to witness their journey.
Is this based on the belief that an LLM can only represent an "average" human being?
Sure, if you want to chat while gaming, that's the whole point of Discord. Ganbatte.
But, for everything else, Discord is such a horrible misfit that I don't understand why it's the default.
Because it supports real-time communication equally well.
And it looks shiny.
And some people use it to e.g. watch a video together, or other social purposes.
Some communities are better than others, but the sheer volume of stinky trash is immense despite Discord and the poor volunteer moderators' efforts to prevent it. Most mods are neutral on it too.
There are chat communities that are still somewhat safe with zero user verification. But I will not mention them.
Same as it ever was.
This is sad, because Reddit remained one of the final bastions of human content on the internet. For several years, appending "site:reddit.com" to a google search was a valid way to get something usable out of a google search. Doing that is still an improvement over raw-dogging Google's ranking algorithms with an unfettered search, but AI slop increasingly is the result.
This is one of my great disappointments in the current rise of AI. LLMs can give good search results when dealing with a topic they've been specifically trained on by human experts, but they're not good at separating human-produced signal from AI slop noise. We've done nothing to prevent a sea of AI slop from being dumped on top of all the human signal that's out there. When AI companies enter their enshittification phase and stop investing in expert human trainers, the search results LLMs produce are going to fall off a cliff. Search is a bigger problem than ever.
HN autokills comments it detects as LLM. I think maybe you're not giving HN enough credit. :)
No it doesn't. Unless you have proof.... ???
For giggles, here's how it would look for this comment. Rather meta, but in this case it removed the "It needs hellp" so here we are.
I often run my screed through an LLM before posting. I ask it to keep the writing at about a 10th grade reading level and to avoid em dashes.
We may end up with things like that…
I don't suppose you could show some examples? How convincing is the state of the art now?
You can have both IRL and online-free-of-bots. I already wrote about it, but one of the very best forums I'm a member of, where real people are posting, requires you to be vetted in, web-of-trust (but IRL) style. It's a forum about cars from one fancy brand, and you can only ever join by having a member (I think it may be two, I don't remember) who's already in confirm that he saw you driving a car of that brand. It's not 100% foolproof (someone could be renting the car for two hours and show up at a cars&coffee, or take a friend's car, etc.) but this place really feels like a forum of yore.
And people do eventually travel, so it's bound to happen that an owner shall go to another country, meet someone there, vet him in etc.
Now, sure, it may not be the "1 million users acquired in three days thanks to my vibe-coded app" scenario but that is the point.
You can imagine other domains where IRL communities have local groups, but where forums bring together different IRL communities all interested in the same hobby/topic/domain. And when people travel and meet, the vetted membership grows and connects.
Oh and on the forums a lot of the posts are pictures, where "Julian xxx" met "Black yyy Cyril" and you see both cars (and from more than two people): suddenly it becomes much harder to fake a persona... You now need to fake both Julian xxx and Black yyy Cyril and fake the pics. And explain why your car has never been posted by any carspotter on autogespot etc.
You can imagine the same for, say, model trains: "Met Jean at the zzz meetup, where he brought his wonderful 4-8-8-4 'big boy' locomotive, I confirm he's into the hobby, vet him in".
Naysayers and depressive people are going to say it cannot work, but I'm literally on one such forum and it just works.
P.S.: if I'm not mistaken, in the past in some nobility circles you had to be vetted by up to sixteen (!) other nobles who'd confirm they knew you, your parents, etc., before you'd even meet the king/emperor/monarch, to make sure that someone from far away couldn't come to, say, Versailles or Schönbrunn pretending to be a baroness or count or whatever. Quite the extensive check if you ask me.
It's very obvious that these accounts were abandoned and then either bought from their original owners, or more likely bought from someone who compromised them, because of their history and karma.
And I would bet money that Reddit is well aware of this phenomenon, because not long after it became so common as to be impossible to ignore, they papered over it by allowing users to hide their history from public view. (AFAIK subreddit moderators can still see it, but typical users now have much less ability to see whether they're interacting with actual humans.)
Yesterday I was watching people on the street and on the tram. Every other person was staring at their phone and scrolling through something.
That might scare me more than the fact that someone is chatting with an LLM bot online.
(I am pro-AI; I use it every day for coding I couldn't have achieved pre-2022, as I am a lame coder.)
Was this a browser using agent? What did you use?
Using just a browser is way too token intensive and slow. It would look for 401 errors then run the browser automation to login with the credentials and grab the token.
Did you clone the Reddit API from browser traffic and then turn it into a 100% API driven thing?
I'd imagine they'd be sniffing browser agents, plugins, cookies, etc. to fingerprint. Using JavaScript scroll position, browsing rate and patterns, etc.
Maybe their protections just aren't that sophisticated.
I'm not saying being a mod means it's bulletproof, but I do notice smaller communities tend to self-police better and know what's real.
That said, your experiment scares me as well.
My experiment was focused on niche subreddits as well due to the nature of the product I was trying to market.
People using LLMs without being fed their own post history are still pretty easy to detect. There's just something very recognizable about the cadence and tone of LLMs.
What really stuns me is that if you call someone out for it, 9/10 times you get absolutely buried in downvotes. Even here on HN. It's like people are angry that you're lifting the curtain on the slop, that the writing they enjoyed is fake.
It’s an unpopular opinion but I am looking forward to ID and age verified social media. If done right we can have real people around again.
BTW, ironically the harsher communities like 4chan don't seem to suffer from the dead internet. I guess it's either because the advertising value is too low to justify AI use there, or maybe AI API providers refuse to work with such content, thus reducing opportunities to infest it with bots.
- I am trying to learn about the topic at hand and trust a human's comment more than an LLM's guess
- I am trying to connect with other humans to fulfill my social needs
- I am maybe spending time to help another human out with a response because I want to help someone else
- I am interested in the perspective of other humans
Those are just a few reasons. For each of those if it's actually an AI I feel I'm losing out on something.
Imagine an online community you can only join on the recommendation of two other members, whom you must have actually met in person. Meanwhile, you leave at least some of the activity publicly visible so that interested parties can meet up IRL and join.
This could probably be implemented easily on top of existing online platforms like Discord, Reddit, etc. since it's really just a community building rule, not a community itself.
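As a sketch, the rule above is just a vouch counter gating membership (all names and the `Community` class are hypothetical, invented for illustration):

```python
# Toy model of the two-vouches join rule: only existing members may
# vouch, and two distinct vouches admit the applicant.
class Community:
    def __init__(self, founders):
        self.members = set(founders)
        self.vouches = {}  # applicant -> set of members who vouched

    def vouch(self, member, applicant):
        if member not in self.members:
            raise ValueError("only existing members can vouch")
        self.vouches.setdefault(applicant, set()).add(member)
        if len(self.vouches[applicant]) >= 2:
            self.members.add(applicant)

c = Community({"alice", "bob"})
c.vouch("alice", "carol")
assert "carol" not in c.members  # one vouch isn't enough
c.vouch("bob", "carol")
assert "carol" in c.members      # second vouch admits her
```

The "met in person" part can't be enforced in code, of course; it's a social norm the vouchers put their own standing behind, which is the whole point.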
What factual basis do you have for that?
Whatever allegiances Steve Huffman has (with people, or to ideas), or people like him have, it's not enough. It's a site seemingly killed by greed.
(Yes, I know moderating this stuff at scale is hard)
- A human. Beep boop.
Frankly, online communities have been dying for many years now, ever since the censorious, anti-free-speech, tone-policing mods and mobs started dominating online and America no longer had the self-respect or confidence to enforce the Constitution online.
“Mods are Unconstitutional” lmao
Name and shame.
Good. Tech has been artificially propped up to generate liquidity in markets. Never had organic demand or traction. Most output was trite and meaningless; just capitalism.
Zero cares if all the HNers rich on tech stocks go broke.
Happy to project as little concern for their well-being as this forum has projected at the fields tech eliminated.
Some wild Dark Triad shit is peddled here, where there's no obligation to humanity's problems, just HNers' own personal preferences.
Yeah ok that can be projected back, terminally online script kiddie; fuck your stock portfolio and I hope you end up in public housing assistance.
Please don’t do this here.
If you look at what people outside HN talk about HN, it's not uncommon to see wannabe tech entrepreneurs talking about how to promote their products via Show HNs and how to stay HN front page. It's honestly a little sad considering that HN has a tendency to rip these projects apart.
I've seen some claim they do it to avoid stylometry or being fingerprinted, or because of social anxiety problems.
Some people just have a compulsive need to optimize everything, and HN's guidelines and tone policing are more easily followed by a bot than a human.
This part of the guidelines is a 15 year out-of-date bad joke:
> Please don't post insinuations about astroturfing, shilling, brigading, foreign agents, and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data.
"We'll look at the data". Sure, buddy. You'll do what you always do, which is apply the banhammer to anyone who isn't following your talking points, and tone-police the actual humans.
Enjoy "conversing curiously" with bots while the mods tone-police non-bots out of existence.
This just makes me wonder...so what?
Some of the oldest posters here with the most karma continue to post absolute garbage takes on topics ranging from US healthcare to the history of the USSR, takes that are trivially disproven by learning the very basics from a Wiki article (i.e. not a high bar).
To be fair, this opinion slop is also present for new users and LLM bots, but is one kind really worse than the other, if both of them contribute to killing the community?
We already know what kills communities. It's the eternal Septembers. Infighting within leadership doesn't help either, but time and time again it's the influx of too many new users that sends quality into a nosedive and drowns out good contributions.
No? I’m imagining not at least. Because there would be no point to it.
If you would enjoy it, then I’m surprised you’re here and not just simulating the experience with your LLM by yourself.
Do you really not care one way or the other? Would you really rather just be talking to LLMs here? Or would you just script yourself as well and call it a day? Then what?
Maybe you are. I like getting to a reasonably correct model of a topic or issue. Bad human takes can still be useful here. I just get inevitably tired of the people crying about potential LLM comments all the time.
> Would you really rather just be talking to LLMs here?
Obviously we're not there yet, regardless of what I want. But there is a great number of HN threads posted here that touch on topics that have been discussed so many countless times, that an average LLM summary would do better than most comments.
I see the same thing with "AI Slop". Yes, there is AI Slop but (IME) it's pretty easy to spot. But what's more annoying is how often people are willing to throw that accusation whenever someone takes a position they don't like, much like the "political" label. It's lazy and honestly just as bad as the slop itself because it unintentionally launders the slop in a "boy who cried wolf" kind of way.
I also have a theory that some AI slop isn't inherently successful. It's just heavily botted by people who are interested in promoting certain positions. I bet you could make a pro-administration LLM bot and another one promoting a communist revolution, and no amount of model tuning would make the second as popular as the first, because the first would benefit from third-party botting as well as platform content biases (e.g. Twitter).
I've personally been accused of being a bot. This is particularly true in recent time as I've tried to share facts and fact-based analysis of, say, what's going on with crude oil markets, the military operation in the Gulf and the politics and economics around it. I even saw one hilarious comment saying (paraphrased) "the bots are getting clever and posting about unrelated topics". This was funny because it never occurred to this person that no, it was just a real person posting something you disagreed with.
This happens on HN all the time. For a lot of downvoters and flaggers, there are two kinds of opinions: "Things I agree with" and "Too political for HN."
All you really need to do is give it some guidelines of a style to follow and styles to avoid. There's also a bunch of skills people have already written to accomplish this.
You are likely reading LLM content just about everywhere and have no idea. Obviously there are easy-to-spot things, but the stuff you don't spot is the stuff you don't spot.
The only thing worse than a slop comment is the people who bitch about it incessantly. I'm convinced it's become a new expression of a mental illness.
I guess… “that’s not just an AI red flag, it’s generally shit prose” would be how ChatGPT would describe most things nowadays.
Since the AI sloppification, we've lost a considerable amount of traffic to bots. But worse than that, we've lost users who tended to contribute back with others.
We can leverage multiple ways of exposing community data to members, so it's not that we're at a loss because of that; it's more that we have 30 years or so of good feedback on how the community around the platform was good for people, and now everything is at risk...
Don't get me wrong, my work is work... There are premium features and so on, but the amount of value one can get for free is what the platform is known for. And we know many people use it for free for years, and when they need to or can, they subscribe and mostly stay for years and years.
The fact people are losing those connections is depressing to me
I use ai okay. I think it's useful. But people who dove hard into this stuff treat all text on their screen like it's a chat bot and not a person.
"Rewrite this code using the new API."
"Excuse me?"
"Can you do it? I need it right now, ChatGPT won't compile!"
"Show me your code, please." (provides the biggest pile of dookie ever)
"Hey, can I ask how you came to decide on any of this? Maybe we should rewrite what you have here, because X, Y, Z is concerning."
"The AI did it, I am learning. There is no need to rewrite anything, just write this section for me."
"No thanks." Someone else does it. User leaves.
One may be quiet, but what if your friend/acquaintance/fellow got possessed by some AI slot machine and is sharing his "products" enthusiastically? I had such a case, and I was dismissive and rude right from the very beginning, and it doesn't work -- he keeps sharing various artifacts.
On a global level, yes, communities die out. I think global communication has reached the point where it's more a liability than a benefit. In the late '90s and early '00s, maybe until the early '10s, getting more connected could lead you to nice clients, getting hired, etc. Nowadays, even before ChatGPT in '22, every such area became overcrowded, underbid, etc., and LLMs, surprisingly, added not much new -- they just augmented this trend.
Edit - I am not anti-AI, but it is slowly killing digital human interaction.
> A good use of AI is when it enables people to do something they couldn’t do before, to contribute to a community when they couldn’t before.
I agree 100% with the novel contribution aspect. But there's some nuance there.
For example a project might have no active contributors. It might not be something you can drop directly into your codebase. Neither of those is inherently bad.
As AI becomes more responsible for higher-level planning decisions, the value of an OSS project becomes less tied to visible community activity like PRs and issues.
I notice this in my own work a lot. I might not use that project's code directly. But I think about a problem differently as a result. I often point my agent to existing OSS projects as inspiration on how to solve a problem. The project provides indirect value by supporting architectural decisions, deployment approaches etc. Unfortunately OSS activity doesn't capture this.
There are two separate things here that are getting silently conflated.
> A good use of AI is when it enables people to do something they couldn’t do before
This could be good on an individual level, if say, a doctor wants to vibe code an app of some sort for his individual practice.
>to contribute to a community when they couldn’t before.
This is where it goes off the rails. If they couldn't meaningfully contribute before, they aren't going to suddenly be able to discern that whatever slop they want to contribute is of value to the community. That's just another way of saying, if I wanted an AI opinion on something, why wouldn't I get it directly from the source, and write the prompt myself, instead of have some intermediate human prompt the AI for me?
No, I don't think I will.
Smaller communities are generally a lot healthier anyway, so tbh I don't think this is all that bad of a thing. I don't think it's possible to be open to millions and also be healthy, unless you spend a lot of money paying moderators (and regularly rotating them, to prevent burn-out or mental harm from too much exposure, which ~0 do in an even slightly ethical way).
That highlights the problem - it's not AI, it's the oversharing that's the issue. Many people have moved from "sharing what's unusual/interesting/exciting to me" to "what can I share today?"
The constant stream of mediocrity drove me away from Facebook (years ago) and then Instagram.
Stuff started moving to web site forums which I still don't think are as good as a Usenet newsreader. slrn was my favorite.
Then reddit came along and a lot of online forums started dying as people moved to reddit.
Just this morning on reddit I reported 4 separate posts to the moderators as AI slop. They need to add a category for it; for now I flag it as "disruptive use of bots".
For 2 of the posts the moderators agreed with me and about 5 hours later the posts were removed. For the other 2 the moderators haven't done anything.
It's a losing battle.
Some of the posts start by asking questions like "I was thinking about this and... [long rambling paragraphs] Your thoughts on this?"
I waste a minute reading then another minute skimming the rest of it and then realize I wasted 2 minutes of my life. Then another 30 seconds reporting it to the mods.
This has exploded in the last 6 months.
Then there are all the repost bots farming for karma. Some subs have a rule that you can't repost something from the last 30 days or 6 months. But it is really ridiculous when something gets 500 upvotes and then literally the next day a bot reposts the same thing and it still gets 300 upvotes. I think it is just a bot farm upvoting stuff.
It’s entirely possible to be born in the 90s and have the same experience.
https://www.androidauthority.com/google-recaptcha-play-servi...
The baseline level of trust in an online interaction has been eroded significantly by LLMs.
The question is, how can we reverse this trend and increase trust?
I have a sneaking suspicion that it would help enormously if the stock prices of the largest companies in the world were not tied to how effective they are at hijacking as much of humanity’s time and attention as possible.
Maybe the fediverse can (eventually) help? It’s been a while since I looked at it.
Let’s empower people to effectively have more control over the content they interact with.
Social dynamics can make this difficult. We all want to be in the loop. The recent striking successes of the movement to ban phones in schools gives me hope.
The fediverse has been around for well over a decade in some form or another. It never caught on with society enough to make a difference. And unfortunately, the fediverse has now developed such a distinct culture of its own, Highly Online people with distinctive political and social shibboleths, that it even alienates many tech idealists around the world, let alone the general public.
As far as the "tech idealists," a lot of them seem to want every space to be 4chan where they can be racist trolling assholes without consequence. And those folks have Nostr.
Sites and apps don’t need your actual national ID, just to know that you have one. I think it could be possible to have 3rd party verification services that don’t know where the verification request is coming from, thus preserving privacy on both sides.
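A toy model of that information flow (all class and method names here are hypothetical, and a real design would use blind signatures or zero-knowledge proofs so the verifier cannot link tokens back to sites or users; this sketch only shows who learns what):

```python
# Sketch of third-party "you have an ID" verification, assumed design:
# the verifier checks an ID once and hands back an opaque bearer token;
# the site only learns "holder has a valid ID", never the ID itself.
import secrets

class Verifier:
    def __init__(self):
        self._valid = set()  # issued tokens, stored with no link to any site

    def verify_id_and_issue(self, national_id: str) -> str:
        # Pretend the ID check happened out of band; keep only an
        # unlinkable random token, not the ID.
        token = secrets.token_hex(16)
        self._valid.add(token)
        return token

    def is_valid(self, token: str) -> bool:
        return token in self._valid

class Site:
    def __init__(self, verifier: Verifier):
        self.verifier = verifier

    def signup(self, token: str) -> bool:
        # The site sees only a yes/no answer about the token.
        return self.verifier.is_valid(token)
```

Note the remaining gap this sketch doesn't solve: the verifier still sees validity queries arrive, so hiding *which* site is asking would need the cryptographic machinery mentioned above.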
For instance, I really liked how Karpathy shared a high-level idea on the LLM-based wiki. It was sadly followed by a long tail of no-one-cares-about "Here is my LLM wiki product" posts pointing to the generic LLM-generated landing page.
Most people aren't willing to go through an identity verification process or pay to join a community, and invitation-only spaces would probably lose diversity of thought pretty quickly.
Even still, I guess one of the above is a lesser evil because the bot problem is only going to become more unbearable.
P.S. Props to the author. I really liked this writing style.
The alternative is having a community born that will be small, have early adopters who can be overly passionate or critical and gatekeep folks from discussion. That means high effort to curate initially.
I think what we need is the equivalent of what was done for CORS: client/server cooperation.
That is, APIs should mark that they are human only, and harnesses should cooperate with such flags and prevent calling said APIs.
It's not perfect, as it's client-side enforcement, and one could still theoretically build their own harness without it, but that's the only way forward.
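A minimal sketch of what that cooperation could look like on the client side (the `Human-Only` response header and the harness API are invented for illustration; no such standard exists today):

```python
# Hypothetical cooperative agent harness: before automating requests to a
# host, it checks a (made-up) "Human-Only: true" response header and
# refuses to proceed, analogous to browsers enforcing CORS client-side.
from urllib.parse import urlparse

class HumanOnlyPolicyError(Exception):
    pass

class CooperativeHarness:
    def __init__(self, fetch):
        # fetch(url) -> (headers: dict, body: str); injected for testability
        self.fetch = fetch
        self._human_only = {}  # per-host policy cache

    def allowed(self, url: str) -> bool:
        host = urlparse(url).netloc
        if host not in self._human_only:
            headers, _ = self.fetch(url)
            self._human_only[host] = headers.get("Human-Only", "").lower() == "true"
        return not self._human_only[host]

    def call(self, url: str) -> str:
        if not self.allowed(url):
            raise HumanOnlyPolicyError(f"{url} is flagged human-only; refusing")
        _, body = self.fetch(url)
        return body
```

As the comment says, this is purely voluntary: a non-cooperative client simply ignores the flag, the same way a scraper can ignore robots.txt.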
That people trust AI over organizational knowledge is bad enough. I fear that AI is turning people generally antisocial.
It's frustrating because we're bundling this shitty AI with our product so we're just making more work for ourselves. Then there's the push from leadership to use more AI...
I don't think it's making people antisocial though, people just like easy solutions to their problems. We're giving them what seems like an easy solution. But it's easy for them, not easy for the reviewers.
This is by design btw.
It was also so much easier to make a dating app profile back when I was single, like one click. Recently was watching a friend set one up, and now they not only want like 3FA but also proof that you're a human. Assuming the old accounts are grandfathered in.
Upvotes are not a good mechanism for quality control in any way because they force good content to have the same metadata as the content that is technically well-constructed but is irrelevant, meaningless, just a platitude, too obvious to be obvious or pablum. Upvotes turn everything into a shock-value dominated 101 space.
Also people will get used to AI in online spaces as AI quality improves. If I'm online trying to get help for some task, I personally don't care who wrote what if it is correct; it's not like humans have great track records of accuracy or substantial contributions either on average. Correctness is expensive in general.
If I'm online trying to relate to other humans emotionally, well I get what I'm paying for. It's been true forever that the better the gate, the better the community. I've tried to push the boundaries of openness, but as I've written extensively on MeatballWiki, soft security depends on there being more good than bad apples in a community. With machine intelligence, the economics of that are silly.
Regardless, people love people, so we'll figure it out. I'm optimistic we can rise to this challenge.
Disliking AI, even for reasons the author admits are valid, and on which they post rants, has become so low status that people now have to preface their articles with this. It's fine to be happy with the good aspects of a thing and mad at the bad ones. We don't need to split between sides of history (whatever that means) because of this. Eventually you might want to weigh the benefits against the damage, though.
No, it's a problem with art, text and videos too. Reddit was already becoming a creative writing exercise in many ways, with infamous subs like 'Am I the Asshole?' seemingly being about 80% fiction labelled as fact. But now you don't even need to know how to write to flood the site with useless 'content'.
YouTube is arguably even worse, since AI led content farms are not just spamming the hell out of every topic under the sun, but giving outright dangerous advice and misinformation on top of that. I saw this video about medical misinformation by these 'creators' earlier, and it genuinely made me want to see them crack down on this junk:
https://www.youtube.com/watch?v=UEfCTCBDKIU
And there's just this feeling of distrust everywhere too. Is anyone on Hacker News human anymore? Is that Reddit poster I'm responding to human? Are the folks on Twitter, Threads or Bluesky human?
The scary part is that you basically can't tell anymore. Any project you find could be AI generated slop, any account could be a bot using stolen images or deepfakes, any article or video could be blatant misinformation put together as a cash grab...
If something doesn't improve, pretty much every platform under the sun is going to be completely useless, as is a lot of the internet as a whole.
Yes, but how many decimal places did you optimistically give it, only to never use more than the "10s" place?
I failed to truly appreciate how cooked reddit was with bots until I accidentally clicked Popular and stumbled upon a national subreddit post with a 'chad meme', starring a particular political leader, whose unpopularity is hard to adequately convey to foreigners.
It was not just that this post had been so severely upvoted, but the comment section itself had a mantra more or less, with very little actual conversation, just echoing the same sentiment; and all those comments in turn upvoted to the point of drowning out the lone comments at the bottom (not downvoted, just not upvoted) expressing "???". I don't know if I'd ever even written the word 'astroturfing' before expressing my bafflement at a friend, so I don't think I'm very tinfoil hat about these things.
It was just utterly bizarre to see someone who can barely get a single win in public discourse being heralded -- monotonously -- like he was the second coming.
For me it was a wholesome response. It seemed genuinely kind/human.
Click on user profile...it's a bot just pumping out posts like that. Looked organic when seen in isolation, but when you see a wall of them you see that it's got to be an LLM (with a good prompt).
That was disheartening...I had kinda accepted that the sht-stirring rage posts might be bots but the kind comments too? Ouch
He himself says that some AI-assisted projects are actually good while others aren't. A devil's advocate could say that it's a matter of quality and also a matter of interest. And quality is, in some manner, a matter of taste - for example, there is a market for regular fast food and for organic or higher-quality fast food, while others never take part.
He says that "Does your offering contribute anything to the community?" is the bar that should be met for whether something made with AI should be shared or published. But let's be honest: first, a good number of things people made or commented or did even before LLMs didn't meet that bar, and second, most people think their pet project, or pet whatever they created, does contribute to the community. I have a senior security leader in my organization who thinks the Claude Code security scanner he made is amazing, while I think it's a piece of garbage.
We're all recalibrating.
I do really think this is just a brief period before most people realize that slop posting doesn't personally get them anything, most give up, and we go back to roughly the old ratio of cool things with real value - but on a bigger scale, because AI helps one person do more.
I think people who want to push a certain narrative might just set up a quick bot and tell that bot to start posting on Reddit or whatever, and just let it run. Why not? Little effort on their part, and they might actually have influence. The same reason spammers apparently think it's worth sending me 10 text messages per day about a loan I've been approved for. It probably works 0.0001% of the time, but that's okay if it's all automated.
Especially here on HN, with Show HN and such, the forcing factor is "I get no votes or community recognition".
But I don't entirely disagree with you. I don't think things will totally go back, but I think they will settle down far more than now, especially where things are a little more niche.
It used to be because the comments lacked any critical thinking. This is probably due to the fact that most people on instagram are teenagers. That's fine, and for that reason I stopped reading comments.
But now it's pretty obvious that the comments are LLMs talking. Whether a human initiated it, no idea, but the big walls of text from bobbyfoo2012 seem highly unlikely.
While the site has moved to using /showlim, the AI garbage just bypasses that and goes straight to the home page. Almost every project that’s being shown is vibe coded and looks exactly the same - generated by Claude or the like. This is an excellent test for the site: will it be able to adapt or do we simply end up with a husk of what HN was and it’s the AI posts driving majority of engagement, Overton window, and upvotes/downvotes?
I look forward to this, I think it is an exciting development.
It's implemented for plan9, but clients could be made for any OS:
I'm gonna speak on behalf of language models' capability of making online communities better. In recent times, the frustrating forum phenomenon of "learned helplessness" is making me too annoyed to participate. Even in a fantastic subreddit as /r/LocalLLaMA, there are people posting replies in the vein of
> user1: please help me understand this acronym the post title speaks of
> user2: (explains in detail what it means)
In the "good old days", a low effort, surface level question would result in someone either muting or banning the person to keep the discussion high quality.
There I am, browsing a forum dedicated to LLM enthusiasts, and an unbelievable number of people are asking LMGTFY/RTFM-level questions they could find an answer to even from a free Google Search AI summary, and people are rewarding them by actually responding with effort.
Thanks to models being quite intelligent at answering basics, the ban-hammer should be used more swiftly if people keep polluting forums with low-quality posts. There's no need to feel bad for them not having the time or capabilities to read through years of forum posts to feel qualified to answer.
Maybe the authors of these sloppy posts could even be outright muted or banned with a heavier hand for the sake of quality.
They won't stop talking about it and defending it. But I can't get anyone to share their amazing work with me.
There is a reason the Show HN projects that are mostly vibecoded don't get much response: they aren't any good. Comments that are AI-generated are hollow. Videos that are AI-generated are a shell of their sources.
These posts also usually get all these glowing comments from users who clearly haven't checked the code. It's even worse when authors get busted and claim "Okay, Claude wrote it, but the design is mine" despite clearly not understanding the output themselves.
Unfortunately, that makes high-effort projects less visible. The SNR will probably keep getting worse until slop can be flagged on HN.
2. Only human-generated input in the composer: no copy/paste, no file uploads, etc. Control the composer; control the camera sessions for photos and videos.
3. No algorithmic feed designed for ad spend and eyeballs.
4. Moderate.
How, at scale?
There are maybe 20 or so online handles I know, some of whom I've met in person, who I deeply trust. To the extent that I fully trust anyone they vouch for too.
Even with just one degree, that's a large enough international semi anonymous online community that can provide value to each other through online text based communication. Doesn't need iris scans or credit card checks, just "patio11 on hn Twitter and whatever his domain is is one of the good uns" and a network effect from there.
Already seeing some form of this reputation staking in eg Pi PRs, everyone is treated as clanker slop by default but the entry bar remains quite low to prove and build reputation.
I don't think online communities will stay the same in the face of AI but I do think whatever comes next will strongly rhyme
If platforms had a subscription model that you had to pay for in order to do more than just read comments, there’d be a lot less LLM content. There would also be a lot less of all content. But maybe that’s the price you pay (literally) to get rid of AI slop.
Oh hey, now thats an idea.
Thank you OP, this puts into words why I no longer look at Show HNs.
This synthetic participation (LLM or otherwise) has catalyzed weakspots in HN's high-trust environment. The weight we give to the average HN comment is orders of magnitude higher than the average Reddit (& co.) comment, and this relationship probably goes both ways (much higher ROI on ads/propaganda). Due to the low volume & high trust, it seems to be a very different (easier) environment in which to achieve pervasive propaganda/advertising/etc with a disproportionate impact.
I remember when some new LLM version came out (maybe from Meta?) I saw something like 3 of the top 10 posts on the front page were all variations of "Foobar 2.1 New Model". Perhaps not explicit, deliberate manipulation, but the result was the same, and apparently allowed. How many of those generic LLM websites (https://letsbuyspiritair.com/ comes to mind) show up on the front page per day? Zero effort static front-ends for some unremarkable data. I'm not going to touch the politics minefield, but that is a weakspot too.
All of this, and yet I think HN has handled it relatively well. I really appreciate not seeing comments of the form "I asked Clog/Gemini/etc. here's 5 paragraphs". Places like Reddit do not have the agility or control, and have degraded accordingly.
It makes me sad to think that a short time ago, every forum was ~100% humans, and now it is some fraction of that. I wonder if I will ever see that again.
And Listen Notes is removing 4000 to 8000 ai slop podcasts per month - https://www.listennotes.com/podcast-stats/#growth
I even went so far as to try to delete all of my messages with an extension; it took like 2-3 days to delete my account. Even then, I don't trust them enough to believe they don't keep copies of deleted messages.
I regret writing a lot of personal details and feelings in 1:1 chats, and sometimes also in servers.
I yearn for a new age of decentralized communities and fora...
Even if everything online is fake, events are not. So if people say they’re going to show up somewhere, there must eventually be a moment of truth. And then you can form high trust private group chats to keep talking together.
It may be hard for the current generation of chronically online people to adjust to that new reality, but the next generation of kids growing up can get used to this now, and eventually socializing in person will be natural again and the internet is for bots and weirdos LARPing as something they’re not.
The large group will have to endure the manipulations that we've come to know and hate from the internet, but they'll also be better coordinated than the small ones. They'll vote together, buy the same sorts of things, have an outsized influence on the global conversation... They'll define the de facto majority opinion whether or not they actually are a majority and whether or not it's authentically their opinion.
I don't think that's a good outcome. We need ways to get on the same page en-masse, if only to counteract the harms caused by whichever highest-bidder is currently using an AI horde to control the other group. Besides, we should save them from this abuse for their sake, if not for ours.
The internet is worth fighting for, if we abandon it entirely we'll be forever at a disadvantage against those who would use it to manipulate.
Strict invitation trees? Small signup fees? No SEO incentives?
Since it creates a tree structure, you can wipe out entire armies of bot/spam/otherwise accounts by following the vouches up the tree.
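A minimal sketch of that subtree-ban idea (the data model is assumed for illustration, not taken from any particular platform): each account records who vouched for it, and banning one account cascades to everyone it transitively vouched for.

```python
# Vouch tree sketch: accounts form a tree via "who vouched for me",
# so banning a bad inviter wipes out their whole subtree of accounts.
from collections import defaultdict

class VouchTree:
    def __init__(self):
        self.voucher = {}                # account -> who vouched for it
        self.children = defaultdict(set) # account -> accounts it vouched for
        self.banned = set()

    def join(self, account: str, vouched_by: str) -> None:
        self.voucher[account] = vouched_by
        self.children[vouched_by].add(account)

    def ban_subtree(self, account: str) -> set:
        """Ban an account and every account it (transitively) vouched for."""
        stack = [account]
        while stack:
            a = stack.pop()
            if a in self.banned:
                continue
            self.banned.add(a)
            stack.extend(self.children[a])
        return self.banned
```

Following the vouches *up* the tree, as the comment suggests, is just the `voucher` map in reverse: from a caught bot you can walk to its inviter and decide how far up the chain the ban should propagate.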
* dead online communities
* highly-invasive, government-mandated "prove you are a human" requirements in order to participate in online communities
The intriguing part is that I think it works against scaling. The incremental cost for me to use the 500GB of free space on my disk is $0, but someone scaling a bot farm has to buy all their space.
Real people tend to have a lot more idle capacity than optimized, scaled businesses, so any kind of proof of idle capacity seems like it would disadvantage bot farms.
I’ve also thought that proof of collateral spending would be a good system. For example, you buy groceries and the store gives you a token saying you spent $X of real world money. Those tokens help show you're not a bot. Keeping that system honest and equitable would be extremely difficult though.
Maybe schools could give kids tokens for attendance. It sounds kind of dumb, but who knows.
I have turned to blunt instruments: blocking individuals on their first cliche banner-wave. It has substantially improved comment quality but I still suffer from the problem that I don’t block stories entirely.
I'm not sure about that.
We get it, the current narrative is that coding is the big thing, promoted by billionaires and scabs alike.
So, the coding narrative must be protected until the IPO of Juniper^H^H^H Anthropic happens and the whole thing implodes.
You already could have code for free and faster by using "git clone" without a company of thieves selling your own output back to you.
They muddy the waters. They wheedle, rules-lawyer, carve out exceptions, and talk about how important it is to have nuance in separating virtuous applications for slop from bad ones, and that focusing on the bad ones is actually very tedious and rude. We should have polite discourse about the good things about slop and stop being so mean about bad slop, which isn't even really a problem. The bad kinds of slop will be solved soon, probably, and the harms are overstated. They colonize spaces.
If moderators don't swiftly throw these slop enthusiasts out on their ass, slightly less polite ones will post slop slightly less politely. More and more of the people participating in the space will have favorable opinions toward slop, and shout down people who object to slop. In no time at all, your community is a slop bar. Who could have imagined?