I feel like the moderated subforum is a fundamentally broken system for dealing with content. I much prefer the Federated / X / Instagram approach where I can deal with users and have the tools needed to curate my own content, instead of relying on some ideologically captured no-name account that chooses what I can or cannot see based on whims.
Also, honestly, with AI/LLMs now, do we even need human moderators anywhere anymore?
It makes a great propaganda machine though, given humans' tendency to measure their own opinions against social cues.
I've seen something similar over the last ~17 years: a bunch of the same terminally online accounts uploading content from our local media outlets to country-related subs and local Digg-like sites - both ones still active and ones long defunct for 10 years now. Some of those users even appeared on Mastodon and Bluesky.
Social link aggregators were created for people to share their favorite links and places on the Internet so others could see them, have fun, expand their knowledge, and so on. For me it was the cherry on top of the Web 2.0 period, when everything was fresh, beta, and innocent. That lasted for a while, up until other people and entities figured out that such sites could be used to promote their content and insert ads. The next stage, which continues to this day, is opinion control by "curating" the content and/or the reactions in discussions - still done by humans, but with an ever more prevalent presence of convincing bots.
Reddit itself lost its impartial and independent status a while ago. Big subs related to media franchises or big corporations are so heavily controlled that it's impossible to submit content that's critical. It's all a happy world seen through rose-tinted glasses, or, as some say, toxic positivity. There are still niche places where moderation is limited, but as I said last time, from my own experience: such subs were targeted by bad actors who submitted forbidden content to provoke lockouts, so that later they could take them under their control.
hn isn't free of some of these issues either. While discussions still remain at a good level (though degradation to reddit levels is already happening), there's no control over content: there are accounts that do nothing but upload links every few minutes or hours.
I'm not sure if it's possible to have link aggregators or multi-thematic forums free of such... issues. A similar problem with staking out "real estate" happened on lemmy when part of the userbase decided to abandon reddit due to its controversial changes.
I don't think it's an unsolvable problem although new legislation is continuously being considered in order to make the solution harder. Still, not impossible.
IMHO Reddit would be better if it had AI moderators that strictly followed a sub's policies. Users could read the policies upfront before deciding whether to join. New subs could start with a neutral default policy, and users could then propose changes to the policy and democratically vote on them.
Which, in fact, would open up the same rat race with determining which accounts are real and so forth.
Not disagreeing with you, just circling around this same problem. Feels like the world still isn't ready yet.
Most places can hide posts and block users at the user level, so why not select which mods can do that for you?
On Google+, it was possible to individually block specific profiles.
This meant that the blocker wouldn't see the blockee's posts and the blockee wouldn't see the blocker's, which is pretty much expected behaviour.
But on third-party threads, if a blocker and blockee were both commenting, others could see their comments but they'd be mutually invisible. As the platform matured and the number of such blocks increased, this behaviour became common enough to be frequently remarked on. If the thread host isn't sufficiently diligent in their own moderation (effectively, each post author is the moderator of their thread), it's also possible for such discussions to devolve quickly.
I guess Usenet would be another case where individual killfiles were often applied.
This isn't quite the same as your proposal, but it does raise the challenge that when multiple moderation regimes are in play, there is no canonical view of a discussion, leading both to potential confusion over what has or hasn't been said and to potential derailment (or similar behaviours) if a sufficiently disruptive participant is not universally blocked. The canonical flamefest, after all, is often just two profiles / participants responding endlessly.
Diaspora* is similar to G+, except that on third-party threads the blocks don't work: if A blocks B but C does not block B, then A and B will see one another's comments on C's posts / threads. This ... can be frustrating.
Oh, and the post-author-as-moderator model also somewhat resembles what you'd suggested, in that you could choose to participate on a particular profile's posts given that profile's moderation practices. I found that there were several people who did an excellent job of this and who were, in effect, quite effective salon hosts, which was how I came to see the G+ moderation model over time. This differs from what you suggest in that every participant on those threads had the same moderation experience, but it was possible to choose moderation practices based on whose threads you chose to participate on. And I'd definitely avoid poorly-moderated hosts.
One need only remember how easy it was to take over IRC channels with a few hundred bots to see the endgame of this rationale… it cannot be patched out, it’s inherent to the internet.
Whatever would make a vote valid can (and will) be gamed.
In this setup, having users elect the moderator leads to cases where small groups form their own special-interest group and then trolls challenge the moderator.
There may be some oversight on the large subforums, but not all.
Does a subforum offer the same? Once the mod is elected, are you going to sit down with him each day to make sure he is doing the job to your wishes and expectations? I say (ish) in government because it often doesn't even work there, even where people have heavily invested life interests, with a lot (maybe even the vast majority!) of people never getting involved in democracy. A subforum? Who cares?
If there were to be elections, it is unlikely they could be anything other than authoritarian, with the chosen one becoming the ultimate power.
Are you sure? My understanding is that accounts were only allowed to create two communities.
That limit wouldn't stop you creating more communities with more accounts anyway.
It was just a copy of reddit. How useful?
Every site that is driven by user posting seems to be headed towards being overrun by AI bots chatting with each other, either for sake of promoting something or farming karma.
And there’s really not much point in publishing good content anymore, since AI is just going to slurp it up and regurgitate it without driving you any traffic.
Though it’ll be interesting to see what happens to ChatGPT and the like once the amount of quality content for them to consume slows to a trickle. Will people still use ChatGPT to get product recommendations without Reddit posts and Wirecutter providing good content for those recommendations?
This happens now on Onlyfans too. Content creators hire agencies which in the best case outsource chatting to "customers" to armies of cheap labour in Asia, and the worst case use bots.
The dead internet theory [1] is probably not just a theory anymore. HN recently made a policy to not allow AI posting and posters, but do you honestly think that's going to work? I would place a bet that a top HN poster within the next year is outed as using AI for posting on their behalf.
Perhaps not the worst thing in the world?
Yet people act like the internet is somehow different. The internet is a massive society. Social networks are very much like virtual countries, or even continents. We’ve all enjoyed the benefits of living in this society of zero consequence, but it’s now been overrun by the very worst people, just like the imaginary country above.
You claim we can’t solve this problem, but we already have solved it here in the physical world with identities, laws, and consequences. The real problem is that most people don’t want to let go of the very thing that is the problem: anonymity. Unfortunately, there won’t be a choice for much longer. The internet will certainly be dead without a system that ties IP addresses and online identities to real people.
No, it’s not the internet we all wanted, but humanity has ruined the one we have.
Verifiable credentials; services can get persistent pseudonymous identifiers that are linked to a real-world identity. Ban them once and they stay banned. It doesn’t matter if a person lets a bot post inauthentic content using their identity if, when they are caught, that person cannot simply register a new account. This solves a bunch of problems – online abuse, spam, bots, etc. – without telling websites who you are or governments what you do.
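The persistent-pseudonym idea can be illustrated with a toy sketch. This is NOT a real verifiable-credentials protocol (those use blind signatures or zero-knowledge proofs so the identity provider can't track logins either); it only shows the core property: a verified person maps to a stable, banable ID per site, and IDs across sites are unlinkable. All names and the key are invented for illustration.

```python
import hashlib
import hmac

# Secret held only by the (hypothetical) identity provider.
IDP_SECRET = b"idp-master-secret"

def site_pseudonym(verified_user_id: str, site: str) -> str:
    """Derive a stable per-site pseudonym: same user + same site always
    yields the same ID, while different sites get unlinkable IDs."""
    msg = f"{verified_user_id}|{site}".encode()
    return hmac.new(IDP_SECRET, msg, hashlib.sha256).hexdigest()[:16]

# A ban list keyed on the pseudonym survives account re-registration,
# because re-registering produces the same pseudonym again...
assert site_pseudonym("alice", "digg.com") == site_pseudonym("alice", "digg.com")
# ...while the site never learns the real identity, and two sites
# cannot correlate their users.
assert site_pseudonym("alice", "digg.com") != site_pseudonym("alice", "example.org")
```

The site only ever stores the 16-hex-character pseudonym, so "ban them once and they stay banned" works without the site learning who the person is.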
Anubis is one such answer [0]. Cryptocurrency and micro transactions are another.
In the last few decades, spam was a problem because the marginal transaction costs of information exchange were orders of magnitude lower than they had been. Note that physical mail spam was, and still is, an issue. Focusing on perceptual or fuzzy computation as the limiting factor, through captchas and other "human tests", allowed most spam to be effectively mitigated.
Now that intelligence is becoming orders of magnitude cheaper, perceptual computation challenges no longer work, but we can still do computation challenges in the form of proof of work or proxies thereof. Spam will never wholly go away but we can at least cause more friction by charging bot networks to execute in the form of energy or money.
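A minimal hashcash-style proof-of-work sketch of the idea above (difficulty and challenge format are invented for illustration): the poster must find a nonce whose hash starts with a number of zero hex digits, so bulk posting costs CPU time while a single legitimate post stays cheap.

```python
import hashlib
from itertools import count

def mint(challenge: str, zero_digits: int = 4) -> int:
    """Search for a nonce whose SHA-256 digest starts with the required
    number of zero hex digits. Expected cost grows 16x per digit."""
    target = "0" * zero_digits
    for nonce in count():
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce

def verify(challenge: str, nonce: int, zero_digits: int = 4) -> bool:
    """Verification is a single hash - cheap for the server."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * zero_digits)

# The poster pays (many hash attempts), the server verifies in O(1).
nonce = mint("post:hello-world", zero_digits=3)
assert verify("post:hello-world", nonce, zero_digits=3)
```

The asymmetry is the point: minting requires thousands of attempts on average, verifying requires one, so a bot farm pays the cost at scale in energy, exactly the friction described above.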
You can already see it happening now - at least the bots that write like vanilla Claude/ChatGPT. Presumably there is a much larger hidden cohort of bots that are instructed to talk more naturally and thus are better adept at flying under the radar…
Which would be totally fine with me TBH.
Rather amusingly, invite-only torrent sites might be the only semi-public authentically human hangouts left on the internet!
This means that only sites which verify identity will have any value in the future. And by verified, I mean checked against government ID and verified as real.
No amount of sign up fee works as an alternative.
Note that a site can verify identity, prevent sock puppets, ban bad actors and prevent re-registration, all while keeping that ID private.
You still get a handle and publicly facing nick if you want it.
The company which handles this correctly will have a big B after it. Digg actually has a chance at this.
It has no users, so the outrage won't exist in the same capacity. Existing platforms will be pummeled in the market if they try to convert to this type of site, as their DAU will likely drop a thousandfold, just due to the eliminated bots.
But Digg could relaunch this way. And as exhibited, this is now the only way.
The age of the anonymous internet is over, it's done. People not realizing this are living in the past.
Note, I don't like this, but acknowledging reality is vital. Issues with leaked databases, users, and hacking of PII are all technical and legislative issues, and not relevant to whether or not this happens.
Because it will happen, and is happening.
It should be noted that falsifying ID is a crime. Fake ID coupled with computer fraud laws will eventually result in hefty jail time. This is sensible, if people want a world where e-commerce and discourse are online... and the general public does.
And has exhibited a complete lack of care about privacy regardless.
You just published good content knowing AI will slurp it up and not give you any traffic in return. I'm now replying to you with more content with the same expectations about AI and traffic. Why care about AI or traffic or recognition? Isn't the content the thing that matters?
It's like answering technical questions in an anonymous/pseudonymous chat or forum, which I'm sure you've done, too. We do it to help others. If an AI can take my answer and spread it around without paying me or mentioning one of my random usernames I change every month or so, I would be happy. And if the AI gives me credit like "coffeecup543 originally posted that on IRC channel X 5 years ago", I couldn't care less. It would be noise to the reader. Even if the AI uses my real name, so what?
The people who cared about traffic and money from their posts rarely made good content anyway. Listicles, affiliate-marketing BS, SEO optimizations, stretching a video that could be 1 minute into 10 minutes, or text that could've been 5 articles into a long book - all of it existed before AI. With AI I actually get less of this crap - I either skip it or condense it.
In the most simple sense - Yes, it is the content that matters.
In the more practical sense - cognitive and emotional resources are limited and our brains are not content agnostic.
We have different behaviors, expectations and capacities for talking to machines and talking to humans.
For example, if I am engaging with a human I can expect to potentially change their minds.
For a machine? Why bother even responding. It’s of no utility to me to respond.
Furthermore, all human communication comes with a human emotional context. There are vast amounts of information implied through tone, through what we choose not to say. Sometimes people say things in one emotional state that is not what they would say on another occasion.
To move the conversation forward, addressing the emotional payload behind the words used, matters more than the words used themselves.
There are a myriad reasons why humans are practically poorer for these tools.
They will try and OpenAI will sell favorable placement to manufacturers.
- You know who your online invitees are, but not your invitees-of-invitees-of-…
- You can create an account, get it invited, then create an alt account and invite it. Now the alt account is still linked to you, but others don’t know whether it’s your friend or yourself. (Importantly, you can’t evade bans with alts; if your invited users keep getting banned, you’ll be prevented from inviting more if not banned yourself)
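The ban-propagation rule in that last parenthetical can be sketched as a small invite tree: every account records its inviter, and an inviter whose invitees keep getting banned loses the ability to invite. The class name, threshold, and account names are all invented for illustration.

```python
BAN_THRESHOLD = 3  # invented: banned invitees before the inviter is frozen

class InviteTree:
    def __init__(self):
        self.inviter = {}   # account -> who invited it
        self.banned = set()
        self.frozen = set() # accounts no longer allowed to invite

    def invite(self, inviter: str, new_account: str) -> None:
        if inviter in self.frozen or inviter in self.banned:
            raise PermissionError(f"{inviter} may not invite")
        self.inviter[new_account] = inviter

    def ban(self, account: str) -> None:
        self.banned.add(account)
        parent = self.inviter.get(account)
        if parent is None:
            return
        # Count how many of this inviter's invitees have been banned.
        bad_children = sum(
            1 for child, p in self.inviter.items()
            if p == parent and child in self.banned
        )
        if bad_children >= BAN_THRESHOLD:
            self.frozen.add(parent)

tree = InviteTree()
for alt in ("alt1", "alt2", "alt3"):
    tree.invite("me", alt)
    tree.ban(alt)
assert "me" in tree.frozen  # three banned invitees freeze the inviter
```

So the alt-account trick costs you: each banned alt burns reputation on the real account that invited it, which is why ban evasion doesn't scale in this model.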
I honestly believe it might not even be such a bad thing. People were arguably better without social networks and media, and it's perhaps better to let the cancerous thing just die and keep the internet just as a utility powering boring things like banking and academia.
I know this is going to sound horrible, but: how about asking for money to contribute, period? Maybe have a free tier of a couple of comments, etc... But if you want to build a troll factory, sure... show us the cash?
Twitter is full of blue checks that are just bots and automated reply guys.
I'm now treating all these bots as a stressor on our defense systems, and we will end up having to learn how to build a real Web of Trust, and really up our game on the PKI side. We also need some good zero-knowledge proof of humanity that people can tie to their Keyoxide profile, so that we can just filter out any message that is not provably associated with a human.
The creative loop moves inside the agentic chat room, where we do learning, work, art, research, leisure, planning, and other activities. Already OpenAI is close to 1B users and puts multiple trillions of tokens per day into our heads, while we put our own tokens into their logs - an experience flywheel, or extended-cognition wheel, of planetary size. LLMs can reflect on and detect which of their responses compound better in downstream activities and derive RLHF/RLVR signal from all our interactions. One good thing is that a chat room is less about posing than a forum, but LLMs have taken to sycophancy, so they are not immune - just easier to deal with than forums. And you can more easily find another LLM than a replacement specialty forum.
The internet archive is my safe haven these days, i can go back and remember the old internet.
Digg.com Is Back - https://news.ycombinator.com/item?id=46671181 - Jan 2026 (10 comments)
Digg.com relaunch public beta is live - https://news.ycombinator.com/item?id=46623390 - Jan 2026 (18 comments)
Digg.com (Relaunch) - https://news.ycombinator.com/item?id=46524806 - Jan 2026 (3 comments)
Digg.com is back - https://news.ycombinator.com/item?id=44963430 - Aug 2025 (204 comments)
Digg is trying to come back from the dead with a reboot - https://news.ycombinator.com/item?id=43812384 - April 2025 (0 comments)
(context so people don't have to click links)
Damn, that didn't take long at all...
Now it's gone again, without a heads-up or a way to get a backup out of it, it seems. Can't say I'm a fan of that.
They could at least put it in read-only mode for a short time and allow downloading of extant community content prior to a scheduled "reset day".
This smacks of flailing leadership and zero respect for their target user demographic.
Their plan is to make the internet what it was 22 years ago.
Example: https://0x0.st/8RmU.png
Next time try doing it in a way that you control it.
My main point wasn't that, though. It's simply a bad and low-effort way to handle the situation, and like one of the other replies points out, there are better options. They could have just as well disabled posting and maybe even viewing of submissions and communities for the time being. Just shutting it all down immediately without notice leaves a bad taste in my mouth, and I will not be among the people returning for their next relaunch. I am sure others feel the same way, and I don't think it is a wise decision to needlessly put off your early adopters if you're hoping for them to come back "next time".
I can see why the team got overwhelmed. I wouldn't want to have to deal with that.
:)
There was a lot in the new digg that I was concerned or at least not optimistic about but come on - are we even going to try anymore?
Two months, according to The Verge.
https://www.theverge.com/tech/894803/digg-beta-shutdown-layo...
This is particularly embarrassing since from what I recall they were all in on AI with the new website, so to shut it down so fast because of it…
There are subreddits within Reddit such as https://www.reddit.com/r/neutralnews/ that have strict rules around sourcing, etc. However, I think that’s not what most users want, and may not be quite what you’re looking for either, apologies.
In the same way people want to be fit.
There are 3 horsemen of Internet forums; one of them is topics with a low barrier to entry.
At that point anyone can speak up, and their opinion takes up as much screen real estate and reading time as a truly informed take (often less reading time).
By putting effort barriers in place, it forces a fitness test that most users (and bots) fail.
Another subreddit which has strong rules is r/badeconomics. I didn’t know about neutralnews, so thank you for giving me another example to add to the list.
Topical forums tend to have a much higher SNR. My favorite forum of all time, johnbridge, had none of those issues. Sadly it died this year all the same, but many others still exist. When you have a forum dedicated to something that requires a minimum barrier to entry, the more useless folks get shunned away pretty early and easily.
- Users don't have to pay to post links/stories
- Users have to pay to comment on links/stories
- Users have to pay to "upvote" comments. Downvotes don't exist
- Each link "lives" a certain amount of time before it is locked
- After lock time, users who posted the link get "paid" a % of the $ collected from comments/upvotes. Comments that are upvoted also earn $ proportionally to their upvotes
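A rough sketch of the settlement step described above: after lock time, the pot collected from paid comments and upvotes is split between the submitter and the upvoted commenters. The 30% submitter cut and the usernames are invented for illustration; the proposal doesn't specify percentages.

```python
SUBMITTER_CUT = 0.30  # invented: share of the pot that goes to the submitter

def settle(pot: float, upvotes_per_comment: dict) -> dict:
    """Split a locked link's pot: fixed cut to the submitter, the rest
    divided among commenters proportionally to their upvotes."""
    payouts = {"submitter": round(pot * SUBMITTER_CUT, 2)}
    remainder = pot - payouts["submitter"]
    total_votes = sum(upvotes_per_comment.values())
    for commenter, votes in upvotes_per_comment.items():
        share = remainder * votes / total_votes if total_votes else 0.0
        payouts[commenter] = round(share, 2)
    return payouts

# A $10 pot with two commenters at 6 and 2 upvotes:
print(settle(10.0, {"ann": 6, "bob": 2}))
# {'submitter': 3.0, 'ann': 5.25, 'bob': 1.75}
```

Since there are no downvotes, payouts are monotone in upvotes, which is what aligns commenters with writing things other paying users want to reward.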
Hashcash was conceived to fight automated email spam. Participating in a discussion must cost something; that's the only way bots and spam will be even partially stopped. Or, if they start optimizing to get "the most votes," then so be it - their content will increase in quality.
If this were to exist today, I know I would be incredibly critical of it.
I kind of expected this. The way some of these people work, if the site isn't an instant unicorn, it's trash. But if the goal is a good community, that is something that takes time to build and should grow slow. The incentives are all backward.
It was fine, people talked about work, personal stuff, travel, until one person posted about their disappointment that their state was limiting various services or rights to gay people. For them this meant their rights were in question and they were understandably upset.
Immediately some folks cried politics and that they shouldn’t post about that sort of thing.
To the user posting it it was about their life…
I don’t think “no politics” rules really make much sense. For someone it’s more than politics, and IMO because a topic is touched by politicians or government shouldn’t make it disallowed.
As time goes on, more and more stuff is political, because politicians stick their grubby little hands in everything. What you drink is political, what you eat is political, who you fuck is political. It's exhausting for anyone even slightly outside the status-quo.
The vast majority of people get on a forum to escape their lives; they do not want to see ever more (or worse) content about their daily lives.
You're right, there needs to be some outlet, but when people propose this it's because they are sick and tired of politics - and the injection of politics into everything is not helping those politics, it just makes things worse.
Tons of people aren't political creatures and want nothing to do with politicians. The notion that more politics will fix things isn't borne out by Reddit, X, the US Congress, Brexit, etc. It's too easy to divide and manipulate people.
No it wouldn't be. And if your definition of "politics" includes "literally every time a thing happens" then your definition is so broad as to be useless.
When people say that they want politics banned, they are talking about the extremely controversial arguments that are almost completely unrelated to whatever the community is about. I.e., if you run a group about cheese making, and someone comes in and starts arguing about whether an ICE shooting on the other side of the country was justified or not, that is... off topic. And everyone with a brain can understand that.
It really isn't that hard to figure out which topics are related to cheese making and which have almost nothing to do with it, even if someone could make a bad-faith argument that they're related (e.g., your response would probably go something like "Well, what if someone knows a cheese maker who is here illegally? That's why ICE enforcement on the other side of the country is relevant!" You could say that, but we'd all know you were arguing in bad faith, or have some sort of issue with determining what words mean to regular people).
Partial credit in this example could go to political issues that are very obviously and directly related to cheese making. A new tax on cheese that goes into effect in your local town, and very directly is related to the group topic. Stuff like that might be OK.
And your response to this example would go something like "Oh, so are you saying that politics should be allowed?! How do you tell the difference between a cheese tax and an ICE shooting on the other side of the country? Hypocrite!"
And the answer to that is that we can use our brain. We all know that a cheese tax is more related to the local cheese making group than national politics. And we don't have to argue with clearly bad faith arguments that pretend otherwise.
To summarize, when people say that they want to ban politics, what they actually mean is that they want to ban completely off-topic controversial issues that others are trying to shoehorn into a group that isn't about that issue.
And people are saying that it is OK to compartmentalize things. Every group in the world doesn't have to talk about your pet issue. The cheese making group can just be mostly about cheese making and they don't have to argue every day about national immigration policies.
Basically incentivizing those who feel strongly about things to just pay up to talk about them in an exclusive area, which also keeps the site ad-free. Been apparently working for 25 years.
You thinking that astroturfing only happens for US politics is dangerously naive.
Am I completely off base or did they use AI to write the post complaining about AI?
Digg isn't just here again. It's gone again.
The LLM style is like nails down a blackboard, are people blind to it or do they just not even read the stuff they're posting?
The original Digg excepted, Kevin Rose's attention span is extremely limited. He will give something ~3-4 months of attention before (apparently) getting bored and wanting to move on to something else.
Up until that point, he will be an unrelenting hype man of whatever his attention is lasered on at that moment.
Then the hype posts start to drift. They show up once every few days, then once a week, then stop entirely. Any criticism or skepticism is considered a buzz kill in the cloud of good vibes only.
A few months later, a dramatic explainer post arrives (underestimating the cold start problem? Really??), outlining why the idea didn't work and why the next one will be better, for sure, for real.
This (AI generated) note from the current CEO paints an optimistic picture, but the most likely outcome will be that Digg simply doesn't launch. It's sustained on the nostalgic vapors of the old guard, not renewed by a replenished sense of purpose, or connection.
I'd say I'd love to be proven wrong, but I personally question the utility of a Web 2.0 social network phoenixing itself. We have endured a decade+ of originality being buffed out of web products, most now resembling variations of Bootstrap and shadcn in service of dev convenience and getting rich quicker.
Surely in the age of vibe coding, we can afford to take creative risks again, and think of something new.
Moonbirds
Digg
Too comfortable with money in the bank to give full attention to a new venture.
I'm done falling for the Kevin Rose hype train. Long time fan but this is just pathetic.
https://www.wired.com/2008/12/six-apart-pounc/
https://techcrunch.com/2012/03/14/kevin-roses-oink-shuts-dow...
I suppose bots could find forums that use the most popular software and still make accounts and spam, but it would be much more obvious and less fruitful for someone to spam deck builders in Vancouver (something I saw often on Digg) on a forum that is focused on aquariums owners in the midwest.
What is HN doing differently then?
Issue is we are seeing a ton of AI stuff getting posted so it's a losing battle.
Currently an unsolved problem - just stealthier on some platforms than others. Trigger the right topic on HN and the bots come out in-force together with humans sloppily copy/pasting LLM content.
Look at how many updoots it has. Look at how many vacuous, enthusiastic replies it got. That post is especially egregious, but you see stuff like that on a lesser scale every day here, now. My favorite bit is when they go out of their way to shill specific plans/pricing, e.g.:
> You really NEED the $200 Claude MAX plan.
I guess that in an ocean of upvote-based platforms, an island of hand-picked content was a welcome change -- at least for me.
The move (back) to a reddit-like site never made sense to me. Hopefully what comes next has real value to the users.
I'm a bit surprised with Alexis' involvement they didn't anticipate the bot problem. Alexis left reddit several years ago but I'm sure he's still in touch with the folks who run the place. It would've been worth it to talk to them about the threats they currently face and how they deal with them.
https://news.ycombinator.com/item?id=39046023
Apparently the reason why their articles were interesting was because... they copied all of their content from DamnInteresting. Once they were called out they stopped, and the quality went downhill.
I wonder if the "short-term" "fix" is that people will start to migrate off the web and into mobile - though none of this stops agents from using phone emulators, so it's kind of pointless, but I imagine crawling the web is easier for AI.
To be fair, I don't know Kevin Rose personally, so maybe he knows more than the industry, but I highly doubt it.
Reddit has the same problem. They are fighting it more or less successfully. I would look more in that direction.
We really need some way to "verify as human" in the next coming years.
I don't believe there is any practical way to do it.
Sure, there are ways to verify a human linked to a specific account exists in a one-off fashion, but for individual interactions you'll never know that it isn't an LLM reading and posting if they put even a small amount of effort to make it seem humanish.
I don’t understand what kind of shenanigans transpired. But it seems there’s more to it than "bots".
If it truly is bots, maybe a private invite only social network is the way to go.
...I'm in.
So people would go through one hurdle in life to get this ID, and then reuse it for every service.
Is this a worthwhile idea? I know many have tried, so help me poke holes in it.
2/ Spammer can hire real people to farm accounts
I think this idea might work if we
- create reputation graph, where valuable contributors vote for others and spread reputation
- users can fine-tune their reputation graph, so instead of "one for all", each user can have their own customized graph (pick 30 authorities and the graph is rebuilt from there)
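The per-user graph idea above can be sketched as a personalized PageRank-style propagation: seed reputation only at the authorities a user picked, then let vouches spread it, so each user gets their own scores instead of one global ranking. The graph, names, and damping value are invented for illustration.

```python
def personal_reputation(vouches: dict, seeds: set,
                        damping: float = 0.85, iters: int = 50) -> dict:
    """vouches: {voter: [users they vouch for]}.
    seeds: this user's hand-picked authorities (the '30 authorities').
    Returns reputation scores as seen from this user's viewpoint."""
    users = set(vouches) | {u for vs in vouches.values() for u in vs}
    users |= seeds
    rep = {u: (1 / len(seeds) if u in seeds else 0.0) for u in users}
    for _ in range(iters):
        # Seeds keep a constant injection; everyone else starts from zero
        # and only receives reputation through vouches.
        nxt = {u: ((1 - damping) / len(seeds) if u in seeds else 0.0)
               for u in users}
        for voter, targets in vouches.items():
            if targets:
                share = damping * rep[voter] / len(targets)
                for t in targets:
                    nxt[t] += share
        rep = nxt
    return rep

# Alice trusts only herself as a seed; reputation flows along vouches.
graph = {"alice": ["carol"], "bob": ["carol", "dave"], "carol": ["dave"]}
rep = personal_reputation(graph, seeds={"alice"})
# Carol (vouched by the seed) outranks Dave (one hop further);
# Bob, unreachable from the seed, gets nothing.
assert rep["carol"] > rep["dave"] > rep["bob"]
```

Swapping the seed set changes the whole ranking, which is exactly the "personal customized graph" property: the same vouch data yields different reputations for different users.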
What's interesting is that every subsequent attempt to revive Digg has been a bet that brand nostalgia outlasts institutional memory of why it failed. It doesn't.
This 1000x times
Moderation was really hard. We didn't have AI posters, but there were persistent posters who were extremely annoying (mostly in their post volume and long-windedness) while still following the rules. I was really trying a hands-off approach with moderation, and it seemed to be working for the most part. It's all moot now though.
It's been over a decade now, and reddit, despite its own recent exodus, still remains the default social link aggregator and a global multi-forum. Not to mention that younger generations prefer different kinds of platforms, and this largely text-based site might not be attractive to them at all.
Thanks for the fun this past year Digg.
Ironic, they use AI in their shutdown post that blames AI.
> Ironic, they use AI in their shutdown post that blames AI.
This… seems like regular prose to me. What makes you say so confidently it was written by AI?
And I will continue to die on the hill that Reddit only survived/became "successful" because of the legendary Digg slip-up and exodus. Alexis Ohanian still doesn't seem to have any clue that it was right-place-right-time, and Kevin Rose seems not to have learned much either. Can we stop giving either of them any more credibility? As with any social site, it's the user base/community that pulls it through the darkness. And no one was really asking for this.
Let sleeping dogs lie.
I wasn't a digg user, but this was done to combat 'voting rings' (bots), and the reddit migration was memed partially because it was/is far more open to manipulation. So at least their principles have been somewhat consistent.
I was an avid Slashdot user way back in the day, but the site was basically the same throughout the day, and I wanted faster updates. Digg did this perfectly for a time, but eventually I migrated entirely to Reddit (even before whatever that drama was that caused a big exodus from Digg).
I think Reddit right now is the sweet spot: up to date information, longer-term articles to read, and easy to catch up on things I missed. I was recently pressured to sign up for X (or Twitter or whatever), and I had to turn off all of the notifications since I was constantly spammed with "BREAKING: X RESPONDS TO Y ABOUT Z!!!!"
Right now having Reddit for scrolling and Hackernews for articles+discussion feels like it works for me.
There are decent small communities I'm a part of but the trash feels like it is encroaching.
And the notifications you describe are exactly reddit's notifications? "your comment received 10/20/50/100 upvotes!" "x responds to y about z" "News is trending"
I think the HN title needs adjusting
This. So much This.
> We're not giving up. Digg isn't going away.
Post title is misleading.
That could become a kind of solution here, but then again, as some already mentioned, it wouldn't stop bad actors from running AI on their accounts anyway.
Hmm...
> We underestimated the gravitational pull of existing platforms. Network effects aren't just a moat, they're a wall.
What does this even mean? How many metaphors can it mix up in one paragraph? Can't they write a blog post the old fashioned way, with feeling? Imagine reading a corporate blog post about being laid off which the founder couldn't even be bothered to write.
Amazing how close to corporate newspeak chatgpt can get (prompt was the headings of this blog post), it has the same sort of blank say-nothing feeling of this blog post: https://chatgpt.com/s/t_69b4890e54ac819193f221351ea900a7
The only website which became totally useless for me after the general availability of LLMs is OkCupid. It's indeed dead. The rest are fine.
What am I doing differently compared to everyone else?
I'm regularly using: telegram, whatsapp, wechat, hackernews, lobsters, reddit, opennet.ru, vk.com, pornhub, youtube, odysee, libera.chat, arxiv, gmail, github, gitlab, sourcehut, codeberg, thepiratebay, rutracker, Anna's archive, xda-developers.
facebook and twitter became broken for me, but not because of bots, rather because of the "smart feed" ("the algorithm"), which is hiding all posts of my friends and promotes incendiary garbage.
In other words, I am seeing enshittification full-scale, but not the bots.
YouTube comment sections are botted.
100% that entire page was written by an LLM. So fucking obvious and I’m so tired of reading the same awful writing style with all these corporate spiel rants. If you don’t care enough to write something yourself, just don’t even bother.
Step 1: Copy Reddit
Step 2: ?
Step 3: Profit!
Step 1: speed-run into the ground while loading it up with the debt of the purchase price and paying yourself management fees.
Step 2: close up shop, write down the loss and reduce tax liability for next year?
[0] https://techcrunch.com/2026/01/14/digg-launches-its-new-redd...
If they relaunch, I hope they develop something integrated with the fediverse. I believe the time for building walled gardens is over; plugging into the fediverse might give them a running start to build something together with the wider fediverse community - maybe something easier to use for non-techies and well moderated.
We will see I guess…
What's an actual viable solution to this kind of thing?
CAPTCHAs aren't it. Maybe micro-fees to actually post things would discourage bot posting? I really don't know.
Seems like it's just dead internet all over the place these days.
> Digg isn't going away.
Post:
> Digg is gone again.
i really enjoyed the new digg
I'm amazed it's still around. Metafilter too, although it seems to have a LOT fewer comments nowadays.
Dead internet theory confirmed, Digg the latest victim