I occasionally play a perpetually-in-alpha AAA+ game (I won't name it to avoid the flames) that recently asked users to fill out a questionnaire. At no point did it ask how they could make my time spent in the game more fun or awesome. They did explicitly ask, "What can we do to make you spend more time in game?". The focus was clearly on quantity, not quality. This made me realize that, perhaps, I should stop playing this game.
Social media and games use all sorts of dark patterns and engagement bait to keep you clicking, but no thought is given to giving anything back. There is a complete absence of awareness that the best forms of entertainment enrich you and then end. If they provided an amazing but brief experience that changed regularly, people would come back again and again; they wouldn't need to spend hours on it every single day to feel they're getting value and justify opening their wallets. Doom-scrolling and grinding excessively in games will only leave you stressed out and unfulfilled. Customers need to realize this and start voting with their wallets for experiences that end.
We need to turn things around and say, "The light that burns half as long burns twice as bright!"
I don't think it's that compelling to say "obviously no one wants to be on Instagram and they're getting manipulated into it." ...yeah they do! The question is whether you can make a compelling case that spending time on it is harmful.
I’ve been using the internet for longer than I care to admit, and I’ve never seen anything like it.
It was like 300 million junkies all lost their drug supplier at the same time.
Mass misery is still misery.
I can't say I know anyone who defends extended social media usage. Do you?
By that I mean: is the product addiction, with a shroud of media, or is it media which just happens to be addictive?
The entire revenue model is based on engagement and clicks; the product is incentivized to maximize time spent on the service at any cost. Addiction is a core engineering requirement.
It's the former, by design:
Facebook has in the past run emotional-manipulation experiments on its users without informing them
they're rotten from the head down
Now if only the dick heads running this complete rag could listen to the wonderful people who wrote that enlightened piece and let users unsubscribe: https://www.reddit.com/r/assholedesign/comments/rli0u9/how_t...
I've written more about this here: https://klemenvodopivec.substack.com/p/recommender-systems-n...
No JavaScript, no CAPTCHA, no DDoS^1, no geo-blocking, or other nonsense^2:
echo '
url https://www.economist.com/by-invitation/2026/04/29/stop-big-tech-from-making-users-behave-in-ways-they-dont-want-to
user-agent "Mozilla/5.0 (Linux; Android 14) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/127.0.6533.103 Mobile Safari/537.36 Liskov"
header accept:
output 1.htm
'|curl -K/dev/stdin
firefox ./1.htm
1. https://gyrovague.com/2026/02/01/archive-today-is-directing-...
2. What's up with the LinkedIn reCAPTCHA sitekey in the page source?
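For anyone puzzled by the `-K/dev/stdin` trick: `-K` makes curl read its options from a config file, one long option per line with the leading dashes dropped, and `/dev/stdin` lets the echo'd block serve as that file. The same settings can be kept as a standalone config file (values copied from the snippet above; the comments and the file name are mine):

```
# curl config file, e.g. saved as economist.rc and used with: curl -K economist.rc
# spoofed mobile Chrome user agent
user-agent "Mozilla/5.0 (Linux; Android 14) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/127.0.6533.103 Mobile Safari/537.36 Liskov"
# a header with an empty value strips curl's default Accept header
header "accept:"
# write the response body to 1.htm instead of stdout
output "1.htm"
url "https://www.economist.com/by-invitation/2026/04/29/stop-big-tech-from-making-users-behave-in-ways-they-dont-want-to"
```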
The most important evidence was just internal research saying exactly what the plaintiffs wanted.
Google Chrome is trying hard to become a mandated technology, but hasn't quite succeeded yet.
How do you describe in a legal way the difference between a useful feature people want and an addictive feature they don’t want?
I don't know how you'd write it in a law either, but if you're in a meeting at your tech company and the product owner or tech lead uses language like "We need to get users to do...", "We need to incentivize...", or "It should be easy to do X and hard to do Y...", then do whatever is in your power to steer away from it or stop it. You're not really building a product users want; you're pushing a behavior-modification scheme onto users.
One famous case was Apple suing Samsung over patents. Hard to prove until internal comms surfaced showing intent to copy the iPhone.
For laws like this it always boils down to "I'll know it when I see it" which is such a shockingly poor way to write legislation that I'm flabbergasted it doesn't immediately fail any amount of rudimentary scrutiny. Not to mention the latitude it grants for selective enforcement. It's basically Washington asking (through the Economist) for a leash on platforms that host their critics that they can yank at any time the population gets too rowdy, with the convenient justification that the algorithm is too good and our attention spans are in danger or whatever.
Infinite scroll is one obvious one. As well as forcing algorithmic feeds of accounts we don't follow.
That would be a lot of extra work for the platforms, but I think the results would be interesting. It amounts to legislating that certain features have to be optional and configurable.
Edit: I know what network effects are; I was talking about steps individual users can (and should, IMO) take. We should be helping our friends, family and neighbors find safe and healthy alternatives like Signal for comms. Build different networks that are actually social and not doomscrolling.
It still amazes me how engineers on HN are in awe of AI and LLMs, knowing that 90% of us will be affected (we won't be able to bring money to the table) once the higher-ups normalize using AI even more to reduce headcount. Not everything is about the technical details, people. Grow up.
It's deeply sad to see how our most beloved work (those side projects we pour ourselves into purely for the joy of it) will, in the end, be the very reason most of us lose our jobs (not all of us, but the majority). OpenAI/Anthropic/etc. simply took all of that and turned it to their advantage. It's capitalism, sure, but it's heartbreaking... I wouldn't mind being out of a job for another reason, but not for that one, please.
I'm a mid programmer at best compared to the top guys in the industry, who built stuff like OpenClaw, or those prodigy 16-year-old coders who became millionaires, and yet I don't fear the LLM-assisted coding future. I'm at peace knowing that I will either adapt to the LLM programming world using my knowledge in my favor, or adapt to a world where I will no longer be a SW engineer, but something else.
Also, I find it ironic and poetic that some SW devs want us to rise up and fight LLMs and the companies making them for disrupting this profession, when the SW dev profession was so well paid precisely because the SW products they wrote disrupted other people's professions, moving the savings from labor costs into the pockets of employers, who used SW to optimize repetitive processes and avoid hiring as many low-skilled people. Those devs never saw an issue with other people losing their jobs. "Learn to code", eh?
Oh how the turntables.
After years of near monopoly status these companies have a lock on many people's social lives. To give up Instagram is akin to giving up text messaging. "Just stop using it" isn't helpful advice to those people.
If Instagram disappeared tomorrow it would be different, because everyone would be in the same position. But preaching personal responsibility in an area subject to network effects doesn't work.
Now, would it be inconvenient to stop? Sure. But people need better self-control. Put that cookie down!
That's a straw man argument. I never said they were.
> There are even studies that show that it makes their users depressed.
What percentage of the population do you think are in the habit of reading academic studies about the effects of the products they use?
It all feels reminiscent of cigarette smoking. The damage was very well known yet people continued to do it. It took extensive government regulation to wean people off their addiction, not a "buck up, chump" motivational message.
What works for you (and me, actually) doesn't work for most people; humans are complex things.
That's asking every company to prove a negative before rolling out new features.
Could we have a regulatory agency that keeps an eye on dark patterns and deals with them as evidence emerges that something is harmful?
Found this document:
https://www.economist.com/by-invitation/2026/04/29/stop-big-...
Headlines (quote):
Instagram is an inevitable and unavoidable component of teens' lives. Teens can't switch off from Instagram even if they want to.
Instagram has become the ID card of this generation. It is the go-to tool for both measuring and gathering social prestige.
Instagram sets the standards not only for how teens should look and act but also for how they should think and feel.
Teens feel themselves to be at the forefront of new social behaviours to which there is no consensus on how to behave or cope. They sorely lack empathetic voices to whom they can turn for support.
Teens talk of Instagram in terms of an 'addict's narrative': spending too much time indulging in a compulsive behaviour that they know is negative but feel powerless to resist.
The pressure to ‘be present and perfect’ is a defining characteristic of the anxiety teens face around Instagram. This restricts both their ability to be emotionally honest and also to create space for themselves to switch off.
Anxiety around what to post and the potential cost involved in posting the wrong thing means teens are switching from proactive to passive engagement with the platform.
Insert credit card and two forms of id to log on...
We're going to get better and better at hacking the human brain - for good and for evil - and we're going to have to trade some free will and personal liberty to really keep the worst of it in check. The dark-pattern bullshit is the easiest thing to regulate, but I don't have a lot of hope even for that.
This leads me to think about the idea of procrastination as a mechanism of gambling by the sub-conscious. A subversive way of "raising the stakes on the game" in an attempt to "make things a little bit more interesting."
(Not only in terms of tech, but also in terms of ways of living popularized by celebrities, thought leaders, etc.)
Facebook is not that.
Huh? Does anyone actually care any more? The kind of moralizing busybodies that spend their time shaming the tobacco industry are few and far between.
There is no market if you have no mechanism for price discovery and no meaningful alternatives, and users are addicted, confused, and simply unable to switch.
Things WERE better before we combined Skinner boxes with ad tech, before we preyed on users and applied every trick in the book to entrap them.
A “We respect your privacy” banner, with a big green OK button and a tiny-print “manage data collection” link that had consent for everything automatically pre-approved.
For example, infinite scroll is a product of a news feed and a news feed is algorithmic. What this produces and what it reinforces in the user is one thing but not really related to some small grey text in an Amazon Prime sign up.
So let's break it down. Some of the issues are:
1. Intent to sign up.
2. Difficulty in cancelling a service. This is what I call the "gym model": easy to sign up, hard to cancel. This can be handled. California, for example, requires companies to offer online cancellation; most other states don't. This is so much of an issue that you'll regularly find people advising others to change their address to California just to get that option. There's no reason why every state, or the federal government, couldn't do the same.
3. Selling of your data. Not really touched here but it's going to be a big issue going forward;
4. Addictive behavior to maximize time spent on platform; and
5. What should we allow or disallow for minors? This is going to be a big issue. We're only at the start of the Age Verification Era (like it or not). But IMHO no company should be talking about how to maximize time spent for 13-year-olds. And no advertiser should be able to advertise to minors; and
6. Not really touched here but I'm going to add it anyway. IMHO we give tech companies a free pass for algorithms as some kind of mystical, neutral black box. But everything an "algorithm" does represents a decision humans made to get a certain behavior from what training data is used, what they're optimizing for (eg interactions or time spent) and what features they create.
Platforms now essentially get liability protection for the content they publish, even though they elevate or suppress content based on what it contains. IMHO this is no different from someone deciding what to publish and being liable for it.
Social media is not making you behave in ways you don't want. On the contrary, it's giving you EXACTLY what you want. People want to doomscroll social media instead of engage reality, because the real world requires action, effort and social risk...doomscrolling is pure passive consumption.
If we're going to give people autonomy and freedom to choose how they spend their time, at some point we have to draw the line and hold people accountable for their own actions. Or we have to acknowledge we'd rather stay in a permanent state of adolescence and give full control of our lives to big brother.
This constant push by the urban monoculture to turn everything into an "addiction" and everyone into a "victim" is a terrible set of ideas to put in people's heads, and just as toxic as anything they claim smartphone apps are trivially doing with UI design.
Apps are not physically addictive like cigarettes or alcohol and never have been.
And if you're going to argue that social media preys on reward systems in the brain, the same is true of everything humans do. Reward systems in the brain govern every single action we take, so anything we do can be turned into victimization by some addictive outside force.
I can say with certainty that opioids are addictive. I can also say with certainty that doomscrolling is not. I have yet to meet someone who would steal copper pipes off of an abandoned building or sell their body on the street for a few scrolls of tiktok.
But why do you get out of bed at all in the morning? What drives you to exist...are those reward systems in the brain addictive? Why are you sitting at your keyboard right now arguing with a random stranger on the internet?
Are you procrastinating something else you should be doing instead...and is that Hackernews' fault or yours?