It was a great workplace, but I don't think it was adding much value to the world.
The constant rationalization around the office was basically something like: "They're paying for fun, it would otherwise be drinks/movies/real gambling/etc, we're just a different medium". This, of course, completely ignored the boatload of psychological exploitation tricks that we used to keep people playing (and paying). The game mechanics were literally a rigged slot machine. The meta-game was designed to constantly tantalize you with something just out of reach. The monetization was high-pressure, time-sensitive, and used false discounts to inflate perceived value ("Get this $20.00 pack for $1.99! Only for the next 60 minutes!").
It got very draining after a while, and I'm much happier now that I've left the industry.
When I left, Candy Crush was at the top of the field, and we were in awe. They were _always_ rewarding players and were pushing microtransactions everywhere. It was flashy like a slot machine, and provided a constant stream of "rewards" for continued engagement.
It was a disgusting industry then; it probably still is now. I went into game dev to tell stories and explore worlds, not to sell dope.
This is an accurate description. So many games where people have to pay for the chance to get something. It's even worse than actual gambling since the prize isn't even money. All people get is some numbers in a database.
Another oppressive feature found in most exploitative games is the timer. Through a timer, the progression of free players is rate-limited. There's almost always some kind of resource that's used up whenever players do anything, and when they run out of it, the timer reveals itself. The game refills the resource on a regular basis.
This creates habits in players. People start out playing normally, and the next thing they know they're setting alarms to wake up at 3 AM to do game tasks because that's when the timer resets. This is done deliberately, with the goal of increasing the number of players who log in every day through a mix of positive and negative reinforcement. The timed refills themselves act as a periodic reward, which is simple enough. More insidious, however, is that the timers usually will not start counting down to the next reward until players open the game. Every second the timer is not running is wasted and delays the next reward.
Paying customers can reset this timer at will. They are therefore able to progress at an uncapped rate. This implies the game's true form is a spending competition: the whale who gives the most money to the company for the longest time will win.
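To make the mechanic concrete, here's a minimal sketch of the energy-timer pattern described above. All names (`EnergyPool`, `spend`, `pay_to_refill`) and numbers are hypothetical, not taken from any real game; the key details are that the refill timer only starts once the player acts, and that paying bypasses it entirely.

```python
import time

class EnergyPool:
    """Toy model of the F2P energy-timer mechanic."""

    def __init__(self, capacity=5, refill_seconds=1800, clock=time.time):
        self.capacity = capacity
        self.refill_seconds = refill_seconds
        self.clock = clock            # injectable for testing
        self.energy = capacity
        self.timer_started_at = None  # timer is idle until the player acts

    def spend(self, amount=1):
        """Spend energy on an action; start the refill timer lazily."""
        self._apply_refills()
        if self.energy < amount:
            return False              # paywall: wait for the timer, or pay
        self.energy -= amount
        if self.timer_started_at is None:
            # The countdown only begins once the player engages, so every
            # second spent away from the game "wastes" refill time.
            self.timer_started_at = self.clock()
        return True

    def _apply_refills(self):
        """Credit one unit of energy per elapsed refill interval."""
        if self.timer_started_at is None:
            return
        elapsed = self.clock() - self.timer_started_at
        refills = int(elapsed // self.refill_seconds)
        if refills:
            self.energy = min(self.capacity, self.energy + refills)
            if self.energy == self.capacity:
                self.timer_started_at = None
            else:
                self.timer_started_at += refills * self.refill_seconds

    def pay_to_refill(self):
        """Paying customers reset the timer at will: uncapped progression."""
        self.energy = self.capacity
        self.timer_started_at = None
```

Note how `pay_to_refill` is the only path that isn't rate-limited, which is exactly the "spending competition" point: the cap exists only for players who don't pay.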
Video game addiction is already a recognized medical diagnosis. I have no doubt this trash is at least partly responsible for that. The reputation of the games industry is only gonna go downhill from there.
I don't play them though, since I know I'd spend too much money and time on objects intangible.
The thing that strikes me the most about F2P is how societally acceptable it is. People for whom the notion of a "casino" doesn't fit their interests can now have their own preferentially targeted equivalent of one in their pockets at all times, branded with things like television show franchises that ground them in a sense of familiarity.
You can disguise negative information by giving it the appearance of a familiar positive one (the fake discount).
It's more a property of information than of language, or rather a property of our information processor, which is so lazy it doesn't check every single piece three times but instead relies on habit and context.
There are a lot of things that fall on a spectrum, and this fact can be used to pretend that motivations are different from what they really are.
For instance, where is the line between investing and gambling? A friend of mine was looking into setting up a spreadbetting firm once, so he went talking to insiders. It turns out 90% of people lose 90% of their money in 90 days, and that's what several firms told him. But of course they don't lead with this fact; they say they're offering training videos that teach you how to invest in the market, just like proper investors. And it's not even a lie, a lot of the content makes perfect sense and would be found in any "proper" investment house. The real deal is of course that they offer a buzz from massive leverage, and everything about the UI is designed to make you blow yourself up.
The same story applies to social media. A place to keep in touch with your old friends is a great thing, but realistically, how often are you going to write to your classmates on a classmates-only forum? And how is classmatesforum going to make any money doing that? So we get the ML-powered feed of news/funnies/cutesies reactions that seems to be the modern social web.
They do in the UK; the first one I googled is https://pepperstone.com/en-gb/trading/spread-betting/ and it has a banner across the top of the page (like a cookie banner, above the header) saying "Spread bets and CFDs are complex instruments and come with a high risk of losing money rapidly due to leverage. 79.8% of retail investor accounts lose money when trading spread bets and CFDs with this provider.". This is typical here, both on such sites and in adverts for them.
It's designed with childlike graphics and its followers are very young. It's marked as Teen on the Google Play store.
God even the first review shows how blatant this is:
"I'm really not big on games but this game is the truth. I cant stop playing it. I go to sleep to it. Wake to it. My day is filled with ways of finding new ways to get coins and spins (mainly spins, lol)"
I'm all for legal, regulated online gambling. This barely even disguised style of game should be called what it is: a kind of pachinko, in that you don't get money back but some other token. But it's even worse, since you probably can't walk around the corner to easily exchange that token for real-world value.
Call it gambling, verified 18+, and regulate it.
https://play.google.com/store/apps/details?id=com.moonactive...
If we're counting on adults "knowing better", which supposedly makes it okay to offer these games, while the industry profits precisely from those adults who don't, then there's a problem.
For example, I always felt that League of Legends had a good F2P-with-purchases model. There are a lot of characters you can play as, but you have to either unlock them by playing or pay a couple dollars to unlock one. But the more expensive characters aren't necessarily more powerful, they're typically just newer, and it doesn't take much play to unlock a character.
Once a character is unlocked, there are skins you can purchase with money, but they're purely cosmetic.
This misses the core issue. A game like Path of Exile is completely playable for free, and they release new content quarterly and yearly (again, for free). A game like Raid Shadow Legends is not: you very rapidly hit a paywall, and it would take years of free play to acquire power equivalent to spending $100.
The exploitative part is pay-to-win, where you're simply playing a different game unless you spend money.
Yep, I've pasted that link in the past. It's now a proper classic.
Meanwhile ads displayed in the app offer me “real money” and “millions each month” playing cards or by “putting my finance at risk” or playing similar addictive games.
(Yes, I paid for the game but there are ads for extra scores)
- Upton Sinclair
The idea that you can engineer yourself an addictive game is a silly concept.
That games en masse are evolving for maximal "addictiveness" is unquestionable, but has entirely different implications.
I believe this statement can be applied to the vast majority of jobs, or capitalistic endeavors in general. Not to defend them in any way, as I abhor these practices, but I think we should acknowledge that reality (and perhaps work our way out of it, somehow).
I attribute a significant part of the political rift and radicalization that has occurred in the western world in the past few years to specifically this feature: the algorithmically determined, engagement-maximizing feed.
You go from the most outrageous at the top to the most discussed at the top. All content gets a chance to be read, even if the first user finds it uninteresting. On Reddit, if what you post gets a downvote very quickly, you're basically done.
This is terrible for hobby boards because it means that content from long-timers gets drowned out by newbies asking the same questions over and over. In my experience hobby subreddits are completely dominated by inexperienced newbies giving one another the same cargo-cult advice. That, and straight up image posts with literally 0 value (for example the front page of /r/simracing at any given moment is usually about 30-50% pictures of just a steering wheel / rig from someone saying "got my new wheel/rig today").
The old days of forums were great for hobbyists because the long-running threads would stretch into the dozens or hundreds of pages of comments. Newbies could be directed to the longer threads or deep-linked directly to older comments. The format surfaced all posts, but did not favour recency - old threads could be revived (necro'd) or kept up top indefinitely. It wasn't perfect but it was a system that favoured long-timers over newbies.
But the rub with traditional message boards is that they don't scale well. Some subreddits have many, many thousands of active regular participants. It's hard for a single subforum to operate at that scale.
Chronological by last update, that is; otherwise you could always read /r/yoursub/new, but that's only chronological by original post date, which doesn't help much. For slow-moving subs it doesn't change anything much; for fast-moving subs it's okay-ish if you care about posts more than comments, but you still lack curation (however imperfect).
Humans are so short-sighted, we don't even need a superintelligence to outsmart us. We happily stand by and collect profits while letting the AI do whatever it wants. Really, we need regulation that makes companies care about side-effects of their AI technology.
For instance, there are cases you can point to where some normal person on Twitter with a couple dozen followers posted the wrong thing, it blew up into an international story, and they eventually got fired from their real-world job for it. Who made the decision that this person's views should be widely scrutinized? In a lot of cases it was the algorithm: for whatever reason that tweet drew engagement, so a massive spotlight was put on this mostly-private individual.
In a real way that's an example of an AI system deciding to take away someone's livelihood.
For my part I think it's asynchronous written debates that are the source of these problems, so even earlier than facebook.
On large forums, people write for themselves, not for others, and barely read the discussion. Oral debates, or written renderings of them, had the advantage of being slow, less varied, and adapted to a single argumentative line.
Nowadays when you discuss an issue online, everyone starts a new train of thought to escape that adaptation, and cares little about an audience they'd need to seduce or convince.
I suppose I just did the same anyway; you can reply but I won't read :)
E.g. Gandhi would have been called "radical" without implying any violent tendency.
And (to a lesser extent) the equivalent process for other domestic terrorist groups.
People will still understand what you mean if you use "radicalization" to refer to positive things, but they'll wonder if you're making a very dry joke.
* Anyone who drives slower than me is an idiot and anyone who drives faster than me is reckless.
* Anyone who's worse than me at Overwatch is a noob, anyone who's better than me has no life.
* Anyone whose views are less extreme than mine is a sheep, anyone whose views are more extreme than mine is a radical.
It takes a lot of effort to realize that I'm on the "radical" end of the spectrum for a lot of my political views (I'm hella progressive), and that that isn't a bad thing; but neither are other people's less aggressive (in my view) ideas for tackling problems.
I like watching YouTube, I learn a lot from it. My wife likes watching tiktok, it is an exchange of culture. How exactly is this a waste of time? What would be a better use of my time? Making money for someone else? Watching TV? If I (and many others) enjoy it, why does it matter?
> distorted world view (mostly the posts with the highest engagement will be those that call for outrage, which leads to radicalization)
This is propaganda. Do you think the perspective imposed on the public by an elite group of individuals, hand-fed information by government press conferences doesn't lead to a distorted world view?
At least for Twitter, it allows you to turn that algorithm crap off and it respects that choice - iirc even across devices.
Facebook, meanwhile, has entirely turned off that ability.
I'm sure they'll completely break it some day. They broke it briefly once already. Since I was working at Facebook at the time I filed a task on it. The response was the rudest I ever got in 3.5 years at the company. If it works, as long as it works, it's basically by accident (or because some front-end developer is resisting management's constant pressure to deprecate it).
I think of it this way. Let's say I walk past someone on the street and say "hi." Now let's say I punch them in the face. Which creates the most engagement?
Obviously this is nothing new, and the cigarette industry has been profiting off addiction for decades, but I never expected the same thing (minus the very immediate threat of lung cancer) to happen to tools that could have been optimized for so much good instead.
And the worst part is that some of it is just so incompetent. I've been trying to break away from Facebook for a while, and recently (the past year or so) I started realizing that my Facebook feed always shows me the exact same posts in the exact same order for days, to the point where I know exactly what post will be first when I open the website. Where is all the Facebook money going? What are these people working on? What are they doing?
Divorced from as much cynicism as possible, what % of level I-III software employees are just there for their resumes?
It had nothing to do with anything but minimizing liability in case the general public ever catches on to what they are doing to kids mental health.
What’s hilarious is they probably have some “data retention policy” and will get away with it.
I guess it’s why SV engineers want to care about social issues so much. When you’re up in your $300k+ a year ivory tower built on something you know is dirty, you are exactly the type of person to be angry and project frustration.
I saw on Reddit a screenshot of AOC dogging on old people in Congress for not understanding digital. She listed a bunch of modern day issues and this wasn’t one of them. If even she doesn’t know, we really are screwed.
Big TV’s goal is to broadcast as engaging as possible material to a general audience. This is broad and therefore not very addictive / effective.
Big Tech’s goal is to broadcast as engaging as possible material to an individual. This is specific and therefore very addictive / effective.
Even though the goal is the same, one is a heavier hitter, and we should have regulations to protect people against the bigger punches.
Habits we develop as children tend to persist through our lives. People who grew up very active tend to stay at least a little active, people who grew up eating healthy diets will tend to continue to eat well. The opposite also tends to be true.
We have record levels of obesity in many developed nations now. There's a lot to blame for that, but surely our sedentary lifestyles are a factor. That includes watching too much TV.
how has it NOT been harmful?
I'm more inclined to believe my parents did the right thing by limiting TV time and commenting that most of it was rubbish. And I presume like many parents in Silicon Valley, I'm going to fight tooth and nail to keep advertising and most of this stuff from my kids.
The funny thing is, if you're not brought up around it, and you learn some math and statistics, and then you suddenly go somewhere where exposure is considered normal, you have an almost physical or psychological revulsion to it.
Ads become frustrating and unacceptable intrusions. Casino floors are sad, boring, pathetic, and dystopian. Mobile games look like drugs foisted upon an underclass. And so on.
At least with big TV at the time, programming was so limited (at least in my country), you had maybe 2 to 3 hours per week of anything that would fit a genre worth watching. But even with that restriction, I know a large number of people who deem that the TV should be on even if there's "nothing to watch". I just can't write that sort of behaviour off as not being dysfunctional, it's just that we've culturally got a general acceptance of very specific types of dysfunction...
And what about rates of obesity? I'm not saying TV is solely to blame, but are we going to pretend that screen time and such programming (on demand or otherwise) is a blameless part of our culture?
For instance, television screen time is correlated with increased obesity.
AOC is closer to a regular ol' populist. Note how she and Cruz were the first to join the ranks of angry people online in lambasting Robinhood before they could explain themselves in the GME debacle.
>Most alarming is the “internet points”. On Reddit, this is called Karma. On Twitter, it’s likes and retweets. Ostensibly, this simple numeric score displays the community’s overall attitude toward a given piece of content. On its face, this appears to be a radically democratic concept; Everyone can vote! The reality is very different. Reddit, for example, has always obfuscated the true Karma score (“to prevent vote brigading”), and the position of a piece of content within the feed can be purposely decided by the Reddit home office, not by the community. This is incredibly, deeply sinister.
Why is it 'deeply sinister'? He just seems to assert it; the Reddit home office isn't putting other content there that isn't sponsored. Astroturfing exists and explains a lot of that, but I don't think it's the Reddit admins. The voting mechanism is pretty good at separating interesting content from uninteresting content.
This whole article reads like a conspiracy theorist's rant, some Luddite screed against tech. There are negative behaviors associated with modern technology, but this article just asserts that without really elaborating on exactly why. It just blankly gestures at "you know -- bad tech, feedback loops, doomscrolling" -- all the keywords!
And stop flattering yourself by calling social media 'tech'. Every industry uses technology; calling it 'tech' doesn't distinguish it in any way. Social media is a medium for exchange, like a book or a campfire. Different media have different effects on how information is conveyed and how the world is understood by the users of said medium.
Social media has been overwhelmingly influential to the way people communicate and understand one another, and denying any questioning of its effects outright is telling. Your tribalism is showing. Marshall McLuhan warned you about this, you should have listened.
As with anything in life, there is nuance. Social media isn't a great satan and it isn't god's gift, it's somewhere in between.
What does this even mean? Why would it be flattering himself?
Your whole comment is a rant. At least the article you're complaining about had some points to make.
Much of what the author mentions has its roots in behavioural addiction research. In particular, there are many parallels with the gambling industry, and how it profits from maximising user engagement and potential for addiction.
Social media is even worse. It's an unlimited hit for hours and hours and hours, with hardly any short-term consequence that would cause you to break out of the loop.
The other point you are missing is that the incentives are all wrong. The incentive for posts on social media is engagement, which means polarizing content, which means radicalization.
At least with a fun video game, you might have fun playing it and learning new skills and stories! But I can admit that I often spend too much time captured in the game loop because it's unlimited — that's problematic.
It’s still fair to point out that the incentives for $burger_chain are all wrong too — hence why fast food is so full of sugar and other ingredients which are bad for you but taste good in the moment to many. It’s to drive addiction, which drives recurring sales. I think that’s messed up too.
I find it interesting that so many people in the world have a knee jerk reaction specifically against doing this. Especially when it comes to tactics around manipulation. The fact is, everyone is vulnerable to it. You don't spend all your time worrying about it mind, but it behooves one to evaluate and maintain a sense of perspective. No, Reddit probably isn't always actively engaging in some direct form of manipulation.
Oh wait, yes they are: they advertise, and they moderate. Both of those are active measures to shape discourse. Whether this is positive or negative tends to depend on who the shaping of discourse benefits, who it maligns, and where you stand with respect to that divide. They aren't a hands-off neutral platform.
The more interesting article, in my mind, would have been one less focused on particular tech, and moreso the role of manipulation in the modern world.
It's the same problem as we had 100+ years ago - and really even the same problems real (not stereotypical) Luddites rose against: business decisions. But talking about this is boring, we've been having this discussion for generations. Focusing on "tech" generates more clicks.
It's actually the mods. A select small number of people are mods on the top 50 or so subreddits by size. They can control what gets posted, what gets upvoted and what gets silently censored. It's also an (unpaid) full time job, it's naive to think they're not monetising their power in any way.
Mod means nothing -- I'm a Reddit mod. An admin is a paid employee of Reddit with a completely different access level. A planted story by a mod is still within the grounds of "astroturfing"; accusing Reddit admins is another order of magnitude of corruption higher. Like I said, I don't think Reddit admins are doing it, but the author does seem to imply it.
This article does not address that at all -- vote fuzzification apparently only serves to prevent "brigading". That's not what it's for; it's for spammers, so they can't judge the effectiveness of their bots.
The author can't see the wood for the trees. I'm not seeing valid criticisms either, which means he didn't even Google to see what others have said of Reddit, e.g. the poor moderation tools for large subs.
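For readers unfamiliar with the mechanism, vote fuzzing is easy to sketch. This is a generic illustration, not Reddit's actual implementation: the true tally stays server-side, and each page load shows a score jittered within a small band, so a bot operator can't tell from the display whether their votes actually registered.

```python
import random

def fuzzed_score(true_score, spread=3, rng=random):
    """Return a display score jittered within +/- spread of the true tally.

    The true score never leaves the server; repeated reads show different
    numbers, so a spammer can't measure whether their bot's votes landed.
    """
    return true_score + rng.randint(-spread, spread)
```

The point is that the noise defeats measurement without meaningfully distorting ranking: a post at 100 still looks like roughly 100, but a bot that cast 2 fake votes can't see its own effect through a ±3 jitter.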
Also holds for the edit-compile-test cycle, I suppose.
Bad enough they are handing out free drugs. On top of that look at the behavior it produces. Algo amplification of all the content that generates polarization, misinfo, mob justice etc is not possible without a corral of people addicted to and conditioned by the Like/Click/Upvote/View/Follower count.
When their behavior is unproductive, they point at their Like counter as validation. That's how Trump ends up thinking he is doing something productive.
I think it's valid to say the point system contributes to the formation of the echo chamber.
Some of my more effective methods have included: strict site-blocking via OpenDNS, apps to implement time based blocking of information-novelty sites (Twitter, HN, reddit, instagram, CNN), almost complete disabling of notifications, with the exception of text messages and async work chat during business hours.
All of these methods, I can undo, but it prevents or at least slows down the automatic, reflexive app/site opening when my reptile brain craves a dopamine hit.
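As one concrete variant of the blocking approach above, here's a sketch of hosts-file blocking, a local alternative to the OpenDNS resolver-level method mentioned. The domain list, function names, and `# focus-block` marker are all illustrative; the marker just tags the generated entries so an "undo" pass can find them later, preserving the key property that the block is removable but adds friction.

```python
# Entries we would append to /etc/hosts to sinkhole distracting sites.
MARKER = "# focus-block"  # tags our lines so cleanup can find them

def hosts_block_lines(domains):
    """Generate /etc/hosts entries that sinkhole each domain and its www. alias."""
    lines = []
    for d in domains:
        lines.append(f"0.0.0.0 {d} {MARKER}")
        lines.append(f"0.0.0.0 www.{d} {MARKER}")
    return lines

def remove_block_lines(hosts_text):
    """Undo the block: drop every line tagged with MARKER."""
    return "\n".join(l for l in hosts_text.splitlines() if MARKER not in l)
```

Usage would be appending `hosts_block_lines(["twitter.com", "reddit.com"])` to `/etc/hosts` (with root access); undoing it requires a deliberate edit, which is exactly the speed bump against reflexive site-opening.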
I did not see any evidence of malicious intent towards our users but a genuine belief we were enabling them to be more productive.
Maybe a good analogy here is building a calculator. People buy your calculator and use it (the assumption is because it saves them time). Initially your calculator only does basic arithmetic and you add square root, exponential function, etc. Now people buy and use your calculator for even more. They spend more “time” with the calculator but still save even more “time” not having to use pen and paper, so it’s a productivity gain. Soon a movie on Netflix comes out called “The Calculators” which blames the fact that people are no longer able to do basic arithmetic in their head on a calculated decision by your company to profit from people while making them dumber. It paints your company as making people dependent on your calculators and robbing them of their cognitive abilities.
This was never your intent when creating the calculator. Now it’s something you will have to address, but the narrative here is one of unintended consequences not a deliberate plan.
The Banality of Evil: this is just a version of "we were just following orders".
I don't think any of the ideas in this article have any evil intent, even if they have evil results.
1. Relative timestamps are just a more useful way of telling people when something happened. It's how most humans communicate time to each other.
2. Infinite scrolling is just what happens when you're not constrained by physical pages. The only reason we used to see paged content was because it was technically easier. Facebook and Instagram even put up a big "You're all caught up" sign when you hit the end of where you were last.
3. Before Facebook had likes people would just add a comment like "+1" or "This." It didn't add anything to the conversation and just made things worse. Internet points are useful for filtering content and showing people more useful information.
Yes, all of these things have negative consequences but that doesn't mean they're inherently bad.
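Point 1's mechanism is simple enough to sketch. This is a generic illustration of how feeds typically coarsen timestamps (the unit table and rounding are assumptions, not any particular site's code): the rendering rounds down and discards the absolute date, which is both why it reads naturally and why critics say it hides how old content is.

```python
def relative_time(seconds_ago):
    """Render an age the way feeds do: coarse, rounded down, largest unit only."""
    units = (("day", 86400), ("hour", 3600), ("minute", 60), ("second", 1))
    for name, size in units:
        if seconds_ago >= size:
            n = seconds_ago // size
            return f"{n} {name}{'s' if n != 1 else ''} ago"
    return "just now"
```

So anything from 60 to 119 seconds old collapses to "1 minute ago", and everything under a day loses its date entirely; whether that trade is "more useful" or manipulative is exactly the disagreement in this thread.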
It's just a matter of language (how they communicate what the company is trying to do), with slogans like "making the world a better place by doing X". But if you think that at the end of the day, all they care about and all the trouble is just for the mighty Dollar, with a little change in perspective you can elucidate:
"time spent in our app": time lost in our app.
"user must be getting value": while getting even more value out of an unaware user.
"feature ... making it “better” (used even more)": more addictive? costing less for the company? costing more for the user?
"to be more productive": productive as in "making money faster but for someone else, while being paid the same".
Data analysts are interested in different things than end consumers, who are interested in different things than admins or sponsors...
If you can't split a generic abstraction of 'users' down further, you're either implementing a very generic tool, or you're simply not standing up a system where the idea of access control is important.
The rub is how nefariously sneaky that access control abstraction can be employed as a fundamental tool.
Say we have something that shows all expanded comments immediately: keeps me on that site
Then they do an "engaging" redesign where only 10% of the comments actually show and requires lots of clicking to expand them, and lots of unrelated animated images show below: makes me want to close that tab and do something else
That's... not an argument?
Society punishes drug users and vendors, which may or may not be the right thing to do; I am not a decision maker. But addiction should be measured in dopamine emissions and withdrawal symptoms. It does not matter whether the product is a recreational drug or a virtual SaaS. Do not let the addiction merchants hide behind marketing terms like "social media". The corporations use "social" to tick a compliance box and mislead the customers, to make it sound harmless or less dangerous, similar to food labeled vegan or bio. Social media targets the most vulnerable and young. Sounds drastic? I do not think so.
What's with organic vegan food? It wrongly looks harmless? It targets most vulnerable and young?
Vegan and organic are criteria/characteristics, not things meant to be addictive. It's not similar. Sure, they can be highlighted for marketing purposes, but what characteristic seen as positive can't?
To illustrate these points, consider HN itself. Yes, HN uses an algorithm to determine story prominence. Yes, it has upvotes. Yes, it uses relative timestamps. OTOH, the upvotes are not shown to others and it doesn't have infinite scroll. I also wouldn't consider it a "feed" like Twitter or Facebook. On the third hand, even the features HN does have arguably contribute to echo chambers and audience pandering. I think this well illustrates that it's not easy to determine good vs. bad social media, and it would be hard to argue here that all social media are bad. ;)
Most social media platforms do the exact opposite, and intentionally at that. The whole point is that there's (practically) no end to how much you can consume, you can ruin your life on one site alone. Notably, HN and Metafilter don't have ads, but the incentives are clearly very different for almost any other social network.
Today I'm blessed to work for an electric utility. Whenever I'm having a bad day I remind myself that I'm helping to provide the foundation of modern civilization. It's much better for your soul than knowing that in the end you're just working for a drug dealer and contributing to the destruction of people's lives. I'm glad I'm close to retirement and don't have to do that kind of work just so I can put food on the table.
This is a low quality assertion.
> “How could they have just scrolled and scrolled all day? Didn’t they know what it was doing to them?” Social media is the new cigarettes. Everyone does it, it’s addictive, it’s harmful, and you should quit.
Scrolling is no different than sitting in front of your TV or consuming other types of information. I think social media is bad, but that's just my opinion; this article sounds like an unsubstantiated opinion, too.
It is good, in moderation, to be detail-oriented and skeptical. But don't let these qualities run amok and prevent you from considering the overall claims.
The upvoting of submissions is needed to 'crowdsource' an interesting front page.
You might do without the karma reputation system, but that would have a negative impact on the quality of discussion. After you've passed the first threshold (downvote privilege at karma 500) there seems to be no further value to the system (no further privileges). But (I forget the game theory name) the 'punishment' of a downvote plus a karma decrease is probably a great help against shitposting.
Is there a right way to set things up, and maintain high community standards and comment quality?
Edit: Was it Loss aversion?
(Yes, ads are designed to get you to interact with them, but on their own, I wouldn’t go as far as saying, “psychological manipulation”. I’ll admit the industry has tended in that direction, however. Your view may differ)
The rise of small-time influencers, the "nanoinfluencers", is a particularly worrying trend. There are degrees to how antisocial advertising is, but nanoinfluencing is near the top, along with MLMs: people literally get paid to cheat their family and friends out of their hard-earned cash. It does damage to the fundamental relationships in society, and to the minds of everyone involved.
If you think I'm overdramatizing, consider whether it's a healthy state of affairs when, asking a family member for advice or just having a casual conversation, you have to wonder whether they're being honest or just shilling you some crap. That's a fundamental assault on basic interpersonal trust.
Where is examination of the person doing the scrolling? Isn't the flaw there?
(The following applies to adults):
Ostensibly, the person should be allowed to spend their time as they like if they're engaging in legal activity. If people thought that engaging in social media was bad for them, they would stop. Are we really saying that people cannot stop using social media? Are we saying that people don't think it's bad for them?
If people can't stop doing an activity that is definitely hurting them, don't they need professional help? Do all these social media users need professional help?
What amount of time on social media is sufficient for it to have a negative effect? Is it anything besides zero?
To be meta: if hackernews is social media and social media is bad, are we all hurting ourselves?
This is the absolute cornerstone of your argument, and I gotta say, I think it is 100% incorrect.
This is, in fact, how we often define addiction: a behavior that a person repeatedly and compulsively engages in despite neutral or negative consequences.
People will absolutely do things, and continue to do things, that they know for certain are bad for them.
I mean, seriously -- ask anyone in your life whether they think social media is actually good for them. I genuinely think every single person I know would respond to the effect of "I wish I used it less, but it's tough to quit."
That said, my follow up questions stand: If people can't stop doing an activity that is definitely hurting them, don't they need professional help? Do all these social media users need professional help?
The general narrative implies (but would never say) that humans lack agency and the ability to act in their own self-interest. And if only we could remove the social media site/system, would we see an enlightened, more perfect human.
Don't get me wrong, I'm strongly in favor of designing systems to bring out the best in humanity. But I think a lot of what we see on social media is a reflection of who we are as humans, and what we desire and want to see. And I don't see enough discussion at the individual level, and why we act the way we do.
If the discussion was at the individual level, we'd start to see our common humanity, and the conversation about social media would be less about moderating out the 'bad content' (which always just so happens to be the content that our outgroup likes), and more about the philosophical considerations with designing complex systems that help humanity thrive (and what it means to thrive).
Agreed.
I'm curious why there aren't deep discussions about WHY people want to __insert convenient social media action__. Isn't the flaw that we have a fear of missing out, want to appear to have high status, want to belong to a group whose identity is tied into being against another group, etc.?
I just see most of these discussions giving the above a pass as if that's just who we are and we bear no responsibility in engaging in such activity.
The reality, however is that social media companies with their curated feeds, dopamine buttons, echo chambers, and sheer size can push products to the market that cause detriment to the overall cultural environment. The average Joe does not understand that his Twitter feed is _designed_ to keep him hooked. I bet you, if he knew, he'd have quit long ago.
Your argument essentially posits that opioid addicts be held responsible for their addictions rather than our legal system clamping down on big pharma.
Given the nature of the internet, any entity can bring a product to the market that could be negative for users. What's the solution beyond the current legal framework?
>Your argument essentially posits that opioid addicts be held responsible for their addictions rather than our legal system clamping down on big pharma.
There's a wide gulf between consuming and engaging with intangible media content and abusing biologically powerful physical substances. Someone becoming addicted to a controlled substance like opioids is different from social media in a profound way. I think society recognizes this given that opioids are largely controlled substances.
> There is something about social media that human beings are not psychologically prepared for.
I would state it: Engineers and business people have distilled what creates craving and then satisfies it by creating more craving.
No different than distilling the sugar, heroin, or cocaine from otherwise healthy, innocuous plants. We regulate some of that distillation. Even what we don't regulate, as a society, we generally look down on people that addict others. Why we reward programmers as we did the Sacklers doesn't make sense to me. I would think we would consider employees and investors of antisocial media companies villainous pariahs.
On Facebook.
And that kind of turned me off, and I intentionally stayed away from Facebook for a bit, and now whenever I log in, there’s a ton of fake notifications clamouring for my attention, and my “feed” is 30-50% ads, and the content that isn’t ads is 30-50% memes. There’s very little actual content, and you have to work way too hard to find it.
In an attempt to further force me to “engage” Facebook has turned their platform into something I can’t stand to use.
I wonder if this is what social media looks like to anyone who “gets out” for a bit. Maybe we’re all like frogs in water that’s slowly getting hotter and hotter, not realizing that the water is boiling; Facebook keeps pushing more and more forced “engagement”, and no one who is in it realizes it’s turning into all ads and garbage.
I've never seen it work in practice, so why are we not able to do that?
What is the difference between self-reinforcing and habit-forming? I see three differences.
1. Who is in control? A runner is in relatively more control, because the act of running does not require a gamified online experience.
2. What are the ethical ramifications? These are subjective. To be clear, subjective certainly does not mean unimportant. It means people may disagree. In my view, people largely disagree on the "margins" of ethics -- meaning there is a wide swath of common ground.
3. There are some physiological differences; I'm not an expert. It seems that both the runner's high [1] and social media involve dopamine, but they appear to use different mechanisms. ("Runner's high" seems to involve endorphins as a precursor.)
[1] https://archives.drugabuse.gov/blog/post/chasing-runners-hig...
Chicken breast, deadlifts, and philosophy take a lot of effort, and the positive effects are delayed. The brain has to be trained to tolerate them. It will always prefer the latter.
Exercise addiction: https://en.wikipedia.org/wiki/Exercise_addiction
It's very rational. The user needs a good user interface, the best that can be built. What is the best? The one the user likes to use, day in day out. Is that a problem? Obviously not for the platform.
When the FB CEO announced that he wants to make users happy, he acknowledged his real intent: mass, aggregate control of behavior. I can't deny that it carries a purely instrumental view of humans - but that again is only rational in the context of business.
No.
The road is most used when there's a traffic jam.
The danger with mobile gaming is that you never really get to leave the arcade and you don’t have to intrinsically budget the way you used to at arcades. At the arcade you usually have a fixed number of tokens to spend and you always have the allure of other games pulling you away.
The former creates value while the latter extracts and concentrates it while overall creating net negative value. Addicting people to Skinner boxes is destroying hours of what otherwise might be productive, rejuvenating, or enriching time. It's macroeconomically indistinguishable from killing people.
One of the central problems of modern Western capitalism is that we fail to distinguish between the two. A businessperson is successful and a genius if they make money; nobody bothers to distinguish between those who make money by creating value and those who make money by merely extracting it, leaving a path of destruction in their wake.
Maybe we can figure out a way to re-channel the impulses of "cancel culture" in this direction, cancelling those that promote addictive net value destroying products and services. Since the algorithmic timeline and other personalized recommendation engines are by far the largest pushers of fascist and neo-racist ideology, the original goals of "cancel culture" might still be indirectly achieved.
In a capitalist world, if you want to change what companies are doing, make it more expensive to do so (alternatively, less lucrative). So I guess tax and fine companies, like GDPR does.
I wish there was a simple way to add more metrics like pollution, social impact, health impact, etc. But those are rather hard to measure, usually not immediately available, and can be hidden in some way or another. It might be doable, though, as we were more or less able to do so with capital (track monetary exchanges, etc); which is a completely artificial metric.
Disclaimer: I know nothing about actual economy theories.
Everything we are told about building modern services is about optimisation. You have metrics, you experiment, you tweak things, you see what helps and what doesn't. This is as true for business models as it is for interface design.
Some of this is good - it is positive to improve your UI to reduce friction and make it more usable by your customers. Some of it is fine - choosing a landing page design that gets more people to sign up. Some of it is bad - things you do that mean people don't close the app as readily.
All these changes are the result of the same process, that is built into how modern businesses operate. How do you draw the line between optimising for good UI and optimising for addiction? If you're writing legislation, do you outlaw specific practices? If you're trying to operate companies ethically, do you just avoid certain metrics?
Every time we get a new customer we have to explain to them that the real value is in providing the info people need as quickly and simply as possible. Unfortunately we have to fight against some major forces, such as Google ranking sites with longer content just because of that, rather than on metrics related to value.
> it harms our brains in a way that we don’t yet fully understand
He even mentions "we don't yet fully understand", so why keep building on this narrative just to try to prove a point? That should have been the last sentence of this post, but I guess that would have made for a pretty mediocre post - one that doesn't tap into the reader's FUD.
What's interesting to me is, earlier in the pandemic I would hear more lofty plans for organizing online games and having more video calls between friends and family. It's amazing how quickly the motivation for this dissipated. Maybe because it's plainly cumbersome and awkward and just doesn't feel the same, and everyone I know still works.
For my part I think I've retreated from video/audio correspondence considerably. I still write to strangers online, which is social media, but feel like this experience has made me retreat into my shell. No substitution for seeing people in person.
I used to play in a few city-wide social sports leagues, before covid, which were all canceled most of last year and aren't back yet due to the winter weather. I filled my time by hiking and trying out some new hobbies.
So while not quite the same as being forced inside and/or forced onto social media, I can see how the pandemic upended various gatherings/hobbies, and social media is an easy alternative for many people. Social media is free, easy to access, convenient, can be a giant time-sink, etc., along with all sorts of negative consequences.
The pandemic means classes are virtual/online. That means kids need computers with internet connections and modern web browsers. That means games, stupid videos, games, chat, games, and more games.
I'm really at a loss here.
I've gone to demanding a cold-turkey moratorium on the junk, with computer use only when supervised. This solution sucks up too much parental time and doesn't allow enough computer time for the homework.
If the classes didn't require web video, I could get an old VT510 terminal (no graphics) or an e-ink display, and then the gaming would be limited to a few things like online chess. The addiction problem wouldn't be so severe.
A man was recently arrested in my city for a scheme where he was trading sexual images from underage boys for gift cards for popular video games.
TIL... I've always thought this was SO annoying on Twitter, because I want to see the exact time (I'd even be ok if I could hover and see the actual timestamp), and thought they were just 'dumbing it down'.
But it makes a lot more sense with that clue.
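For what it's worth, the mechanism is simple to sketch. The following is a hedged illustration of how such "fuzzy" relative timestamps are typically implemented - the thresholds and labels here are my own assumptions for illustration, not Twitter's actual code:

```python
from datetime import datetime, timezone

def fuzzy_timestamp(posted: datetime, now: datetime) -> str:
    """Coarsen an exact timestamp into a vague relative label.

    The cutoffs below are illustrative guesses, not any platform's
    real values. Note how every cue to the absolute time of day is
    deliberately discarded until a post is more than a day old.
    """
    seconds = (now - posted).total_seconds()
    if seconds < 60:
        return "now"
    if seconds < 3600:
        return f"{int(seconds // 60)}m"
    if seconds < 86400:
        return f"{int(seconds // 3600)}h"
    # only posts older than a day get any absolute date at all
    return posted.strftime("%b %d")

posted = datetime(2021, 3, 1, 9, 15, tzinfo=timezone.utc)
now = datetime(2021, 3, 1, 14, 40, tzinfo=timezone.utc)
print(fuzzy_timestamp(posted, now))  # "5h" - the 9:15 AM detail is gone
```

The effect is that while scrolling you never see a clock: "5h" doesn't remind you it's mid-afternoon the way "9:15 AM" would.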
Ugh
Sounds like the HN karma system ;-)
It would help if your article used more than just the middle 25% of my screen, thus requiring scrolling to read.
or
'Thought Reform and the Psychology of Totalism: A Study of “Brainwashing” in China' by Robert Jay Lifton
(both readily available as PDF online)
This can easily apply to many other services. The purpose of an email client is to surface the important information (filtering, and maybe inbox zero?) and let the user act on it (reply and write with the right context). The purpose of a dating website is to find a match and create a connection. The purpose of a social network, on the other hand, is much more debatable.
it's no wonder we have seen a decline in cognitive ability, as seen by various world events (election of the 45th president of the USA, "Brexit", climate denial, rise of neo-nazis, vaccination denial, Rohingya genocide in Myanmar, ...). we are just endlessly scrolling and spamming each other with our running commentary bullshit that masquerades as a modicum of insight.
what happened to making the world a better place? the tech industry is no better than the tobacco or fast food industry.