So it's confirmed? Gaming > Socializing? /s
They state
> Socializing via social media, text, and video chat
Does socializing via social media mean talking in a chat? Or does scrolling your Instagram feed and liking your friends' posts count as socializing?
Or scrolling Twitter and liking/replying to your friend's tweets? That could be considered socializing too?
So that's why they described it as "social media, text, and video chat". Their data comes from the ABCD study. I didn't look much further. https://abcdstudy.org/
Intelligence does not directly correlate with either success in life or happiness (I believe intelligence is negatively correlated with happiness).
Also, be aware that Scientific Reports is, if not quite a predatory journal, a very low bar. They publish tens of thousands of articles every year, while charging vast fees.
In general, these guys have correlations, not causation. Children's IQ - and gaming habits etc. - develop as they age, so controlling for baseline IQ is not enough to make a correlation with later IQ and gaming causal. It seems much more likely that smarter kids game more, e.g. because they live in richer households. (No, controlling for SES isn't enough to rule this out, for much the same reasons of measurement error as for the genetics.)
If you wanna believe that your hours on COD have made you a genius, go ahead, I won't stop you. Just don't imagine that this research proves it.
The Porsche comment is snide, but actually exposes a similar error in your critique. Sure, a tax return-derived measure of income would be superior to measuring if someone owned a luxury car. But, if you found yourself in a situation where all you had to go on for measuring economic wellbeing was (luxury) car ownership, your analysis is likely to improve by including it rather than excluding it, unless the measure itself had serious other issues with its accuracy.
Likewise, for SES, it is an imperfect measure, but it is the best we have for measuring social position in a concise way.
Having worked in research and universities for a while, the type of critique presented in this post is one you often see from new graduate students. They are able to tear down problems with research very well, but tend to overlook whether the study itself was still informative, or whether the opposite finding is likely to be true.
For example, suppose we wanted to know if video games or watching videos on the internet are making you dumber. A study like this may not convince you it's making you smarter, but it presents decent evidence they're not making you dumber. You can point out how the measures aren't perfect, but that is far from saying the opposite is true or the observed trends are completely spurious.
That's correct, it is a flaw of the entire analysis, not the PGS in isolation. Yes, the polygenic score, when used to define 'genetic intelligence', will be biased towards zero and will miss a lot of the genetic intelligence. What then happens is the video-game playing becomes a measure of intelligence (genetic or otherwise), capturing what the polygenic score (and other covariates) miss. The logic then works in reverse: the reduced correlation is precisely why the residual confounding works. The worse your 'measurements' are at measuring the underlying trait, the more wiggle room there is for your 'outcomes' to actually be correcting the 'measurements' and not vice versa. See "Statistically Controlling for Confounding Constructs Is Harder than You Think" https://journals.plos.org/plosone/article?id=10.1371/journal... , Westfall & Yarkoni 2016. (More examples: https://www.gwern.net/notes/Regression )
What OP shows is not that video game playing causes IQ, but that IQ causes video game playing. The choice to play video games (or not play them, because you are bad at learning) is an additional 1-item-long IQ test and helps correct for the error.
(And we do in fact know that video gaming & IQ correlate, so nothing new there. We also know from all the brain training randomized experiments that the causal arrow doesn't run in the direction they want it to run. OP is very wrong, including in claiming that the Flynn effect justifies believing in their effect - it is actually a criticism of their claimed causal relationship, since IQs have been steady or falling even as video gaming increased massively.)
I disagree that this study presents decent evidence of anything. I don't claim that the conclusions are false. But they haven't backed them up. There are lots of ways that the observed trends can be spurious. I mentioned some. The study is very weakly informative.
I think you’re misunderstanding how they’re being used (or I am). I think they’re trying to control for genetics via polygenic scores, not trying to establish a relationship between those scores and intelligence. The analogy is that you’re measuring the effect of the price of kids’ socks on their intelligence, and saying the observed effect isn’t due to parental income in some other way, because you’ve controlled for parental income (by controlling for whether there’s a Porsche in the driveway).
Video games, like all things, should not all be treated equal. I could certainly see problem solving skills developing from world building or highly complex games (Civ, PoE, etc.). In fact, most (but not all) highly successful games have depth, which requires time investment and problem solving. The difference in games can be as varied as comparing a marketing pamphlet to Asimov's novels.
I don't dispute your take on the quality of the research though. I would even go further and speculate it would be really really hard to come up with meaningful tests due to game variance. So most anything on the subject is likely fluff.
So I decided to start again. I noted down the cost of all the necessary items: residential, commercial, and industrial zones, the cost of building roads, a power plant, and utility lines, and of course water. I put the game on pause, took out a notebook, and started calculating a somewhat optimal city with the initial budget I was provided.
I built the city very quickly, and this time round I didn't run out of money, and took the game all the way to arcologies. I did skimp on fire stations and a disaster destroyed most of the city, but it still survived overall.
I don't think I could have succeeded without that level of planning.
I’m a big grand strategy fan, mostly Paradox games rn, and I almost feel like these are worse for me because the depth keeps me engaged longer (and honestly wastes a lot of time), compared even to something like a shitty copy/paste mobile game employing dark patterns, because those get so boring so quick. Whereas if I start an EU4 or CK3 campaign and actually play it, it’s almost certain my brain will be shot to hell for a few days.
As far as Scientific Reports goes, it's a fine journal; it's run by Nature. It's not on the same planet as the predatory journals that spam inboxes. I worry that people will read your comment, assume you speak from authority, and discount any work they might see coming from that journal, when we both know that good science can be found in Scientific Reports, and that impact factor is more strongly correlated with "sexy" or expensive science than good science anyhow.
The reason the method is not robust is that the typical polygenic score explains only 10% or less of the variance of its target phenotype. That leaves 90% of the variance unaccounted for by the control, which means your error term will be correlated with your focal independent variable, violating the requirements for regression to give an unbiased estimate. I don't think these claims are controversial. We know polygenic scores are noisy. We know what happens when your control variables are noisy.
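The effect of a noisy control can be demonstrated with a quick simulation (a sketch with made-up variable names and effect sizes, not the study's actual data): when the measured control captures only 10% of the true confounder's variance, most of the spurious exposure-outcome association survives "controlling for" it.

```python
import random

random.seed(0)
n = 100_000

def slope(xs, ys):
    # OLS slope of ys on xs
    mx = sum(xs) / len(xs); my = sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

def residuals(ys, xs):
    # Residuals of ys after regressing out xs
    b = slope(xs, ys)
    my = sum(ys) / len(ys); mx = sum(xs) / len(xs)
    return [y - my - b * (x - mx) for x, y in zip(xs, ys)]

def adjusted_effect(control_noise_sd):
    # C: true confounder; X: exposure; Y: outcome.
    # X has NO causal effect on Y -- they only share C.
    C = [random.gauss(0, 1) for _ in range(n)]
    X = [c + random.gauss(0, 1) for c in C]
    Y = [c + random.gauss(0, 1) for c in C]
    # M is the measured control: C plus measurement error.
    M = [c + random.gauss(0, control_noise_sd) for c in C]
    # Frisch-Waugh: effect of X on Y after partialling out M.
    return slope(residuals(X, M), residuals(Y, M))

print(adjusted_effect(0.0))  # perfect control: ~0, confound removed
print(adjusted_effect(3.0))  # control with 10% reliability: ~0.47 remains
```

With noise sd 3, the control's reliability is 1/(1+9) = 10%, mirroring the polygenic-score figure above, and the adjusted "effect" stays far from zero even though the true effect is exactly zero.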
The fact that lots of people do it doesn't, sadly, make it work. Lots of social psychologists run trials with an N of 35 (though they're addressing this critique, to their credit). Lots of historians fail to specify their hypotheses and to search for disconfirming data. Economists spent the 80s and 90s running cross-country regressions, before realizing that they had, in aggregate, more independent variables than cases. And so on.
The commenter may have a bias, but most prior research shows the opposite of this study.
I also saw the poor experimental design and had similar thoughts. Basically, this research looks poorly done, like an effort to prop up the gaming industry (and/or validate the authors' presuppositions).
There are toys, and games, that make you dumber. Especially games designed to emphasize the addiction loop and monetize inconvenience. Case in point: Angry Birds.
It used to be a very fun, silly, physics-based game. Now, it is infested with pay-to-unlock consumables that in some cases are required to get all three stars on a level (because you can't knock everything down without an explosion and the default roster of birds for the level doesn't give you that).
The simpler a physical toy is (a ball, simple blocks) the more likely it is to contribute to a child's development. The insidious "I-need-another-outfit-Barbie" on the other hand only trains frivolous spending. Even Lego sets vary in the kind of play they foster.
Playing with toys and games can have cognitive benefits, but, digital or otherwise, there's a quality spectrum parents have to be aware of.
Sounds like we're training kids--through video games--to be fantastic CAD engineers and plumbers.
"Scientific Reports is an open access journal. To publish in Scientific Reports, all authors are required to pay an article-processing charge (APC) of $1,495."
The New Journal of Physics, a respected open access physics journal, charges $2225.
https://publishingsupport.iopscience.iop.org/journals/new-jo...
*Not really a problem if you correctly believe science should be more than publishing sexy results.
You know there are more video game styles than FPS, right? Strategy games teach patience and discipline, EVE Online teaches economics, and even the much-dismissed 'mindless' FPS teaches teamwork.
I think it's likely that at least some games do increase intelligence relative to other activities (e.g., mindlessly watching TV), but less so than others like reading.
I think this is probably more key than you think.
Mindlessly watching junk-food TV is not going to help you a lot. It's not very "nutrient rich" (to continue the expression) in terms of knowledge gained, but you will probably gain some.
Watching documentaries and, crucially, actively watching them is probably very good in terms of how much you learn.
You can say that for almost every human study that's not drug based, or very short-term.
* There are tons of randomized controlled trials of policy measures (malaria bednets, minimum income). Many measure long-run outcomes.
* Natural experiments can measure long-run effects. In economic history, sometimes that means centuries.
* Many other designs are plausibly causal. The right instrumental variable, or a regression discontinuity design. In some cases, even a simple diff-in-diff with panel data. This design, nope.
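As a toy illustration of the last design in that list, diff-in-diff with panel data amounts to subtracting the control group's trend from the treated group's trend (the numbers below are entirely made up, not from any study):

```python
# Hypothetical mean IQ scores, pre and post, for a group whose gaming
# time jumped (treated) vs. one whose didn't (control). Made-up data.
treated_pre, treated_post = 100.0, 103.0
control_pre, control_post = 100.0, 102.0

# Difference-in-differences: the control group's change nets out
# influences shared by both groups (maturation, schooling, practice
# effects), leaving the part attributable to the extra gaming.
did = (treated_post - treated_pre) - (control_post - control_pre)
print(did)  # 1.0 IQ-point estimated effect
```

The identifying assumption is parallel trends: absent the gaming change, both groups would have drifted by the same amount, which is exactly what a cross-sectional correlation cannot guarantee.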
Obviously though the benefits aren't there if people just mindlessly play the same game all day.
Do we think this study would be improved if it did not control for genetics at all?
In any case, it's a weird thing to control for in a panel study. Why not just use per-person fixed effects? That would eliminate all influences that are constant within an individual over time.
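A minimal sketch of why fixed effects handle this (variable names and effect sizes are invented for illustration): any time-invariant per-person trait, genetic or otherwise, drops out when you demean each person's observations, so it cannot confound the within-person estimate.

```python
import random
from collections import defaultdict

random.seed(1)
n_kids, n_waves = 2_000, 3

# Simulated panel: each child has a fixed trait (e.g., genetic aptitude)
# that raises both gaming time and IQ; gaming has NO effect on IQ.
rows = []
for kid in range(n_kids):
    trait = random.gauss(0, 1)
    for _ in range(n_waves):
        gaming = trait + random.gauss(0, 1)
        iq = 100 + 5 * trait + random.gauss(0, 1)
        rows.append((kid, gaming, iq))

def ols_slope(xs, ys):
    mx = sum(xs) / len(xs); my = sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / sum((x - mx) ** 2 for x in xs)

# Pooled OLS: badly confounded by the fixed trait.
pooled = ols_slope([g for _, g, _ in rows], [q for _, _, q in rows])

# Fixed effects via within-person demeaning: the trait cancels out.
by_kid = defaultdict(list)
for kid, g, q in rows:
    by_kid[kid].append((g, q))
xs, ys = [], []
for obs in by_kid.values():
    mg = sum(g for g, _ in obs) / len(obs)
    mq = sum(q for _, q in obs) / len(obs)
    for g, q in obs:
        xs.append(g - mg); ys.append(q - mq)
fe = ols_slope(xs, ys)

print(round(pooled, 2))  # ~2.5: spurious "gaming raises IQ"
print(round(fe, 2))      # ~0: time-invariant confound eliminated
```

Of course, fixed effects only remove time-invariant confounders; anything that varies over time within a child would still need separate handling.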
Or, a randomized controlled trial. If it has an effect worth caring about, then it's worth running an experiment on. A real positive or negative effect would be a big deal for policy.
Maybe it's the games you play (CoD) that make you imagine that game-playing develops no reasoning skills.
Play something else (StarCraft, for example).
The arguments included in it are no different than many of these new studies coming out. If you grew up as a latchkey kid and had your fair share of TV/video games/etc., it might give you perspective into things you never thought about.
One of the things he mentions is that TV is passive, and puts you in an alpha state where your brain stops trying to respond because there’s no point in responding.
My partner and I do watch a few hours of TV every night now. But we don’t do this “alpha” thing, at least not exclusively. We pause frequently to comment on or joke about what we’re seeing. To the point that I think sometimes a 30 minute show will take us an hour to get through.
The way it works is one of us holds the remote and pauses whenever they want to, and if the other wants to pause they just say “pause!”
I wonder how that changes Mander’s analysis. For us it makes TV a pretty fun interactive experience.
And this way of watching was largely impossible when Mander did his work, because you simply couldn’t pause TV. Although you could pause a VCR or DVD.
I’m curious how widespread pausing is. I certainly feel that even solo TV watching is a more interactive experience than TV watching was when I was a kid. Alone, I’ll pause to Wikipedia things or to go find related media.
Ridding myself of TV over the last 2 years has been extremely hard. I'm down to watching a maximum of an hour a night. Most times I don't even try nor care to. I have seen significant improvements to my own life, but most importantly I see the effects it has had on my young kids and how much more creative they are because they aren't sitting in front of a TV watching shows like I was when I was their age. It was the only parenting approach I knew, taught by my own parents, and I had to challenge that for myself.
Moderation is the key to everything, but this book woke me up to things I wasn't aware of and figured I'd share.
"A latchkey kid, or latchkey child, is a child who returns to an empty home after school or a child who is often left at home with no supervision because their parents are away at work."
My point is there is (at least) another important category of program that the researchers missed: creator software. I've also made simple songs with them in GarageBand, but the UI is still rather difficult for them to use on their own. I was inspired to take this approach because my first introduction to computers was Logo on an Apple IIe, and Seymour Papert's beautiful work left a lasting impression.
I believe this research is severely lacking.
Where I grew up there wasn't anyone around I could ask those kinds of questions of. I know that's not the Netflix/iPad world that the study is talking about or that necessarily exists today. But I suspect that bifurcation still exists.
Some people try to bring these modification-friendly things back, with the BBC Micro:bit, RasPi, and so on. But in the end you need to be motivated - and playing better was a huge motivation for me!
The docs were so sparse and the communities so small that it really was a much different experience than today. I have fond memories of it, but that might just be me looking back with rose colored glasses.
I would have killed for Stack Overflow though! But there is a sense of self-directed mastery that you don't get when you are so much more familiar with how vast the bodies of knowledge are.
The closest I get to that these days is trying to hack code on a plane :)
"We believe that studies with genetic data could clarify causal claims and correct for the typically unaccounted role of genetic predispositions. Here, we estimated the impact of different types of screen time (watching, socializing, or gaming) on children’s intelligence while controlling for the confounding effects of genetic differences in cognition and socioeconomic status."
"The contradictions among studies on screen time and cognition are likely due to limitations of cross-sectional designs, relatively small sample sizes, and, most critically, failures to control for genetic predispositions and socio-economic context. Although studies account for some confounding effects, very few have accounted for socioeconomic status and none have accounted for genetic effects. This matters because intelligence, educational attainment, and other cognitive abilities are all highly heritable."
To me, it’s pretty obvious. The kids problem solve when gaming, and are obviously engaged. When watching TV, they look like zombies.
I think my wife’s biggest hang up with games is that she was always told that they rot your brain. Also, our kids talk about games, but never about TV which she interprets as games being more addictive. I interpret it as games being more interesting and engaging.
So I wouldn't call games "not addictive". If anything, watching something on TV is often less addictive because you are told a story with its introduction, climax, and ending (up until Netflix ruined everything with its endless shovelware).
IMHO, the key is moderation. A day with diverse activities is a day well spent, kid or adult.
Today, if I play Sid Meier's Civilization, a day or two would be completely gone and I will be disconnected from reality and need to re-adapt to the real world. I suspect excessive gaming's primary risk is developing an unhealthy understanding of the world in the areas where the game simulates the real thing.
Addiction is a peculiar thing. Anything that makes you feel good is inherently addictive. People get addicted to biting their fingernails.
Is it bad to be addicted to reading? Or working out? If gaming is making your synapses fire faster, if for nothing more than to increase your IQ score (which is based on speed), is it a bad addiction?
Addiction is a compulsion to do something you would not choose to do. It really depends on that something whether it's good or bad for you. Addiction is something everyone will have to deal with at some point in life. Learning it from gaming is probably not a bad thing.
The moment games include advertising they optimize for all the wrong things. I won’t let my kids get free games on iPad, etc for this reason alone.
The downside is the games never go on sale so you aren't getting any deals but the upside is that you can give your kid just about any Nintendo game and not worry about shenanigans around loot boxes, ads and other crap they stick inside games now.
Until now I hadn't realized how sensible Apple's rules around children's games had been, notably: no internet connection required, and no ads.
That was ten years ago, and I still don't know if I struck the right balance. Parenting is hard. If you have young kids today it's a good idea to understand what games are popular and what their business models are.
It depends strongly on kind of game, and especially on the business model involved. Gacha games, free-to-play, and similar models are very much optimised around addictiveness. Personally I've found that story-based finite games, i.e. the ones you play through and then you've experienced it, are much better in those respects. Unfortunately those seem to become much more rare these days.
If I had a kid they'd have a laptop running Linux with all the open source games (and some of the older Nintendo ones on emulators) and probably a collection of older films on DVD.
Are you not just seeing what you want to see? Maybe from your wife's perspective, the kids are carefully observing and learning a wide range of human emotions, social dynamics, new idioms, and music from Disney+, whereas in their games they're learning a few tricks that they repeat ad nauseam to get some trivial rewards from their digital Skinner boxes?
People used to say that about TV too.
I always feel a bit personally attacked when people claim videogames are just bad for you full stop. I'm really passionate about games, I grew up playing NES, and just never stopped. I almost always have a new game waiting to play for when I'm finished with what I'm currently playing.
Sample size of one, but I don't think my brain is rotten. I have a pretty successful career in software, I have a close partner, I have a social life. I have other hobbies too, but it's my main one.
Don't get me wrong, I know my gaming takes time away from other stuff I could/should be doing, like building side projects or getting enough exercise. But TV does the same, so in a choice between the two I pick gaming any day
One rule I made for myself is simply: don't encourage anything that's just for dopamine rewards. I try to mix things with effort, contemplation, or interaction, with rules like this: no more than 1h playing Minecraft alone. If you want to keep playing, ask Dad, Mom, or your cousins to play together; the same for watching videos.
in what may be somewhat more of a hot take, i would argue that there is an incredible amount of educational value to be found even within the most meritless garbage games as they are still fundamentally systems to be dissected and solved and learning the maximally efficient way to do something worthless is a skillset that transfers quite handily to the valuable things in life. also unlike tv, games have a lot of potential to be immensely collaborative [or competitive] and social. some of my fondest childhood memories were going to my friend's house and co-oping diablo with one of us controlling the mouse and the other controlling the keyboard. there are many far less ad-hoc ways for kids to share games and even singleplayer is a highly rewarding shared experience.
For the record, my kids do play games; I never did a complete ban. But the gaming does not seem to be superior: it does not zombify them less, nor lead to more inventive play after the session is finished.
By comparison, I've never had a local game of Mario Party remotely as heated as the average experience in a competitive, online game. Not even after Chance Time did the unthinkable.
The downsides of Disney are a little more subtle, too.
Both have obvious benefits: Grimm Brothers tales are still culturally relevant for a reason, even if you can probably find useless brain fodder on Disney+; and video games can totally teach some stuff to your kids.
Just let them choose what they want I guess?
Reasoning through combat strategy, even in the age-inappropriate context of gun battles, exercises higher-level thinking that clearly remains off when watching Ryan’s World.
To be fair, the article does say watching videos had a positive effect as well.
That's all they really are when you remove the theming and look at the actual mechanics of the game. The mechanics aren't screwed up or morally bankrupt or anything like that; they are the same games we've always played, of racking up points or moving an object across a playing field. It's probably the reason why the FPS formula is so successful: it plays almost like the games we already like, kinda like basketball (with deathmatch) or football/soccer (with capture the flag or assault-style game modes, moving some object offensively or defending as a team).
They spend a decent amount of the week doing after school clubs/sports - so sometimes they fancy an hour of playing Mariokart or wii sports or something - I prefer that because then we can join in as well.
(Otherwise they love watching Ninjago - which tbf I don't mind that much!)
(A) Polygenic scores for behavioral traits may be estimated in GWAS where the null assumptions (e.g., that mating is not conditioned on the trait being estimated) may not be valid[1]. That is on top of the issues that we usually face for other phenotypes (e.g., more routine population stratification due to geographic history).
(B) The authors did not describe the (genetic) ancestral background of the children being studied. Current techniques are biased across ancestries, for most traits, when using polygenic scores[2]. Certainly adjusting for 20 PCs in the final model, as the authors seemingly did, would not be expected to make the scores comparable unless all of the children are from a close ancestral group.
With these sources of stratification, the polygenic score represents more (and less) than the trait that you're hoping it estimates; it also encodes population stratification.
As such, I hardly think this study can be interpreted.
That makes them more noisy, not less. PGS predictive power for EDU/IQ is always maximized at use of all SNPs. Restricting to the arbitrary subset of genome-wide statistically-significant SNPs in Lee would drive it from the 7% or so they have to <1%, IIRC.
Also, neither of your two problems are the problem here, as the biases there would not be expected to drive a correlation between video game playing & IQ (what sort of within-ethnic interaction would you need for that and why is it plausible?), and would mostly serve to simply not control for intelligence (and quantitatively, because the PGS here is a small fraction of the variance, even gross biases which somehow did manage to drive correlations between those two variables, would still be unable to meaningfully affect the estimates).
Using only genome-wide significant SNPs reduces the amount of variance explained by the polygenic score, which is what you describe and I agree with. My comment about the concern about "noise" is with respect to a sibling comment ("Polygenic scores are powerful, but they contain very large amounts of noise compared to the true genetic effects.") That is the "noise" that I was addressing. And just as you say, the noise is, essentially, a worthwhile cost to pay since it should not be directional, and so we use various approaches to include thousands or millions of SNPs in these scores.
> Also, neither of your two problems are the problem here
I don't agree. These problems occur very clearly in any mixed-ancestry analyses, and they have to be carefully accounted for or else they induce between-ancestry bias. It's not a function of the phenotype itself (i.e., I'm not making a comment about intelligence); this is true for all polygenic scores.
That being said I do wonder about how I would have turned out had my childhood not been spent hiding under the covers at night with a small light devouring novel after novel and instead been inundated with social media and other distractions.
Today it's hiding and using the "smart"phone
I would love to know what "watching videos" means here. There's a big difference between educational YouTube (Kurzgesagt, Physics Girl, Veritasium, etc.) and TV.
Having kids do projects is super helpful:
* It builds their confidence in their ability.
* It shows the world (e.g. college admission boards) how they are valuable.
* It can become a way for them to be their own boss.
* It allows them to figure out what they want to be.
* It keeps them busy and out of trouble.
Of course, if you push too hard the other way, your kid may just hate the skill and drop it. I knew a lot of people who were forced into piano lessons, and got very good at it too. Many quit over the years due to that resentment once their parents gave them the choice, and today never play any music in adulthood. Such a shame.
TL;DR - the complexity of media has been increasing over the past decades, which means that children spending time with digital media are benefiting from it relative to past generations.
As an example, playing a modern AAA video game is much more mentally stimulating than playing Pong. But also, watching an hour of a modern TV show, or even a modern reality show, is more challenging mentally than watching classic TV from the 60s and 70s—there are many, many more plots and relationship dynamics to track and speculate about.
It's interesting with the internet too: even though there should be a lot more stories in the zeitgeist at once given its wide reach, sometimes due to its virality one storyline is able to dominate everything and suck the air out of the room. Did we really need a dozen articles about Will Smith slapping Chris Rock in everything from Reuters to The Atlantic for seemingly two whole days? If you only got news from Twitter, that might consume your entire feed. If you got your news from the newspaper, that would at most consume one or two articles out of many pages of printed material. It would also be limited to probably one section of the many different sections of the paper, all covering different news topics.
Long hours of gaming and imbibing this translated over into the real world where it became harder for me to put in effort because I had to see rewards accumulate as a score somewhere and that wasn't happening.
Walking into the arcade filled me with joy. Trying to decide which game to play. Imagining if I'm going to finally beat Wonderboy on one coin. Playing Kabal with my brother.
I was a good kid, but I would STEAL money from Mom's purse occasionally to satisfy my arcade craving.
Wow!
Ad absurdum: if watching the Kardashians for two hours a day doubled your intelligence (however you decide to measure that) would you do it? Would you have your kids do it?
But I'm not really concerned about general intelligence. As a parent I feel I have some input into their intellectual growth (to the extent environment allows). I'm far more concerned about the impact of social media on their emotional wellbeing. How they interact with others, and how they view themselves.
In Australia the Government is trying to regulate social media companies. For example, last year it introduced an "anti-trolling" bill, which would require companies to reveal the identities of anonymous trolls. And this is only the beginning of what in my opinion is heavy-handed Government overreach that will not improve the online experience of young people.
Despite being a fairly libertarian person, I'm open to a discussion on banning social media for people under a certain age (16? 18?). And then getting rid of all/most regulations on content.
Not saying this is something we should do right now, or that it's definitely a good idea. I'm just saying I think it's a discussion worth having.