It is, probably more than is apparent. See:
https://en.wikipedia.org/wiki/Philosophy_of_science
https://en.wikipedia.org/wiki/Sociology_of_scientific_knowle...
There is an interesting recursive problem here, though: what tools do you use to scientifically analyze the scientific process? Whatever tools you use will themselves be hobbled by the same systemic flaws you are trying to understand.
Also, as any sociologist will be happy to tell you, incentive structures and other human group behavior gets in the way. It's probably hard to get funding for a study that shows that all the other departments at your university aren't quite the flawless seekers of truth they appear to be.
Iterating on existing systems to see if you can get results to converge and also testing new systems to see if they also result in known good values.
All of these ideas that you tacitly take for granted are themselves mutable parts of the scientific process:
* That iterative improvement and hill-climbing is an effective process for improving results.
* That replication of experiments and convergence is a truth-generating enterprise.
* That truth can be expressed numerically.
* That there are some values that are "known good". By what process? According to whom?
To be clear, I don't disagree with those. However, these rules aren't baked into the firmament of the universe. They are processes we humans have chosen to apply in our social process of reaching consensus on truth. In other words, this list here isn't physics, it's technology.
It's entirely possible to imagine a culture whose truth finding bodies don't take for granted one or more of these rules at all. That culture might be more or less effective (again, according to what metrics?), but it would still be well-defined.
Engineering. If you can build something that works based on the rules theorized by scientists, they are on to something. e.g. building a skyscraper proves we know the properties of steel to a pretty good margin of error.
Sociology in particular should always be approached highly critically, because applying those theories and reasoning in its terms often means mass control over people's free will.
It all just feels so 'loose' compared to the physical sciences.
Beyond that, those involved in sociology seem to believe that a study is the same thing as an experiment, and like to believe that a study constitutes proof.
Ultimately we can't really run A/B experiments on society at large because we are living in it; however, humanity has at its disposal all of history as a case study. My point is: if you really want to understand how societies interact, form, react, and live, ask a historian, not a sociologist.
I also would apply most of these comments to economics, except that there seems to be more diversity of viewpoints there, and math, more than studies, is used to provide a veneer of respectability.
EDIT:
If someone feels that history is inferior to sociology for understanding how societies act and behave, please tell me why. I want to understand where I am wrong. But a lot of the arguments we are having in society nowadays are the same as ones had a thousand years ago: the discussions over social media are basically the exact same ones people had over the printing press in Europe, and when I recently read "The Republic" I found the exact same arguments I see repeated here.
So if you feel contrary please tell me why, I admit I could be wrong, but want to understand where my reasoning is flawed.
I'm an economist. If I threw away the half of the data that didn't support my findings, and got caught, I'd lose my job and never publish again. I'm pretty sure the same is true in other social sciences, such as psychology. This is true irrespective of the well-documented problems that the article describes, which certainly also apply in economics and elsewhere, to varying degrees.
By contrast, when historians are caught cutting sentences in half to prove their point, they don't lose their jobs. They don't even lose their Pulitzers: https://davidhughjones.blogspot.com/2020/07/can-we-trust-his...
C. Wright Mills' The Sociological Imagination is great (it should have been taught to you in college). Thorstein Veblen's Theory of the Leisure Class is good as well. These really seemed to me like attempts to approach truth, and perhaps that's because of the time they were written in vs. the time we live in now.
I don't think building "universal models" or observing recurring patterns through analysis of 'experiments over wider demographics and in different points in time' requires, as a corollary, the ambition to predict a single individual's behavior or actions.
The problem lies - like you said - with the policymaker. And, more generally, with people who extrapolate the results of a paper inadequately.
Like, for example, I just made two universal statements, didn’t I?
"As such, he was a key proponent of methodological anti-positivism, arguing for the study of social action through interpretive (rather than empiricist) methods, based on understanding the purpose and meanings that individuals attach to their own actions."
https://en.wikipedia.org/wiki/Max_Weber
edit: This is then further developed by the so called Frankfurt School as Critical Theory.
The example discussed in OP seems to fall in the category of low rigor/high popularity. I am not 100% on my history of psych research, but it seems to me that the stereotype threat was all the rage in the late 90s following the publication of Steele and Aronson (1995). OP study seems to follow a similar experimental setup as S&A with a new group of people (Asian-American women).
As far as meta-science is concerned, I think that it remains mostly a part of philosophy (as in epistemology) and the focus of a few (senior?) scholars in each field. There is really no space to publish meta-scientific papers that "shake up" the field and call out established researchers, as editors that publish those pieces could come under similar criticism for their work. I think that it is not an accident that the discussion of the replication crisis in psychology started from blog posts and other non-academic avenues and then found its way to more "established" publications in the field (again, if I remember the context of those conversations).
I really wish that the review process were open. It would be interesting to see the reviewers' comments on this specific paper and how the editor decided to pick up and engage with them. All those conversations are usually locked up in some editorial management system and are seldom made public. I don't know if we can really have open science without having open peer review.
Similarly, the replication crisis was being discussed in a lot of areas, especially in psychology, throughout this time, but was largely ignored until after the Bem ESP study. Registered replications aren't new, nor is concern about meta-science; it's just had renewed focus in recent years for various reasons.
It's not all that surprising to me that meta-science is associated with psychology. After all, not only is psychology often sort of fuzzy (by necessity of its subject matter), but it's the science of human behavior, which I think can lay claim to scientist behavior as well.
I think it's arguably the greatest contribution of psychology to the sciences in general.
In a sense we do have this: engineering and finance. Engineering turns good hard science into new tools, machines and weapons, and Finance turns good (predictive) soft science into new ways to make money.
I think this is a common critique, but I also think it is missing the point. What if the question of interest isn't so easily verifiable like in Engineering? Do we just throw up our hands and give up on those questions? [The alternative to good social science is not no social science, it’s bad social science](https://statmodeling.stat.columbia.edu/2021/03/12/the-social...).
Finance is also a bit tautological in this regard. It seems that prediction models are often impossible to disprove (e.g., "our arbitrage method doesn't work anymore; the market updated"). Yes, it's good for putting skin in the game, but it doesn't seem to do much to advance our long-term understanding of humans.
Some things may well be complex enough that it's simply impossible, with the resources available to the average university, to conduct a thorough enough study on a representative enough sample, accounting for enough confounding factors, to make a statistically sound prediction that generalises. If that were the case for a significant proportion of the subjects of study in a particular field, then it might well be better to "give up" and admit we don't and cannot know. Otherwise we're essentially creating a factory for bad science: the available resources relative to the scope of the problem aren't sufficient to create good science, and there's no negative feedback to stop the bad science.
In the long run it usually comes out, but the run can be longer than you think, and you may not be where you think in it with regard to any particular current theory. I wonder what things we all "know" are proven by science that will be dismissed by later generations. (I personally guess a lot of genetics-related stuff will be.)
(Note that something doesn't need to be verifiable, reliable, or true to be "actionable". You can act on anything...)
Also, the difference between a bad method and a good method is that the good method makes more accurate, better-calibrated predictions (that is, using it makes us better gamblers).
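The "better gamblers" framing can be made concrete with a calibration metric such as the Brier score; all the numbers below are invented purely for illustration:

```python
# Brier score: mean squared error between probabilistic forecasts and binary
# outcomes. Lower is better: a sharp, well-calibrated method beats one that
# always hedges with 50/50.

def brier_score(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes  = [1, 0, 1, 1, 0]              # what actually happened
confident = [0.9, 0.1, 0.8, 0.7, 0.2]    # sharp, well-calibrated forecasts
hedgy     = [0.5, 0.5, 0.5, 0.5, 0.5]    # always says "maybe"

print(brier_score(confident, outcomes))  # 0.038
print(brier_score(hedgy, outcomes))      # 0.25
```

A method that systematically earns a lower Brier score is, in exactly this sense, the better gambling strategy.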
Another related discussion was about the grievance studies scandal, which also touches on peer review and academic rigor in journals: https://news.ycombinator.com/item?id=18127811
Engineering organizations inside major corporations usually actively engage in process improvement because they are resourced to do so.
I only had a good introduction to it when I took it as an optional course in a humanities college.
If that's going to happen, it has to come from outside the government-science complex.
Good luck with that...
Most science has very little market value.
Also, I don't want to point fingers, but some scientists come from places where cheating is the norm.
For example, here’s a scholarly article on the exact question you mention - how and where peer review came to be seen as a guarantor of scientific quality: https://www.journals.uchicago.edu/doi/abs/10.1086/700070 (tldr: it wasn’t the 17th century Royal Society; it’s much more recent.)
https://scholar.harvard.edu/files/shapin/files/shapin-pump_c...
RCTs are good when they can be done and I'm all for doing more of them and too often there's no good excuse for not doing them. But at some level things just get impractical.
For example, our PI didn't want to include a subject in our study because his scores weren't elevated enough, and our PI was worried that his score wouldn't drop enough which would adversely impact our results.
Another example: our active treatment therapists knew exactly what they were treating for, and our study was measuring improvements in the condition that was being treated. However, the control therapists had no idea what they were treating for, and we purposefully kept this information from them!
The screening should be disclosed in the methods (and, ideally, pre-specified), but you do need to account for floor/ceiling effects somehow.
https://advances.sciencemag.org/content/7/21/eabd1705
Science journalists are probably vulnerable to the same influences that lead scientists to do this, except they have even less review on their claims, and so those claims become pop culture sound bites.
As for why non-replicable results are cited more, I'd speculate that non-replicable results are often more unintuitive and surprising, and per the above link, reviewers apply lower standards on these papers in the hopes of finding something truly interesting and/or exciting. Not just in the results mind you, sometimes papers also apply a novel methodology that might be worth wider discussion. I'm not sure that's worth the reduction in credibility though.
This seems the likely explanation; I saw a paper recently that showed that lay people can predict what will replicate with above-chance accuracy[1]. I imagine scientists are even better than lay people at this.
So non-replicable results are almost by definition surprising (i.e. they are hypotheses that don't match our current model of how the world works), and surprising results are definitely better news than unsurprising results.
[1](https://journals.sagepub.com/doi/full/10.1177/25152459209196...)
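The "above-chance accuracy" claim is the kind of thing a one-sided binomial test checks. A minimal sketch, with made-up numbers rather than figures from the cited paper:

```python
from math import comb

def binom_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): one-sided p-value against chance."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical: a lay rater guesses 41 of 60 replication outcomes correctly.
# Under pure 50/50 guessing, 41+ correct is very unlikely (p well under 0.01).
p_value = binom_tail(60, 41)
print(p_value)
```

If the p-value is small, "lucky guessing" is a poor explanation and above-chance skill at spotting shaky results becomes the plausible one.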
I mean, a kind of well known thing is that in general, something false can be more interesting than something true, since it has more degrees of freedom. You can make up anything you want - the truth has to conform to what's actually real.
* saturated fats are bad (lies told to us by a crappy Ancel Keys study, promoted for decades by processed food companies (like Kellogg's) run by Seventh Day Adventists who were convinced "meat led man to dangerous impulses and temptation")
* polyunsaturated fats are good. The American Heart Association had an article up for years that went as far as to claim Omega-6s are heart healthy. They only recently took it down this year. But we know they're inflammatory, and we know we're consuming 25-100x more Omega-6s than we ever would before the industrial invention of seed oils being shoved into every product imaginable (bread, cereal, granola, anything that comes in a box, feed given to animals meant for meat production). Here's a WebMD article on it: https://www.webmd.com/heart/news/20090126/expert-panel-omega...
* sunlight creates high cancer risk (ignoring that cancer is unlikely, treatment if caught early has a high survival rate, and the risk of not having vitamin d throughout your life risks far more likely autoimmune issues, depression, anxiety and even certain cancers and inflammatory disease).
* sugar is good for you. Sure, they'll specify processed sugars are bad for you or "added sugar", but common wisdom will accept a NET (subtract fiber) 200-300g carb diet as acceptable. Grain is still often listed as the most important and largest part of the food pyramid.
The reality is that all mainstream health advice, including what you'll get from your doctor (who got a whole single nutrition class in school), ensures that processed-food companies don't lose business on the front end and the medical/pharma industries don't lose money on the back end.
Even in the push for a more vegetarian/blue-zone-diet world, they're doing so by promoting meat alternatives like "Beyond Meat", which is chock full of so much seed oil and other processed substances that it's mainstreaming vegetarianism-as-fast-food. A McDonald's burger is still a McDonald's burger, and you shouldn't be eating it.
If a factory isn't making it at scale, shoving it in a box, branding it, and ensuring you don't have to spend any time making/cooking/preparing whole, fresh foods (those pesky things that tend to have short shelf lives and are costly to Ag businesses), then no one with any kind of "authority" (your PCP, the government, most food businesses, your medical insurance company) is going to promote it highly.
They'll do ANYTHING except remove seed oils. They'll make your potato chips out of broccoli and carrots and still drench them in sunflower or canola oil. They'll reduce the salt. They'll make shit out of beets. And still manage to make it horrible for you.
The MSM regurgitates "health" info regarding diets in a way that acts as advertising for these orgs.
Another reply in this thread suggests that "a large number of participants may have been aware that the actor wasn't really suffering when they administered the punishment." I've studied the topic and found no evidence of this point. In addition, the claim is hard to square with many subjects' reactions -- for example, their nervous laughter and their frequent protests, even as they continued to deliver what they thought were harmful electric shocks.
See https://news.ycombinator.com/item?id=25928569 for more on efforts to replicate Milgram's results.
Tl;dr: there is no equivalence between the Stanford Prison Experiment and Milgram's work on obedience. Milgram's work was superior.
There is definitely something to learn from such an experiment, albeit not what was intended.
This may be true, but I don't think the evidence you give supports your assertion.
>It was a researcher who wanted to prove a point and created the conditions to collect the data to prove that point
"prove a point" is the hypothesis
"created the conditions" is the experiment
"collect the data to prove that point" is the observation
This has been criticized since 1999 and far longer back. How long ago is the Rosenhan Experiment again?
Dare I say that a majority have always held a dismissive, critical view of such matters. But of course, those who hold dismissive views of it are not the ones who work in such fields, and certainly not at the top ready to implement changes, so it can continue to persist despite being highly criticized.
At least when I studied physics at a university around what must have been 2005, most of the students and professors there were highly critical of softer science and it often came up that some of these papers popped up and were viciously criticized for clear and obvious systematic errors in the methodology.
If you have the weight of peer review, or at least a well-documented study, then the media runs wild with its claims, it gets shoved into textbooks, governments shape policy on those claims, and corporations and medical practices sell gimmicks, books, supplements, therapy, and plans of action to heal you... it all becomes lies, half-truths, and bad data repeating itself ad nauseam until "truth" is established in the public consciousness. Quacks on the web, the American Heart Association, and your local doctor's office will all peddle garbage based on the bad data. And once it's well established as true, backing away from it is hard, because it's become so institutionally woven in.
This is why people think saturated fat and sunlight are bad or at least a net-negative.
Even modern medicine, psychology, and nutrition science all have horrible replication crises, and we're no better at rejecting the nonsense now than we were then.
It also opened up the idea the Japanese were like us too.
It allowed us to say what we believed and build on that.
It's not science. But science isn't the only way forward.
When people asked around, informally what was said was that the grad students in the other areas (especially one area, in the experimental molecular biosciences) would leave after having to "redo" their dissertation over and over again. Essentially what would happen is they would propose a dissertation study, it would be approved by the area committee, the student would do the study, and it would produce null results. So they would be told to redo it a different way, or to pick a different topic, it would get approved, and the same process would happen again. After this happened a few times, with the student being told they had to produce significant results, the student would grow despondent and leave the program.
What's sad about this is that it's formally reinforcing p-hacking basically, as part of the degree program. But it's even more absurd than what's often alluded to in meta-science writings, because in these cases you would have a formal graduate committee, composed of faculty, deciding that the dissertation thesis is a good one -- that the hypothesis and design are solid, and formally approving the dissertation proposal -- and then because the results are null, it's unacceptable. If this was being done so casually in that forum, I can't imagine what goes on behind the scenes.
Getting a null result doesn't invalidate that in any way.
If you have a committee of experts who carefully evaluate a proposal and decide it's good, the results are as they are.
Broadening the discussion a bit, it seems one feature of science, as opposed to, say philosophy, is that the conclusions regarding a hypothesis are not knowable a priori. I think in contemporary academics there's some implicit idea that the quality of a researcher lies in their ability to identify hypotheses that are "correct", as opposed to simply following through with good but ultimately "incorrect" hypotheses. There's a bit of a roll of the dice involved with science; if there isn't, it's not science.
A medical student worked hard to analyze, say, 40 x-rays out of hundreds available. He found no significant evidence for some hypothesis. When he told his supervisor, the reply was: "Well then you should just analyze some more x-rays. I'm sure you'll have a statistically significant result at some point."
Simon Sinek
Cal Newport
Charles Duhigg
Mark Manson
Ryan Holiday
Malcolm Gladwell
The list is endless, to be honest. They are each different in their own way, but they have the following common points:
- They are in this for the money, so expect them to be always pushing their books, products, next book, next tour, next program. Hustling, hustling, hustling.
- Their grandiose pronouncements with little or no serious backing.
- Their unwarranted sense of speaking from a position of authority
- The over-simplification and stupid generalization of what is messy, complex, and very much unique.
Even biology outside of a cellular level is already above 50% BS for me. It is just insanely difficult to have the necessary controls.
There is a pattern to these books
* Pick a topic
* Decide on a narrative
* Pick a collection of studies to demonstrate that narrative
* Take study conclusions (which are often dubious extrapolations of data) and summarize vaguely adding additional unsupported projections, a handful per chapter
* Publish and promote
It isn't just "self-help" but nearly everything in the nonfiction section that you hear people talking about.
[1] https://www.npr.org/sections/thetwo-way/2016/09/13/493739074...
In that sense I find him way better than other authors listed: he actually makes good use of the tools he recommends as a professional (as opposed to making a living spouting bullshit about other people's work).
In legal research services, like Lexis Nexis or Westlaw, many cases are "flagged" when a later case or statute reverses, narrows, or otherwise affects the earlier case. This system warns lawyers that they may not be able to cite the flagged case in their current work. Of course legal research services also come with their own issues and costs; some of which are likely associated with this system.
The website Retraction Watch[1] aggregates these retractions and provides a database that you can query. Reference management software like Zotero[2] can use this to monitor your collection of papers and notify you when one is retracted.
[1] https://retractionwatch.com/
[2] https://www.zotero.org/blog/retracted-item-notifications/
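A rough sketch of the kind of check such tooling automates: compare a personal library against a list of retracted DOIs. Every title and DOI below is invented for illustration; in practice the retraction list would come from the Retraction Watch database rather than being hardcoded.

```python
# Hypothetical retraction list (in reality: pulled from a retraction database).
retracted_dois = {"10.1234/fake.2016.001", "10.1234/fake.2018.042"}

# Hypothetical personal reference library.
library = [
    {"title": "Priming effects revisited", "doi": "10.1234/fake.2016.001"},
    {"title": "A solid replication",       "doi": "10.1234/real.2019.007"},
]

# Flag any library entry whose DOI appears in the retraction list.
flagged = [p for p in library if p["doi"] in retracted_dois]
for paper in flagged:
    print(f"WARNING: retracted: {paper['title']} ({paper['doi']})")
```

The interesting part is not the lookup itself but keeping the retraction list fresh, which is exactly the service the database provides.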
However, as a counter example, in my very narrow specialty there is a well known lab that has produced highly cited bogus studies. I've personally published opposing results and said, "these studies are wrong for these reasons" using almost exactly those words. Should they be retracted? Absolutely. Will they ever be? No. Because, of course, the publisher and the authors just point the finger back at me and say "no, you're wrong!" and that's more than enough to keep the vague debate going.
I’m in the half that doesn’t know, apparently.
I considered that with hard sciences, the cost, time, equipment, conditions, etc. were limiting factors. I think this paper on subatomic particles changing into antimatter is interesting: it was observed in a unique facility, once, under conditions that can never repeat, etc. You just kinda have to take their word for it.
As to soft sciences… I really don’t know there. Give me a hint?
Aren't the two most important aspects of the research the data set and the study methodology? Why on earth would you skimp so heavily on one of them?
I don't work in the sciences, but this kind of nonsense doesn't exist in the "actual" sciences. Physicists spend loads of money producing just the right experiment conditions and documenting the manner the experiment was created in. The dataset is incredibly important and very rigorously examined.
But in psych, the dataset is basically an afterthought. "Oh by the way, we chose a small handful of kids who happened to be free at that time, with no reason to believe there's any geo, social, educational, political, or ethnic background diversity, it probably cost us like $200 plus some pizza. Now let's print the results in $5 million worth of textbooks for a few decades!"
I don't buy the funding argument. A professor probably costs the university 100-150k/yr and will be working on a small handful (2-6, ish?) of projects. Buying an hour of a subject's time for a study must cost, what, $30/hr? Shouldn't they be allocating a minimum of $50k in funding for the actual research, and dropping at least $10k for a good dataset?
I don't buy the argument that most experiments don't yield good results so the university is wary of funding them. At a minimum they should follow up a cheap test with promising results with a real experiment that has actual funding before everyone gets all excited about it.
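For what it's worth, the back-of-envelope numbers above work out as follows (all figures are the assumptions stated in the comment, not real costs):

```python
# Assumptions from the comment above: $30/hr per subject, a $10k dataset
# budget, and a 20-subject "free at the time" convenience sample.

rate_per_hour  = 30
dataset_budget = 10_000

# How many subject-hours a serious data budget buys.
print(dataset_budget // rate_per_hour)         # 333 subject-hours

# What the cheap convenience sample costs: 20 subjects for 1 hour, plus pizza.
small_sample = 20 * 1 * rate_per_hour + 200
print(small_sample)                            # $800
```

Even on these rough numbers, a properly funded sample is an order of magnitude larger than the pizza-budget one, which is the point being made.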
If someone is from Brazil and is of second or third generation Japanese descent, how much of the questions are 'salient' to Brazilian identity vs Japanese? There's an unspoken implication that part of the 'good at math' stereotype relies to some degree on speaking a non-English language at home, which I don't think is a safe assumption at all.
> In the Asian identity-salient condition, participants (n = 16) were asked (a) whether their parents or grandparents spoke any languages other than English, (b) what languages they knew, (c) what languages they spoke at home, (d) what opportunities they had to speak other languages on campus, (e) what percentage of these opportunities were found in their residence halls, and (f) how many generations of their family had lived in America.
I know it's often repeated, but is there evidence that "psychological science" is worse (by some measure and to a significant degree) than other social sciences? As long as we're talking about science, let's look at evidence! ;)
Anyone studying human behavior, on any scale, has the added challenge that it's so complex that you can't really isolate simple mechanics, as you can in physics or chemistry. People are far more complex than a molecule, or even a billion molecules.
BTW, I'm aware of the well-known cite, the NY Times article on the 'reproducibility crisis' from a few years ago. Few seem to have read the article: the results were reproduced, but many at lower strength than in the original research. That's important, but it's not as if the researchers just completely missed and the results were arbitrary.
I took a philosophy class filled with people in science degree programs and a few of my classmates were often vocally upset about how nothing was certain in philosophy and everything had multiple sides to it. That was very eye opening, many of these people were soon to be graduated and through their entire educational career they had only been exposed to Truth to the extent that being shown debate and disagreement on a topic made them upset.
You're not supposed to "trust the science" you're supposed to trust the process to approach the truth. If you can't read multiple arguments on the same topic and analyze them, you really don't get it at all (and waaay too many people with degrees can't do this).
Agree entirely. I'll add that I go slightly further: if you can't take your pet topic and make even a slightly good-faith argument against yourself, you have no business having strong feelings on it.
I have a wedge issue topic I am an expert on. I could argue against myself, both effectively, and in an actual compromise that no one wants.
Yet… people who argue against “my side” are constantly using complete bullshit science from the 1980/1990s when governments literally weaponized depts and ivy leagues to push for “evidence” to support their desired policy changes.
These people now tell me to “trust the science”, “I’m sure this researcher at Harvard is wrong and you’re right”, and ”this article from CNN / FOX / VOX / WAPO proves you are wrong and I refuse to consider they have an agenda”.
The worst part is that there is no shame about willful ignorance: they "trust" the people they claim can't be wrong; simple, done. Why should they bother to acknowledge another side? If they do, it means everything else needs reevaluation too.
The profs like to say that 18-22, mostly white/asian, mostly rich kids are the most studied group of people in the USA.
Of course, we now know that a ton of psychology research doesn't actually apply to people outside that small, narrow window of people.
One of the fascinating concepts in abnormal psychology is the notion of a “culture-bound syndrome”—a mental illness that only occurs in a specific cultural milieu. I wonder how many mental illnesses are actually culture-bound syndromes of WEIRD culture?
Indeed. It is a scale problem. We have too many producers of research, too few destroyers of research, like Gelman. Show me the incentive and I can tell you the outcome. Encourage the whole world to become “experts” and then be amazed as the reverence and trust in expertise is devalued. That’s us.
The only distinction I care about is exactness and non-exactness.
Is the research based upon formulating a theory capable of forecasting not-yet-observed events within an exact margin of error, and are the conditions then re-created to see whether the forecast falls within the margin of error of the instruments that measure it?
Some say biology is “hard”, and some say it is “soft”; some say many parts of cosmology are “hard” but they certainly aren't “exact”.
In exact science there are typically multiple ways to derive the same answer within one theory, and they all yield exactly the same result.
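A toy illustration of that property: within an exact theory, independent derivations of the same quantity must agree, e.g. the sum 1 + 2 + ... + n by direct iteration versus Gauss's closed form n(n+1)/2:

```python
# Two independent derivations of the same quantity within one exact theory.

def sum_iterative(n):
    """Add the integers 1..n one by one."""
    return sum(range(1, n + 1))

def sum_closed_form(n):
    """Gauss's formula: n(n+1)/2."""
    return n * (n + 1) // 2

# An exact theory permits no disagreement between derivations.
for n in (10, 100, 1000):
    assert sum_iterative(n) == sum_closed_form(n)

print(sum_closed_form(100))  # 5050
```

An inexact theory, by contrast, offers no such cross-check: two routes to "the" answer can land on different numbers, and there is no internal way to tell which one is wrong.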
“colloquial”, “roughly”, ”perceived”. — these are not the terms that definitions are made of.
The point is that there is no actual hard distinction between “hard science” and “soft science” but there is a hard distinction between an exact theory, and an inexact theory.
I don’t think ”they use propaganda and everyone who disagrees is just wrong” is a good argument when the topic is “we know in other areas there are issues with the scientific community so why not this one”.
We can still have the right answer and have gotten there the wrong way.
Academics are thoroughly out of the loop on this one, as are we all incidentally.
You trade 'shot-in-the-dark lab experiments' for 'a clear and obvious agenda'. The trick is to just make sure the agenda isn't morally reprehensible.
Any getting-started pointers for this (or your other suggestions)?
In terms of the why, and the who's paying, Dark Money (Mayer) and Democracy in Chains (MacLean) touch on it: psychographics is huge for anyone doing manipulation, not just commercial ads, also political ones. Probably even moreso.
Maybe if the author mentioned that this article was highly regarded back then, there would be a point, but for all we know, the article was thought poorly of at the time and contemporary scientists just thought it slipped through the cracks.
It also doesn't talk about new controls in place today that would prevent a similarly poor article from being published, or even a system of "retracting" poor articles. I don't really trust that everything being published today is without flaw. After all, the other examples of bad science given are fairly recent.
I think psychology and sociology are legitimate and worthy studies, but they run into issues with the scientific method itself due to the ambiguous and "high-level" nature of their concepts and theories. It's hard to create meaningful, repeatable experiments. So perhaps it should be emphasized how important it is to put effort into constructing experiments... and in particular keeping the subjects unaware of what is being tested. There are probably many great examples of experiments done well.
There are not one but two psychological sciences at the moment. One is public, publicly funded, and in very bad shape, with most of its results not being reproducible (the replication crisis) and a partial demolition underway via the neurosciences, which tear theories down but do not offer large-scale replacement theories that could encompass the whole species without contradicting other neuroscience results.
And then there is the second faction. (Disclaimer: I cannot prove what is deduced after this disclaimer.)
There are several corporations, and at least one government, which have had the chance to collect data on the population at large scale.
This data is a psychological gold mine if explored properly. One could query such a behavioral database and, more importantly, enact virtual experiments.
Out of all male humans who curse in front of the TV in the evening, filter out those who get into a car accident, then plot the increase in cursing in front of the TV.
Taken to the extreme, this new data-mining behavioral science could create an agent-based model of the species in all its variations, and collect data only to check the expected outcome of a societal change against the real outcome, with spot samples.
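The cohort query described above can be sketched in a few lines. This is purely illustrative: the records, field names, and thresholds are all invented, and a real behavioral database would of course be queried through some storage engine rather than a Python list.

```python
# Hypothetical sketch of the "virtual experiment" cohort query described
# above. All records and field names are invented for illustration.
events = [
    {"person": 1, "male": True,  "evening_tv_curses": 3, "car_accident": False},
    {"person": 2, "male": True,  "evening_tv_curses": 7, "car_accident": True},
    {"person": 3, "male": False, "evening_tv_curses": 5, "car_accident": True},
    {"person": 4, "male": True,  "evening_tv_curses": 9, "car_accident": True},
]

# "Out of all male humans who curse in front of the TV in the evening,
# filter out those who get into a car accident..."
cohort = [
    e for e in events
    if e["male"] and e["evening_tv_curses"] > 0 and e["car_accident"]
]

# "...plot the increase in cursing": here we just report the raw values.
print([e["evening_tv_curses"] for e in cohort])
```

The point of the toy example is only that, given a rich enough behavioral dataset, such cohort definitions become one-liners, which is exactly what makes the data a "gold mine" (and a hazard).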
I have my own little pet theories about how humanity would look to this privatized psychology, but I digress.
I think academic psychology should have full access to all corporate databases that contain behavioral data.
No thanks. We already have quack science from the 1950s still driving policy discussions on wedge issues today. I don’t want more convincing shit, I want less shit.
"If you think phrenology is bad, imagine how bad it was in 1810".
Rinse & repeat for the appropriate time frames for alchemy, astrology, or any other field that misapplied science. Just because a field has been studied for a long time, or tries to apply the scientific method, doesn't lend credence to the approach. All it means is that, at best, we've managed to toss some things that are now obviously wrong/flawed. In 20 years we'll be doing the same to things we "know" today, OR the flaws will remain because we don't have the math/science to demonstrate them more obviously & there's social pressure to keep "building" (even if the foundation is flawed). However, as we should all be aware, false knowledge grows exponentially faster than our true understanding of the universe, because our imagination is limitless.
The general premise with these studies is that if an effect size is real, then a preliminary study would show something interesting. To my knowledge, that is statistically a nonsense argument. Small samples suffer from various small-sample effects to the point that you can't predict either way (otherwise there wouldn't be a point in doing a larger study). To add insult to injury, studies of this kind are typically run only on local college students, which further invalidates any information gleaned from a preliminary study.
TLDR: the way science is done in the social sciences is fundamentally flawed, & the fact that limited funding ensures that's the case doesn't excuse that a significant part of the body of knowledge is unreliable.
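The small-sample instability claim above can be checked with a quick simulation (a sketch, not from the original comment; the effect size, sample sizes, and trial counts are arbitrary). With unit-variance groups, the difference-in-means estimate has a standard error of roughly √(2/n), so a pilot with n=20 per arm scatters about ten times as widely as a study with n=2000 per arm, wider than the true effect itself:

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.2  # a small but real effect, in standard-deviation units


def estimated_effect(n):
    """Draw n treatment and n control observations and return the
    observed difference in means (a crude effect-size estimate)."""
    treatment = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(n)]
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    return statistics.mean(treatment) - statistics.mean(control)


def spread(n, trials=500):
    """Standard deviation of the estimate across many repeated studies."""
    return statistics.stdev(estimated_effect(n) for _ in range(trials))


small = spread(20)    # a typical "preliminary study"
large = spread(2000)  # a well-powered study

print(f"spread of estimate, n=20 per arm:   {small:.3f}")
print(f"spread of estimate, n=2000 per arm: {large:.3f}")
```

The pilot's estimates swing through a range larger than the true effect, so whether it "shows something interesting" is mostly noise, which is the commenter's point.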
What I remember most is that the clinical psychologist kept fishing for why I covered my face with my hair. I kept saying that there was no reason other than gravity, and that I cannot control that my hair obscures parts of my face. The report nonetheless stated that I did it on purpose to hide my face, which I'm fairly certain I did not; it seemed this was what the clinical psychologist settled on early and then kept searching for evidence to support.
Still, they don't hold a candle to Gregor Mendel failing to incorporate DNA in his genetic work, if you can believe it.