So like, the scientists themselves?
> If in doubt, read the study critically yourself.
I cannot believe the author manages to say this with a straight face. “Hey you average person (with maybe a college degree), go read the original academic paper yourself. Doesn’t matter that you don’t have the background, struggle with basic math (much less statistics), can’t evaluate the claims, and don’t know which questions to ask.”
The age of the polymath is long dead; we're living in The Great Endarkenment. You trust your pilot to do their job, you trust the civil engineers with the bridge you drive over, and the mechanical engineers with the controlled explosions happening in your car. But when it comes to cutting-edge scientific articles, this is where you, average Joe, will somehow know better than the experts who specialized in this field and do it every day.
In this case, the "pilot" (the combined media and researcher science communication system) is deliberately steering the plane into the side of a mountain. A coin flip would do a better job. They've burned their credibility to the ground, and you're trying to repair it by invoking other professions that haven't done so.
For example, the study showing that having a white doctor increased mortality of black babies didn't correct for birth weight - once that was done, the effect disappeared (and media interest waned): https://www.wsj.com/opinion/justice-jacksons-incredible-stat...
And if you do so, one of two good outcomes will hopefully happen:
1. You find the result is bogus
2. You learn something new and update your internal model of the world.
There is no accurate heuristic that works as a good shortcut.
The positive side to "bias" is intuition. This is where a bias ("I'm pretty sure it'll turn out to work like XYZ, so I'll do this experiment next, rather than getting bogged down in some other area.") massively shortcuts the amount of resources required to come to a scientific conclusion.
During my PhD, I made many such shortcuts, following my nose. If I didn't, and tried to do everything objectively, I'd still be optimising buffers, and other such things.
To be clear I'd be very much in favor of scientific studies and their data having to be publicly available.
But on any controversial area, which is most of the areas anyone cares about, there will be 2+ sides of the issue and any vetting body will be compromised to some degree for one of those sides.
I'm not sure it was ever actually much better... and it may just be my pessimistic Gen X nature. But I've personally seen too many misrepresentations of too many studies, where the body and available data don't actually match the headlines, or where the numbers are presented in a way that makes an effect sound far more significant than it is.
"200% the risk of X"... when in sample A of 10,000, 1 person had X, and in sample B of the same size, 2 had X. While it's a real relative stat, the absolute values are all but meaningless in context.
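The arithmetic behind that kind of headline, using the hypothetical numbers above:

```python
# Illustrative numbers from the comment above: two samples of 10,000,
# with 1 and 2 cases of condition X respectively.
n = 10_000
cases_a, cases_b = 1, 2

risk_a = cases_a / n                 # 0.01% absolute risk
risk_b = cases_b / n                 # 0.02% absolute risk

relative_risk = risk_b / risk_a      # 2.0 -> the "200% the risk" headline
absolute_increase = risk_b - risk_a  # 0.01 percentage points

print(f"relative risk: {relative_risk:.1f}x")
print(f"absolute risk increase: {absolute_increase:.4%}")
```

Both numbers are true; only one of them makes a headline.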
And nobody has enough time or desire (or likely money to subscribe to the journals) to read the details of the papers and grok the nuances. Humans think in simple narratives for a reason.
We shouldn't have blind faith in science, but we also shouldn't have to go back to first principles and run our own version of every experiment. The replication crisis is a thing, and we know about it. P-hacking is a thing, and we know about it.
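A minimal sketch of the multiple-comparisons side of p-hacking: run enough tests on pure noise and some will clear the significance bar by chance. All numbers here are illustrative, not from any real study.

```python
import random

random.seed(0)

def noisy_study(flips=100):
    """One 'study' of pure noise: flip a fair coin 100 times and call the
    result 'significant' if the head count strays from 50 by 10 or more
    (roughly a two-sided p < 0.05 for a fair coin)."""
    heads = sum(random.randint(0, 1) for _ in range(flips))
    return abs(heads - flips / 2) >= 10

studies = 1000
false_positives = sum(noisy_study() for _ in range(studies))
print(f"{false_positives} of {studies} null studies looked 'significant'")
# Expect roughly 5%. Report only those and you have "findings" from noise.
```

This is why a single headline-worthy result, cherry-picked from many quiet attempts, tells you very little on its own.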
The problem described in the article is that we shouldn't believe headlines or short summaries created by writers who aren't incentivized to add nuance. And nobody should believe a headline anyway: in addition to necessarily being lossy, at any for-profit organization it's likely written by someone other than the article's author and probably A/B tested for clicks.
The trust in Science is about the system that produces it, not any single paper. But that trust is being eradicated because of the needle-in-a-haystack problem.
So even if you think going to original sources makes you safe, think again.