On the other end there's distrust of broad scientific consensus across different professions, countries, etc. It's the distrust at these levels that is the increasing problem we are facing today.
It's more than that, I think. Sibling-thread poster hit the nail on the head when he complained of politicised science.
The social sciences have this dominating and silencing effect on the rest of the sciences.
There's always been junk science, and when found out it gets discredited. This is still happening and is a good thing.
What's new is that any research that might produce results counter to what the PC-mob deems acceptable is attacked. Whether or not there is consensus amongst researchers in that field is irrelevant when the mob calls for the firing of any researcher who doesn't toe the current political party-line.
Sure, we're not actually in the dark ages, but a trend of silencing voices in the name of purity of thought is particularly troubling, especially as the mob asking for this is unashamedly attempting to implement Newspeak[1].
[1] See the argument in yesterday's threads about what "man" and "woman" mean, whether dictionaries should be changed, etc.
> No this is not new.
I don't recall a PC-mob being used to silence any and all non-supportive voices until quite recently.
> You always have a direction set by political views, and even if we have decided they are wrong they are still hard to kill, like: smoking is good, white people are superior. There is still "science" being done to bolster those political views.
I don't see what that has to do with what I said - that a very vocal bunch of non-science people seem to have successfully lobbied to silence specific topics.
A problem here is that there are fields of science that are almost certainly bogus in themselves. One very likely candidate is nutrition, which seems to be fumbling in the dark and has a long history of producing worse recommendations than doing nothing (e.g. replace fat with sugar). More controversially, the entire field of economics is seen by some to be very suspect from a basic foundations view.
It's not the field that is bogus in that case. People were quite literally bribed to push this. This could happen anywhere, anytime, in any field.
Basically the digestive system is far too complex for us to understand from first principles at this time. Your diet has a very complex and very slow effect on your body, with some exceptions. Numerous diseases and environmental factors impact how this plays out exactly. So, to do real research in nutrition, the only chance right now would be to conduct massive studies over long periods of time with rigorous controls on subjects' nutrition and activities - which is basically impossible, or at least prohibitively expensive.
Instead, we get conclusions drawn from studies of a few dozen people over a few months or at best a year (in "long-term" studies). Or, we get conclusions drawn from comparing diet across huge populations ("the Mediterranean diet", "the Japanese diet", "the American diet", etc.) with no possibility of controlling for obvious differences in nutrients, environmental factors, lifestyle, access to healthcare, etc. Both of these produce worthless conclusions; they don't tell you anything at all.
The only real successes of nutrition science have been identifying the most basic nutrients we need to survive (protein, fat, carbohydrates, and the various vitamins and minerals). Basically nothing beyond that should be trusted.
As a fun historical note, after the discovery of the macro-nutrients there was a budding field of nutrition scientists confidently recommending optimal diets using scientific methods. Unfortunately, they had no idea about the existence of micro-nutrients, so by following some of their diets you could actually end up with scurvy or other serious malnutrition diseases. The current slew is not that bad, but I wouldn't be surprised if in the future we look back similarly at some common diet advice of today.
Put simply, science is advertised as self-correcting but in reality it's not. Representative experience documented here: http://crystalprisonzone.blogspot.com/2021/01/i-tried-to-rep...
So, the reasons people learn a generalized distrust of science are that often the sausage doesn't get made. Bad science is published, applauded, cited, breathlessly covered in the media and may even be replicated, yet the first time outsiders to the field actually read the paper they realize it's nonsensical. But then they realize nobody cares because careers were made through this stuff, so why would anyone inside the field want to unmake them?
The degrading trust doesn't come from bad results per se, but rather the frequent lack of any followup combined with the lack of any institutional mechanisms to detect these problems in the first place beyond peer review, which is presented as a gold standard but is in no way adequate as such.
For example, consider how programmers use peer review. We use it, and we use lots of other tools too because peer review is hardly enough on its own to ensure quality. Now imagine you stumbled into a software company that held a cast-iron policy that because patches get reviewed by coworkers you simply don't need a test suite, nor manual testing, nor a bug tracker, code comments, security processes, strong typing, etc. And their promotion process is simply to make a ranking of developers by commit count and promote the top 10% every quarter, and fire the bottom 10%. Moreover they thought you were nuts for suggesting that there was any problem with this. You'd probably want to get out of there pretty fast, but, that's pretty much how (academic) science operates. So of course this degrades trust.
The way I see the self-correcting nature of science: the truthiness of our view of a specific set of topics increases over time (in some approximation).
What's reasonable, well, probably not years or decades. Average people cannot make major errors that destroy the value of their job output and then blow it off with "well but the company self corrected eventually so please don't fire me". When they judge science, they will judge it by the standards they are themselves held to in normal jobs.
And what's too often, well, probably papers that don't replicate should be a clear minority instead of (in some fields) the majority. Recall that failure to replicate is only one of many things that can go wrong with a study. Even if the replication rate was 100% many fields would still be filled with unusable papers.
Who says there is anyone who can be trusted? People keep looking for leaders they can trust and it takes only a brief look at history to see that the search won't stop despite the jaw dropping futility of the exercise.
The important thing is to check that people have incentives to tell the truth and no conflicts of interest. I'd trust someone untrustworthy if they were making money off my well-being. The only thing to watch out for is them not being forthright about their incentives.
We shouldn't trust that skyscrapers stay up because engineers are trustworthy. They stay up because the engineer goes down with the building.