I presented a case for why I think it is wrong, and a case for why people have reason to push it despite it being wrong: it gives them more power. Considering people's motivations is important; when someone stands to gain, we should be suspicious and go over everything extra carefully.
I really don't see what these researchers or journalists have to gain, beyond what they would gain from doing any research or journalism. I'm just not seeing any credible counter-arguments.
If you treat rationality as a binary, then no, people aren't rational. But people are a little bit rational: sufficiently rational to eventually find the truth.
I like the parallel with machine learning. Many, many bright minds have tried to formalize our intuitions into automatic systems, and gradually they make progress. But it's plain to see that it's not as easy as "just incorporate the new fact". We have systems that can deal with facts, and systems that can learn from experience. But a system that does both, that can learn from experience and express what it learns as facts, or use facts to guide its exploration, is still an open problem.
As for who stands to gain: I do think journalists, editors, and newspaper owners have something to gain. Their role transforms from giving people "just the facts" to manipulating people into the right beliefs. And what are the right beliefs? That's for the journalists and their benefactors to decide.
The majority is too stupid to make up their own minds and needs to be educated at a young age to accept the work product of prior generations' experts, because they are just too unintelligent to evaluate it for themselves. This is literally most people.
The fact that this is unpleasant doesn't make it untrue.
Right, which is pretty much exactly what the article says. It shows what forms the flaws in our rationality take, and suggests procedural methods we can use to help the process of rational analysis along.
I suspect it effectively functions as a minor anti-mindhacking measure, as bizarre and dumb as that sounds. It would protect against adversarial input.
If there were no mental inertia, then "false facts" that check out against every verification measure could prove quite dangerous: input that outweighs all past knowledge could easily be exploited by bad actors to change what a person "knows" and work the exploit from there.
To summarise: for reasons, it's possible for somebody to endlessly drive a computationally limited agent's credence in a mathematical conjecture X up and down (so long as the agent hasn't found a proof yet) just by stating true implications of the conjecture, if the agent is trying to do ideal Bayesian reasoning. It's possible to make this "trolling" impossible by essentially turning your "mental inertia" up to 11, which at least proves that being untrollable is possible.
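To make the failure mode concrete, here's a toy sketch in Python. This is my own simplification, not the construction from the actual untrollable-mathematician result: it models a naive Bayesian updater in odds form, where a troll controls which (individually true) facts get revealed, and the two likelihood ratios are made-up numbers chosen only for illustration.

```python
# Toy model: a naive Bayesian agent's credence in conjecture X.
# Each update is a legitimate Bayesian move (odds form:
# posterior odds = prior odds * likelihood ratio), but an adversary
# who picks the *order* of revealed facts can oscillate the credence
# forever, since the agent never reaches a proof (credence 0 or 1).

def update(credence: float, likelihood_ratio: float) -> float:
    """One Bayesian update in odds form."""
    odds = credence / (1.0 - credence)
    odds *= likelihood_ratio
    return odds / (1.0 + odds)

credence = 0.5            # agent's initial credence in X
UP, DOWN = 4.0, 0.25      # illustrative LRs of the two kinds of revealed facts

for step in range(8):
    lr = UP if step % 2 == 0 else DOWN   # troll alternates what it reveals
    credence = update(credence, lr)
    print(f"step {step}: credence in X = {credence:.3f}")
```

The point of the sketch is that each individual update is perfectly valid; the exploit lives entirely in the adversarial selection of which true evidence to present, which this updating rule has no way to correct for.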
Kind of like Bayesian inference: you have a prior, and each new piece of evidence moves the probability.
And it works a lot of the time. For example, if you see a magician levitating, the evidence of your eyes suggests that they are actually levitating. However, your priors tell you that the chance of that is still very low, even with the new evidence. So you don't run across a street magician and suddenly believe humans can levitate.
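Plugging illustrative numbers into Bayes' rule shows why the prior dominates here. All the probabilities below are made up; only the shape of the calculation matters.

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E), with
# H = "humans can genuinely levitate", E = "I saw a magician levitate".

p_h = 1e-9             # prior: genuine levitation is extraordinarily unlikely
p_e_given_h = 0.9      # if levitation were real, a magician might well show it
p_e_given_not_h = 0.5  # stage magicians routinely produce this illusion anyway

# law of total probability for the evidence
p_e = p_e_given_h * p_h + p_e_given_not_h * (1.0 - p_h)
posterior = p_e_given_h * p_h / p_e

print(f"P(levitation is real | saw it) = {posterior:.2e}")
# ~1.8e-09: the evidence nudged the credence up, but the prior keeps it tiny
```

The evidence does move the probability (here by roughly a factor of two), but because the prior was nine orders of magnitude below even odds, the posterior stays negligible.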
Oh you don't have the credentials to run experiments and talk about your findings? Oh well that's too bad, I guess we'll all live in the dark and you'll have your truth.