That being said, I do think it's important to avoid fallacious reasoning, since it helps in making arguments clearer. As mentioned above, just because someone uses a fallacy doesn't mean their conclusion is wrong, but it does mean you can show their argument is unsound. (And any reply should ideally be phrased in those terms: 'your argument has problem X' is easier to respond to than 'your argument has fallacy Y'.) Furthermore, it means they are thinking less clearly than they perhaps should be.
The fact that an argument is fallacious doesn't necessarily mean the conclusion is wrong, but if the conclusion is correct, then (logically, by necessity) at least one non-fallacious argument for it exists.
Echoing OP's sentiment: how one uses (or weaponizes) such knowledge depends on one's goal: winning petty ego battles in debate, or seeking the truth?
One way this can go wrong is if the two people are thinking at different levels of abstraction or dimensionality (variables), especially when one or both participants don't understand what that means (which seems to describe "most" of the general public).
But then, you're just doing an ad hominem...
It's only a prism that distorts our perspective of the world.
Fallacies are a different beast altogether.
"An appeal to false authority is a fallacious argument that relies on the statements of a false authority figure, who is framed as a credible authority on the topic being discussed."
https://effectiviology.com/false-authority/#:~:text=An%20app....
An argument doesn't get any more or less fallacious based on who believes in it (the authority, in this case) – that's not an argument in itself.
It might be reasonable (a useful heuristic) to lean to the side of the expert, but the fact that an expert believes something doesn't in itself make the conclusion correct.
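To put rough numbers on "lean toward the expert" (a toy Bayesian sketch of my own, with made-up probabilities): expert endorsement is evidence that shifts the posterior, but it can't drive it to certainty by itself.

```python
# Toy Bayesian update on expert endorsement (illustrative numbers only).
prior = 0.5                # assumed prior that the claim is true
p_endorse_if_true = 0.9    # experts usually endorse true claims
p_endorse_if_false = 0.3   # ...but sometimes endorse false ones too

# Bayes' rule: P(claim true | expert endorses it)
posterior = (prior * p_endorse_if_true) / (
    prior * p_endorse_if_true + (1 - prior) * p_endorse_if_false
)
print(round(posterior, 2))  # 0.75: more credible, but far from certain
```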
... according to a linguistics researcher with a website. He sounds like as much a false authority on philosophy as anyone else.
(There are also hyperlinks to the corresponding Wikipedia pages.)
Most cognitive biases are effective ways of fooling yourself.
It takes a lot of work to think critically, and the hardest work is thinking critically about the things that align with your biases. It's amazing how many people can see it in other people but not themselves.
The sheer number of cognitive biases presented in TFA, and the overlap between the categorisations, had me bamboozled.
But yeah, if these are System 1 staples then I can imagine how they might have served regular people well for a long time. Some biases and prejudices aid survival in a less civilised setting, e.g. prehistoric tribes competing for land and food with neighbouring tribes.
So perhaps it is the compact of civilisation that opens up the vistas for System 2 thinking to yield benefits.
For the vast majority of our evolutionary history, the biggest issues were spotting a stalking predator or hidden danger before it could hurt us, or spotting food better than the other chimps/homo */lizards could (at various points along the way). For that, quick, efficient pattern matching is what you need, not solid reasoning about large groups, so that's what we've got.
First, 'bias' is defined with respect to an ideal rational actor with perfect information and infinite computing power. The reasons for this are mostly historical (rational animal, Homo economicus, etc), but it also serves as a useful baseline 'ideal observer' model to compare human behavior against.
In the 1950s, Herbert Simon coined the term 'bounded rationality' to describe rationality within a set of computational bounds. For example, if we have finite working memory and limited computing time, but are still trying to make optimal decisions within those bounds, what behavior would we see? In this case, decision making turns from unconstrained to constrained optimization. What may LOOK like a 'bias', with its connotation of sub-optimality, may actually be optimal behavior given constraints.
More recently, people like Gerd Gigerenzer suggested that human decision making is largely composed of heuristics and tricks that enable 'fast and frugal' responses to scenarios. They don't need to be perfect, just good enough - and 'cheap' enough with respect to time that they are worth developing. This is probably true to a certain extent, but to me it's scientifically unsatisfying, as there is no general principle (except for 'cognitive miserliness') to explain behavior - and specifically, there is no longer a way to specify 'normative' or expected behavior in any given situation.
More recently still, there is a trend to revive Simon's perspective under the name 'Resource Rationality'. Tom Griffiths is one of the active researchers in this field. The idea is similar to Simon's: we have limited 'cognitive resources' and strive to be rational. Griffiths and others have attempted to show that many behaviors traditionally called 'cognitive biases' are actually predicted if we are behaving optimally but with constrained cognitive resources.
From the resource rationality perspective, a cognitive bias is a way that a solution to unconstrained optimization differs from a solution to the corresponding constrained optimization problem. Roughly speaking, any combination of limitations on memory, computation, time, energy, or information will produce a 'bias', and different scenarios we encounter push up against these boundaries in different ways, leading to a plethora of 'biases'.
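A toy sketch of that idea (mine, not Griffiths'; all numbers made up): an agent adjusting an estimate away from a starting point, but under a limited computation budget, shows exactly the "insufficient adjustment" pattern usually filed under anchoring.

```python
# Anchoring-and-adjustment as constrained optimization (illustrative sketch).
def adjust(anchor: float, target: float, step: float, budget: int) -> float:
    """Move from the anchor toward the target in at most `budget` steps."""
    estimate = anchor
    for _ in range(budget):
        if abs(target - estimate) <= step:
            return target      # close enough: adjustment completes
        estimate += step if target > estimate else -step
    return estimate            # budget exhausted: estimate stuck near anchor

print(adjust(anchor=10.0, target=50.0, step=1.0, budget=10**6))  # 50.0, unbiased
print(adjust(anchor=10.0, target=50.0, step=1.0, budget=15))     # 25.0, "anchored"
```

Same machinery, same goal; only the resource bound differs, and the shortfall is what gets labeled a bias.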
That is a narrow and incorrect definition, IMHO. Two resource-constrained solutions can also differ wildly in their rationality (or reality-approximation, if you will), and there is no resource-unconstrained general intelligence in the universe, so we wouldn't even really know what that normativity looks like.
Biases are more of a classification of errors specific to our cognitive machinery: framing errors, recall errors, precision errors, proportionality errors, inference errors, etc., with respect to the best we could have done with it.
[1]: Setting an initial price in order to negotiate a more favourable price later.
Take pareidolia, for instance: it has likely been more evolutionarily advantageous to see faces where there are none (and thus flee too often) than to not see faces where there are faces (and thus be eaten).
And in evolutionary terms, not everything that exists has some benefit. Some features just haven't been subject to evolutionary pressures: not selected for, just not selected against. Vestigial features are prime examples of this.
In all other situations, they are working as expected.
e.g. risk avoidance is very important because one single mistake in risk assessment (underestimating a real threat) can wipe you out.
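A quick expected-cost version of that point (my numbers, purely illustrative): when one error is catastrophic and the other is cheap, the "paranoid" policy minimizes expected cost even if threats are rare.

```python
# Asymmetric error costs make over-caution optimal (illustrative numbers).
p_threat = 0.01              # assumed chance a rustle is a real predator
cost_false_alarm = 1.0       # energy wasted fleeing a harmless rustle
cost_missed_threat = 1000.0  # proxy for being eaten

always_flee = cost_false_alarm              # pay a small cost every time
never_flee = p_threat * cost_missed_threat  # rarely pay a huge cost
print(always_flee, never_flee)  # 1.0 vs 10.0: fleeing wins despite 99% false alarms
```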
The short but very very accurate documentary above (2 min) explains why.
Basically, evolution does not reward survival of the "smartest" or survival of the person with the "fewest cognitive biases."
A lot of them could frankly be considered versions of the same bias, or combinations of others.
Others are not biases at all, just the result of bad thinking.
Many of these are used as ammunition for arguments rather than for clarifying reasoning.
Read "Thinking Fast and Slow" (which, to their credit, they do list as a resource) and skip the rest of their grant seeking.
Using shields (against self-sabotaging traps) as ammo to destroy others is very dumb, but yes it happens.
Also you just listed their "use" in your very comment. People build upon these to convince others, and if you would like to be less manipulated, then they are useful to know.
There is a bit of a Dunning-Kruger effect, to be sure: accusing others of cognitive biases that one just stumbled upon on Wikipedia is itself going off-topic. It takes a bit more effort to actually internalize it.
On a semi-related topic, I find it a bit weird the way people think about "winning an argument". People usually prize being "right", but if you think about it from a strictly selfish perspective, you don't really gain anything from that outcome, whereas learning seems like a better outcome.
Similarly, there are some people (a much smaller number I would expect) that have moved from atheism to "spirituality", for the same reasons.
A lot of what I know turns out to be wrong or disproven. I do just as well acting emotionally and impulsively as I do when I think logically.
Not to mention, following what everyone else is doing is fine most of the time. I'm starting to feel that knowledge is a trap. The only way out is not to play.
The only exception is for basic math/physics that I can grok.
Ok.
Clicking through to the first one:
> The Barnum effect is a cognitive bias that induces an individual to accept a vague description of personality traits as applying specifically to themselves.
Then they give an example story that reads like a horoscope.
The story could reasonably describe a large swath of people so I fail to understand how this is an "error" on the part of someone thinking that it describes them.
Specifically: in a way that is exact and clear; precisely.
The story can specifically refer to multiple people. There is no definite error here as I see it.

I am very confident that there is a phenomenon whereby someone can do this, but in doing so their mind is in a certain "mode" of some kind (abstract, for starters). When the mind is in a different mode (discussing or arguing about object-level ideas; culture-war topics tend to work best), the knowledge they formerly had becomes inaccessible, often even if they are reminded of it.
My armchair theory is that this plays a part in it (but there are surely other things going on):
She is a co-founder of the Center for Applied Rationality & host of the Rationally Speaking podcast.
https://www.urbandictionary.com/define.php?term=Apex%20Falla...
Doing a DuckDuckGo search for "apex fallacy" turns up a bunch of incel and MGTOW sites.
Is this even a _real_ fallacy?
It seems like it's a real mistake that people make. I guess one could categorize it as a more specific case of misunderstanding a distribution of values. (In the same vein as mean vs median, outlier skewing, assuming unimodal vs bimodal distribution, etc.)
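For instance (toy numbers of my own), a single apex value drags the mean far from anything typical while the median barely moves; judging a group by its top members is the same kind of mistake:

```python
# Mean vs median under outlier skew (hypothetical incomes, in $k).
from statistics import mean, median

incomes = [30, 35, 40, 45, 50, 5000]  # one apex outlier
print(mean(incomes))    # ~866.67 -- dominated by the single top value
print(median(incomes))  # 42.5 -- a much better summary of the typical member
```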
I can see why that's not included in most lists of fallacies. It's just a specific example of cherry picking, a well known fallacy that is already included in the lists.
Knowing what your biases are and which ones you fall into helps you cope with them and guard yourself against them.
(Just kidding.)
Also, on the otherwise generally sane page, there is this bit:
> Conspiracy theories regarding the COVID-19 pandemic are plentiful and varied. One of them suggests that the authorities declared a health emergency in order to force the population to accept a vaccine that they do not need, in order to promote the economic domination of the pharmaceutical industry. Several types of information can be presented in support of this theory, such as proposals for alternative treatments to COVID-19, some data taken out of context from vaccine approval protocols, or annual death rates from seasonal influenza.
> While this information may hold some veracity, it is not sufficient to support the conspiracy theory put forward when compared to scientific studies carried out by both the pharmaceutical industry and public health authorities.
I'm hardly playing the Devil's advocate here; this is the crux of the problem: once you're sincerely convinced that scientists are wrong (which is bound to happen when they face novelty) and that governments are lying to you (which is bound to happen because governments are full of politicians), isn't it _rational_ not to believe scientists, and not to trust governments?
So the rules of the game are for scientists to never, ever be wrong, and for governments to never, ever lie.
But of course, that's not the case, and the "right" thing is to believe scientists because they're right "in general", and to trust politicians "to a degree". But how is advocating that not falling into an "appeal to authority"?
Etc, etc, etc...
At least I'm having those brain farts in a quiet office with a cat on my lap and two jabs in my arms, instead of in an ICU :/ ...
This is the crux. How did those beliefs form? I would argue that in many cases, it wasn't a dispassionate weighing of evidence and calculation of probabilities. If not that, then what? We might also ask - in whose interest is it that citizens don't believe scientists or the government?
Every medical scandal (for example, in the French Antilles, the scandal around "chlordecone", a pesticide, is said to be fueling anti-vaccine fear today). Earlier uncertainties about masks, vaccine efficacy, vaccines from different countries, possible early treatments, etc...
Even though the "consensus" settles quickly, the dust of the discussion remains in the air for a very long time. The media play a part here, ironically, because "not blindly trusting the authority" and "presenting a balanced view of the facts" are good for ratings.
> ... and governement are lying to you
Do we really need to test this hypothesis :) ? Of course it is a generalization that "all governments are lying all the time". But it is such an easy one to make...
Here's a full list of every page I saw (no selection bias!) and my evaluations.
Anchoring heuristic: this category is not accurate. Anchoring takes two forms: as a priming effect, or as insufficient adjustment from a starting point. Instead, this category lists "Confirmation bias, Echo chamber, Effort justification bias, Escalation of commitment bias, Hindsight bias, Illusion of transparency, Self-fulfilling prophecy". None of these relate to anchoring either directly or through mechanisms. The omission of anchoring, a major effect, is glaring.
Automation bias: the effect as described in the article is not even correct. There is an automation bias, but there is also the opposite anti-automation bias, where humans unfairly disregard the opinions of machines in some contexts, such as algorithmic recipe recommendations. Their CDS example is poorly chosen and barely illustrates the subject. There is a better example from the literature, where humans accept the results of a blatantly wrong calculator over their own estimates. In addition, automation bias is frequently justified even when compared to human "rational" thinking, as summarized in Thinking, Fast and Slow, Ch 21.
I recall this quote from Gelman: "Duncan notes that many common sayings contradict each other. For example, The early bird catches the worm, but The early worm is eaten by the bird." This page on automation bias is no better than one of two contradictory sayings.
I like the three meters: literature, impact, and replication. Their existence is a well-thought-out and marvelous piece of science communication. It's a giant improvement over resources from 10 years ago, when such meters were barely considered by psychologists, much less by communicators.
I like that they cite references to research papers. I like that they describe how the experiments measure the effect. These are major advantages over comparable websites.
Representativeness heuristic: this category is not accurate. Base rate and conjunction fit. The rest do not.
Base rate neglect: the explanation is quite bad. Kahneman's theory is much more careful and requires huge contortions ("just-so" stories about how statistical base rates are not always statistical) to fit the experiments. (I believe his contortions are correct.) But this page doesn't even attempt to describe what qualifies as a "base rate".
The Trump example is a loose fit, and the test example is so vague as to be meaningless. The examples in the literature, with criminal identification and test positivity rates, are better written.
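For comparison, here's the standard test-positivity version worked out (textbook-style numbers of my choosing, not theirs): even a good test yields mostly false positives when the base rate is low, which is precisely the quantity people neglect.

```python
# Base rate neglect: P(condition | positive test) via Bayes' rule.
base_rate = 0.01       # 1% of the population has the condition
sensitivity = 0.99     # P(positive | condition)
false_positive = 0.05  # P(positive | no condition)

p_positive = base_rate * sensitivity + (1 - base_rate) * false_positive
p_condition_given_positive = base_rate * sensitivity / p_positive
print(round(p_condition_given_positive, 3))  # 0.167, not the ~0.99 people expect
```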
Conjunction fallacy: don't pick that famous Linda example if you have to explain all its linguistic caveats. Your explanations are not convincing, just assertions whose truth the reader can't assess. (And there are better explanations, like replication under single evaluation vs joint evaluation, or replications with clarifying language, which are not given.) Nevertheless, this page is broadly correct. It is over-specific, focusing on the direct violation of the conjunction rule rather than the representativeness process that underlies the probability estimation, but perhaps the specificity is justified if one wants to hew to the literature.
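For reference, the rule the Linda example tests (sketched with hypothetical numbers of my own): a conjunction can never be more probable than either conjunct, since P(A and B) = P(A) * P(B|A) <= P(A).

```python
# The conjunction rule, with made-up numbers for the Linda case.
p_teller = 0.05                # hypothetical P(Linda is a bank teller)
p_feminist_given_teller = 0.8  # hypothetical P(feminist | bank teller)

p_both = p_teller * p_feminist_given_teller
assert p_both <= p_teller      # holds for any probabilities whatsoever
print(round(p_both, 2))        # 0.04: below 0.05, however "representative" Linda seems
```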
About->Our team: oh my god, the authors are psychology PhD students. I think psychology needs more academic interest from students, so that graduate programs can have stricter filtering. This website does not give hope that psychology will cast off its bad reputation soon.