Daniel Kahneman and Amos Tversky spent decades of psychology research disentangling patterns in the errors of human reasoning. Over the course of their work they discovered a variety of logical fallacies we tend to commit when facing information that appears vaguely familiar. These fallacies lead to bias – irrational behavior based on beliefs that are not always grounded in reality.
In his book Thinking, Fast and Slow, which summarizes his and Tversky’s life’s work, Kahneman introduces biases that stem from the conjunction fallacy – the false belief that a conjunction of two events is more probable than one of those events on its own.
Probability can be a difficult concept. Most of us have an intuitive understanding of what probability is, but there is little consensus on what it actually means. It is just as vague and subjective a concept as democracy, beauty or freedom. However, this is not always troublesome – we can still easily discuss the notion with others. Kahneman reflects:
In all the years I spent asking questions about the probability of events, no one ever raised a hand to ask me, “Sir, what do you mean by probability?” as they would have done if I had asked them to assess a strange concept such as globability.
Everyone acted as if they knew how to answer my questions, although we all understood that it would be unfair to ask them for an explanation of what the word means.
While logicians and statisticians might disagree, probability to most of us is simply a tool that describes our degree of belief. For instance, we know that the sun will rise tomorrow and we consider it near impossible that there will be two suns up in the sky instead of one. In addition to the extremes, there are also events which lie somewhere in the middle on the probability spectrum, such as the degree of belief that it will rain tomorrow.
Despite its vagueness, probability has its virtues. Assigning probabilities helps us make the degree of belief actionable and also communicable to others. If we believe that the probability it will rain tomorrow is 90%, we are likely to carry an umbrella and suggest our family do so as well.
Most of us are already familiar with representativeness and base rates. Consider the classic example of a jar containing some number of black marbles and some number of white marbles. It is a simple exercise to tell what the probability of drawing each color is if you know their base rates (proportions). Using base rates is the obvious approach for estimation when no other information is provided.
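The marble example can be made concrete in a few lines. The counts below are hypothetical, chosen only to illustrate that a base rate is simply a proportion of the total:

```python
# Hypothetical jar: base rates are just proportions of the total.
black, white = 30, 70

total = black + white
p_black = black / total  # 0.3
p_white = white / total  # 0.7

# With no other information, these proportions ARE the probabilities,
# and they must sum to 1.
print(p_black, p_white)
```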
However, Kahneman demonstrated that we have a tendency to ignore base rates in light of specific descriptions. He calls this phenomenon representativeness bias. To illustrate it, consider the example of seeing a person reading The New York Times on the New York subway. Which do you think would be a better bet about the reading stranger?
1) She has a PhD.
2) She does not have a college degree.
Representativeness would tell you to bet on the PhD, but this is not necessarily a good idea. You should seriously consider the second alternative, because many more non-graduates than PhDs ride the New York subway. While a larger proportion of PhDs may read The New York Times, the total number of New York Times readers without college degrees is likely to be much larger, even if the proportion of them who read it is slim.
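The arithmetic behind this bet can be sketched with entirely hypothetical numbers (the rider counts and reader shares below are assumptions for illustration, not survey data):

```python
# Hypothetical numbers: even if a far larger SHARE of PhDs read the
# Times, the sheer size of the non-graduate group can make them the
# better bet.
riders_phd, riders_no_degree = 10_000, 1_000_000  # assumed rider counts
share_times_phd, share_times_no_degree = 0.40, 0.05  # assumed reader shares

times_readers_phd = riders_phd * share_times_phd              # ~4,000
times_readers_no_degree = riders_no_degree * share_times_no_degree  # ~50,000

# Among Times readers on the subway, non-graduates dominate:
print(times_readers_no_degree > times_readers_phd)  # True
```

The base rate (1,000,000 vs 10,000 riders) overwhelms the difference in proportions (5% vs 40%).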
In a series of similar experiments, Kahneman’s subjects failed to recognize the base rates in light of individual information. This is unsurprising. Kahneman explains:
On most occasions, people who act friendly are in fact friendly. A professional athlete who is very tall and thin is much more likely to play basketball than football. People with a PhD are more likely to subscribe to The New York Times than people who ended their education after high school. Young men are more likely than elderly women to drive aggressively.
While relying on representativeness may often improve your overall accuracy, it is not always the statistically optimal approach.
Michael Lewis in his bestseller Moneyball tells the story of the Oakland A’s general manager, Billy Beane, who recognized this fallacy and used it to his advantage. When recruiting new players for the team, instead of relying on scouts he relied heavily on statistics of past performance. This approach allowed him to build a team of great players who were passed up by other teams because they did not look the part. Needless to say, the team achieved excellent results at a low cost.
While representativeness bias occurs when we fail to account for low base rates, conjunction fallacy occurs when we assign a higher probability to an event of higher specificity. This violates the laws of probability.
Consider the following study:
Participants were asked to rank four possible outcomes of the next Wimbledon tournament from most to least probable. Björn Borg was the dominant tennis player of the day when the study was conducted. These were the outcomes:
A. Borg will win the match.
B. Borg will lose the first set.
C. Borg will lose the first set but win the match.
D. Borg will win the first set but lose the match.
How would you order them?
Kahneman was surprised to see that most subjects ordered the chances by directly contradicting the laws of logic and probability. He explains:
The critical items are B and C. B is the more inclusive event and its probability must be higher than that of an event it includes. Contrary to logic, but not to representativeness or plausibility, 72% assigned B a lower probability than C.
If you thought about the problem carefully, you will have noticed that event C is contained within event B: every outcome in which Borg loses the first set and wins the match is also an outcome in which he loses the first set. Losing the first set will therefore always, by definition, be at least as probable as losing the first set and winning the match.
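The containment argument can be checked numerically. The probabilities below are assumptions made up for illustration; the point is that the inequality holds for any values you choose:

```python
# For any events, P(A and B) <= P(A): a conjunction can never be more
# probable than either of its parts alone.
# Illustrative (assumed) probabilities for the Borg scenarios:
p_lose_first_set = 0.30            # event B: Borg loses the first set
p_win_given_lost_set = 0.60        # assumed: Borg often recovers and wins

# Event C = loses the first set AND wins the match
p_c = p_lose_first_set * p_win_given_lost_set  # 0.18

# True no matter what probabilities you plug in:
print(p_c <= p_lose_first_set)
```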
As discussed in our piece on the Narrative Fallacy, the best-known and most controversial of Kahneman and Tversky’s experiments involved a fictitious lady called Linda. The fictional character was created to illustrate the role heuristics play in our judgement and how it can be incompatible with logic. This is how they described Linda.
Linda is thirty-one years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.
Kahneman conducted a series of experiments in which he showed that representativeness tends to cloud our judgments and that we ignore base rates in light of stories. The Linda problem started off by asking subjects to rank eight different scenarios in order of likelihood:
Linda is a teacher in elementary school.
Linda works in a bookstore and takes yoga classes.
Linda is active in the feminist movement.
Linda is a psychiatric social worker.
Linda is a member of the League of Women Voters.
Linda is a bank teller.
Linda is an insurance salesperson.
Linda is a bank teller and is active in the feminist movement.
Kahneman was startled to see that his subjects judged the likelihood of Linda being a bank teller and a feminist more likely than her being just a bank teller. As explained earlier, doing so makes little sense. He went on to explore the phenomenon further:
In what we later described as “increasingly desperate” attempts to eliminate the error, we introduced large groups of people to Linda and asked them this simple question:
Which alternative is more probable?
Linda is a bank teller.
Linda is a bank teller and is active in the feminist movement.
This stark version of the problem made Linda famous in some circles, and it earned us years of controversy. About 85% to 90% of undergraduates at several major universities chose the second option, contrary to logic.
What is especially interesting about these results is that, even when aware of the biases in place, we do not discard them.
When I asked my large undergraduate class in some indignation, “Do you realize that you have violated an elementary logical rule?” someone in the back row shouted, “So what?” and a graduate student who made the same error explained herself by saying, “I thought you just asked for my opinion.”
The issue is not confined to students; it also affects professionals.
The naturalist Stephen Jay Gould described his own struggle with the Linda problem. He knew the correct answer, of course, and yet, he wrote, “a little homunculus in my head continues to jump up and down, shouting at me—‘but she can’t just be a bank teller; read the description.’”
Our brains simply seem to prefer consistency over logic.
Representativeness bias and the conjunction fallacy occur because we take a mental shortcut from the perceived plausibility of a scenario to its probability.
The most coherent stories are not necessarily the most probable, but they are plausible, and the notions of coherence, plausibility, and probability are easily confused by the unwary. Representativeness belongs to a cluster of closely related basic assessments that are likely to be generated together. The most representative outcomes combine with the personality description to produce the most coherent stories.
Kahneman warns us about the effects of these biases on our perception of expert opinion and forecasting. He explains that we are more likely to believe scenarios that are illustrative rather than probable.
The uncritical substitution of plausibility for probability has pernicious effects on judgments when scenarios are used as tools of forecasting. Consider these two scenarios, which were presented to different groups, with a request to evaluate their probability:
A massive flood somewhere in North America next year, in which more than 1,000 people drown
An earthquake in California sometime next year, causing a flood in which more than 1,000 people drown
The California earthquake scenario is more plausible than the North America scenario, although its probability is certainly smaller. As expected, probability judgments were higher for the richer and more detailed scenario, contrary to logic. This is a trap for forecasters and their clients: adding detail to scenarios makes them more persuasive, but less likely to come true.
In order to appreciate the role of plausibility, he suggests we have a look at an example without an accompanying explanation.
Which alternative is more probable?
Jane is a teacher.
Jane is a teacher and walks to work.
In this case there is no coherent story to generate a quick intuitive answer to the probability question, and we can easily conclude that the first option is more probable. The rule is that in the absence of a competing intuition, logic prevails.
The first lesson to thinking clearly is to question how you think. We should not simply believe whatever comes to our mind – our beliefs must be constrained by logic. You don’t have to become an expert in probability to tame your intuition, but having a grasp of simple concepts will help. There are two main rules that are worth repeating in light of representativeness bias:
1) The probabilities of all possible outcomes add up to 100%.
This means that if you believe there is a 90% chance it will rain tomorrow, there is a 10% chance that it will not rain tomorrow.
It also means that, since you believe there is only a 90% chance of rain tomorrow, you cannot be 95% certain that it will rain tomorrow morning: rain in the morning is a subset of rain at any point tomorrow, so its probability can be at most 90%.
We typically make this type of error when we mean to say that, if it rains, there is a 95% probability it will happen in the morning. That is a different claim, and under it the probability of rain tomorrow morning is 0.9 × 0.95 = 85.5%.
It also follows that the probability that it rains tomorrow but not in the morning is 90.0% − 85.5% = 4.5%.
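The rain arithmetic above, using the 90% and 95% beliefs from the example:

```python
p_rain = 0.90                # belief: it rains at some point tomorrow
p_morning_given_rain = 0.95  # belief: if it rains, it rains in the morning

# Probability it rains tomorrow morning:
p_rain_morning = p_rain * p_morning_given_rain  # ~0.855

# Probability it rains tomorrow, but not in the morning:
p_rain_not_morning = p_rain - p_rain_morning    # ~0.045
```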
2) The second principle is Bayes’ rule.
It allows us to correctly adjust our beliefs for the diagnosticity of the evidence. In odds form, Bayes’ rule states that posterior odds = prior odds × likelihood ratio, or written out:
P(H|E) / P(not-H|E) = [P(H) / P(not-H)] × [P(E|H) / P(E|not-H)]
In essence the formula states that the posterior odds are proportional to prior odds times the likelihood. Kahneman crystallizes two keys to disciplined Bayesian reasoning:
• Anchor your judgment of the probability of an outcome on a plausible base rate.
• Question the diagnosticity of your evidence.
Kahneman explains it with an example:
If you believe that 3% of graduate students are enrolled in computer science (the base rate), and you also believe that the description of Tom is 4 times more likely for a graduate student in computer science than in other fields, then Bayes’s rule says you must believe that the probability that Tom is a computer science student is now 11%.
“Four times as likely” corresponds to likelihoods in the ratio 0.8 to 0.2. We use these to obtain the adjusted probability. (The calculation goes as follows: (0.03 × 0.8) / (0.03 × 0.8 + 0.97 × 0.2) ≈ 11%.)
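The Tom example can be computed directly in odds form, using the 3% base rate and the likelihood ratio of 4 from Kahneman’s example:

```python
# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
prior = 0.03          # base rate: share of graduate students in CS
likelihood_ratio = 4  # the description is 4x as likely for a CS student

prior_odds = prior / (1 - prior)               # ~0.031
posterior_odds = prior_odds * likelihood_ratio # ~0.124

# Convert odds back to a probability:
posterior = posterior_odds / (1 + posterior_odds)
print(round(posterior, 2))  # 0.11
```

This matches the 11% in the quote: a strongly diagnostic description still only moves a 3% base rate to about 11%.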
The easiest way to become better at making decisions is by making sure you question your assumptions and follow strong evidence. When evidence is anecdotal, adjust minimally and trust the base rates. Odds are, you will be pleasantly surprised.