Tag: Gambler’s Fallacy

Mental Model: Misconceptions of Chance

Misconceptions of Chance

We expect the immediate outcome of events to represent the broader outcomes expected from a large number of trials. We believe that chance events will immediately self-correct and that small sample sizes are representative of the populations from which they are drawn. All of these beliefs lead us astray.

***

 

Our understanding of the world around us is imperfect, and when dealing with chance, our brains tend to come up with ways to cope with the unpredictable nature of our world.

“We tend,” writes Peter Bevelin in Seeking Wisdom, “to believe that the probability of an independent event is lowered when it has happened recently or that the probability is increased when it hasn't happened recently.”

In short, we believe an outcome is due and that chance will self-correct.

The problem with this view is that nature has neither a sense of fairness nor a memory. We only fool ourselves when we mistakenly believe that independent events exert influence on, or offer meaningful predictive power over, future events.

Furthermore, we mistakenly believe that we can control chance events. This applies to risky and uncertain events alike.

Chance events coupled with positive or negative reinforcement can be a dangerous combination. Sometimes we become overly optimistic and think our luck is about to change; at other times we become overly pessimistic or risk-averse.

How do you know if you’re dealing with chance? A good heuristic is to ask yourself whether you can lose on purpose. If you can’t, you’re likely far toward the chance end of the skill vs. luck continuum. No matter how hard you practice, the probability of chance events won’t change.

“We tend,” writes Nassim Taleb in The Black Swan, “to underestimate the role of luck in life in general (and) overestimate it in games of chance.”

Note that we are only discussing independent events here. If events are dependent, that is, if one outcome depends on the outcome of another, all bets are off.

 

***

Misconceptions of Chance

Daniel Kahneman coined the term misconceptions of chance to describe the phenomenon of people extrapolating large-scale patterns to samples of a much smaller size. Our trouble navigating the sometimes counterintuitive laws of probability, randomness, and statistics leads to misconceptions of chance.

Kahneman found that “people expect that a sequence of events generated by a random process will represent the essential characteristics of that process even when the sequence is short.”

In the paper Belief in the Law of Small Numbers, Kahneman and Tversky reflect on the results of an experiment in which subjects were instructed to generate a random sequence of hypothetical tosses of a fair coin.

They [the subjects] produce sequences where the proportion of heads in any short segment stays far closer to .50 than the laws of chance would predict. Thus, each segment of the response sequence is highly representative of the “fairness” of the coin.

Unsurprisingly, the same kinds of errors occurred when the subjects, instead of being asked to generate sequences themselves, were simply asked to distinguish between random and human-generated sequences. It turns out that when considering tosses of a coin for heads or tails, people regard the sequence H-T-H-T-T-H as more likely than the sequence H-H-H-T-H-T, which does not appear random, and also as more likely than the sequence H-H-H-H-T-H. In reality, each of these sequences has exactly the same probability of occurring. This is a misconception of chance.

The aspect that most of us find so hard to grasp is that any pattern of the same length is just as likely to occur in a random sequence. For example, the probability of getting 5 tails in a row is 0.03125, or, simply stated, 0.5 (the probability of a specific outcome on each trial) raised to the power of 5 (the number of trials).

The same rule applies to the specific sequences HHTHT or THTHT: the probability of each is again obtained by taking 0.5 (the probability of a specific outcome on each trial) to the power of 5 (the number of trials), which equals 0.03125.

This holds for any specific sequence, but it says nothing about how well the true proportion of heads and tails will be represented within these short sequences.
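To make this concrete, here is a short Python sketch (my own illustration, not from the paper) that enumerates every possible sequence of five fair-coin tosses and confirms that each specific sequence, whether it looks patterned or random, has the same probability of 0.5^5 = 0.03125:

```python
from itertools import product

# All 2**5 = 32 possible sequences of five tosses of a fair coin.
sequences = ["".join(s) for s in product("HT", repeat=5)]

# With a fair coin, every specific sequence has the same probability.
prob_per_sequence = 0.5 ** 5

print(len(sequences))     # 32
print(prob_per_sequence)  # 0.03125

# "Patterned" and "random-looking" sequences are equally likely:
for s in ["TTTTT", "HHTHT", "THTHT", "HTHTH"]:
    print(s, prob_per_sequence)
```

Only the aggregate proportion over many tosses tends toward 50:50; no particular ordering is privileged.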

Yet it still surprises us, because we expect the single-event odds to be reflected not only in the proportion of outcomes as a whole but also in the specific short sequences we encounter. This is not the case. A perfectly alternating sequence is just as extraordinary as a sequence of all tails or all heads.

In comparison, “a locally representative sequence,” Kahneman writes in Thinking, Fast and Slow, “deviates systematically from chance expectation: it contains too many alternations and too few runs. Another consequence of the belief in local representativeness is the well-known gambler’s fallacy.”

***

Gambler’s Fallacy

There is a specific variation of the misconceptions of chance that Kahneman calls the gambler’s fallacy (elsewhere also known as the Monte Carlo fallacy).

The gambler's fallacy implies that when we come across a local imbalance, we expect future events to smooth it out. We act as if every segment of the random sequence must reflect the true proportion and, if the sequence has deviated from the population proportion, we expect the imbalance to be corrected soon.

Kahneman explains that this is unreasonable – coins, unlike people, have no sense of equality and proportion:

The heart of the gambler's fallacy is a misconception of the fairness of the laws of chance. The gambler feels that the fairness of the coin entitles him to expect that any deviation in one direction will soon be cancelled by a corresponding deviation in the other. Even the fairest of coins, however, given the limitations of its memory and moral sense, cannot be as fair as the gambler expects it to be.

He illustrates this with the example of a roulette wheel and our expectations when a reasonably long run of one outcome occurs.

After observing a long run of red on the roulette wheel, most people erroneously believe that black is now due, presumably because the occurrence of black will result in a more representative sequence than the occurrence of an additional red.

In reality, of course, roulette is a random, non-evolving process in which the chance of getting red or black never depends on the past sequence. The probabilities are the same on every spin, yet we still seem to take past spins into account.

Contrary to our expectations, the universe keeps no account of a random process, so streaks are not subsequently tilted back toward the true proportion. Your chance of getting red after a series of blacks is always the same as your chance of getting red after a series of reds, as long as the wheel is fair.
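A simple Monte Carlo sketch (my own illustration, with the wheel simplified to a fair red/black coin and the green zero ignored) makes the point: the empirical frequency of red immediately after a long run of blacks is no different from the frequency of red overall.

```python
import random

random.seed(42)

# Simplified wheel: red or black with equal probability (green zero ignored).
n_spins = 1_000_000
history = [random.choice(["red", "black"]) for _ in range(n_spins)]

# Frequency of red overall vs. frequency of red right after 5 blacks in a row.
reds_overall = history.count("red") / n_spins

after_streak = [history[i] for i in range(5, n_spins)
                if history[i - 5:i] == ["black"] * 5]
reds_after_streak = after_streak.count("red") / len(after_streak)

print(f"P(red) overall:        {reds_overall:.3f}")
print(f"P(red) after 5 blacks: {reds_after_streak:.3f}")
# Both hover around 0.5: the wheel has no memory.
```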

The gambler’s fallacy need not be committed only inside the casino. Many of us commit it frequently by thinking that a small, random sample will tend to correct itself.

For example, assume that the average IQ in a particular country is known to be 100, and that for the purpose of assessing intelligence in a specific district we draw a random sample of 50 people. The first person in our sample happens to have an IQ of 150. What would you expect the mean IQ of the whole sample to be?

The correct answer is (100*49 + 150*1)/50 = 101. Yet without knowing the correct answer, it is tempting to say it is still 100 – the same as in the country as a whole.
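A quick sketch of the arithmetic (my own illustration, assuming the remaining 49 people are drawn from a population with mean IQ 100) shows that the extreme first observation is not cancelled out; it is simply averaged in:

```python
# Expected mean of the full sample of 50: one known observation of 150
# plus 49 random draws whose expected value is 100 each.
first_observation = 150
expected_rest = 49 * 100
expected_sample_mean = (first_observation + expected_rest) / 50
print(expected_sample_mean)  # 101.0 -- the sample does not "correct" back to 100
```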

According to Kahneman and Tversky, such an expectation could only be justified by the belief that a random process is self-correcting and that sample variation is always proportional. They explain:

Idioms such as “errors cancel each other out” reflect the image of an active self-correcting process. Some familiar processes in nature obey such laws: a deviation from a stable equilibrium produces a force that restores the equilibrium.

Indeed, this may be true in thermodynamics, in chemistry, and arguably also in economics. These, however, are false analogies. Processes governed by chance are not guided by principles of equilibrium, and the random outcomes in a sequence do not balance one another out.

“Chance,” Kahneman writes in Thinking, Fast and Slow, “is commonly viewed as a self-correcting process in which a deviation in one direction induces a deviation in the opposite direction to restore the equilibrium. In fact, deviations are not “corrected” as a chance process unfolds, they are merely diluted.”
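The difference between “corrected” and “diluted” is easy to see in a simulation (my own sketch, not Kahneman's example). Start a fair-coin sequence with ten heads already on the books and keep flipping: the proportion of heads drifts toward 0.5, but the excess of heads over tails is not pushed back toward zero; it is simply swamped by the growing number of tosses.

```python
import random

random.seed(0)

# Begin with an imbalance: 10 heads, 0 tails.
heads, tails = 10, 0

for total in [100, 10_000, 1_000_000]:
    while heads + tails < total:
        if random.random() < 0.5:
            heads += 1
        else:
            tails += 1
    print(f"n={heads + tails:>9}  "
          f"proportion of heads={heads / (heads + tails):.4f}  "
          f"excess heads={heads - tails:+d}")

# The proportion approaches 0.5 (dilution), but the excess follows a random walk
# around its starting value of +10; there is no force driving it back to zero.
```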

 

***

The Law of Small Numbers

Misconceptions of chance are not limited to gambling. In fact, most of us fall for them all the time, because we intuitively believe (and there is a whole best-seller section at the bookstore to prove it) that inferences drawn from small samples are highly representative of the populations from which they are drawn.

By illustrating people's expectations about random heads-and-tails sequences, we have already established that we hold preconceived notions of what randomness looks like. This, coupled with the unfortunate tendency to believe that a random sample is self-correcting, generates expectations about sample characteristics and representativeness that are not necessarily true. The expectation that the patterns and characteristics of a small sample will be representative of the population as a whole is called the law of small numbers.

Consider the sequence:

1, 2, 3, _, _, _

What do you think the next three numbers are?

The task almost seems laughable because the pattern is so familiar and obvious: 4, 5, 6. However, there are endless rules that fit the first three numbers, such as the Fibonacci sequence (5, 8, 13), a repeating sequence (1, 2, 3), a random sequence (5, 8, 2), and many others. The truth is that there is simply not enough information to say, with any reliability, what rule governs this particular sequence.

The same applies to sampling problems: we often feel we have gathered enough data to tell a real pattern from an illusion when we have not. Let me illustrate this fallacy with another example.

Imagine that you face a tough decision between investing in the development of two different product opportunities. Let’s call them Product A and Product B. You are interested in which product would appeal to the majority of the market, so you decide to conduct customer interviews. Out of the first five pilot interviews, four customers show a preference for Product A. While the sample is quite small, given the time pressure involved many of us would already have some confidence in concluding that the majority of customers prefer Product A.

However, a quick statistical test will tell you that the probability of a result at least as extreme as this is in fact 3/8, assuming that customers have no preference at all. In simple terms, even if customers had no preference between Products A and B, you would still expect about 3 samples out of every 8 to show at least four of the five customers favoring the same product.
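Here is how that 3/8 figure arises (a quick sketch of mine using the binomial distribution): under the assumption of no preference, add up the probabilities of every split at least as lopsided as 4-to-1, in either direction, among five interviews.

```python
from math import comb

n, p = 5, 0.5  # five interviews, no true preference

def binom_pmf(k):
    # Probability that exactly k of the n customers prefer Product A.
    return comb(n, k) * p**k * (1 - p)**(n - k)

# "At least as extreme as 4 out of 5" in either direction:
# 4 or 5 prefer A, or 4 or 5 prefer B (i.e. only 0 or 1 prefer A).
p_extreme = sum(binom_pmf(k) for k in (0, 1, 4, 5))
print(p_extreme)  # 0.375 == 3/8
```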

Basically, a study of this size has little to no predictive validity: these results could easily be obtained from a population with no preference for one product over the other. This, of course, does not mean that talking to customers is of no value. Quite the contrary: the more random cases we examine, the more reliably and accurately our results will approximate the true proportion. If we want a high degree of confidence, we must be prepared for a lot more work.

There will always be cases where a guesstimate based on a small sample is enough, because we have other critical information guiding the decision-making process or we simply do not need a high degree of confidence. Yet rather than assuming that the samples we come across are always perfectly representative, we must treat random selection with the suspicion it deserves. Accepting the role that imperfect information and randomness play in our lives, and being actively aware of what we don’t know, already makes us better decision-makers.

Erik Hollnagel: The Search For Causes

A great passage from Erik Hollnagel’s Barriers And Accident Prevention on the search for causes:

Whenever an accident happens there is a natural concern to find out in detail exactly what happened and to determine the causes of it. Indeed, whenever the result of an action or event falls significantly short of what was expected, or whenever something unexpected happens, people try to find an explanation for it. This trait of human nature is so strong that we try to find causes even when they do not exist, such as in the case of misleading or spurious correlations. For a number of reasons humans seem to be extremely reluctant to accept that something can happen by chance. One very good reason is that we have created a way of living that depends heavily on the use of technology, and that technological systems are built to function in a deterministic, hence reliable manner. If therefore something fails, we are fully justified in trying to find the reason for it. A second reason is that our whole understanding of the world is based on the assumption of specific relations between causes and effects, as amply illustrated by the Laws of Physics. (Even in quantum physics there are assumptions of more fundamental relations that are deterministic.) A third reason is that most humans find it very uncomfortable when they do not know what to expect, i.e., when things happen in an unpredictable manner. This creates a sense of being out of control, something that is never desirable since – from an evolutionary perspective – it means that the chances of survival are reduced.

This was described by Friedrich Nietzsche when he wrote:

[T]o trace something unknown back to something known is alleviating, soothing, gratifying and gives moreover a feeling of power. Danger, disquiet, anxiety attend the unknown – the first instinct is to eliminate these distressing states. First principle: any explanation is better than none … The cause-creating drive is thus conditioned and excited by the feeling of fear.

Hollnagel continues:

A well-known example of this is provided by the phenomenon called the gambler's fallacy. The name refers to the fact that gamblers often seem to believe that a long row of events of one type increases the probability of the complementary event. Thus if a series of ‘red' events occur on a roulette wheel, the gambler's fallacy lead people to believe that the probability of ‘black' increases. … Rather than accepting that the underlying mechanism may be random people invent all kinds of explanations to reduce the uncertainty of future events.

Predicting the Improbable

One natural human bias is that we tend to draw strong conclusions from few observations. This bias, misconceptions of chance, shows itself in many ways, including the gambler's fallacy and the hot hand fallacy. Such biases may lead public opinion and the media to call for dramatic swings in policy or regulation in response to highly improbable events. They are made even worse by our natural tendency to “do something.”

***

An event like an earthquake happens, making it more available in our mind.

We think the event is more probable than evidence would support so we run out and buy earthquake insurance. Over many years as the earthquake fades from our mind (making it less available) we believe, paradoxically, that the risk is lower (based on recent evidence) so we cancel our policy. …

Some events are hard to predict, and it becomes even more complicated when you consider not only predicting the event but also its timing. The article below points out that experts base their predictions on inference from observing the past and are just as prone to these biases as the rest of us.

Why do people over-infer from recent events?

There are two plausible but apparently contradictory intuitions about how people over-infer from observing recent events.

The gambler's fallacy claims that people expect rapid reversion to the mean.

For example, upon observing three outcomes of red in roulette, gamblers tend to think that black is now due and tend to bet more on black (Croson and Sundali 2005).

The hot hand fallacy claims that upon observing an unusual streak of events, people tend to predict that the streak will continue. (See Misconceptions of Chance)

The term hot hand fallacy originates from basketball, where players who have scored several times in a row are believed to have a “hot hand”, i.e., to be more likely to score on their next attempt.

Recent behavioural theory has proposed a foundation to reconcile the apparent contradiction between the two types of over-inference. The intuition behind the theory can be explained with reference to the example of roulette play.

A person who believes in the law of small numbers thinks that small samples should look like the parent distribution, i.e., that the sample should be representative of the parent distribution. Thus, the person believes that out of, say, 6 spins, 3 should be red and 3 should be black (ignoring green). If the observed outcomes in the small sample differ from the 50:50 ratio, immediate reversal is expected. Thus, somebody who has seen red on the first 2 of those 6 spins believes that black is “due” on the 3rd spin to restore the 50:50 ratio.

Now suppose such a person is uncertain about the fairness of the roulette wheel. Upon observing an improbable event (say, 6 reds in 6 spins), the person starts to doubt the fairness of the wheel, because a long streak does not correspond to what he believes a random sequence should look like. He then revises his model of the data-generating process and starts to believe that the event on a streak is more likely to continue. The upshot of the theory is that the same person may at first, when the streak is short, believe in reversion of the trend (the gambler’s fallacy) and later, when the streak is long, believe in continuation of the trend (the hot hand fallacy).
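The intuition can be sketched with a toy Bayesian model (my own illustration of the general idea, not the authors' actual specification; the prior and the bias level are made-up values). The observer weighs two hypotheses, a fair wheel and a wheel biased toward red. A short streak of red barely moves the posterior, so the fair-wheel view, and with it the expectation of reversal, dominates; a long streak makes the biased-wheel hypothesis take over, and continuation of the streak starts to look more likely.

```python
# Toy model: observer weighs "fair wheel" against "wheel biased toward red".
p_red_fair, p_red_biased = 0.5, 0.75   # assumed likelihoods (illustrative values)
prior_biased = 0.05                    # assumed small prior suspicion of bias

def posterior_biased(n_reds_in_a_row):
    # Posterior probability the wheel is biased after seeing an all-red streak.
    like_fair = (1 - prior_biased) * p_red_fair ** n_reds_in_a_row
    like_biased = prior_biased * p_red_biased ** n_reds_in_a_row
    return like_biased / (like_fair + like_biased)

for streak in (2, 4, 8, 12):
    post = posterior_biased(streak)
    # Predicted chance the next spin is red, averaging over both hypotheses.
    p_next_red = (1 - post) * p_red_fair + post * p_red_biased
    print(f"streak={streak:>2}  P(biased)={post:.2f}  P(next red)={p_next_red:.2f}")

# Short streaks leave the fair-wheel model dominant (a believer in the law of
# small numbers then expects reversal); long streaks shift weight to the biased
# wheel, so continuation of the streak looks increasingly likely.
```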
