Countering the Inside View and Making Better Decisions

You can reduce the number of mistakes you make by thinking about problems more clearly.

In his book Think Twice: Harnessing the Power of Counterintuition, Michael Mauboussin discusses how we can “fall victim to simplified mental routines that prevent us from coping with the complex realities inherent in important judgment calls.” One of those routines is the inside view, which we’re going to talk about in this article. But first, let’s get a bit of context.

No one wakes up thinking, “I am going to make bad decisions today.” Yet we all make them. What is particularly surprising is that some of the biggest mistakes are made by people who are, by objective standards, very intelligent. Smart people make big, dumb, and consequential mistakes.


Mental flexibility, introspection, and the ability to properly calibrate evidence are at the core of rational thinking and are largely absent on IQ tests. Smart people make poor decisions because they have the same factory settings on their mental software as the rest of us, and that software isn’t designed to cope with many of today’s problems.

We don’t spend enough time thinking and learning from the process. Generally, we’re pretty indifferent to the process by which we make decisions.

… typical decision makers allocate only 25 percent of their time to thinking about the problem properly and learning from experience. Most spend their time gathering information, which feels like progress and appears diligent to superiors. But information without context is falsely empowering.

That reminds me of what Daniel Kahneman wrote in Thinking, Fast and Slow:

A remarkable aspect of your mental life is that you are rarely stumped … The normal state of your mind is that you have intuitive feelings and opinions about almost everything that comes your way. You like or dislike people long before you know much about them; you trust or distrust strangers without knowing why; you feel that an enterprise is bound to succeed without analyzing it.

So we’re not really gathering information so much as looking to confirm our existing intuitions, the very thing a good decision process should help root out.

Ego-Induced Blindness

One prevalent error we make is that we tend to favour the inside view over the outside view.

An inside view considers a problem by focusing on the specific task and by using information that is close at hand, and makes predictions based on that narrow and unique set of inputs. These inputs may include anecdotal evidence and fallacious perceptions. This is the approach that most people use in building models of the future and is indeed common for all forms of planning.


The outside view asks if there are similar situations that can provide a statistical basis for making a decision. Rather than seeing a problem as unique, the outside view wants to know if others have faced comparable problems and, if so, what happened. The outside view is an unnatural way to think, precisely because it forces people to set aside all the cherished information they have gathered.

When the inside view is more positive than the outside view, you’re effectively making an argument against the base rate. You’re saying (knowingly or, more likely, unknowingly) that this time is different. Our brains are all too happy to help us construct this argument.

Mauboussin argues that we embrace the inside view for a few primary reasons. First, we’re optimistic by nature. Second, there’s the “illusion of optimism”: we see our future as brighter than that of others. Finally, there’s the illusion of control: we think that chance events are subject to our control.

One interesting point is that while we’re bad at looking at the outside view when it comes to ourselves, we’re better at it when it comes to other people.

In fact, the planning fallacy embodies a broader principle. When people are forced to look at similar situations and see the frequency of success, they tend to predict more accurately. If you want to know how something is going to turn out for you, look at how it turned out for others in the same situation. Daniel Gilbert, a psychologist at Harvard University, ponders why people don’t rely more on the outside view, “Given the impressive power of this simple technique, we should expect people to go out of their way to use it. But they don’t.” The reason is most people think of themselves as different, and better, than those around them.

So it’s mostly ego. I’m better than the people who tackled this problem before me. We see the differences between situations and use them to rationalize why things will be different this time.

Consider this:

We incorrectly think that differences are more valuable than similarities.

After all, anyone can see what’s the same but it takes true insight to see what’s different, right? We’re all so busy trying to find differences that we forget to pay attention to what is the same.

How to Incorporate the Outside View into Your Decisions

In Think Twice, Mauboussin distills the work of Kahneman and Tversky into four steps and adds some commentary.

1. Select a reference class.

Find a group of situations, or a reference class, that is broad enough to be statistically significant but narrow enough to be useful in analyzing the decision that you face. The task is generally as much art as science, and is certainly trickier for problems that few people have dealt with before. But for decisions that are common—even if they are not common for you—identifying a reference class is straightforward. Mind the details. Take the example of mergers and acquisitions. We know that the shareholders of acquiring companies lose money in most mergers and acquisitions. But a closer look at the data reveals that the market responds more favorably to cash deals and those done at small premiums than to deals financed with stock at large premiums. So companies can improve their chances of making money from an acquisition by knowing what deals tend to succeed.
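
To make the reference-class idea concrete, here’s a minimal sketch in Python. The deal data and field names are hypothetical, purely for illustration:

```python
# A sketch of step 1: narrow a broad class of past acquisitions down to
# the deals most comparable to the one you face. All data is made up.
deals = [
    {"financing": "cash",  "premium": 0.15, "acquirer_return": 0.04},
    {"financing": "stock", "premium": 0.45, "acquirer_return": -0.12},
    {"financing": "cash",  "premium": 0.20, "acquirer_return": 0.02},
    {"financing": "stock", "premium": 0.30, "acquirer_return": -0.05},
    {"financing": "cash",  "premium": 0.50, "acquirer_return": -0.08},
]

# "Mind the details": the market responds better to cash deals done at
# small premiums, so restrict the class to deals structured that way.
reference_class = [
    d for d in deals
    if d["financing"] == "cash" and d["premium"] <= 0.25
]

print(len(reference_class), "comparable deals in the reference class")
```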

2. Assess the distribution of outcomes.

Once you have a reference class, take a close look at the rate of success and failure. … Study the distribution and note the average outcome, the most common outcome, and extreme successes or failures.


Two other issues are worth mentioning. The statistical rate of success and failure must be reasonably stable over time for a reference class to be valid. If the properties of the system change, drawing inference from past data can be misleading. This is an important issue in personal finance, where advisers make asset allocation recommendations for their clients based on historical statistics. Because the statistical properties of markets shift over time, an investor can end up with the wrong mix of assets.

Also keep an eye out for systems where small perturbations can lead to large-scale change. Since cause and effect are difficult to pin down in these systems, drawing on past experiences is more difficult. Businesses driven by hit products, like movies or books, are good examples. Producers and publishers have a notoriously difficult time anticipating results, because success and failure is based largely on social influence, an inherently unpredictable phenomenon.
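
Here’s a rough sketch, again with made-up numbers, of what assessing a distribution might look like, including a simple check that the success rate has been stable over time:

```python
from statistics import mean

# Hypothetical reference-class outcomes, ordered oldest to newest
# (1 = success, 0 = failure).
outcomes = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0]

base_rate = mean(outcomes)
print(f"overall success rate: {base_rate:.0%}")

# Validity check: compare the early and recent halves of the history.
# If the rates diverge sharply, the system may have changed, and past
# data could mislead (think shifting markets or hit-driven businesses).
half = len(outcomes) // 2
print(f"early era: {mean(outcomes[:half]):.0%}")
print(f"recent era: {mean(outcomes[half:]):.0%}")
```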

3. Make a prediction.

With the data from your reference class in hand, including an awareness of the distribution of outcomes, you are in a position to make a forecast. The idea is to estimate your chances of success and failure. For all the reasons that I’ve discussed, the chances are good that your prediction will be too optimistic.

Sometimes when you find the right reference class, you see the success rate is not very high. So to improve your chance of success, you have to do something different than everyone else.

4. Assess the reliability of your prediction and fine-tune.

How good we are at making decisions depends a great deal on what we are trying to predict. Weather forecasters, for instance, do a pretty good job of predicting what the temperature will be tomorrow. Book publishers, on the other hand, are poor at picking winners, with the exception of those books from a handful of best-selling authors. The worse the record of successful prediction is, the more you should adjust your prediction toward the mean (or other relevant statistical measure). When cause and effect is clear, you can have more confidence in your forecast.
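
One way to picture that adjustment is to blend your inside-view estimate with the reference-class base rate, weighted by how predictable the domain is. A minimal sketch follows; the reliability weight is an illustrative stand-in, not a formal statistic:

```python
def adjusted_forecast(inside_view: float, base_rate: float,
                      reliability: float) -> float:
    """Shrink an intuitive forecast toward the reference-class mean.

    reliability is a 0-to-1 weight: near 1 when cause and effect are
    clear (trust the forecast), near 0 when outcomes are mostly chance
    (fall back on the base rate).
    """
    return base_rate + reliability * (inside_view - base_rate)

# A weather forecaster works in a predictable domain...
print(adjusted_forecast(inside_view=0.8, base_rate=0.3, reliability=0.9))  # 0.75
# ...a book publisher does not.
print(adjusted_forecast(inside_view=0.8, base_rate=0.3, reliability=0.1))  # 0.35
```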


The main lesson we can take from this is that we tend to focus on what’s different, whereas the best decisions often focus on just the opposite: what’s the same. While a situation may seem a little different, it’s almost always the same.

As Charlie Munger has said: “if you notice, the plots are very similar. The same plot comes back time after time.”

Particulars may vary but, unless those particulars are the variables that govern the outcome of the situation, the pattern remains. If we’re going to focus on what’s different rather than what’s the same, we’d best be sure the variables we’re clinging to actually matter.

Fooled By Randomness

I don’t want you to make the same mistake I did.

I waited too long before reading Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets by Nassim Taleb. He wrote it before The Black Swan and Antifragile, the books that propelled him into intellectual celebrity. Interestingly, Fooled by Randomness contains semi-explored gems of the ideas that would later grow into those best-sellers.

Hindsight Bias

Part of the argument Fooled by Randomness presents is that when we look back at things that have happened, we see them as less random than they actually were.

It is as if there were two planets: the one in which we actually live and the one, considerably more deterministic, on which people are convinced we live. It is as simple as that: Past events will always look less random than they were (it is called the hindsight bias). I would listen to someone’s discussion of his own past realizing that much of what he was saying was just backfit explanations concocted ex post by his deluded mind.

The Courage of Montaigne

Writing on Montaigne as the role model for the modern thinker, Taleb also addresses his courage:

It certainly takes bravery to remain skeptical; it takes inordinate courage to introspect, to confront oneself, to accept one’s limitations—scientists are seeing more and more evidence that we are specifically designed by mother nature to fool ourselves.


Fooled by Randomness is about probability, not in a mathematical way but as skepticism.

In this book probability is principally a branch of applied skepticism, not an engineering discipline. …

Probability is not a mere computation of odds on the dice or more complicated variants; it is the acceptance of the lack of certainty in our knowledge and the development of methods for dealing with our ignorance. Outside of textbooks and casinos, probability almost never presents itself as a mathematical problem or a brain teaser. Mother nature does not tell you how many holes there are on the roulette table, nor does she deliver problems in a textbook way (in the real world one has to guess the problem more than the solution).

“Outside of textbooks and casinos, probability almost never presents itself as a mathematical problem,” which is fascinating given how we tend to solve problems. In decisions under uncertainty, I discussed how risk and uncertainty are different things, which creates two types of ignorance.

Most decisions are not risk-based; they are uncertainty-based, and you either know you are ignorant or you have no idea you are ignorant. There is a big distinction between the two. Trust me, you’d rather know you are ignorant.

Randomness Disguised as Non-Randomness

The core of the book is about luck that we understand as skill or “randomness disguised as non-randomness (that is determinism).”

This problem manifests itself most frequently in the lucky fool, “defined as a person who benefited from a disproportionate share of luck but attributes his success to some other, generally very precise, reason.”

Such confusion crops up in the most unexpected areas, even science, though not in such an accentuated and obvious manner as it does in the world of business. It is endemic in politics, as it can be encountered in the shape of a country’s president discoursing on the jobs that “he” created, “his” recovery, and “his predecessor’s” inflation.

These lucky fools are often fragilistas — they have no idea they are lucky fools. For example:

[W]e often have the mistaken impression that a strategy is an excellent strategy, or an entrepreneur a person endowed with “vision,” or a trader a talented trader, only to realize that 99.9% of their past performance is attributable to chance, and chance alone. Ask a profitable investor to explain the reasons for his success; he will offer some deep and convincing interpretation of the results. Frequently, these delusions are intentional and deserve to bear the name “charlatanism.”

This does not mean that all success is luck or randomness. There is a difference between “it is more random than we think” and “it is all random.”

Let me make it clear here: Of course chance favors the prepared! Hard work, showing up on time, wearing a clean (preferably white) shirt, using deodorant, and some such conventional things contribute to success—they are certainly necessary but may be insufficient as they do not cause success. The same applies to the conventional values of persistence, doggedness and perseverance: necessary, very necessary. One needs to go out and buy a lottery ticket in order to win. Does it mean that the work involved in the trip to the store caused the winning? Of course skills count, but they do count less in highly random environments than they do in dentistry.

No, I am not saying that what your grandmother told you about the value of work ethics is wrong! Furthermore, as most successes are caused by very few “windows of opportunity,” failing to grab one can be deadly for one’s career. Take your luck!

That last paragraph connects to something Charlie Munger once said: “Really good investment opportunities aren’t going to come along too often and won’t last too long, so you’ve got to be ready to act. Have a prepared mind.”

Taleb thinks of success in terms of degrees: mild success might be explained by skill and labour, but outrageous success “is attributable to variance.”

Luck Makes You Fragile

One thing Taleb hits on that really stuck with me is that “that which came with the help of luck could be taken away by luck (and often rapidly and unexpectedly at that). The flipside, which deserves to be considered as well (in fact it is even more of our concern), is that things that come with little help from luck are more resistant to randomness.” How Antifragile.

Taleb argues this is the problem of induction: “it does not matter how frequently something succeeds if failure is too costly to bear.”

Noise and Signal

We confuse noise with signal.

…the literary mind can be intentionally prone to the confusion between noise and meaning, that is, between a randomly constructed arrangement and a precisely intended message. However, this causes little harm; few claim that art is a tool of investigation of the Truth—rather than an attempt to escape it or make it more palatable. Symbolism is the child of our inability and unwillingness to accept randomness; we give meaning to all manner of shapes; we detect human figures in inkblots.

All my life I have suffered the conflict between my love of literature and poetry and my profound allergy to most teachers of literature and “critics.” The French thinker and poet Paul Valery was surprised to listen to a commentary of his poems that found meanings that had until then escaped him (of course, it was pointed out to him that these were intended by his subconscious).

If we’re concerned about situations where randomness is confused with non-randomness, should we also be concerned with situations where non-randomness is mistaken for randomness, which would result in signal being ignored?

First, I am not overly worried about the existence of undetected patterns. We have been reading lengthy and complex messages in just about any manifestation of nature that presents jaggedness (such as the palm of a hand, the residues at the bottom of Turkish coffee cups, etc.). Armed with home supercomputers and chained processors, and helped by complexity and “chaos” theories, the scientists, semiscientists, and pseudoscientists will be able to find portents. Second, we need to take into account the costs of mistakes; in my opinion, mistaking the right column for the left one is not as costly as an error in the opposite direction. Even popular opinion warns that bad information is worse than no information at all.

If you haven’t yet, pick up a copy of Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets. Don’t make the same mistake I did and wait to read this important book.

Just-in-Time Information Gathering

We’re becoming more like factories.

Just-in-time is a production strategy aimed at, among other things, reducing the need for excess inventory. Parts are supplied only when needed in the amount needed. While it makes a business more capital efficient, it also makes it more fragile.

We’ve adopted a similar strategy for information gathering. We’re so consumed by noise and busy work that the only time we really seek out signal is when we need it the most: right before we make a decision.

This creates a host of problems.

The worst time to look for information is when we need it to make a decision. When we do that we’re more likely to see what’s unique and miss the historical context. We’re also more likely to be biased by what is available. And searching for information at the time of need is an indication that you have no idea what you’re doing.

“Pattern recognition,” says Alice Schroeder, “creates an impulse always to connect new knowledge to old and to primarily be interested in new knowledge that genuinely builds on the old.” It helps knowledge snowball.

If we can’t connect the current situation to something we already understand, we might reason that it is not in our circle of competence and thus we shouldn’t be drawing conclusions. If we can, however, connect it to something we previously understood, then we’re less likely to draw conclusions on the basis of “this time is different.”

There is merit to thinking about information gathering as a logistics problem.

This Time is Different

When we look at situations we’re always looking for what’s unique. We should, however, give more thought to similarities.

“This time is different” could be the four most costly words ever spoken. It’s not the words themselves that are costly so much as the conclusions they encourage us to draw.

We incorrectly think that differences are more valuable than similarities.

After all, anyone can see what’s the same but it takes true insight to see what’s different, right? We’re all so busy trying to find differences that we forget to pay attention to what is the same.

Imagine sitting in a meeting where people are about to make the same mistake they made last year on the same decision. Let’s say, for example, Jack has a history of messing up the annual tax returns. He’s a good guy. He’s nice to everyone. In fact, he buys everyone awesome Christmas presents. But the last three years—the period he’s been in charge of tax returns—have been nothing short of a disaster, causing more work for you and your department.

The assignment for the tax return comes up and Jack is once again nominated.

Before you have a chance to voice your concerns, one of your co-workers speaks up: “I know Jack has dropped the ball on this assignment in the past but I think this time is different. He’s been working hard to make sure he’s better organized.”

That’s all it takes. Conversation over — everyone is focused on what’s unique about this time and it’s unlikely, despite ample evidence, that you’ll be able to convince them otherwise.

In part, people want to believe in Jack because he’s a nice guy. In part, we’re all focused on why this time is different and we’ll ignore evidence to the contrary.

Focusing on what’s different makes it easy to forget historical context. We lose touch with the base rate. We only see the evidence that supports our view (confirmation bias). We become optimistic and overconfident.

History provides context. And what history shows us is that no matter how unique things are today there are a lot of similarities with the past.

Consider investors and the dotcom bubble. Collectively, people saw it as unprecedented and unique, a massive transformation without parallel.

That reasoning, combined with a blindness to what was the same about this situation and previous ones, encouraged us to draw conclusions that proved costly. We reasoned that everything would change and everyone who owned internet companies would prosper and the old non-internet companies would quickly go into bankruptcy.

All of a sudden profits didn’t matter. Nor did revenue. They would come in time, we hoped. Market share mattered, no matter how costly it was to acquire.

More than that, if you didn’t buy now you’d miss out. These companies would take over the world and you’d be left out.

We got so caught up in what was different that we forgot to see what was the same.

And there were historical parallels: automobiles, radio, television, and airplanes, to name a few. At the time, these innovations completely transformed the world as well. You can consider them the dotcoms of yesteryear.

And how did these massively transformational industries end up for investors?

At one time there were allegedly over 70 different auto manufacturing operations in the United States. Only three of them survived (and even some of those required government funds).

If you catch yourself reasoning based on “this time is different” remember that you are probably speculating. While you may be right, odds are, this time is not different. You just haven’t looked for the similarities.

Avoiding Ignorance

This is a continuation of two types of ignorance.

You can’t deal with ignorance if you can’t recognize its presence. If you’re suffering from primary ignorance it means you probably failed to consider the possibility of being ignorant or you found ways not to see that you were ignorant.

You’re ignorant and unaware, which is worse than being ignorant and aware.

The best way to avoid this, suggest Roy and Zeckhauser, is to raise self-awareness.

Ask yourself regularly: “Might I be in a state of consequential ignorance here?”

They continue:

If the answer is yes, the next step should be to estimate base rates. That should also be the next step if the starting point is recognized ignorance.

Of all situations such as this, how often has a particular outcome happened? Of course, this is often totally subjective.

and its underpinnings are elusive. It is hard to know what the sample of relevant past experiences has been, how to draw inferences from the experience of others, etc. Nevertheless, it is far better to proceed to an answer, however tenuous, than to simply miss (primary ignorance) or slight (recognized ignorance) the issue. Unfortunately, the assessment of base rates is challenging and substantial biases are likely to enter.

When we don’t recognize ignorance the base rate is extremely underestimated. When we do recognize ignorance, we face “duelling biases; some will lead to underestimates of base rates and others to overestimates.”

Three biases come into play while estimating base rates: overconfidence, salience, and selection biases.

So we are overconfident in our estimates. We estimate things that are salient – that is, “states with which (we) have some experience or that are otherwise easily brought to mind.” And “there is a strong selection bias to recall or retell events that were surprising or of great consequence.”

Our key lesson is that as individuals proceed through life, they should always be on the lookout for ignorance. When they do recognize it, they should try to assess how likely they are to be surprised—in other words, attempt to compute the base rate. In discussing this assessment, we might also employ the term “catchall” from statistics, to cover the outcomes not specifically addressed.
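
As a rough illustration, here’s a sketch of computing base rates over comparable past situations, with a “catchall” bucket for outcomes that weren’t specifically addressed. The outcome data is hypothetical:

```python
from collections import Counter

# Hypothetical outcomes observed across comparable past situations.
past_outcomes = [
    "succeeded", "failed", "succeeded", "abandoned",
    "failed", "succeeded", "failed", "failed",
]

# States we specifically considered; everything else is a surprise.
named_states = {"succeeded", "failed"}
counts = Counter(o if o in named_states else "catchall" for o in past_outcomes)

total = sum(counts.values())
for state, n in counts.items():
    print(f"{state}: {n / total:.0%}")
# The catchall share is a crude estimate of how often this kind of
# situation produced an outcome we never contemplated.
```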

It’s incredibly interesting to view literature through the lens of human decision making.

Crime and Punishment is particularly interesting as a study of primary ignorance. Raskolnikov deploys his impressive intelligence to plan the murder, believing, in his ignorance, that he has left nothing to chance. In a series of descriptions not for the squeamish or the faint-hearted, the murderer’s thoughts are laid bare as he plans the deed. We read about his skills in strategic inference and his powers of prediction about where and how he will corner his victim; his tactics at developing complementary skills (what is the precise manner in which he will carry the axe? what strategies will help him avoid detection?) are revealed.

But since Raskolnikov is making decisions under primary ignorance, his determined rationality is tightly “bounded.” He “construct[s] a simplified model of the real situation in order to deal with it; … behaves rationally with respect to this model, [but] such behavior is not even approximately optimal with respect to the real world” (Simon 1957). The second-guessing, fear, and delirium at the heart of Raskolnikov’s thinking as he struggles to gain a foothold in his inner world show the impact of a cascade of Consequential Amazing Developments (CADs), none predicted, none even contemplated. Raskolnikov anticipated an outcome in which he would dispatch the pawnbroker and slip quietly out of her apartment. He could not possibly have predicted that her sister would show up, a characteristic CAD that challenges what Taleb (2012) calls our “illusion of predictability.”

Roy and Zeckhauser argue we can draw two conclusions.

First, we tend to downplay the role of unanticipated events, preferring instead to expect simple causal relationships and linear developments. Second, when we do encounter a CAD, we often counter with knee-jerk, impulsive decisions, the equivalent of Raskolnikov committing a second impetuous murder.

References: Ignorance: Lessons from the Laboratory of Literature (Roy and Zeckhauser).

A Decision-Making Magic Trick

Two important nuggets from an interview with Chip Heath, co-author of Decisive, on improving our ability to make better decisions:

A decision-making magic trick

The closest thing to a decision-making magic trick that I’ve found is the question, “What would you advise your best friend to do if they were in your situation?” So often when I ask that question, people blurt out an answer and their eyes get wide. They’re shocked at how easy it is when you just imagine you’re advising someone else.

This time isn’t so different

businesses make decisions about mergers and acquisitions that are hundreds of millions of dollars, and to the senior leader it seems like, “Well this is a different situation than the last acquisition we made.” And yet in that room making the decision is a set of people who have probably seen a dozen acquisitions, but they don’t take the time to do even the equivalent of the three-out-of-five-stars rating that we would get from Amazon.com.

The kind of decisions that senior people make always present themselves as though they are completely different than anything else. Big decisions are subtle in a way because they all seem to come one at a time. The advantage of smaller decisions is we realize we are in a repeated situation where we’re going to see the same thing a lot.

Yet lots of big decisions end up having that property as well. If we take the time to move our mental spotlight around, we can always find other decisions that are similar to this one. We think the chief financial officer we’re trying to hire is in a unique position in the company, and that this is a unique position in time with unique demands. But the fact is, we’ve made other senior hires and we know how that process goes. Stepping back and taking in the broader context is just as useful for senior leaders as it is for the frontline worker who’s making a decision on the 35th mortgage application or the 75th customer complaint.