Farnam Street helps you make better decisions, innovate, and avoid stupidity.
With over 350,000 monthly readers and more than 87,000 subscribers to our popular weekly digest, we've become an online intellectual hub.
Four principles stand out in Gregory Mankiw’s multidisciplinary economics textbook, Principles of Economics.
I got the idea for reading an Economics textbook from Charlie Munger, the billionaire business partner of Warren Buffett. He said:
Economics was always more multidisciplinary than the rest of soft science. It just reached out and grabbed things as it needed to. And that tendency to just grab whatever you need from the rest of knowledge if you’re an economist has reached a fairly high point in Mankiw’s new textbook Principles of Economics. I checked out that textbook. I must have been one of the few businessmen in America that bought it immediately when it came out because it had gotten such a big advance. I wanted to figure out what the guy was doing where he could get an advance that great. So this is how I happened to riffle through Mankiw’s freshman textbook. And there I found laid out as principles of economics: opportunity cost is a superpower, to be used by all people who have any hope of getting the right answer. Also, incentives are superpowers.
So we can add opportunity cost and incentives to our list of Mental Models.
Let’s dig in.
You have likely heard the old saying, “There is no such thing as a free lunch.” There is much to this old adage and it’s one we often forget when making decisions. To get more of something we like we almost always have to give up something else we like. A good heuristic in life is that if someone offers you something for nothing, turn it down.
Making decisions requires trading off one goal against another.
Consider a student who must decide how to allocate her most valuable resource—her time. She can spend all of her time studying economics, spend all of it studying psychology, or divide it between the two fields. For every hour she studies one subject, she gives up an hour she could have used studying the other. And for every hour she spends studying, she gives up an hour that she could have spent napping, bike riding, watching TV, or working at her part-time job for some extra spending money.
Or consider parents deciding how to spend their family income. They can buy food, clothing, or a family vacation. Or they can save some of the family income for retirement or for children’s college education. When they choose to spend an extra dollar on one of these goods, they have one less dollar to spend on some other good.
These are rather simple examples but Mankiw offers some more complicated ones. Consider the trade-off that society faces between efficiency and equality.
Efficiency means that society is getting the maximum benefits from its scarce resources. Equality means that those benefits are distributed uniformly among society’s members. In other words, efficiency refers to the size of the economic pie, and equality refers to how the pie is divided into individual slices.
When government policies are designed, these two goals often conflict. Consider, for instance, policies aimed at equalizing the distribution of economic well-being. Some of these policies, such as the welfare system or unemployment insurance, try to help the members of society who are most in need. Others, such as the individual income tax, ask the financially successful to contribute more than others to support the government. Though they achieve greater equality, these policies reduce efficiency. When the government redistributes income from the rich to the poor, it reduces the reward for working hard; as a result, people work less and produce fewer goods and services. In other words, when the government tries to cut the economic pie into more equal slices, the pie gets smaller.
Because of trade-offs, people face decisions between the costs and benefits of one course of action and those of another. But costs are not as obvious as they might first appear — we need to apply some second-level thinking:
Consider the decision to go to college. The main benefits are intellectual enrichment and a lifetime of better job opportunities. But what are the costs? To answer this question, you might be tempted to add up the money you spend on tuition, books, room, and board. Yet this total does not truly represent what you give up to spend a year in college.
There are two problems with this calculation. First, it includes some things that are not really costs of going to college. Even if you quit school, you need a place to sleep and food to eat. Room and board are costs of going to college only to the extent that they are more expensive at college than elsewhere. Second, this calculation ignores the largest cost of going to college—your time. When you spend a year listening to lectures, reading textbooks, and writing papers, you cannot spend that time working at a job. For most students, the earnings they give up to attend school are the single largest cost of their education.
The opportunity cost of an item is what you give up to get that item. When making any decision, decision makers should be aware of the opportunity costs that accompany each possible action. In fact, they usually are. College athletes who can earn millions if they drop out of school and play professional sports are well aware that the opportunity cost of their attending college is very high. It is not surprising that they often decide that the benefit of a college education is not worth the cost.
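The college example lends itself to simple arithmetic. Here is a minimal sketch in Python; the dollar figures are assumptions for illustration, not from the text. The key point it encodes: only the *extra* cost of room and board counts, and forgone earnings are often the largest item.

```python
# Hypothetical figures for one year of college (assumed for illustration).
tuition_and_books = 15_000         # paid only if you attend
room_and_board_college = 12_000    # living costs while at college
room_and_board_elsewhere = 10_000  # you'd need food and shelter anyway

forgone_earnings = 35_000          # salary given up by studying instead of working

# Only the extra living cost is a true cost of college;
# the forgone earnings are usually the biggest component.
opportunity_cost = (
    tuition_and_books
    + (room_and_board_college - room_and_board_elsewhere)
    + forgone_earnings
)
print(opportunity_cost)  # 52000
```

Note that the naive “add up everything you spend” total (15,000 + 12,000 = 27,000) both overstates the living costs and misses the largest item entirely.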
For the sake of simplicity, economists normally assume that people are rational. While this causes many problems, there is an undercurrent of truth to it: people systematically and purposefully “do the best they can to achieve their objectives, given opportunities.” There are two parts to rationality. First, your understanding of the world is correct. Second, you maximize the use of your resources toward your goals.
Rational people know that decisions in life are rarely black and white but usually involve shades of gray. At dinnertime, the question you face is not “Should I fast or eat like a pig?” More likely, you will be asking yourself “Should I take that extra spoonful of mashed potatoes?” When exams roll around, your decision is not between blowing them off and studying twenty-four hours a day but whether to spend an extra hour reviewing your notes instead of watching TV. Economists use the term marginal change to describe a small incremental adjustment to an existing plan of action. Keep in mind that margin means “edge,” so marginal changes are adjustments around the edges of what you are doing. Rational people often make decisions by comparing marginal benefits and marginal costs.
Thinking at the margin works for business decisions.
Consider an airline deciding how much to charge passengers who fly standby. Suppose that flying a 200-seat plane across the United States costs the airline $100,000. In this case, the average cost of each seat is $100,000/200, which is $500. One might be tempted to conclude that the airline should never sell a ticket for less than $500. But a rational airline can increase its profits by thinking at the margin. Imagine that a plane is about to take off with 10 empty seats and a standby passenger waiting at the gate is willing to pay $300 for a seat. Should the airline sell the ticket? Of course, it should. If the plane has empty seats, the cost of adding one more passenger is tiny. The average cost of flying a passenger is $500, but the marginal cost is merely the cost of the bag of peanuts and can of soda that the extra passenger will consume. As long as the standby passenger pays more than the marginal cost, selling the ticket is profitable.
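The airline’s reasoning reduces to comparing the standby offer against marginal cost, not average cost. A quick sketch of that rule (the $15 marginal cost is an assumption; the text only says it is tiny):

```python
flight_cost = 100_000  # fixed cost of flying the 200-seat plane, from the example
seats = 200
average_cost = flight_cost / seats  # $500 per seat

marginal_cost = 15  # assumed: snacks, soda, a little extra fuel

def sell_standby_seat(offer):
    """Rational rule: sell whenever the offer exceeds the marginal cost."""
    return offer > marginal_cost

print(average_cost)            # 500.0
print(sell_standby_seat(300))  # True: $300 beats the marginal cost,
                               # even though it is below the $500 average
```

The design point is that `average_cost` never appears in the decision rule at all; once the plane is flying, the $100,000 is sunk and only the marginal cost matters.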
This also helps answer the question of why diamonds are so expensive and water is so cheap.
Humans need water to survive, while diamonds are unnecessary; but for some reason, people are willing to pay much more for a diamond than for a cup of water. The reason is that a person’s willingness to pay for a good is based on the marginal benefit that an extra unit of the good would yield. The marginal benefit, in turn, depends on how many units a person already has. Water is essential, but the marginal benefit of an extra cup is small because water is plentiful. By contrast, no one needs diamonds to survive, but because diamonds are so rare, people consider the marginal benefit of an extra diamond to be large.
A rational decision maker takes an action if and only if the marginal benefit of the action exceeds the marginal cost.
Incentives induce people to act. If you use a rational approach to decision making that involves trade-offs and comparing costs and benefits, you respond to incentives. Charlie Munger once said: “Never, ever, think about something else when you should be thinking about the power of incentives.”
Incentives are crucial to analyzing how markets work. For example, when the price of an apple rises, people decide to eat fewer apples. At the same time, apple orchards decide to hire more workers and harvest more apples. In other words, a higher price in a market provides an incentive for buyers to consume less and an incentive for sellers to produce more. As we will see, the influence of prices on the behavior of consumers and producers is crucial for how a market economy allocates scarce resources.
Public policymakers should never forget about incentives: many policies change the costs or benefits that people face and, as a result, alter their behavior. A tax on gasoline, for instance, encourages people to drive smaller, more fuel-efficient cars. That is one reason people in Europe, where gasoline taxes are high, drive smaller cars than people in the United States, where gasoline taxes are low. A higher gasoline tax also encourages people to carpool, take public transportation, and live closer to where they work. If the tax were larger, more people would be driving hybrid cars, and if it were large enough, they would switch to electric cars.
Failing to consider how policies and decisions affect incentives often leads to unintended consequences.
You would be hard pressed to come across a reading list on behavioral economics that doesn’t mention Nudge: Improving Decisions About Health, Wealth, and Happiness by Richard Thaler and Cass Sunstein.
It is a fascinating look at how we can create environments or ‘choice architecture’ to help people make better decisions. But one of the reasons it’s been so influential is because it helps us understand why people sometimes make bad decisions in the first place. If we really want to understand how we can nudge people into making better choices, it’s important to understand why they often make such poor ones.
Let’s take a look at how Thaler and Sunstein explain some of our common mistakes in a chapter aptly called ‘Biases and Blunders.’
Humans have a tendency to put too much weight on one piece of information when making decisions. When we overweight a single piece of information and build assumptions on it, we call it an anchor. Say I borrow a 400-page book from a friend and think to myself: the last book I read was about 300 pages and took me 5 days, so I’ll tell her I’ll have her book back in 7 days. The problem is that I’ve compared only one factor and made a decision without accounting for the many others that could affect the outcome. Is the new book on a topic I’ll digest at the same rate? Will I have the same time for reading over those 7 days? I’ve looked at the number of pages, but is the number of words per page similar?
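The book-lending estimate can be made explicit. Here is a sketch of the anchor-and-adjust arithmetic; the two correction factors are made-up multipliers, purely for illustration:

```python
# The anchor: my one known data point about my reading speed.
last_book_pages = 300
last_book_days = 5
pages_per_day = last_book_pages / last_book_days  # 60 pages/day

new_book_pages = 400
anchored_estimate = new_book_pages / pages_per_day  # ~6.7 days, so "7 days"

# Factors the anchor ignores (hypothetical multipliers for illustration):
density_factor = 1.3    # denser topic, read more slowly
busy_week_factor = 1.5  # less free time this week
adjusted_estimate = anchored_estimate * density_factor * busy_week_factor

print(round(anchored_estimate, 1))  # 6.7
print(round(adjusted_estimate, 1))  # 13.0 -- nearly double the anchored guess
```

The gap between the two numbers is the bias Thaler and Sunstein describe: the anchor supplies a starting point, and the adjustments away from it are typically insufficient.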
As Thaler and Sunstein explain:
This process is called ‘anchoring and adjustment.’ You start with some anchor, the number you know, and adjust in the direction you think is appropriate. So far, so good. The bias occurs because the adjustments are typically insufficient.
This is the tendency of our minds to overweight information that is recent and readily available. What did you think about the last time you read about a plane crash? Did you start imagining yourself in a plane crash? Imagine how much it would weigh on your mind if you were set to fly the next day.
We assess the likelihood of risks by asking how readily examples come to mind. If people can easily think of relevant examples, they are far more likely to be frightened and concerned than if they cannot.
Accessibility and salience are closely related to availability, and they are important as well. If you have personally experienced a serious earthquake, you’re more likely to believe that an earthquake is likely than if you read about it in a weekly magazine. Thus, vivid and easily imagined causes of death (for example, tornadoes) often receive inflated estimates of probability, and less-vivid causes (for example, asthma attacks) receive low estimates, even if they occur with a far greater frequency (here, by a factor of twenty). Timing counts too: more recent events have a greater impact on our behavior, and on our fears, than earlier ones.
Use of the representativeness heuristic can cause serious misperceptions of patterns in everyday life. When events are determined by chance, such as a sequence of coin tosses, people expect the resulting string of heads and tails to be representative of what they think of as random. Unfortunately, people do not have accurate perceptions of what random sequences look like. When they see the outcomes of random processes, they often detect patterns that they think have great meaning but in fact are just due to chance.
It would seem we have issues with randomness. Our brains automatically want to see patterns where none may exist. Try a coin-toss experiment on yourself. Simply flip a coin and keep track of whether it lands heads or tails. At some point you will hit a streak of either heads or tails, and you will notice a sort of cognitive dissonance: you know a streak is statistically probable at some point, but you can’t help thinking the next toss has to break it, because somehow in your head the streak isn’t right. That unwillingness to accept randomness, our need for a pattern, often clouds our judgement when making decisions.
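You can see how common streaks are without a physical coin. A short simulation, assuming a fair coin (in 100 fair flips the longest run is typically around six or seven):

```python
import random

def longest_streak(flips):
    """Length of the longest run of identical outcomes in a sequence."""
    best = run = 0
    prev = None
    for cur in flips:
        run = run + 1 if cur == prev else 1
        best = max(best, run)
        prev = cur
    return best

random.seed(0)  # any seed; long streaks show up regardless
flips = [random.choice("HT") for _ in range(100)]
print("".join(flips[:20]))    # a peek at the sequence
print(longest_streak(flips))  # usually 5 or more
```

Run it a few times with different seeds: a run of five or more identical outcomes almost always appears, even though each individual toss is 50/50.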
We have touched upon optimism bias in the past. Optimism truly is a double-edged sword. On one hand it is extremely important to be able to look past a bad moment and tell yourself that it will get better. Optimism is one of the great drivers of human progress.
On the other hand, if you never take those rose-coloured glasses off, you will make mistakes and take risks that could have been avoided. When assessing the possible negative outcomes associated with risky behaviour we often think ‘it won’t happen to me.’ This is a brain trick: We are often insensitive to the base rate.
Unrealistic optimism is a pervasive feature of human life; it characterizes most people in most social categories. When they overestimate their personal immunity from harm, people may fail to take sensible preventive steps. If people are running risks because of unrealistic optimism, they might be able to benefit from a nudge.
This is loss aversion: when people have to give something up, they are hurt more than they are pleased if they acquire the very same thing.
We are familiar with loss aversion in the context described above but Thaler and Sunstein take the concept a step further and explain how it plays a role in ‘default choices.’ Loss aversion can make us so fearful of making the wrong decision that we don’t make any decision. This explains why so many people settle for default options.
The combination of loss aversion with mindless choosing implies that if an option is designated as the ‘default,’ it will attract a large market share. Default options thus act as powerful nudges. In many contexts defaults have some extra nudging power because consumers may feel, rightly or wrongly, that default options come with an implicit endorsement from the default setter, be it the employer, government, or TV scheduler.
Of course, this is not the only reason default options are so popular. “Anchoring,” which we mentioned above, plays a role here. Our mind anchors immediately to the default option, especially in unfamiliar territory for us.
We also have a tendency towards inertia, given that mental effort is akin to physical effort – thinking hard consumes physical resources. If we don’t know the difference between two 401(k) plans and they both seem similar, why expend the mental effort to switch away from the default investment option? You may not have that thought consciously; it often happens as a “click, whirr.”
Our preferred definition requires recognizing that people’s state of arousal varies over time. To simplify things we will consider just the two endpoints: hot and cold. When Sally is very hungry and appetizing aromas are emanating from the kitchen, we can say she is in a hot state. When Sally is thinking abstractly on Tuesday about the right number of cashews she should consume before dinner on Saturday, she is in a cold state. We will call something ‘tempting’ if we consume more of it when hot than when cold. None of this means that decisions made in a cold state are always better. For example, sometimes we have to be in a hot state to overcome our fears about trying new things. Sometimes dessert really is delicious, and we do best to go for it. Sometimes it is best to fall in love. But it is clear that when we are in a hot state, we can often get into a lot of trouble.
For most of us, however, self-control issues arise because we underestimate the effect of arousal. This is something the behavioral economist George Loewenstein (1996) calls the ‘hot-cold empathy gap.’ When in a cold state, we do not appreciate how much our desires and our behavior reflect a certain naivete about the effects that context can have on choice.
The concept of arousal is analogous to mood. At the risk of stating the obvious, our mood can play a definitive role in our decision making. We all know it, but how many among us truly use that insight to make better decisions?
This is one reason we advocate decision journals when it comes to meaningful decisions (probably no need to log in your cashew calculations); a big part of tracking your decisions is your mood when you make them. A zillion contextual clues go into your state of arousal, but taking a quick pause to note which state you’re in as you make a decision can make a difference over time.
Mood is also affected by chemicals. This one may be familiar to you coffee (or tea) addicts out there. Do you recall the last time you felt terrible or uncertain about a decision when you were tired, only to feel confident and spunky about the same topic after a cup of java?
Or, how about alcohol? There’s a reason it’s called a “social lubricant” – our decision making changes when we’ve consumed enough of it.
Lastly, the connection between sleep and mood goes deep. Need we say more?
Peer pressure is another tricky nudge, one that can be either positive or negative. We can be nudged to make better decisions when we think our peer group is doing the same. If we think our neighbors conserve more energy or recycle more, we start making a better effort to reduce our consumption and recycle. If we think the people around us are eating better and exercising more, we tend to do the same. Information we get from peer groups can also help us make better decisions because of ‘collaborative filtering’: the choices of our peer groups help us filter out and narrow down our own choices. If your friends who share similar views and tastes recommend book X, then you may like it as well. (Google, Amazon, and Netflix are built on this principle.)
However, if we are all reading the same book because we constantly see people with it, but none of us actually like it, then we all lose. We run off the mountain with the other lemmings.
Social influences come in two basic categories. The first involves information. If many people do something or think something, their actions and their thoughts convey information about what might be best for you to do or think. The second involves peer pressure. If you care about what other people think about you (perhaps in the mistaken belief that they are paying some attention to what you are doing), then you might go along with the crowd to avoid their wrath or curry their favor.
An important problem here is ‘pluralistic ignorance’ – that is, ignorance, on the part of all or most, about what other people think. We may follow a practice or a tradition not because we like it, or even think it defensible, but merely because we think that most other people like it. Many social practices persist for this reason, and a small shock, or nudge, can dislodge them.
How do we beat social influence? It’s very difficult, and not always desirable: if a lot of people are running away from a building you’re about to enter, there’s a good chance you should run too. But this useful instinct can also lead us astray.
A simple algorithm, when you feel yourself acting on social proof, is to ask yourself: Would I still do this if everyone else were not?
For more, check out Nudge.
John Pollack is a former presidential speechwriter. If anyone knows the power of words to move people to action, shape arguments, and persuade, it is he.
In Shortcut: How Analogies Reveal Connections, Spark Innovation, and Sell Our Greatest Ideas, he explores the powerful role analogy plays in persuasion, innovation, and creativity.
While they often operate unnoticed, analogies aren’t accidents, they’re arguments—arguments that, like icebergs, conceal most of their mass and power beneath the surface. And in arguments, whoever makes the best analogy often wins.
But analogies do more than just persuade others — they also play a role in innovation and decision making.
From the bloody Chicago slaughterhouse that inspired Henry Ford’s first moving assembly line, to the “domino theory” that led America into the Vietnam War, to the “bicycle for the mind” that Steve Jobs envisioned as a Macintosh computer, analogies have played a dynamic role in shaping the world around us.
Despite their importance, many people have only a vague sense of the definition.
In broad terms, an analogy is simply a comparison that asserts a parallel—explicit or implicit—between two distinct things, based on the perception of a shared property or relation. In everyday use, analogies appear in many forms. Some of these include metaphors, similes, political slogans, legal arguments, marketing taglines, mathematical formulas, biblical parables, logos, TV ads, euphemisms, proverbs, fables, and sports clichés.
Because they are so disguised, they play a bigger role than we consciously realize. Not only do analogies make effective arguments, they also trigger emotions. And emotions make it hard to make rational decisions.
While we take analogies for granted, the ideas they convey are notably complex.
All day every day, in fact, we make or evaluate one analogy after another, because such comparisons are the only practical way to sort a flood of incoming data, place it within the context of our experience, and make decisions accordingly.
Remember the powerful metaphor — that arguments are war. It shapes a wide variety of expressions, like “your claims are indefensible,” “attacking the weak points,” and “You disagree? OK, shoot.”
Or consider the map and the territory: analogies give people the map but explain nothing of the territory.
Warren Buffett is one of the best at using analogies to communicate effectively. One of my favorites is his observation that “You never know who’s swimming naked until the tide goes out.” In other words, when times are good everyone looks amazing. When times suck, hidden weaknesses are exposed. The same could be said for analogies:
We never know what assumptions, deceptions, or brilliant insights they might be hiding until we look beneath the surface.
Most people underestimate the importance of a good analogy. As with many things in life, this lack of awareness comes at a cost. Ignorance is expensive.
Evidence suggests that people who tend to overlook or underestimate analogy’s influence often find themselves struggling to make their arguments or achieve their goals. The converse is also true. Those who construct the clearest, most resonant and apt analogies are usually the most successful in reaching the outcomes they seek.
The key to all of this is figuring out why analogies function so effectively and how they work. Once we know that, we should be able to craft better ones.
Effective, persuasive analogies frame situations and arguments, often so subtly that we don’t even realize there is a frame, let alone one that might not work in our favor. Such conceptual frames, like picture frames, include some ideas, images, and emotions and exclude others. By setting a frame, a person or organization can, for better or worse, exert remarkable influence on the direction of their own thinking and that of others.
He who holds the pen frames the story. The first person to frame the story controls the narrative, and it takes a massive amount of energy to change its direction. Sometimes even the way people come across information shapes it: stories that would be non-events if disclosed proactively become front-page stories because someone found out.
In Don’t Think of an Elephant, George Lakoff explores the issue of framing. The book famously begins with the instruction “Don’t think of an elephant.”
What’s the first thing we all do? Think of an elephant, of course. It’s almost impossible not to think of an elephant. When we stop consciously thinking about it, it floats away and we move on to other topics — like the new email that just arrived. But then again it will pop back into consciousness and bring some friends — associated ideas, other exotic animals, or even thoughts of the GOP.
“Every word, like elephant, evokes a frame, which can be an image of other kinds of knowledge,” Lakoff writes. This is why we want to control the frame rather than be controlled by it.
In Shortcut Pollack tells of Lakoff talking about an analogy that President George W. Bush made in the 2004 State of the Union address, in which he argued the Iraq war was necessary despite the international criticism. Before we go on, take Bush’s side here and think about how you would argue this point – how would you defend this?
In the speech, Bush proclaimed that “America will never seek a permission slip to defend the security of our people.”
As Lakoff notes, Bush could have said, “We won’t ask permission.” But he didn’t. Instead he intentionally used the analogy of permission slip and in so doing framed the issue in terms that would “trigger strong, more negative emotional associations that endured in people’s memories of childhood rules and restrictions.”
Commenting on this, Pollack writes:
Through structure mapping, we correlate the role of the United States to that of a young student who must appeal to their teacher for permission to do anything outside the classroom, even going down the hall to use the toilet.
But is seeking diplomatic consensus to avoid or end a war actually analogous to a child asking their teacher for permission to use the toilet? Not at all. Yet once this analogy has been stated (Farnam Street editorial: and tweeted), the debate has been framed. Those who would reject a unilateral, my-way-or-the-highway approach to foreign policy suddenly find themselves battling not just political opposition but people’s deeply ingrained resentment of childhood’s seemingly petty regulations and restrictions. On an even subtler level, the idea of not asking for a permission slip also frames the issue in terms of sidestepping bureaucratic paperwork, and who likes bureaucracy or paperwork?
By deconstructing analogies, we can find out how they function so effectively. Pollack argues they meet five essential criteria. Let’s explore how these work in greater detail, using the example of master thief Bruce Reynolds, who described the Great Train Robbery as his Sistine Chapel.
In the dark early hours of August 8, 1963, an intrepid gang of robbers hot-wired a six-volt battery to a railroad signal not far from the town of Leighton Buzzard, some forty miles north of London. Shortly, the engineer of an approaching mail train, spotting the red light ahead, slowed his train to a halt and sent one of his crew down the track, on foot, to investigate. Within minutes, the gang overpowered the train’s crew and, in less than twenty minutes, made off with the equivalent of more than $60 million in cash.
Years later, Bruce Reynolds, the mastermind of what quickly became known as the Great Train Robbery, described the spectacular heist as “my Sistine Chapel.”
Use the familiar to explain something less familiar
Reynolds exploits the public’s basic familiarity with the famous chapel in Vatican City, which, after Leonardo da Vinci’s Mona Lisa, is perhaps the best-known work of Renaissance art in the world. Millions of people, even those who aren’t art connoisseurs, would likely share the cultural opinion that the paintings in the chapel represent “great art” (as compared to a smaller subset of people who might feel the same way about Jackson Pollock’s drip paintings, or Marcel Duchamp’s upturned urinal).
Highlight similarities and obscure differences
Reynolds’s analogy highlights, through implication, similarities between the heist and the chapel—both took meticulous planning and masterful execution. After all, stopping a train and stealing the equivalent of $60m—and doing it without guns—does require a certain artistry. At the same time, the analogy obscures important differences. By invoking the image of a holy sanctuary, Reynolds triggers a host of associations in the audience’s mind—God, faith, morality, and forgiveness, among others—that camouflage the fact that he’s describing an action few would consider morally commendable, even if the artistry involved in robbing that train was admirable.
Identify useful abstractions
The analogy offers a subtle but useful abstraction: Genius is genius and art is art, no matter what the medium. The logic? If we believe that genius and artistry can transcend genre, we must concede that Reynolds, whose artful, ingenious theft netted millions, is an artist.
Tell a coherent story
The analogy offers a coherent narrative. Calling the Great Train Robbery his Sistine Chapel offers the audience a simple story that, at least on the surface, makes sense: Just as Michelangelo was called by God, the pope, and history to create his greatest work, so too was Bruce Reynolds called by destiny to pull off the greatest robbery in history. And if the Sistine Chapel endures as an expression of genius, so too must the Great Train Robbery. Yes, robbing the train was wrong. But the public perceived it as largely a victimless crime, committed by renegades who were nothing if not audacious. And who but the most audacious in history ever create great art? Ergo, according to this narrative, Reynolds is an audacious genius, master of his chosen endeavor, and an artist to be admired in public.
There is an important point here. The narrative need not be accurate. It is the feelings and ideas the analogy evokes that make it powerful. Within the structure of the analogy, the argument rings true. The framing is enough to establish it succinctly and subtly. That’s what makes it so powerful.
Resonate emotionally

The analogy resonates emotionally. To many people, mere mention of the Sistine Chapel brings an image to mind, perhaps the finger of Adam reaching out toward the finger of God, or perhaps just that of a lesser chapel with which they are personally familiar. Generally speaking, chapels are considered beautiful, and beauty is an idea that tends to evoke positive emotions. Such positive emotions, in turn, reinforce the argument that Reynolds is making—that there’s little difference between his work and that of a great artist.
Daniel Kahneman explains the two systems that govern the way we think: System 1 and System 2. In his book Thinking, Fast and Slow, he writes: “Jumping to conclusions is efficient if the conclusions are likely to be correct and the costs of an occasional mistake are acceptable, and if the jump saves much time and effort.”
“A good analogy serves as an intellectual springboard that helps us jump to conclusions,” Pollack writes. He continues:
And once we’re in midair, flying through assumptions that reinforce our preconceptions and preferences, we’re well on our way to a phenomenon known as confirmation bias. When we encounter a statement and seek to understand it, we evaluate it by first assuming it is true and exploring the implications that result. We don’t even consider dismissing the statement as untrue unless enough of its implications don’t add up. And consider is the operative word. Studies suggest that most people seek out only information that confirms the beliefs they currently hold and often dismiss any contradictory evidence they encounter.
The ongoing battle between fact and fiction commonly takes place in our subconscious systems. In The Political Brain: The Role of Emotion in Deciding the Fate of the Nation, Drew Westen, an Emory University psychologist, writes: “Our brains have a remarkable capacity to find their way toward convenient truths—even if they are not all true.”
This also helps explain why getting promoted has almost nothing to do with your performance.
Remember Apollo Robbins? He’s a professional pickpocket. While he has unique skills, he succeeds largely through the choreography of people’s attention. “Attention,” he says, “is like water. It flows. It’s liquid. You create channels to divert it, and you hope that it flows the right way.”
“Pickpocketing and analogies are in a sense the same,” Pollack concludes, “as the misleading analogy picks a listener’s mental pocket.”
And this is true whether someone else diverts our attention through a resonant but misleading analogy—“Judges are like umpires”—or we simply choose the wrong analogy all by ourselves.
We rarely stop to see how much of our reasoning is done by analogy. In a 2005 study published in the Harvard Business Review, Giovanni Gavetti and Jan Rivkin wrote: “Leaders tend to be so immersed in the specifics of strategy that they rarely stop to think how much of their reasoning is done by analogy.” As a result, they miss things. They make connections that don’t exist. They don’t check assumptions. They miss useful insights. By contrast, “Managers who pay attention to their own analogical thinking will make better strategic decisions and fewer mistakes.”
Shortcut goes on to explore when to use analogies and how to craft them to maximize persuasion.
“(History) offers a ridiculous spectacle of a fragment expounding the whole.”
— Will Durant in Our Oriental Heritage
“That’s another thing we’ve learned from your Nation,” said Mein Herr, “map-making. But we’ve carried it much further than you. What do you consider the largest map that would be really useful?”
“About six inches to the mile.”
“Only six inches!” exclaimed Mein Herr. “We very soon got to six yards to the mile. Then we tried a hundred yards to the mile. And then came the grandest idea of all! We actually made a map of the country, on the scale of a mile to the mile!”
“Have you used it much?” I enquired.
“It has never been spread out, yet,” said Mein Herr: “the farmers objected: they said it would cover the whole country, and shut out the sunlight! So we now use the country itself, as its own map, and I assure you it does nearly as well.”
— Sylvie and Bruno Concluded
In 1931, in New Orleans, Louisiana, mathematician Alfred Korzybski presented a paper on mathematical semantics. To the non-technical reader, most of the paper reads like an abstruse argument on the relationship of mathematics to human language, and of both to physical reality. Important stuff certainly, but not necessarily immediately useful for the layperson.
However, in his string of arguments on the structure of language, Korzybski introduced and popularized the idea that the map is not the territory. In other words, the description of the thing is not the thing itself. The model is not reality. The abstraction is not the abstracted. This has enormous practical consequences.
A.) A map may have a structure similar or dissimilar to the structure of the territory.
B.) Two similar structures have similar ‘logical’ characteristics. Thus, if in a correct map, Dresden is given as between Paris and Warsaw, a similar relation is found in the actual territory.
C.) A map is not the actual territory.
D.) An ideal map would contain the map of the map, the map of the map of the map, etc., endlessly…We may call this characteristic self-reflexiveness.
Maps are necessary, but flawed. (By maps, we mean any abstraction of reality, including descriptions, theories, models, etc.) The problem with a map is not simply that it is an abstraction; we need abstraction. Lewis Carroll made that clear by having Mein Herr describe a map with the scale of one mile to one mile. Such a map would not have the problems that maps have, nor would it be helpful in any way.
(See Borges for another take.)
To solve this problem, the mind creates maps of reality in order to understand it, because the only way we can process the complexity of reality is through abstraction. But frequently, we don’t understand our maps or their limits. In fact, we are so reliant on abstraction that we will frequently use an incorrect model simply because we feel any model is preferable to no model. (Reminding one of the drunk looking for his keys under the streetlight because “That’s where the light is!”)
Even the best and most useful maps suffer from limitations, and Korzybski gives us a few to explore: (A.) The map could be incorrect without us realizing it; (B.) The map is, by necessity, a reduction of the actual thing, a process in which you lose certain important information; and (C.) A map needs interpretation, a process that can cause major errors. (The only way to truly solve the last would be an endless chain of maps-of-maps, which he called self-reflexiveness.)
With the aid of modern psychology, we also see another issue: the human brain takes great leaps and shortcuts in order to make sense of its surroundings. As Charlie Munger has pointed out, a good idea and the human mind act something like the sperm and the egg — after the first good idea gets in, the door closes. This makes the map-territory problem a close cousin of man-with-a-hammer tendency.
This tendency is, obviously, problematic in our effort to simplify reality. When we see a powerful model work well, we tend to over-apply it, using it in non-analogous situations. We have trouble delimiting its usefulness, which causes errors.
Let’s check out an example.
By most accounts, Ron Johnson was one of the most successful and desirable retail executives by the summer of 2011. Not only was he handpicked by Steve Jobs to build the Apple Stores, a venture which had itself come under major scrutiny – one retort printed in Bloomberg magazine: “I give them two years before they’re turning out the lights on a very painful and expensive mistake.” – but he had been credited with playing a major role in turning Target from a K-Mart look-alike into the trendy-but-cheap Tar-zhey by the late 90’s and early 00’s.
Johnson’s success at Apple was not immediate, but it was undeniable. By 2011, Apple stores were by far the most productive in the world on a per-square-foot basis, and had become the envy of the retail world. Their sales figures left Tiffany’s in the dust. The gleaming glass cube on Fifth Avenue became a more popular tourist attraction than the Statue of Liberty. It was a lollapalooza, something beyond ordinary success. And Johnson had led the charge.
With that success, in 2011 Johnson was hired by Bill Ackman, Steven Roth, and other luminaries of the financial world to turn around the dowdy old department store chain JCPenney. The situation of the department store was dire: Between 1992 and 2011, the retail market share held by department stores had declined from 57% to 31%.
Their core position, though, was a no-brainer. JCPenney had immensely valuable real estate, anchoring malls across the country. Johnson argued that their physical mall position was valuable if for no other reason than that people often parked next to them and walked through them to get to the center of the mall. Foot traffic was a given. Because of contracts signed in the 50’s, 60’s, and 70’s, the heyday of the mall-building era, rent was also cheap, another major competitive advantage. And unlike some struggling retailers, JCPenney was making (some) money. There was cash in the register to help fund a transformation.
The idea was to take the best ideas from his experience at Apple – great customer service, consistent pricing with no markdowns and markups, immaculate displays, world-class products – and apply them to the department store. Johnson planned to turn the stores into little malls-within-malls. He went as far as comparing the ever-rotating stores-within-a-store to Apple’s “apps.” Such a model would keep the store constantly fresh, and avoid the creeping staleness of retail.
Johnson pitched his idea to shareholders in a series of trendy New York City meetings reminiscent of Steve Jobs’ annual “But wait, there’s more!” product launches at Apple. He was persuasive: JCPenney’s stock price went from $26 in the summer of 2011 to $42 in early 2012 on the strength of the pitch.
The idea failed almost immediately. His new pricing model (eliminating discounting) was a flop. The coupon-hunters rebelled. Much of his new product was deemed too trendy. His new store model was wildly expensive for a middling department store chain – including operating losses purposefully endured, he’d spent several billion dollars trying to effect the physical transformation of the stores. JCPenney customers had no idea what was going on, and by 2013, Johnson was sacked. The stock price sank into the single digits, where it remains two years later.
What went wrong in the quest to build America’s Favorite Store? It turned out that Johnson was using a map of Tulsa to navigate Tuscaloosa. Apple’s products, customers, and history had far too little in common with JCPenney’s. Apple had a rabid, young, affluent fan-base before they built stores; JCPenney was not associated with youth or affluence. Apple had shiny products, and needed a shiny store; JCPenney was known for its affordable sweaters. Apple had never relied on discounting in the first place; JCPenney was taking away discounts given prior, triggering massive deprival super-reaction.
In other words, the old map was not very useful. Even his success at Target, which seems like a closer analogue, was misleading in the context of JCPenney. Target had made small, incremental changes over many years, to which Johnson had made a meaningful contribution. JCPenney was attempting to reinvent the concept of the department store in a year or two, leaving behind the core customer in an attempt to gain new ones. This was a much different proposition. (Another thing holding the company back was simply its base odds: Can you name a retailer of great significance that has lost its position in the world and come back?)
The main issue was not that Johnson was incompetent. He wasn’t. He wouldn’t have gotten the job if he was. He was extremely competent. But it was exactly his competence and past success that got him into trouble. He was like a great swimmer that tried to tackle a grand rapid, and the model he used successfully in the past, the map that had navigated a lot of difficult terrain, was not the map he needed anymore. He had an excellent theory about retailing that applied in some circumstances, but not in others. The terrain had changed, but the old idea stuck.
One person who well understands this problem of the map and the territory is Nassim Taleb, author of the Incerto series – Antifragile, The Black Swan, Fooled by Randomness, and The Bed of Procrustes.
Taleb has been vocal about the misuse of models for many years, but the earliest and most vivid I can recall is his firm criticism of a financial model called Value-at-Risk, or VAR. The model, used in the banking community, is supposed to help manage risk by providing a maximum potential loss within a given confidence interval. In other words, it purports to allow risk managers to say that, with 95%, 99%, or 99.9% confidence, the firm will not lose more than $X million in a given day. The higher the interval, the less accurate the analysis becomes. It might be possible to say that the firm has $100 million at risk at any time at a 99% confidence interval, but given the statistical properties of markets, a move to 99.9% confidence might mean the risk manager has to state the firm has $1 billion at risk. 99.99% might mean $10 billion. As rarer and rarer events are included in the distribution, the analysis gets less useful. So, by necessity, the “tails” are cut off somewhere and the analysis is deemed acceptable.
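To make the mechanics concrete, here is a minimal sketch of a historical-simulation VaR calculation. The return data is made up – drawn from a normal distribution with an assumed 1% daily standard deviation, which is exactly the thin-tailed assumption criticized below – and none of this reflects any actual bank’s model.

```python
import random

def historical_var(returns, confidence=0.99):
    """Historical Value-at-Risk: the loss threshold such that only
    (1 - confidence) of observed daily returns were worse."""
    ordered = sorted(returns)  # worst (most negative) returns first
    index = int((1 - confidence) * len(ordered))
    return -ordered[index]  # report the loss as a positive number

# Hypothetical data: 10,000 simulated daily returns, mean 0, stdev 1%.
random.seed(42)
returns = [random.gauss(0.0, 0.01) for _ in range(10_000)]

for c in (0.95, 0.99, 0.999):
    print(f"{c:.1%} one-day VaR: {historical_var(returns, c):.2%} of portfolio")
```

Notice how each step up in confidence pushes the estimate further into the tail, where the sample contains fewer and fewer observations – at 99.9% confidence, the figure above rests on just ten data points.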
Elaborate statistical models are built to justify and use the VAR theory. On its face, it seems like a useful and powerful idea; if you know how much you can lose at any time, you can manage risk to the decimal. You can tell your board of directors and shareholders, with a straight face, that you’ve got your eye on the till.
The problem, in Nassim’s words, is that:
A model might show you some risks, but not the risks of using it. Moreover, models are built on a finite set of parameters, while reality affords us infinite sources of risks.
In order to come up with the VAR figure, the risk manager must take historical data and assume a statistical distribution in order to predict the future. For example, if we could take 100 million human beings and analyse their height and weight, we could then predict the distribution of heights and weights on a different 100 million, and there would be a microscopically small probability that we’d be wrong. That’s because we have a huge sample size and we are analysing something with very small and predictable deviations from the average.
But finance does not follow this kind of distribution. There’s no such predictability. As Nassim has argued, the “tails” are fat in this domain, and the rarest, most unpredictable events have the largest consequences. Let’s say you deem a highly threatening event (for example, a 90% crash in the S&P 500) to have a 1 in 10,000 chance of occurring in a given year, and your historical data set only has 300 years of data. How can you accurately state the probability of that event? You would need far more data.
Thus, financial events deemed to be 5, or 6, or 7 standard deviations from the norm tend to happen with a certain regularity that nowhere near matches their supposed statistical probability. Financial markets have no biological reality to tie them down: We can say with a useful amount of confidence that an elephant will not wake up as a monkey, but we can’t say anything with absolute confidence in an Extremistan arena.
We see several issues with VAR as a “map,” then. The first is that the model is itself a severe abstraction of reality, relying on historical data to predict the future. (As all financial models must, to a certain extent.) VAR does not say “The risk of losing X dollars is Y, within a confidence of Z.” (Although risk managers treat it that way). What VAR actually says is “the risk of losing X dollars is Y, based on the given parameters.” The problem is obvious even to the non-technician: The future is a strange and foreign place that we do not understand. Deviations of the past may not be the deviations of the future. Just because municipal bonds have never traded at such-and-such a spread to U.S. Treasury bonds does not mean that they won’t in the future. They just haven’t yet. Frequently, the models are blind to this fact.
In fact, one of Nassim’s most trenchant points is that on the day before whatever “worst case” event happened in the past, you would have not been using the coming “worst case” as your worst case, because it wouldn’t have happened yet.
Here’s an easy illustration. On October 19, 1987, the stock market dropped by 22.61%, or 508 points on the Dow Jones Industrial Average. In percentage terms, it was then and remains the worst one-day market drop in U.S. history. It was dubbed “Black Monday.” (Financial writers sometimes lack creativity — there are several other “Black Mondays” in history.) But here we see Nassim’s point: On October 18, 1987, what would the models have used as the worst possible case? We don’t know exactly, but we do know the previous worst case was 12.82%, which happened on October 28, 1929. A 22.61% drop would have been considered so many standard deviations from the average as to be near impossible.
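A rough back-of-the-envelope check shows just how “near impossible” such a drop looks under Gaussian assumptions. The sketch below assumes a daily standard deviation of about 1% — a common ballpark for equity indexes, not a figure from the article — and computes the normal-distribution tail probability of the 1929 and 1987 drops.

```python
import math

def normal_tail(z):
    """P(Z < -z) for a standard normal variable, computed via the
    complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# Assumed parameter (not from the article): daily stdev of ~1%.
daily_sigma = 0.01

for drop in (0.1282, 0.2261):  # the 1929 and 1987 one-day drops
    z = drop / daily_sigma  # how many standard deviations from zero
    print(f"{drop:.2%} drop = {z:.1f} sigma, normal tail probability {normal_tail(z):.1e}")
```

Under these assumptions, a 22.61% drop is a roughly 22-sigma event — a probability so small that, if the normal model were right, it should never occur in the lifetime of the universe. It happened anyway, which is the fat-tails point in miniature.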
But the tails are very fat in finance – improbable and consequential events seem to happen far more often than they should based on naive statistics. There is also a severe but often unrecognized recursiveness problem, which is that the models themselves influence the outcome they are trying to predict. (To understand this more fully, check out our post on Complex Adaptive Systems.)
A second problem with VAR is that even if we had a vastly more robust dataset, a statistical “confidence interval” does not do the job of financial risk management. Says Taleb:
There is an internal contradiction between measuring risk (i.e. standard deviation) and using a tool [VAR] with a higher standard error than that of the measure itself.
I find that those professional risk managers whom I heard recommend a “guarded” use of the VAR on grounds that it “generally works” or “it works on average” do not share my definition of risk management. The risk management objective function is survival, not profits and losses. A trader according to the Chicago legend, “made 8 million in eight years and lost 80 million in eight minutes”. According to the same standards, he would be, “in general”, and “on average” a good risk manager.
This is like a GPS system that shows you where you are at all times, but doesn’t include cliffs. You’d be perfectly happy with your GPS until you drove off a mountain.
It was this type of naive trust of models that got a lot of people in trouble in the recent mortgage crisis. Backward-looking, trend-fitting models, the most common maps of the financial territory, failed by describing a territory that was only a mirage: A world where home prices only went up. (Lewis Carroll would have approved.)
This was navigating Tulsa with a map of Tatooine.
The logical response to all this is, “So what?” If our maps fail us, how do we operate in an uncertain world? This is its own discussion for another time, and Taleb has gone to great pains to try and address the concern. Smart minds disagree on the solution. But one obvious key must be building systems that are robust to model error.
The practical problem with a model like VAR is that the banks use it to optimize. In other words, they take on as much exposure as the model deems OK. And when banks veer into managing to a highly detailed, highly confident model rather than to informed common sense, which happens frequently, they tend to build up hidden risks that will un-hide themselves in time.
If one were to instead assume that there were no precisely accurate maps of the financial territory, one would have to fall back on much simpler heuristics. (If you assume detailed statistical models of the future will fail you, you don’t use them.)
In short, you would do what Warren Buffett has done with Berkshire Hathaway. Mr. Buffett, to our knowledge, has never used a computer model in his life, yet manages an institution half a trillion dollars in size by assets, a large portion of which are financial assets. How?
The approach requires not only assuming a future worst case far more severe than the past, but also dictates building an institution with a robust set of backup systems, and margins-of-safety operating at multiple levels. Extra cash, rather than extra leverage. Taking great pains to make sure the tails can’t kill you. Instead of optimizing to a model, accepting the limits of your clairvoyance.
The trade-off, of course, is short-run rewards much less great than those available under more optimized models. Speaking of this, Charlie Munger has noted:
Berkshire’s past record has been almost ridiculous. If Berkshire had used even half the leverage of, say, Rupert Murdoch, it would be five times its current size.
For Berkshire at least, the trade-off seems to have been worth it.
The salient point then is that in our march to simplify reality with useful models, of which Farnam Street is an advocate, we confuse the models with reality. For many people, the model creates its own reality. It is as if the spreadsheet comes to life. We forget that reality is a lot messier. The map isn’t the territory. The theory isn’t what it describes, it’s simply a way we choose to interpret a certain set of information. Maps can also be wrong, but even if they are essentially correct, they are an abstraction, and abstraction means that information is lost to save space. (Recall the mile-to-mile scale map.)
How do we do better? This is fodder for another post, but the first step is to realize that you do not understand a model, map, or reduction unless you understand and respect its limitations. We must always be vigilant by stepping back to understand the context in which a map is useful, and where the cliffs might lie. Until we do that, we are the turkey.
I never went to Engineering school. My undergrad is Computer Science. Despite that I’ve always wanted to learn more about Engineering.
John Kuprenas and Matthew Frederick have put together a book, 101 Things I Learned in Engineering School, which contains some of the big ideas.
In the author’s note, Kuprenas writes:
(This book) introduces engineering largely through its context, by emphasizing the common sense behind some of its fundamental concepts, the themes intertwined among its many specialities, and the simple abstract principles that can be derived from real-world circumstances. It presents, I believe, some clear glimpses of the forest as well as the trees within it.
Here are three (of the many) things I noted in the book.
Force, stress, and strain are used somewhat interchangeably in the lay world and may even be used with less than ideal rigor by engineers. However, they have different meanings.
A force, sometimes called “load,” exists external to and acts upon a body, causing it to change speed, direction, or shape. Examples of forces include water pressure on a submarine hull, snow loads on a bridge, and wind loads on the sides of a skyscraper.
Stress is the “experience” of a body—its internal resistance to an external force acting on it. Stress is force divided by unit area, and is expressed in units such as pounds per square inch.
Strain is the result of stress. It is the measurable percentage of deformation in an object, such as a change in length.
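The two definitions reduce to a pair of simple ratios. Here is a minimal sketch; the load, area, and length figures are made up purely for illustration.

```python
def stress(force_lb, area_sq_in):
    """Stress = force / area, in pounds per square inch (psi)."""
    return force_lb / area_sq_in

def strain(change_in_length, original_length):
    """Strain = deformation as a fraction of original size
    (dimensionless, often quoted as a percentage)."""
    return change_in_length / original_length

# Hypothetical example: a 10,000 lb load on a 2 sq-in rod that
# stretches 0.05 in over an original length of 100 in.
print(f"stress: {stress(10_000, 2):,.0f} psi")
print(f"strain: {strain(0.05, 100):.2%}")
```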
Decisions made just days or weeks into a project—assumptions of end-user needs, commitments to a schedule, the size and shape of a building footprint, and so on—have the most significant impact on design, feasibility, and cost. As decisions are made later and later in the design process, their influence decreases. Minor cost savings sometimes can be realized through value engineering in the later stages of design, but the biggest cost factors are embedded at the outset in a project’s DNA.
Everyone seems to understand this point on the surface and yet few people consider the implications. I know a lot of people who make their career on cleaning up their own mess. That is, they make a poor initial decision and then work extra hours, with stress and panic, to fix it. In the worst organizations these people are promoted for doing an exceptional job.
Proper management of early decisions produces more free time and lower stress.
An imaginary team of engineers sought to build a “super-horse” that would be twice as tall as a normal horse. When they created it, they discovered it to be a troubled, inefficient beast. Not only was it two times the height of a normal horse, it was twice as wide and twice as long, resulting in an overall mass eight times greater than normal. But the cross-sectional area of its veins and arteries was only four times that of a normal horse, calling for its heart to work twice as hard. The surface area of its feet was four times that of a normal horse, but each foot had to support twice the weight per unit of surface area compared to a normal horse. Ultimately, the sickly animal had to be put down.
This becomes interesting when you think of the ideal size for things and how we, as well intentioned humans, often make things worse. This has a name. It’s called iatrogenics.
Let us briefly put an organizational lens on this. Inside organizations resources are scarce. Generally the more people you have under you the more influence and authority you have inside the organization. Unless there is a proper culture and incentive system in place, your incentive is to grow and not shrink. In fact, in all the meetings I’ve ever been in with senior management, I can’t recall anyone who ran a division saying, “I have too many resources.” It’s a derivative of Parkinson’s Law — only work isn’t expanding to fill the time available. Instead, work is expanding to fill the number of people.
Contrast that with Berkshire Hathaway, run by Warren Buffett. In a 2010 letter to shareholders he wrote:
Our flexibility in respect to capital allocation has accounted for much of our progress to date. We have been able to take money we earn from, say, See’s Candies or Business Wire (two of our best-run businesses, but also two offering limited reinvestment opportunities) and use it as part of the stake we needed to buy BNSF.
In the 2014 letter he wrote:
To date, See’s has earned $1.9 billion pre-tax, with its growth having required added investment of only $40 million. See’s has thus been able to distribute huge sums that have helped Berkshire buy other businesses that, in turn, have themselves produced large distributable profits. (Envision rabbits breeding.) Additionally, through watching See’s in action, I gained a business education about the value of powerful brands that opened my eyes to many other profitable investments.
There is an optimal size to See’s. Had they retained the $1.9 billion in earnings they distributed to Berkshire, the CEO and management team might have a claim to bigger paychecks, since they’d be managing ~$2 billion in assets instead of $40 million, but the result would have been very sub-optimal.
Our pursuit of growth beyond a certain point often ensures that one of the biggest forces in the world, time, is working against us. “What is missing,” writes Jeff Stibel in BreakPoint, “is that the unit of measure for progress isn’t size, it’s time.”
Bias from self-interest affects everything from how we see and filter information to how we avoid pain. It affects our self-preservation instincts and helps us rationalize our choices. In short, it permeates everything.
Our self-esteem can be a very important aspect of personal well-being, adjustment and happiness. It has been reported that people with higher self-esteem are happier with their lives, have fewer interpersonal problems, achieve at a higher and more consistent level and give in less to peer pressure.
The strong motivation to preserve a positive and consistent self-image is more than evident in our lives.
We attribute success to our own abilities and failures to environmental factors and we continuously rate ourselves as better than average on any subjective measure – ethics, beauty and ability to get along with others.
Look around – these positive illusions appear to be the rule rather than the exception in well-adjusted people.
However, sometimes life is harsh on us and gives few if any reasons for self-love.
We get fired, a relationship ends, and we end up making decisions which are not well aligned with our inner selves. And so we come up with ways to straighten our damaged self-image.
Under the influence of bias from self-interest we may find ourselves drifting away from facts and spinning them to the point they become acceptable. While the tendency is mostly harmless and episodic, there are cases when it grows extreme.
The imperfect and confusing realities of our life can activate strong responses, which helps us preserve ourselves and our fragile self-images. Usually amplified by love, death or chemical dependency, strong self-serving bias may leave the person with little capacity to assess the situation objectively.
In his speech, The Psychology of Human Misjudgment, Charlie Munger reflects on the extreme tendencies that serious criminals display in Tolstoy’s novels and beyond. Their defense mechanisms can be divided in two distinct types – they are either in denial of committing the crime at all or they think that the crime is justifiable in light of their hardships.
Munger calls these two cases the Tolstoy effect.
Denial occurs when we encounter a serious thought about reality but decide to ignore it.
Imagine one day you notice a strange, dark spot on your skin. You feel a sudden sense of anxiety, but soon go on with your day and forget about it. Weeks later, it has not gone away and has slowly become darker and you eventually decide to take action and visit the doctor.
In such cases, small doses of denial might serve us well. We have time to absorb the information slowly and figure out the next steps for action, in case our darkest fears come true. However, once denial becomes a prolonged measure for coping with troubling matters, causing our problems to amplify, we are bound to suffer the consequences.
The consequences can be different. The mildest one is a simple inability to move on with our lives.
Charlie Munger was startled to see a case of persistent denial in a family friend:
This first really hit me between the eyes when a friend of our family had a super-athlete, super-student son who flew off a carrier in the north Atlantic and never came back, and his mother, who was a very sane woman, just never believed that he was dead.
The case made him realize that denial is often amplified by intense feelings of love and death. We’re denying to avoid pain.
While denial of the death of someone close is usually harmless and understandable, it can become a significant problem when we deny an issue that is detrimental to ourselves and others.
A good example of such issues are physical dependencies, such as alcoholism or drug addiction.
Munger advises staying away from any opportunity to slip into an addiction, since the psychological effects are the most damaging. The reality distortion that happens in the minds of drug addicts leads them to believe that they have remained in a respectable condition and with reasonable prospects even as their condition keeps deteriorating.
A less severe case of distortion, but no less foolish, is our tendency to rationalize the choices we have made.
Most of us have a positive concept of ourselves and we believe ourselves to be competent, moral and smart.
We can go to great lengths to preserve this self-image. No doubt we have all engaged in behaviors that are less than consistent with our inner self-image and then used phrases, such as “not telling the truth is not lying”, “I didn’t have the time” and “others are even worse” to justify our less than ideal actions.
This tendency can in part be explained by the engine that drives self-justification: cognitive dissonance. It is the state of tension that occurs whenever we hold two opposing facts in our heads, such as “smoking is bad” and “I smoke two packs a day”.
Dissonance bothers us under any circumstances, but it becomes particularly unbearable, when our self-concept is threatened by it. After all, we spend our lives trying to lead lives that are consistent and meaningful. This drive “to save face” is so powerful that it often overrules and contradicts the pure effects of rewards and punishments as assumed by economic theory or observed in simple animal behavioral research.
The most obvious way to quiet dissonance is by quitting. However, a smoker that has tried to quit and failed can also quiet the other belief – namely that smoking is not all that bad. It is the simple and failure-free option that allows her to feel good about herself and requires hardly any effort. Having suspended our moral compass only once and found rationales for the bad, but fixable, choices gives us permission to repeat them in the future and continue the vicious cycle.
Carol Tavris and Elliot Aronson in their book Mistakes Were Made (But Not by Me): Why We Justify Foolish Beliefs, Bad Decisions, and Hurtful Acts explain the vicious cycle of choices with an analogy of a pyramid.
Consider the case of two reasonably honest students at the beginning of the term. They face the temptation to cheat on an important test. One of them gives in and the other does not. How do you think they will feel about cheating a week later?
Most likely their initially torn opinions will have polarized in light of their initial choices. Now take this effect and amplify it over the term. By the time they are through with the term two things will have happened:
1) They will be very far from each other in their beliefs
2) They will be convinced that they have always felt strongly about the issue and their side of the argument
Just like those students, we often stand at the top of the choice pyramid, facing a decision whose consequences are morally ambiguous. That first choice starts a process of entrapment (action, justification, further action) that increases the intensity of our commitment.
Over time our choices reinforce themselves, and toward the bottom of the pyramid we find ourselves sliding into increasingly extreme views.
Consider the famous Stanley Milgram experiment, in which two-thirds of some 3,000 subjects administered what they believed was a life-threatening electric shock to another person. While this study is usually used to illustrate our obedience to authority, it also demonstrates the effects of self-justification.
Imagine someone asking you, as a favor to science, to inflict 500V of potentially deadly and incredibly painful shock on another person. Chances are most of us would refuse under any circumstances.
Now suppose the researcher tells you he is interested in the effects of punishment on learning and that you will only have to inflict hardly noticeable electric impulses on another person. You are even encouraged to try the lowest level of 10V yourself to confirm that the pain is barely noticeable.
As you go along, the experimenter asks you to increase the shock to 20V. It seems like a small increase, so you agree without thinking much. Then the cascade continues: if you gave a 20V shock, what is the harm in giving 30V? Suddenly you find yourself unable to draw the line, so you simply go along with the instructions.
When people are asked in advance whether they would administer a shock above 450V, nearly nobody believes they would. Yet when facing the choice one small step at a time, two-thirds of them did!
The implications here are powerful – if we don’t actively draw the line ourselves, our habits and circumstances will decide for us.
We will all do dumb things. We can’t help it. We are wired that way. However, we are not doomed to live in denial or keep striving to justify our actions. We always have the choice to correct our tendencies, once we recognize them.
A better understanding of our minds serves as the first step towards breaking the self-justification habit. It takes time, self-reflection and willingness to become more mindful about our behavior and reasons for our behavior, but it is well worth the effort.
The authors of Mistakes Were Made (But Not by Me) give the example of the conservative columnist William Safire, who wrote a column criticizing Hillary Clinton's efforts to conceal the identity of her health care task force. A few years later Dick Cheney, a Republican whom Safire admired, made a similar move by insisting on keeping his energy task force secret.
An alarm bell rang in Safire's head, and he admitted that the temptation to rationalize the situation and apply a double standard was enormous. However, he recognized the dissonance at work and ended up writing a similar column about Cheney.
We know that Safire’s ability to spot his own dissonance and do the fair thing is rare. People will bend over backward to reduce dissonance in a way that is favorable to them and their team. Resisting that urge is not easy to do, but it is much better than letting the natural psychological tendencies cripple the integrity of our behaviors. There are ways to make fairness easier.
On the personal level, Charlie Munger suggests we face two simple facts. First, fixable but unfixed bad performance is bad character that tends to create more of itself and cause more damage, a sort of Gresham's Law. Second, in demanding places like athletic teams, excuses and bad behavior will not get us far.
On the institutional level, Munger advises building a fair, meritocratic, demanding culture, plus personnel-handling methods that build morale. His second piece of advice is to sever the worst offenders, when possible.
Munger expands on the second point by noting that we cannot, of course, let go of our children; we must instead try to fix them as best we can. He gives a real-life example of a child who had the habit of taking candy from the stock of his father's employer, with the excuse that he intended to replace it later. The father said words the child never forgot:
“Son, it would be better for you to simply take all you want and call yourself a thief every time you do it.”
It turns out the child in this example became the dean of the University of Southern California business school, where Munger delivered the speech.
If we are effective, the lessons we teach our children will serve them well throughout their lives.
There is much more to explore with the bias from self-interest, including its relation to hierarchy, how it distorts information, how it feeds our desires for self-preservation and scarcity, how it affects group preservation, and its relationship to territory.
Bias From Self-Interest is part of the Farnam Street latticework of mental models.