Tag Archives: Decision Making

How (Supposedly) Rational People Make Decisions

Gregory Mankiw outlines four principles of decision making in his multidisciplinary economics textbook, Principles of Economics.

I got the idea of reading an economics textbook from Charlie Munger, the billionaire business partner of Warren Buffett. He said:

Economics was always more multidisciplinary than the rest of soft science. It just reached out and grabbed things as it needed to. And that tendency to just grab whatever you need from the rest of knowledge if you’re an economist has reached a fairly high point in Mankiw’s new textbook Principles of Economics. I checked out that textbook. I must have been one of the few businessmen in America that bought it immediately when it came out because it had gotten such a big advance. I wanted to figure out what the guy was doing where he could get an advance that great. So this is how I happened to riffle through Mankiw’s freshman textbook. And there I found laid out as principles of economics: opportunity cost is a superpower, to be used by all people who have any hope of getting the right answer. Also, incentives are superpowers.

So we know that we can add opportunity cost and incentives to our list of mental models.

Let’s dig in.

Principle 1: People Face Trade-offs

You have likely heard the old saying, “There is no such thing as a free lunch.” There is much truth to this adage, and it’s one we often forget when making decisions. To get more of something we like, we almost always have to give up something else we like. A good heuristic in life is that if someone offers you something for nothing, turn it down.

Making decisions requires trading off one goal against another.

Consider a student who must decide how to allocate her most valuable resource—her time. She can spend all of her time studying economics, spend all of it studying psychology, or divide it between the two fields. For every hour she studies one subject, she gives up an hour she could have used studying the other. And for every hour she spends studying, she gives up an hour that she could have spent napping, bike riding, watching TV, or working at her part-time job for some extra spending money.

Or consider parents deciding how to spend their family income. They can buy food, clothing, or a family vacation. Or they can save some of the family income for retirement or for children’s college education. When they choose to spend an extra dollar on one of these goods, they have one less dollar to spend on some other good.

These are rather simple examples but Mankiw offers some more complicated ones. Consider the trade-off that society faces between efficiency and equality.

Efficiency means that society is getting the maximum benefits from its scarce resources. Equality means that those benefits are distributed uniformly among society’s members. In other words, efficiency refers to the size of the economic pie, and equality refers to how the pie is divided into individual slices.

When government policies are designed, these two goals often conflict. Consider, for instance, policies aimed at equalizing the distribution of economic well-being. Some of these policies, such as the welfare system or unemployment insurance, try to help the members of society who are most in need. Others, such as the individual income tax, ask the financially successful to contribute more than others to support the government. Though they achieve greater equality, these policies reduce efficiency. When the government redistributes income from the rich to the poor, it reduces the reward for working hard; as a result, people work less and produce fewer goods and services. In other words, when the government tries to cut the economic pie into more equal slices, the pie gets smaller.

Principle 2: The Cost of Something Is What You Give Up to Get It

Because of trade-offs, people must compare the costs and benefits of one course of action with those of another. But costs are not as obvious as they might first appear — we need to apply some second-level thinking:

Consider the decision to go to college. The main benefits are intellectual enrichment and a lifetime of better job opportunities. But what are the costs? To answer this question, you might be tempted to add up the money you spend on tuition, books, room, and board. Yet this total does not truly represent what you give up to spend a year in college.

There are two problems with this calculation. First, it includes some things that are not really costs of going to college. Even if you quit school, you need a place to sleep and food to eat. Room and board are costs of going to college only to the extent that they are more expensive at college than elsewhere. Second, this calculation ignores the largest cost of going to college—your time. When you spend a year listening to lectures, reading textbooks, and writing papers, you cannot spend that time working at a job. For most students, the earnings they give up to attend school are the single largest cost of their education.

The opportunity cost of an item is what you give up to get that item. When making any decision, decision makers should be aware of the opportunity costs that accompany each possible action. In fact, they usually are. College athletes who can earn millions if they drop out of school and play professional sports are well aware that the opportunity cost of their attending college is very high. It is not surprising that they often decide that the benefit of a college education is not worth the cost.
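To make the distinction concrete, here is a minimal sketch of the college calculation with hypothetical numbers (none of them come from Mankiw): count room and board only to the extent it exceeds what you would spend anyway, then add the foregone earnings.

```python
# Hypothetical figures for illustration; none of these numbers come from the textbook.
tuition_and_books = 15_000          # explicit out-of-pocket cost
room_and_board_at_college = 12_000
room_and_board_elsewhere = 9_000    # you need food and shelter either way
foregone_earnings = 25_000          # wages given up by studying instead of working

naive_cost = tuition_and_books + room_and_board_at_college

opportunity_cost = (
    tuition_and_books
    + (room_and_board_at_college - room_and_board_elsewhere)  # only the extra counts
    + foregone_earnings                                       # the largest, hidden cost
)

print(f"Naive accounting cost:        ${naive_cost:,}")
print(f"Opportunity cost of the year: ${opportunity_cost:,}")
```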

Principle 3: Rational People Think at the Margin

For the sake of simplicity, economists normally assume that people are rational. While this assumption causes many problems, there is an undercurrent of truth to the claim that people systematically and purposefully “do the best they can to achieve their objectives, given opportunities.” There are two parts to rationality. The first is that your understanding of the world is correct. The second is that you maximize the use of your resources toward your goals.

Rational people know that decisions in life are rarely black and white but usually involve shades of gray. At dinnertime, the question you face is not “Should I fast or eat like a pig?” More likely, you will be asking yourself “Should I take that extra spoonful of mashed potatoes?” When exams roll around, your decision is not between blowing them off and studying twenty-four hours a day but whether to spend an extra hour reviewing your notes instead of watching TV. Economists use the term marginal change to describe a small incremental adjustment to an existing plan of action. Keep in mind that margin means “edge,” so marginal changes are adjustments around the edges of what you are doing. Rational people often make decisions by comparing marginal benefits and marginal costs.

Thinking at the margin works for business decisions.

Consider an airline deciding how much to charge passengers who fly standby. Suppose that flying a 200-seat plane across the United States costs the airline $100,000. In this case, the average cost of each seat is $100,000/200, which is $500. One might be tempted to conclude that the airline should never sell a ticket for less than $500. But a rational airline can increase its profits by thinking at the margin. Imagine that a plane is about to take off with 10 empty seats and a standby passenger waiting at the gate is willing to pay $300 for a seat. Should the airline sell the ticket? Of course, it should. If the plane has empty seats, the cost of adding one more passenger is tiny. The average cost of flying a passenger is $500, but the marginal cost is merely the cost of the bag of peanuts and can of soda that the extra passenger will consume. As long as the standby passenger pays more than the marginal cost, selling the ticket is profitable.
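The arithmetic behind the airline example is easy to check. In this minimal sketch, the marginal cost per extra passenger is an assumed placeholder, not a figure from the book:

```python
# Figures from the example above; the marginal cost per extra passenger is assumed.
flight_cost = 100_000     # total cost of flying the 200-seat plane across the country
seats = 200
average_cost = flight_cost / seats   # $500 per seat
marginal_cost = 25                   # assumed: peanuts, soda, a little extra fuel
standby_offer = 300

# The rational rule: act if and only if marginal benefit exceeds marginal cost.
sell_ticket = standby_offer > marginal_cost

print(f"Average cost per seat: ${average_cost:.0f}")
print(f"Marginal cost of one more passenger: ${marginal_cost}")
print(f"Sell the $300 standby ticket? {sell_ticket}")  # True: $300 > $25
```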

This also helps answer the question of why diamonds are so expensive and water is so cheap.

Humans need water to survive, while diamonds are unnecessary; but for some reason, people are willing to pay much more for a diamond than for a cup of water. The reason is that a person’s willingness to pay for a good is based on the marginal benefit that an extra unit of the good would yield. The marginal benefit, in turn, depends on how many units a person already has. Water is essential, but the marginal benefit of an extra cup is small because water is plentiful. By contrast, no one needs diamonds to survive, but because diamonds are so rare, people consider the marginal benefit of an extra diamond to be large.

A rational decision maker takes an action if and only if the marginal benefit of the action exceeds the marginal cost.

Principle 4: People Respond to Incentives

Incentives induce people to act. If you use a rational approach to decision making, one that involves weighing trade-offs and comparing costs and benefits, then you respond to incentives. Charlie Munger once said: “Never, ever, think about something else when you should be thinking about the power of incentives.”

Incentives are crucial to analyzing how markets work. For example, when the price of an apple rises, people decide to eat fewer apples. At the same time, apple orchards decide to hire more workers and harvest more apples. In other words, a higher price in a market provides an incentive for buyers to consume less and an incentive for sellers to produce more. As we will see, the influence of prices on the behavior of consumers and producers is crucial for how a market economy allocates scarce resources.

Public policymakers should never forget about incentives: Many policies change the costs or benefits that people face and, as a result, alter their behavior. A tax on gasoline, for instance, encourages people to drive smaller, more fuel-efficient cars. That is one reason people drive smaller cars in Europe, where gasoline taxes are high, than in the United States, where gasoline taxes are low. A higher gasoline tax also encourages people to carpool, take public transportation, and live closer to where they work. If the tax were larger, more people would be driving hybrid cars, and if it were large enough, they would switch to electric cars.

Failing to consider how policies and decisions affect incentives often leads to unforeseen consequences.

Biases and Blunders

Nudge: Improving Decisions About Health, Wealth, and Happiness

You would be hard pressed to come across a reading list on behavioral economics that doesn’t mention Nudge: Improving Decisions About Health, Wealth, and Happiness by Richard Thaler and Cass Sunstein.

It is a fascinating look at how we can create environments, or “choice architecture,” to help people make better decisions. But one of the reasons it’s been so influential is that it helps us understand why people make bad decisions in the first place. If we really want to nudge people into making better choices, it’s important to understand why they so often make poor ones.

Let’s take a look at how Thaler and Sunstein explain some of our common mistakes in a chapter aptly called ‘Biases and Blunders.’

Anchoring and Adjustment

Humans have a tendency to put too much emphasis on one piece of information when making decisions. When we overweight one piece of information and make assumptions based on it, we call that an anchor. Say I borrow a 400-page book from a friend and think to myself: the last book I read was about 300 pages and I read it in 5 days, so I’ll let my friend know I’ll have her book back in 7 days. The problem is that I’ve compared only one factor related to my reading and made a commitment without taking into account other factors that could affect the outcome. Is the new book on a topic I will digest at the same rate? Will I have the same amount of time for reading over those 7 days? I have looked at the number of pages, but is the number of words per page similar?

As Thaler and Sunstein explain:

This process is called ‘anchoring and adjustment.’ You start with some anchor, the number you know, and adjust in the direction you think is appropriate. So far, so good. The bias occurs because the adjustments are typically insufficient.
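To see why the adjustment is typically insufficient, here is a rough sketch of the arithmetic behind the book-lending example; the adjustment factors are invented for illustration only.

```python
# Anchor: the last book, 300 pages read in 5 days.
anchor_rate = 300 / 5                      # 60 pages per day
pages_in_new_book = 400
anchored_estimate = pages_in_new_book / anchor_rate   # ~6.7 days, rounded up to "7"

# Hypothetical adjustments the anchored estimate ignores.
denser_topic_factor = 0.75   # assume I digest this topic 25% more slowly
free_time_factor = 0.60      # assume I have 40% less reading time this week

realistic_rate = anchor_rate * denser_topic_factor * free_time_factor
realistic_estimate = pages_in_new_book / realistic_rate

print(f"Anchored estimate:  {anchored_estimate:.1f} days")
print(f"Realistic estimate: {realistic_estimate:.1f} days")  # the adjustment was insufficient
```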

Availability Heuristic

This is the tendency of our minds to overweight information that is recent and readily available. Think about the last time you read about a plane crash: did you start imagining yourself in one? Imagine how much more it would weigh on your mind if you were set to fly the next day.

We assess the likelihood of risks by asking how readily examples come to mind. If people can easily think of relevant examples, they are far more likely to be frightened and concerned than if they cannot.

Accessibility and salience are closely related to availability, and they are important as well. If you have personally experienced a serious earthquake, you’re more likely to believe that an earthquake is likely than if you read about it in a weekly magazine. Thus, vivid and easily imagined causes of death (for example, tornadoes) often receive inflated estimates of probability, and less-vivid causes (for example, asthma attacks) receive low estimates, even if they occur with a far greater frequency (here, by a factor of twenty). Timing counts too: more recent events have a greater impact on our behavior, and on our fears, than earlier ones.

Representativeness Heuristic

Use of the representativeness heuristic can cause serious misperceptions of patterns in everyday life. When events are determined by chance, such as a sequence of coin tosses, people expect the resulting string of heads and tails to be representative of what they think of as random. Unfortunately, people do not have accurate perceptions of what random sequences look like. When they see the outcomes of random processes, they often detect patterns that they think have great meaning but in fact are just due to chance.

It would seem we have issues with randomness. Our brains automatically want to see patterns where none may exist. Try a coin-toss experiment on yourself. Simply flip a coin and keep track of whether it lands heads or tails. At some point you will hit a streak of either heads or tails, and you will notice a sort of cognitive dissonance: you know that a streak is statistically probable at some point, but you can’t help but think the next toss has to break it, because for some reason in your head it’s not right. That unwillingness to accept randomness, our need for a pattern, often clouds our judgement when making decisions.
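If you would rather not flip a real coin a hundred times, a short simulation makes the same point: long streaks are the norm in a genuinely random sequence, not a signal of a meaningful pattern.

```python
import random

def longest_streak(flips):
    """Return the length of the longest run of identical outcomes."""
    longest = current = 1
    for prev, nxt in zip(flips, flips[1:]):
        current = current + 1 if nxt == prev else 1
        longest = max(longest, current)
    return longest

random.seed(42)                                  # fixed seed so the run is repeatable
flips = [random.choice("HT") for _ in range(100)]
print("".join(flips[:30]), "...")
print("Longest streak in 100 fair flips:", longest_streak(flips))  # typically 5 to 8
```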

Unrealistic Optimism

We have touched upon optimism bias in the past. Optimism truly is a double-edged sword. On one hand it is extremely important to be able to look past a bad moment and tell yourself that it will get better. Optimism is one of the great drivers of human progress.

On the other hand, if you never take those rose-coloured glasses off, you will make mistakes and take risks that could have been avoided. When assessing the possible negative outcomes associated with risky behaviour, we often think, “It won’t happen to me.” This is a brain trick: we are often insensitive to the base rate.

Unrealistic optimism is a pervasive feature of human life; it characterizes most people in most social categories. When they overestimate their personal immunity from harm, people may fail to take sensible preventive steps. If people are running risks because of unrealistic optimism, they might be able to benefit from a nudge.

Loss Aversion

When they have to give something up, they are hurt more than they are pleased if they acquire the very same thing.

We are familiar with loss aversion in the context described above, but Thaler and Sunstein take the concept a step further and explain how it plays a role in “default choices.” Loss aversion can make us so fearful of making the wrong decision that we don’t make any decision at all. This explains why so many people settle for default options.

The combination of loss aversion with mindless choosing implies that if an option is designated as the ‘default,’ it will attract a large market share. Default options thus act as powerful nudges. In many contexts defaults have some extra nudging power because consumers may feel, rightly or wrongly, that default options come with an implicit endorsement from the default setter, be it the employer, government, or TV scheduler.

Of course, this is not the only reason default options are so popular. “Anchoring,” which we mentioned above, plays a role here. Our mind anchors immediately to the default option, especially in unfamiliar territory for us.

We also have a tendency toward inertia, given that mental effort is akin to physical effort – thinking hard consumes physical resources. If we don’t know the difference between two 401(k) plans and they both seem similar, why expend the mental effort to switch away from the default investment option? You may not have that thought consciously; it often happens as a “click, whirr.”

State of Arousal

Our preferred definition requires recognizing that people’s state of arousal varies over time. To simplify things we will consider just the two endpoints: hot and cold. When Sally is very hungry and appetizing aromas are emanating from the kitchen, we can say she is in a hot state. When Sally is thinking abstractly on Tuesday about the right number of cashews she should consume before dinner on Saturday, she is in a cold state. We will call something ‘tempting’ if we consume more of it when hot than when cold. None of this means that decisions made in a cold state are always better. For example, sometimes we have to be in a hot state to overcome our fears about trying new things. Sometimes dessert really is delicious, and we do best to go for it. Sometimes it is best to fall in love. But it is clear that when we are in a hot state, we can often get into a lot of trouble.

For most of us, however, self-control issues arise because we underestimate the effect of arousal. This is something the behavioral economist George Loewenstein (1996) calls the ‘hot-cold empathy gap.’ When in a cold state, we do not appreciate how much our desires and our behavior reflect a certain naivete about the effects that context can have on choice.

The concept of arousal is analogous to mood. At the risk of stating the obvious, our mood can play a decisive role in our decision making. We all know it, but how many among us truly use that insight to make better decisions?

This is one reason we advocate decision journals when it comes to meaningful decisions (probably no need to log your cashew calculations); a big part of tracking your decisions is noting your mood when you make them. A zillion contextual clues go into your state of arousal, but taking a quick pause to note which state you’re in as you make a decision can make a difference over time.

Mood is also affected by chemicals. This one may be familiar to you coffee (or tea) addicts out there. Do you recall the last time you felt terrible or uncertain about a decision when you were tired, only to feel confident and spunky about the same topic after a cup of java?

Or, how about alcohol? There’s a reason it’s called a “social lubricant” – our decision making changes when we’ve consumed enough of it.

Lastly, the connection between sleep and mood goes deep. Need we say more?

Peer Pressure

Peer pressure is another tricky nudge that can be either positive or negative. We can be nudged to make better decisions when we think our peer group is doing the same. If we think our neighbors conserve more energy or recycle more, we make a better effort to reduce our consumption and recycle. If we think the people around us are eating better and exercising more, we tend to do the same. Information from peer groups can also help us make better decisions through ‘collaborative filtering’: the choices of our peers help us filter and narrow down our own. If friends who share your views and tastes recommend book X, you may like it as well. (Google, Amazon, and Netflix are built on this principle.)

However, if we are all reading the same book because we constantly see people with it, but none of us actually like it, then we all lose. We run off the mountain with the other lemmings.

Social influences come in two basic categories. The first involves information. If many people do something or think something, their actions and their thoughts convey information about what might be best for you to do or think. The second involves peer pressure. If you care about what other people think about you (perhaps in the mistaken belief that they are paying some attention to what you are doing), then you might go along with the crowd to avoid their wrath or curry their favor.

An important problem here is ‘pluralistic ignorance’ – that is, ignorance, on the part of all or most, about what other people think. We may follow a practice or a tradition not because we like it, or even think it defensible, but merely because we think that most other people like it. Many social practices persist for this reason, and a small shock, or nudge, can dislodge them.

How do we beat social influence? It’s very difficult, and not always desirable: if you are about to enter a building that a lot of people are running away from, there’s a good chance you should run too. But this useful instinct can also lead us astray.

A simple algorithm, when you feel yourself acting on social proof, is to ask yourself: would I still do this if everyone else were not?

***

For more, check out Nudge.

How Analogies Reveal Connections, Spark Innovation, and Sell Our Greatest Ideas

Image source: xkcd.com

John Pollack is a former presidential speechwriter. If anyone knows the power of words to move people to action, shape arguments, and persuade, it is he.

One of the key tools he uses is analogy. In Shortcut: How Analogies Reveal Connections, Spark Innovation, and Sell Our Greatest Ideas, he explores its powerful role in persuasion and creativity.

While they often operate unnoticed, analogies aren’t accidents, they’re arguments—arguments that, like icebergs, conceal most of their mass and power beneath the surface. In arguments, whoever has the best argument wins.

But analogies do more than just persuade others — they also play a role in innovation and decision making.

From the bloody Chicago slaughterhouse that inspired Henry Ford’s first moving assembly line, to the “domino theory” that led America into the Vietnam War, to the “bicycle for the mind” that Steve Jobs envisioned as a Macintosh computer, analogies have played a dynamic role in shaping the world around us.

Despite their importance, many people have only a vague sense of what an analogy actually is.

What is an Analogy?

In broad terms, an analogy is simply a comparison that asserts a parallel—explicit or implicit—between two distinct things, based on the perception of a shared property or relation. In everyday use, analogies actually appear in many forms. Some of these include metaphors, similes, political slogans, legal arguments, marketing taglines, mathematical formulas, biblical parables, logos, TV ads, euphemisms, proverbs, fables and sports clichés.

Because they are so disguised, they play a bigger role than we consciously realize. Not only do analogies effectively make arguments, but they also trigger emotions. And emotions make it hard to make rational decisions.

While we take analogies for granted, the ideas they convey are notably complex.

All day every day, in fact, we make or evaluate one analogy after the other, because some comparisons are the only practical way to sort a flood of incoming data, place it within the context of our experience, and make decisions accordingly.

Remember the powerful metaphor that arguments are war. It shapes a wide variety of expressions like “your claims are indefensible,” “attacking the weak points,” and “you disagree? OK, shoot.”

Or consider the Map and the Territory: analogies give people the map but explain nothing of the territory.

Warren Buffett is one of the best at using analogies to communicate effectively. One of my favorites is his observation that “you never know who’s swimming naked until the tide goes out.” In other words, when times are good everyone looks amazing; when times suck, hidden weaknesses are exposed. The same could be said for analogies:

We never know what assumptions, deceptions, or brilliant insights they might be hiding until we look beneath the surface.

Most people underestimate the importance of a good analogy. As with many things in life, this lack of awareness comes at a cost. Ignorance is expensive.

Evidence suggests that people who tend to overlook or underestimate analogy’s influence often find themselves struggling to make their arguments or achieve their goals. The converse is also true. Those who construct the clearest, most resonant and apt analogies are usually the most successful in reaching the outcomes they seek.

The key to all of this is figuring out why analogies function so effectively and how they work. Once we know that, we should be able to craft better ones.

Don’t Think of an Elephant

Effective, persuasive analogies frame situations and arguments, often so subtly that we don’t even realize there is a frame, let alone one that might not work in our favor. Such conceptual frames, like picture frames, include some ideas, images, and emotions and exclude others. By setting a frame, a person or organization can, for better or worse, exert remarkable influence on the direction of their own thinking and that of others.

He who holds the pen frames the story. The first person to frame the story controls the narrative, and it takes a massive amount of energy to change the story’s direction. Sometimes even the way people come across information shapes it — stories that would be non-events if disclosed proactively become front-page news because someone found out.

In Don’t Think of an Elephant, George Lakoff explores the issue of framing. The book famously begins with the instruction “Don’t think of an elephant.”

What’s the first thing we all do? Think of an elephant, of course. It’s almost impossible not to think of an elephant. When we stop consciously thinking about it, it floats away and we move on to other topics — like the new email that just arrived. But then again it will pop back into consciousness and bring some friends — associated ideas, other exotic animals, or even thoughts of the GOP.

“Every word, like elephant, evokes a frame, which can be an image or other kinds of knowledge,” Lakoff writes. This is why we want to control the frame rather than be controlled by it.

In Shortcut, Pollack tells of Lakoff discussing an analogy that President George W. Bush used in the 2004 State of the Union address, in which he argued the Iraq war was necessary despite international criticism. Before we go on, take Bush’s side here and think about how you would argue this point – how would you defend it?

In the speech, Bush proclaimed that “America will never seek a permission slip to defend the security of our people.”

As Lakoff notes, Bush could have said, “We won’t ask permission.” But he didn’t. Instead he intentionally used the analogy of permission slip and in so doing framed the issue in terms that would “trigger strong, more negative emotional associations that endured in people’s memories of childhood rules and restrictions.”

Commenting on this, Pollack writes:

Through structure mapping, we correlate the role of the United States to that of a young student who must appeal to their teacher for permission to do anything outside the classroom, even going down the hall to use the toilet.

But is seeking diplomatic consensus to avoid or end a war actually analogous to a child asking their teacher for permission to use the toilet? Not at all. Yet once this analogy has been stated (Farnam Street editorial: and tweeted), the debate has been framed. Those who would reject a unilateral, my-way-or-the-highway approach to foreign policy suddenly find themselves battling not just political opposition but people’s deeply ingrained resentment of childhood’s seemingly petty regulations and restrictions. On an even subtler level, the idea of not asking for a permission slip also frames the issue in terms of sidestepping bureaucratic paperwork, and who likes bureaucracy or paperwork.

Deconstructing Analogies

By deconstructing analogies, we can see how they function so effectively. Pollack argues they meet five essential criteria.

  1. Use the highly familiar to explain something less familiar.
  2. Highlight similarities and obscure differences.
  3. Identify useful abstractions.
  4. Tell a coherent story.
  5. Resonate emotionally.

Let’s explore how these criteria work in greater detail, using the example of master thief Bruce Reynolds, who described the Great Train Robbery as his Sistine Chapel.

The Great Train Robbery

In the dark early hours of August 8, 1963, an intrepid gang of robbers hot-wired a six-volt battery to a railroad signal not far from the town of Leighton Buzzard, some forty miles north of London. Shortly, the engineer of an approaching mail train, spotting the red light ahead, slowed his train to a halt and sent one of his crew down the track, on foot, to investigate. Within minutes, the gang overpowered the train’s crew and, in less than twenty minutes, made off with the equivalent of more than $60 million in cash.

Years later, Bruce Reynolds, the mastermind of what quickly became known as the Great Train Robbery, described the spectacular heist as “my Sistine Chapel.”

Use the familiar to explain something less familiar

Reynolds exploits the public’s basic familiarity with the famous chapel in the Vatican City, which after Leonardo da Vinci’s Mona Lisa is perhaps the best-known work of Renaissance art in the world. Millions of people, even those who aren’t art connoisseurs, would likely share the cultural opinion that the paintings in the chapel represent “great art” (as compared to a smaller subset of people who might feel the same way about Jackson Pollock’s drip paintings, or Marcel Duchamp’s upturned urinal).

Highlight similarities and obscure differences

Reynolds’s analogy highlights, through implication, similarities between the heist and the chapel—both took meticulous planning and masterful execution. After all, stopping a train and stealing the equivalent of $60m—and doing it without guns—does require a certain artistry. At the same time, the analogy obscures important differences. By invoking the image of a holy sanctuary, Reynolds triggers a host of associations in the audience’s mind—God, faith, morality, and forgiveness, among others—that camouflage the fact that he’s describing an action few would consider morally commendable, even if the artistry involved in robbing that train was admirable.

Identify useful abstractions

The analogy offers a subtle but useful abstraction: Genius is genius and art is art, no matter what the medium. The logic? If we believe that genius and artistry can transcend genre, we must concede that Reynolds, whose artful, ingenious theft netted millions, is an artist.

Tell a coherent story

The analogy offers a coherent narrative. Calling the Great Train Robbery his Sistine Chapel offers the audience a simple story that, at least on the surface makes sense: Just as Michelangelo was called by God, the pope, and history to create his greatest work, so too was Bruce Reynolds called by destiny to pull off the greatest robbery in history. And if the Sistine Chapel endures as an expression of genius, so too must the Great Train Robbery. Yes, robbing the train was wrong. But the public perceived it as largely a victimless crime, committed by renegades who were nothing if not audacious. And who but the most audacious in history ever create great art? Ergo, according to this narrative, Reynolds is an audacious genius, master of his chosen endeavor, and an artist to be admired in public.

There is an important point here. The narrative need not be accurate. It is the feelings and ideas the analogy evokes that make it powerful. Within the structure of the analogy, the argument rings true. The framing is enough to establish it succinctly and subtly. That’s what makes it so powerful.

Resonate emotionally

The analogy resonates emotionally. To many people, mere mention of the Sistine Chapel brings an image to mind, perhaps the finger of Adam reaching out toward the finger of God, or perhaps just that of a lesser chapel with which they are personally familiar. Generally speaking, chapels are considered beautiful, and beauty is an idea that tends to evoke positive emotions. Such positive emotions, in turn, reinforce the argument that Reynolds is making—that there’s little difference between his work and that of a great artist.

Jumping to Conclusions

Daniel Kahneman explains the two thinking structures that govern the way we think: System 1 and System 2. In his book Thinking, Fast and Slow, he writes, “Jumping to conclusions is efficient if the conclusions are likely to be correct and the costs of an occasional mistake are acceptable, and if the jump saves much time and effort.”

“A good analogy serves as an intellectual springboard that helps us jump to conclusions,” Pollack writes. He continues:

And once we’re in midair, flying through assumptions that reinforce our preconceptions and preferences, we’re well on our way to a phenomenon known as confirmation bias. When we encounter a statement and seek to understand it, we evaluate it by first assuming it is true and exploring the implications that result. We don’t even consider dismissing the statement as untrue unless enough of its implications don’t add up. And consider is the operative word. Studies suggest that most people seek out only information that confirms the beliefs they currently hold and often dismiss any contradictory evidence they encounter.

The ongoing battle between fact and fiction commonly takes place in our subconscious systems. In The Political Brain: The Role of Emotion in Deciding the Fate of the Nation, Drew Westen, an Emory University psychologist, writes: “Our brains have a remarkable capacity to find their way toward convenient truths—even if they are not all true.”

This also helps explain why getting promoted has almost nothing to do with your performance.

Remember Apollo Robbins? He’s a professional pickpocket. While he has unique skills, he succeeds largely through the choreography of people’s attention. “Attention,” he says “is like water. It flows. It’s liquid. You create channels to divert it, and you hope that it flows the right way.”

“Pickpocketing and analogies are in a sense the same,” Pollack concludes, “as the misleading analogy picks a listener’s mental pocket.”

And this is true whether someone else diverts our attention through a resonant but misleading analogy—“Judges are like umpires”—or we simply choose the wrong analogy all by ourselves.

Reasoning by Analogy

We rarely stop to see how much of our reasoning is done by analogy. In a 2005 Harvard Business Review article, Giovanni Gavetti and Jan Rivkin wrote: “Leaders tend to be so immersed in the specifics of strategy that they rarely stop to think how much of their reasoning is done by analogy.” As a result, they miss things. They make connections that don’t exist. They don’t check assumptions. They miss useful insights. By contrast, “Managers who pay attention to their own analogical thinking will make better strategic decisions and fewer mistakes.”

***

Shortcut goes on to explore when to use analogies and how to craft them to maximize persuasion.

The Map is Not the Territory

“(History) offers a ridiculous spectacle of a fragment expounding the whole.”
— Will Durant in Our Oriental Heritage

***

In 1931, in New Orleans, Louisiana, mathematician Alfred Korzybski presented a paper on mathematical semantics. To the non-technical reader, most of the paper reads like an abstruse argument on the relationship of mathematics to human language, and of both to physical reality. Important stuff certainly, but not necessarily immediately useful for the layperson.

However, in his string of arguments on the structure of language, Korzybski introduced and popularized the idea that the map is not the territory. In other words, the description of the thing is not the thing itself. The model is not reality. The abstraction is not the abstracted. This has enormous practical consequences.

To continue reading this post you must be a member.


Why Early Decisions Have the Greatest Impact and Why Growing too Much is a Bad Thing

I never went to engineering school. My undergrad is in computer science, but I’ve always wanted to learn more about engineering.

John Kuprenas and Matthew Frederick have put together a book, 101 Things I Learned in Engineering School, which contains some of the big ideas.

In the author’s note, Kuprenas writes:

(This book) introduces engineering largely through its context, by emphasizing the common sense behind some of its fundamental concepts, the themes intertwined among its many specialities, and the simple abstract principles that can be derived from real-world circumstances. It presents, I believe, some clear glimpses of the forest as well as the trees within it.

Here are three (of the many) things I noted in the book.

***

#8 An object receives a force, experiences stress, and exhibits strain.


Force, stress, and strain are used somewhat interchangeably in the lay world and may even be used with less than ideal rigor by engineers. However, they have different meanings.

A force, sometimes called “load,” exists external to and acts upon a body, causing it to change speed, direction, or shape. Examples of forces include water pressure on a submarine hull, snow loads on a bridge, and wind loads on the sides of a skyscraper.

Stress is the “experience” of a body—its internal resistance to an external force acting on it. Stress is force divided by unit area, and is expressed in units such as pounds per square inch.

Strain is a product of stress. It is the measurable percentage of deformation or change in an object such as a change in length.
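To keep the three terms straight, here is a minimal worked example (the rod’s dimensions and load are made-up figures): force is the external load, stress is force per unit area, and strain is the resulting fractional change in length.

```python
# Hypothetical steel rod under tension; all numbers are made up for illustration.
force = 10_000.0            # newtons applied to the rod (the external load)
area = 0.0005               # cross-sectional area in square metres (5 cm^2)
original_length = 2.0       # metres
stretched_length = 2.0002   # metres after loading

stress = force / area                                             # N/m^2 (pascals)
strain = (stretched_length - original_length) / original_length   # dimensionless ratio

print(f"Stress: {stress / 1e6:.0f} MPa")
print(f"Strain: {strain:.2%} elongation")
```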

#48 Early decisions have the greatest impact.


Decisions made just days or weeks into a project—assumptions of end-user needs, commitments to a schedule, the size and shape of a building footprint, and so on—have the most significant impact on design, feasibility, and cost. As decisions are made later and later in the design process, their influence decreases. Minor cost savings sometimes can be realized through value engineering in the later stages of design, but the biggest cost factors are embedded at the outset in a project’s DNA.

Everyone seems to understand this point on the surface, and yet few people consider the implications. I know a lot of people who build their careers on cleaning up their own messes. That is, they make a poor initial decision and then work extra hours, stressed and panicked, to clean up after it. In the worst organizations, these people are promoted for doing an exceptional job.

Proper management of early decisions produces more free time and lower stress.

#75 A successful system won’t necessarily work at a different scale.


An imaginary team of engineers sought to build a “super-horse” that would be twice as tall as a normal horse. When they created it, they discovered it to be a troubled, inefficient beast. Not only was it two times the height of a normal horse, it was twice as wide and twice as long, resulting in an overall mass eight times greater than normal. But the cross-sectional area of its veins and arteries was only four times that of a normal horse, calling for its heart to work twice as hard. The surface area of its feet was four times that of a normal horse, but each foot had to support twice the weight per unit of surface area compared to a normal horse. Ultimately, the sickly animal had to be put down.
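The square-cube reasoning behind the super-horse is easy to verify: doubling every linear dimension multiplies areas by four and volume (and therefore mass) by eight, as in this small sketch.

```python
scale = 2  # the "super-horse" is twice as tall, twice as wide, and twice as long

mass_factor = scale ** 3                    # volume, and so mass, grows with the cube: 8x
area_factor = scale ** 2                    # cross-sections and surfaces grow with the square: 4x
load_per_area = mass_factor / area_factor   # weight carried per unit of foot area: 2x

print(f"Mass: {mass_factor}x a normal horse")
print(f"Artery cross-section and foot area: {area_factor}x")
print(f"Weight per unit of foot surface: {load_per_area:.0f}x")
```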

This becomes interesting when you think of the ideal size for things and how we, as well-intentioned humans, often make things worse. This has a name. It’s called iatrogenics.

Let us briefly put an organizational lens on this. Inside organizations, resources are scarce. Generally, the more people you have under you, the more influence and authority you have inside the organization. Unless there is a proper culture and incentive system in place, your incentive is to grow, not shrink. In fact, in all the meetings I’ve ever been in with senior management, I can’t recall anyone who ran a division saying, “I have too many resources.” It’s a derivative of Parkinson’s Law — only work isn’t expanding to fill the time available. Instead, work is expanding to fill the number of people available.

Contrast that with Berkshire Hathaway, run by Warren Buffett. In a 2010 letter to shareholders he wrote:

Our flexibility in respect to capital allocation has accounted for much of our progress to date. We have been able to take money we earn from, say, See’s Candies or Business Wire (two of our best-run businesses, but also two offering limited reinvestment opportunities) and use it as part of the stake we needed to buy BNSF.

In the 2014 letter he wrote:

To date, See’s has earned $1.9 billion pre-tax, with its growth having required added investment of only $40 million. See’s has thus been able to distribute huge sums that have helped Berkshire buy other businesses that, in turn, have themselves produced large distributable profits. (Envision rabbits breeding.) Additionally, through watching See’s in action, I gained a business education about the value of powerful brands that opened my eyes to many other profitable investments.

There is an optimal size to See’s. Had they retained the $1.9 billion in earnings they distributed to Berkshire, the CEO and management team might have a claim to bigger paychecks (they’d be managing ~$2 billion in assets instead of $40 million), but the result would have been very sub-optimal.

Our pursuit of growth beyond a certain point often ensures that one of the biggest forces in the world, time, is working against us. “What is missing,” writes Jeff Stibel in BreakPoint, “is that the unit of measure for progress isn’t size, it’s time.”

***

Other books in the series:
101 Things I Learned in Culinary School
101 Things I Learned in Business School
101 Things I Learned in Law School
101 Things I Learned in Film School

Bias from Self-Interest — Self-Deception and Denial to Reduce Pain or Increase Pleasure; Regret Avoidance (Tolstoy Effect)

We can ignore reality, but we cannot ignore the consequences of ignoring reality.

Bias from self-interest affects everything from how we see and filter information to how we avoid pain. It affects our self-preservation instincts and helps us rationalize our choices. In short, it permeates everything.

***

Our Self-Esteem

Our self-esteem can be a very important aspect of personal well-being, adjustment and happiness. It has been reported that people with higher self-esteem are happier with their lives, have fewer interpersonal problems, achieve at a higher and more consistent level and give in less to peer pressure.

The strong motivation to preserve a positive and consistent self-image is more than evident in our lives.

We attribute success to our own abilities and failures to environmental factors, and we continuously rate ourselves as better than average on any subjective measure – ethics, beauty, and the ability to get along with others.

Look around – these positive illusions appear to be the rule rather than the exception in well-adjusted people.

However, sometimes life is harsh on us and gives few if any reasons for self-love.

We get fired, a relationship ends, and we end up making decisions that are not well aligned with our inner selves. And so we come up with ways to repair our damaged self-image.

Under the influence of bias from self-interest we may find ourselves drifting away from the facts and spinning them to the point where they become acceptable. While the tendency is mostly harmless and episodic, there are cases when it grows extreme.

The imperfect and confusing realities of our lives can activate strong responses, which help us preserve ourselves and our fragile self-images. Usually amplified by love, death, or chemical dependency, a strong self-serving bias may leave a person with little capacity to assess the situation objectively.

In his speech The Psychology of Human Misjudgment, Charlie Munger reflects on the extreme tendencies that serious criminals display in Tolstoy’s novels and beyond. Their defense mechanisms can be divided into two distinct types – they either deny committing the crime at all, or they think the crime is justifiable in light of their hardships.

Munger calls these two cases the Tolstoy effect.

Avoiding Reality by Denying It

Denial occurs when we encounter a serious thought about reality but decide to ignore it.

Imagine one day you notice a strange, dark spot on your skin. You feel a sudden sense of anxiety, but soon go on with your day and forget about it. Weeks later, it has not gone away and has slowly become darker and you eventually decide to take action and visit the doctor.

In such cases, small doses of denial might serve us well. We have time to absorb the information slowly and figure out the next steps, in case our darkest fears come true. However, once denial becomes a prolonged way of coping with troubling matters, causing our problems to amplify, we are bound to suffer the consequences.

The consequences vary. The mildest is a simple inability to move on with our lives.

Charlie Munger was startled to see a case of persistent denial in a family friend:

This first really hit me between the eyes when a friend of our family had a super-athlete, super-student son who flew off a carrier in the north Atlantic and never came back, and his mother, who was a very sane woman, just never believed that he was dead.

The case made him realize that denial is often amplified by intense feelings surrounding love and death. We deny in order to avoid pain.

While denial of the death of someone close is usually harmless and understandable, it can become a significant problem when we deny an issue that is detrimental to ourselves and others.

Good examples of such issues are physical dependencies, such as alcoholism or drug addiction.

Munger advises staying away from any opportunity to slip into an addiction, since the psychological effects are the most damaging. The reality distortion that happens in the minds of drug addicts leads them to believe that they have remained in respectable condition, with reasonable prospects, even as their condition keeps deteriorating.

Rationalizing Our Choices

A less severe case of distortion, but no less foolish, is our tendency to rationalize the choices we have made.

Most of us have a positive concept of ourselves and we believe ourselves to be competent, moral and smart.

We can go to great lengths to preserve this self-image. No doubt we have all engaged in behaviors that are less than consistent with our inner self-image and then used phrases such as “not telling the truth is not lying,” “I didn’t have the time,” and “others are even worse” to justify our less-than-ideal actions.

This tendency can be explained in part by the engine that drives self-justification: cognitive dissonance. It is the state of tension that occurs whenever we hold two opposing cognitions in our heads, such as “smoking is bad” and “I smoke two packs a day.”

Dissonance bothers us under any circumstances, but it becomes particularly unbearable when our self-concept is threatened by it. After all, we spend our lives trying to be consistent and to lead meaningful lives. This drive “to save face” is so powerful that it often overrules and contradicts the pure effects of rewards and punishments as assumed by economic theory or observed in simple animal behavioral research.

The most obvious way to quiet the dissonance is to quit. However, a smoker who has tried to quit and failed can instead quiet the other belief – namely, that smoking is not all that bad. This is the simple, failure-free option: it allows her to feel good about herself and requires hardly any effort. And having suspended our moral compass once, having found rationales for bad but fixable choices, we give ourselves permission to repeat them in the future, and the vicious cycle continues.

The Vicious Cycle of Self-Justification


Carol Tavris and Elliot Aronson in their book Mistakes Were Made (But Not by Me): Why We Justify Foolish Beliefs, Bad Decisions, and Hurtful Acts explain the vicious cycle of choices with an analogy of a pyramid.

Consider the case of two reasonably honest students at the beginning of the term. They face the temptation to cheat on an important test. One of them gives in and the other does not. How do you think they will feel about cheating a week later?

Most likely, their initially torn opinions will have polarized in light of their first choices. Now take this effect and amplify it over the term. By the time they are through with the term, two things will have happened:
1) They will be very far from each other in their beliefs
2) They will be convinced that they have always felt strongly about the issue and their side of the argument

Just like those students, we are often at the top of the choice pyramid, facing a decision whose consequences are morally ambiguous. This first choice then starts a process of entrapment: action – justification – further action, which increases the intensity of our commitment.


Over time our choices reinforce themselves, and toward the bottom of the pyramid we find ourselves rolling toward increasingly extreme views.

Consider the famous Stanley Milgram experiment, in which two-thirds of the 3,000 subjects administered a life-threatening level of electric shock to another person. While this study is often used to illustrate our obedience to authority, it also demonstrates the effects of self-justification.

Simply imagine someone asking you, as a favor and for the sake of science, to inflict 500V of potentially deadly and incredibly painful shock on another person. Chances are most of us would refuse under any circumstances.

Now suppose the researcher tells you he is interested in the effects of punishment on learning and that you will only have to inflict hardly noticeable electric impulses on another person. You are even encouraged to try the lower level of 10V yourself to feel that the pain is hardly noticeable.

Once you have gone along, the experimenter asks you to increase the shock to 20V, which seems like a small increase, so you agree without thinking much. Then the cascade continues – if you gave a 20V shock, what is the harm in giving 30V? Suddenly you find yourself unable to draw the line, so you simply go along with the instructions.

When people are asked in advance whether they would administer shocks above 450V, nearly nobody believes they would. Yet when facing the choice under these escalating circumstances, two-thirds of them did.

The implications here are powerful – if we don’t actively draw the line ourselves, our habits and circumstances will decide for us.

Making Smarter Choices

We will all do dumb things. We can’t help it. We are wired that way. However, we are not doomed to live in denial or keep striving to justify our actions. We always have the choice to correct our tendencies, once we recognize them.

A better understanding of our minds serves as the first step towards breaking the self-justification habit. It takes time, self-reflection and willingness to become more mindful about our behavior and reasons for our behavior, but it is well worth the effort.

The authors of Mistakes Were Made (But Not by Me) give the example of the conservative William Safire, who wrote a column criticizing the then First Lady (and current presidential candidate) Hillary Clinton’s efforts to conceal the identity of her health care task force. A few years later Dick Cheney, a Republican (conservative) whom Safire admired, made a move similar to Clinton’s by insisting on keeping his energy task force secret.

An alarm bell rang in Safire’s head, and he admits that the temptation to rationalize the situation and apply a double standard was enormous. However, he recognized the dissonance at work and ended up writing a similarly critical column about Cheney.

We know that Safire’s ability to spot his own dissonance and do the fair thing is rare. People will bend over backward to reduce dissonance in a way that is favorable to them and their team. Resisting that urge is not easy to do, but it is much better than letting the natural psychological tendencies cripple the integrity of our behaviors. There are ways to make fairness easier.

Making Things Easier

On the personal level, Charlie Munger suggests we face two simple facts. First, fixable but unfixed bad performance is bad character and tends to create more of itself and cause more damage — a sort of Gresham’s Law. Second, in demanding places like athletic teams, excuses and bad behavior will not get us far.

On the institutional level, Munger advises building a fair, meritocratic, demanding culture, plus personnel-handling methods that build up morale. His second piece of advice is to sever the worst offenders, when possible.

Munger expands on the second point by noting that it is, of course, not possible to let go of our children, so we must instead try to fix them to the best of our ability. He gives a real-life example of a child who had the habit of taking candy from the stock of his father’s employer with the excuse that he intended to replace it later. The father said words that never left the child:

“Son, it would be better for you to simply take all you want and call yourself a thief every time you do it.”

It turns out the child in this example was the dean of the University of Southern California Business School, where Munger delivered the speech.

If we are effective, the lessons we teach our children will serve them well throughout their lives.

***

There is so much more to touch on with bias from self-interest, including its relation to hierarchy, how it distorts information, how it feeds our desire for self-preservation and scarcity, how it impacts group preservation, its relationship to territory, and so on.

Bias From Self-Interest is part of the Farnam Street latticework of mental models.