Tag: Heuristics

Choosing your Choice Architect(ure)

“Nothing will ever be attempted
if all possible objections must first be overcome.”

— Samuel Johnson

***

In their book Nudge, Richard Thaler and Cass Sunstein coin the terms ‘Choice Architecture’ and ‘Choice Architect’. For them, if you have the ability to influence the choices other people make, you are a choice architect.

Considering the number of interactions we have every day, it would be easy to argue that we are all Choice Architects at some point. But the inverse is also true: we are all, at some point, wandering around someone else’s Choice Architecture.

Let’s take a look at a few of the principles of good choice architecture, so we can get a better idea of when someone is trying to nudge us.

We can then weigh this information when making decisions.

Defaults

Thaler and Sunstein start with a discussion on “defaults” that are commonly offered to us:

For reasons we have discussed, many people will take whatever option requires the least effort, or the path of least resistance. Recall the discussion of inertia, status quo bias, and the ‘yeah, whatever’ heuristic. All these forces imply that if, for a given choice, there is a default option — an option that will obtain if the chooser does nothing — then we can expect a large number of people to end up with that option, whether or not it is good for them. And as we have also stressed, these behavioral tendencies toward doing nothing will be reinforced if the default option comes with some implicit or explicit suggestion that it represents the normal or even the recommended course of action.

When making decisions, people often take the option that requires the least effort, the path of least resistance. This makes sense: it’s not just laziness; we only have so many hours in a day. Unless you feel particularly strongly about a choice, if putting little to no effort toward it moves you forward (or at least doesn’t noticeably kick you backwards), that is what you are likely to do. Loss aversion plays a role as well: if the consequences of making a poor choice feel high, we may simply decide to do nothing.

Inertia is another reason: If the ship is currently sailing forward, it can often take a lot of time and effort just to slightly change course.

You have likely seen many examples of inertia at play in your work environment, and this isn’t necessarily a bad thing.

Sometimes we need that ship to just steadily move forward. The important bit is to realize when this is factoring into your decisions, or more specifically, when this knowledge is being used to nudge you into making specific choices.

Let’s think about some of your monthly recurring bills. While you might not be reading that magazine or going to the gym, you’re still paying for the ability to use that good or service. If you weren’t being auto-renewed monthly, what is the chance that you would put the effort into renewing that subscription or membership? Much lower, right? Publishers and gym owners know this, and they know you don't want to go through the hassle of cancelling either, so they make that difficult, too. (They understand well our tendency to want to travel the path of least resistance and avoid conflict.)

This is also where they will imply that the default option is the recommended course of action. It sounds like this:

“We’re sorry to hear you no longer want the magazine, Mr. Smith. You know, more than half of the Fortune 500 companies have a monthly subscription to magazine X, but we understand if it’s not something you’d like to do at the moment.”

or

“Mr. Smith we are sorry to hear that you want to cancel your membership at GymX. We understand if you can’t make your health a priority at this point but we’d love to see you back sometime soon. We see this all the time, these days everyone is so busy. But I’m happy to say we are noticing a shift where people are starting to make time for themselves, especially in your demographic…”

(Just cancel them. You’ll feel better. We promise.)

The Structure of Complex Choices

We live in a world of reviews. Product reviews, corporate reviews, movie reviews… When was the last time you bought a phone or a car before checking the reviews? When was the last time that you hired an employee without checking out their references? 

Thaler and Sunstein call this Collaborative Filtering and explain it as follows:

You use the judgements of other people who share your tastes to filter through the vast number of books or movies available in order to increase the likelihood of picking one you like. Collaborative filtering is an effort to solve a problem of choice architecture. If you know what people like you tend to like, you might well be comfortable in selecting products you don’t know, because people like you tend to like them. For many of us, collaborative filtering is making difficult choices easier.

While collaborative filtering does a great job of making difficult choices easier, we have to remember that companies know we use this tool and will try to manipulate it. We just have to look at the information critically, compare multiple sources, and take some time to review the reviewers.
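To make the mechanism concrete, here is a minimal sketch of user-based collaborative filtering. Everything in it is invented for illustration (the names, titles, and ratings); real recommenders are far more elaborate, but the core idea is just this: weight other people's ratings by how similar their tastes are to yours.

```python
# A toy user-based collaborative filter: recommend what similar people liked.
# All names, titles, and ratings are invented for illustration.
from math import sqrt

ratings = {
    "alice": {"Dune": 5, "Nudge": 4, "Sapiens": 1},
    "bob":   {"Dune": 4, "Nudge": 5, "Gatsby": 4},
    "carol": {"Dune": 1, "Sapiens": 5, "Gatsby": 2},
    "you":   {"Dune": 5, "Nudge": 5},
}

def similarity(a, b):
    """Cosine similarity over the items two people have both rated."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[k] * b[k] for k in shared)
    na = sqrt(sum(a[k] ** 2 for k in shared))
    nb = sqrt(sum(b[k] ** 2 for k in shared))
    return dot / (na * nb)

def recommend(me, everyone):
    """Rank items 'me' hasn't rated, weighted by how similar each rater is."""
    scores = {}
    for person, their_ratings in everyone.items():
        if person == me:
            continue
        w = similarity(everyone[me], their_ratings)
        for item, r in their_ratings.items():
            if item not in everyone[me]:
                scores[item] = scores.get(item, 0.0) + w * r
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("you", ratings))  # unread titles, ranked by people like you
```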

These techniques are useful for decisions of a certain scale and complexity: when the alternatives are understood and few enough in number. Once the options multiply beyond that, we need additional tools to make the right decision.

One strategy to use is what Amos Tversky (1972) called ‘elimination by aspects.’ Someone using this strategy first decides what aspect is most important (say, commuting distance), establishes a cutoff level (say, no more than a thirty-minute commute), then eliminates all the alternatives that do not come up to this standard. The process is repeated, attribute by attribute (no more than $1,500 per month; at least two bedrooms; dogs permitted), until either a choice is made or the set is narrowed down enough to switch over to a compensatory evaluation of the ‘finalists.’

This is a very useful tool if you have a good idea of which attributes you value most.
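The strategy is mechanical enough to write down. Below is a minimal sketch of elimination by aspects using the apartment-hunting cutoffs from the quote; the listings themselves are invented.

```python
# Elimination by aspects: apply cutoffs in order of importance,
# dropping every option that fails, until few enough remain to compare.
# The apartment data is invented for illustration.
apartments = [
    {"name": "A", "commute_min": 25, "rent": 1400, "bedrooms": 2, "dogs_ok": True},
    {"name": "B", "commute_min": 40, "rent": 1200, "bedrooms": 2, "dogs_ok": True},
    {"name": "C", "commute_min": 20, "rent": 1600, "bedrooms": 1, "dogs_ok": False},
    {"name": "D", "commute_min": 28, "rent": 1450, "bedrooms": 3, "dogs_ok": True},
]

aspects = [  # most important first, each with a pass/fail cutoff
    lambda apt: apt["commute_min"] <= 30,  # no more than a thirty-minute commute
    lambda apt: apt["rent"] <= 1500,       # no more than $1,500 per month
    lambda apt: apt["bedrooms"] >= 2,      # at least two bedrooms
    lambda apt: apt["dogs_ok"],            # dogs permitted
]

candidates = apartments
for passes in aspects:
    candidates = [apt for apt in candidates if passes(apt)]
    if len(candidates) <= 1:
        break  # a single winner (or nothing) ends the elimination early

print([apt["name"] for apt in candidates])  # the 'finalists' for closer comparison
```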

When using these techniques, we have to be mindful that the companies trying to sell us goods have spent a lot of time and money figuring out which attributes matter to us as well.

For example, if you shop for an SUV you will notice that they all seem to share a common set of features now (engine options, towing options, seating options, storage options). Manufacturers are trying to nudge you not to eliminate them from your list. This forces you to do extra research or, better yet for them, to walk into dealerships, where they will try to inflate the importance of those attributes (which they do best).

They also give things new names to differentiate themselves and get onto your list. What do you mean our competitors don’t have FLEXfuel?

Incentives

Incentives are so ubiquitous in our lives that it’s very easy to overlook them. Unfortunately, overlooking them can lead us to make poor decisions.

Thaler and Sunstein believe this is tied to how salient the incentive is.

The most important modification that must be made to a standard analysis of incentives is salience. Do the choosers actually notice the incentives they face? In free markets, the answer is usually yes, but in important cases the answer is no.

Consider the example of members of an urban family deciding whether to buy a car. Suppose their choices are to take taxis and public transportation or to spend ten thousand dollars to buy a used car, which they can park on the street in front of their home. The only salient costs of owning this car will be the weekly stops at the gas station, occasional repair bills, and a yearly insurance bill. The opportunity cost of the ten thousand dollars is likely to be neglected. (In other words, once they purchase the car, they tend to forget about the ten thousand dollars and stop treating it as money that could have been spent on something else.) In contrast, every time the family uses a taxi the cost will be in their face, with the meter clicking every few blocks. So behavioral analysis of the incentives of car ownership will predict that people will underweight the opportunity costs of car ownership, and possibly other less salient aspects such as depreciation, and may overweight the very salient costs of using a taxi.

The problems here are relatable and easily solved: if the family above had written down all the numbers for the taxi, public transportation, and car-ownership options, it would have been much harder for them to underweight the less salient costs of any choice. (At least if cost is the attribute they value most.)
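A sketch of what "writing the numbers down" might look like; every figure below is invented, and the only point is that listing the costs side by side makes the quiet ones as visible as the taxi meter.

```python
# Back-of-the-envelope annual cost comparison (all figures invented).
car_purchase = 10_000        # up-front price, spread over the years of ownership
years_of_ownership = 5
gas_per_week = 40
repairs_per_year = 600
insurance_per_year = 1_200

taxi_per_ride = 15
rides_per_week = 12

car_annual = (car_purchase / years_of_ownership
              + gas_per_week * 52
              + repairs_per_year
              + insurance_per_year)
taxi_annual = taxi_per_ride * rides_per_week * 52

print(f"car:  ${car_annual:,.0f}/year")   # the $10,000 stops being invisible
print(f"taxi: ${taxi_annual:,.0f}/year")  # the meter is no longer the whole story
```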

***

This isn’t an exhaustive list of all the daily nudges we face, but it’s a good start, and some important, transferable themes emerge.

  • Realize when you are wandering around someone’s choice architecture.
  • Do your homework.
  • Develop strategies to help you make decisions when you are being nudged.

 

Still Interested? Buy, and most importantly read, the whole book. Also, check out our other post on some of the Biases and Blunders covered in Nudge.

How Analogies Reveal Connections, Spark Innovation, and Sell Our Greatest Ideas

Image source: xkcd.com

 

John Pollack is a former presidential speechwriter. If anyone knows the power of words to move people to action, shape arguments, and persuade, it is he.

In Shortcut: How Analogies Reveal Connections, Spark Innovation, and Sell Our Greatest Ideas, he explores the powerful role analogy plays in persuasion and creativity.

While they often operate unnoticed, analogies aren’t accidents; they’re arguments—arguments that, like icebergs, conceal most of their mass and power beneath the surface. And whoever frames the argument best usually wins.

But analogies do more than just persuade others — they also play a role in innovation and decision making.

From the bloody Chicago slaughterhouse that inspired Henry Ford’s first moving assembly line, to the “domino theory” that led America into the Vietnam War, to the “bicycle for the mind” that Steve Jobs envisioned as a Macintosh computer, analogies have played a dynamic role in shaping the world around us.

Despite their importance, many people have only a vague sense of what an analogy actually is.

What is an Analogy?

In broad terms, an analogy is simply a comparison that asserts a parallel—explicit or implicit—between two distinct things, based on the perception of a shared property or relation. In everyday use, analogies appear in many forms: metaphors, similes, political slogans, legal arguments, marketing taglines, mathematical formulas, biblical parables, logos, TV ads, euphemisms, proverbs, fables, and sports clichés.

Because they are so well disguised, they play a bigger role than we consciously realize. Not only do analogies make arguments effectively, they also trigger emotions. And emotions make it hard to make rational decisions.

While we take analogies for granted, the ideas they convey are notably complex.

All day every day, in fact, we make or evaluate one analogy after the other, because such comparisons are the only practical way to sort a flood of incoming data, place it within the context of our experience, and make decisions accordingly.

Remember the powerful metaphor — that arguments are war. It shapes a wide variety of expressions: “your claims are indefensible,” “attacking the weak points,” and “You disagree? OK, shoot.”

Or consider the map and the territory — analogies give people the map but explain nothing of the territory.

Warren Buffett is one of the best at using analogies to communicate effectively. One of my favorites is his observation that “You never know who’s swimming naked until the tide goes out.” In other words, when times are good everyone looks amazing; when times suck, hidden weaknesses are exposed. The same could be said for analogies:

We never know what assumptions, deceptions, or brilliant insights they might be hiding until we look beneath the surface.

Most people underestimate the importance of a good analogy. As with many things in life, this lack of awareness comes at a cost. Ignorance is expensive.

Evidence suggests that people who tend to overlook or underestimate analogy’s influence often find themselves struggling to make their arguments or achieve their goals. The converse is also true. Those who construct the clearest, most resonant and apt analogies are usually the most successful in reaching the outcomes they seek.

The key to all of this is figuring out why analogies function so effectively and how they work. Once we know that, we should be able to craft better ones.

Don’t Think of an Elephant

Effective, persuasive analogies frame situations and arguments, often so subtly that we don’t even realize there is a frame, let alone one that might not work in our favor. Such conceptual frames, like picture frames, include some ideas, images, and emotions and exclude others. By setting a frame, a person or organization can, for better or worse, exert remarkable influence on the direction of their own thinking and that of others.

He who holds the pen frames the story. The first person to frame the story controls the narrative, and it takes a massive amount of energy to change its direction. Sometimes even the way people come across information shapes it: stories that would be non-events if disclosed proactively become front-page stories because someone found out.

In Don’t Think of an Elephant, George Lakoff explores the issue of framing. The book famously begins with the instruction “Don’t think of an elephant.”

What’s the first thing we all do? Think of an elephant, of course. It’s almost impossible not to. When we stop consciously thinking about it, it floats away and we move on to other topics — like the new email that just arrived. But then it pops back into consciousness and brings some friends: associated ideas, other exotic animals, or even thoughts of the GOP.

“Every word, like elephant, evokes a frame, which can be an image or other kinds of knowledge,” Lakoff writes. This is why we want to control the frame rather than be controlled by it.

In Shortcut, Pollack recounts Lakoff’s analysis of an analogy President George W. Bush used in the 2004 State of the Union address, in which he argued the Iraq war was necessary despite international criticism. Before we go on, take Bush’s side here and think about how you would argue this point — how would you defend it?

In the speech, Bush proclaimed that “America will never seek a permission slip to defend the security of our people.”

As Lakoff notes, Bush could have said, “We won’t ask permission.” But he didn’t. Instead, he deliberately used the analogy of a permission slip and in so doing framed the issue in terms that would “trigger strong, more negative emotional associations that endured in people’s memories of childhood rules and restrictions.”

Commenting on this, Pollack writes:

Through structure mapping, we correlate the role of the United States to that of a young student who must appeal to their teacher for permission to do anything outside the classroom, even going down the hall to use the toilet.

But is seeking diplomatic consensus to avoid or end a war actually analogous to a child asking their teacher for permission to use the toilet? Not at all. Yet once this analogy has been stated (Farnam Street editorial: and tweeted), the debate has been framed. Those who would reject a unilateral, my-way-or-the-highway approach to foreign policy suddenly find themselves battling not just political opposition but people’s deeply ingrained resentment of childhood’s seemingly petty regulations and restrictions. On an even subtler level, the idea of not asking for a permission slip also frames the issue in terms of sidestepping bureaucratic paperwork, and who likes bureaucracy or paperwork.

Deconstructing Analogies

By deconstructing analogies, we can see how they function so effectively. Pollack argues they meet five essential criteria.

  1. Use the highly familiar to explain something less familiar.
  2. Highlight similarities and obscure differences.
  3. Identify useful abstractions.
  4. Tell a coherent story.
  5. Resonate emotionally.

Let’s explore how these work in greater detail, using the example of master thief Bruce Reynolds, who described the Great Train Robbery as his Sistine Chapel.

The Great Train Robbery

In the dark early hours of August 8, 1963, an intrepid gang of robbers hot-wired a six-volt battery to a railroad signal not far from the town of Leighton Buzzard, some forty miles north of London. Shortly, the engineer of an approaching mail train, spotting the red light ahead, slowed his train to a halt and sent one of his crew down the track, on foot, to investigate. Within minutes, the gang overpowered the train’s crew and, in less than twenty minutes, made off with the equivalent of more than $60 million in cash.

Years later, Bruce Reynolds, the mastermind of what quickly became known as the Great Train Robbery, described the spectacular heist as “my Sistine Chapel.”

Use the familiar to explain something less familiar

Reynolds exploits the public’s basic familiarity with the famous chapel in Vatican City, which, after Leonardo da Vinci’s Mona Lisa, is perhaps the best-known work of Renaissance art in the world. Millions of people, even those who aren’t art connoisseurs, would likely share the cultural opinion that the paintings in the chapel represent “great art” (as compared to a smaller subset of people who might feel the same way about Jackson Pollock’s drip paintings, or Marcel Duchamp’s upturned urinal).

Highlight similarities and obscure differences

Reynolds’s analogy highlights, through implication, similarities between the heist and the chapel—both took meticulous planning and masterful execution. After all, stopping a train and stealing the equivalent of $60 million—and doing it without guns—does require a certain artistry. At the same time, the analogy obscures important differences. By invoking the image of a holy sanctuary, Reynolds triggers a host of associations in the audience’s mind—God, faith, morality, and forgiveness, among others—that camouflage the fact that he’s describing an action few would consider morally commendable, even if the artistry involved in robbing that train was admirable.

Identify useful abstractions

The analogy offers a subtle but useful abstraction: Genius is genius and art is art, no matter what the medium. The logic? If we believe that genius and artistry can transcend genre, we must concede that Reynolds, whose artful, ingenious theft netted millions, is an artist.

Tell a coherent story

The analogy offers a coherent narrative. Calling the Great Train Robbery his Sistine Chapel offers the audience a simple story that, at least on the surface, makes sense: Just as Michelangelo was called by God, the pope, and history to create his greatest work, so too was Bruce Reynolds called by destiny to pull off the greatest robbery in history. And if the Sistine Chapel endures as an expression of genius, so too must the Great Train Robbery. Yes, robbing the train was wrong. But the public perceived it as largely a victimless crime, committed by renegades who were nothing if not audacious. And who but the most audacious in history ever create great art? Ergo, according to this narrative, Reynolds is an audacious genius, master of his chosen endeavor, and an artist to be admired in public.

There is an important point here: the narrative need not be accurate. It is the feelings and ideas the analogy evokes that make it powerful. Within the structure of the analogy, the argument rings true. The framing is enough to establish it succinctly and subtly. That’s what makes it so powerful.

Resonate emotionally

The analogy resonates emotionally. To many people, mere mention of the Sistine Chapel brings an image to mind, perhaps the finger of Adam reaching out toward the finger of God, or perhaps just that of a lesser chapel with which they are personally familiar. Generally speaking, chapels are considered beautiful, and beauty is an idea that tends to evoke positive emotions. Such positive emotions, in turn, reinforce the argument that Reynolds is making—that there’s little difference between his work and that of a great artist.

Jumping to Conclusions

Daniel Kahneman explains the two systems that govern the way we think: System 1 and System 2. In his book Thinking, Fast and Slow, he writes, “Jumping to conclusions is efficient if the conclusions are likely to be correct and the costs of an occasional mistake are acceptable, and if the jump saves much time and effort.”

“A good analogy serves as an intellectual springboard that helps us jump to conclusions,” Pollack writes. He continues:

And once we’re in midair, flying through assumptions that reinforce our preconceptions and preferences, we’re well on our way to a phenomenon known as confirmation bias. When we encounter a statement and seek to understand it, we evaluate it by first assuming it is true and exploring the implications that result. We don’t even consider dismissing the statement as untrue unless enough of its implications don’t add up. And consider is the operative word. Studies suggest that most people seek out only information that confirms the beliefs they currently hold and often dismiss any contradictory evidence they encounter.

The ongoing battle between fact and fiction commonly takes place in our subconscious systems. In The Political Brain: The Role of Emotion in Deciding the Fate of the Nation, Drew Westen, an Emory University psychologist, writes: “Our brains have a remarkable capacity to find their way toward convenient truths—even if they are not all true.”

This also helps explain why getting promoted has almost nothing to do with your performance.

Remember Apollo Robbins? He’s a professional pickpocket. While he has unique skills, he succeeds largely through the choreography of people’s attention. “Attention,” he says, “is like water. It flows. It’s liquid. You create channels to divert it, and you hope that it flows the right way.”

“Pickpocketing and analogies are in a sense the same,” Pollack concludes, “as the misleading analogy picks a listener’s mental pocket.”

And this is true whether someone else diverts our attention through a resonant but misleading analogy—“Judges are like umpires”—or we simply choose the wrong analogy all by ourselves.

Reasoning by Analogy

We rarely stop to see how much of our reasoning is done by analogy. In a 2005 article in the Harvard Business Review, Giovanni Gavetti and Jan Rivkin wrote: “Leaders tend to be so immersed in the specifics of strategy that they rarely stop to think how much of their reasoning is done by analogy.” As a result, they miss things. They make connections that don’t exist. They fail to check assumptions. They overlook useful insights. By contrast, “Managers who pay attention to their own analogical thinking will make better strategic decisions and fewer mistakes.”

***

Shortcut goes on to explore when to use analogies and how to craft them to maximize persuasion.

Miracles Happen — The Simple Heuristic That Saved 150 Lives

"In an uncertain world, statistical thinking and risk communication alone are not sufficient. Good rules of thumb are essential for good decisions."
“In an uncertain world, statistical thinking and risk communication alone are not sufficient. Good rules of thumb are essential for good decisions.”

Three minutes after taking off from LaGuardia Airport in New York City, US Airways Flight 1549 ran into a flock of Canada geese. At 2,800 feet, passengers and crew heard loud bangs as the geese collided with the engines, rendering both inoperable.

Gerd Gigerenzer picks up the story in his book Risk Savvy: How to Make Good Decisions:

When it dawned on the passengers that they were gliding toward the ground, it grew quiet on the plane. No panic, only silent prayer. Captain Chesley Sullenberger called air traffic control: “Hit birds. We’ve lost thrust in both engines. We’re turning back towards LaGuardia.”

But landing short of the airport would have catastrophic consequences, for passengers, crew, and the people living below. The captain and the copilot had to make a good judgment. Could the plane actually make it to LaGuardia, or would they have to try something more risky, such as a water landing in the Hudson River? One might expect the pilots to have measured speed, wind, altitude, and distance and fed this information into a calculator. Instead, they simply used a rule of thumb:

Fix your gaze on the tower: If the tower rises in your windshield, you won’t make it.

No estimation of the trajectory of the gliding plane is necessary. No time is wasted. And the rule is immune to calculation errors. In the words of copilot Jeffrey Skiles: “It’s not so much a mathematical calculation as visual, in that when you are flying in an airplane, things that— a point that you can’t reach will actually rise in your windshield. A point that you are going to overfly will descend in your windshield.” This time the point they were trying to reach did not descend but rose. They went for the Hudson.

In the cabin, the passengers were not aware of what was going on in the cockpit. All they heard was: “This is the captain: Brace for impact.” Flight attendants shouted: “Heads down! Stay down!” Passengers and crew later recalled that they were trying to grasp what death would be like, and the anguish of their kids, husbands, and wives. Then the impact happened, and the plane stopped. When passengers opened the emergency doors, sunlight streamed in. Everyone got up and rushed toward the openings. Only one passenger headed to the overhead bin to get her carry-on but was immediately stopped. The wings of the floating but slowly sinking plane were packed with people in life jackets hoping to be rescued. Then they saw the ferry coming. Everyone survived.

All this happened within the three minutes between the geese hitting the plane and the ditching in the river. During that time, the pilots began to run through the dual-engine failure checklist, a three-page list designed to be used at thirty thousand feet, not at three thousand feet: turn the ignition on, reset the flight control computer, and so on. But they could not finish it. Nor did they have time to even start on the ditching checklist. While the evacuation was underway, Skiles remained in the cockpit and went through the evacuation checklist to safeguard against potential fire hazards and other dangers. Sullenberger went back to check on passengers and left the cabin only after making sure that no one was left behind. It was the combination of teamwork, checklists, and smart rules of thumb that made the miracle possible.

***

Say what? They used a heuristic?

Heuristics enable us to make fast, highly (but not perfectly) accurate decisions without spending too much time searching for information. They allow us to focus on only a few pieces of information and ignore the rest.

“Experts,” Gigerenzer writes, “often search for less information than novices do.”

We do the same thing, intuitively, to catch a baseball — the gaze heuristic.

Fix your gaze on an object, and adjust your speed so that the angle of gaze remains constant.

Professionals and amateurs alike rely on this rule.

… If a fly ball comes in high, the player fixates his eyes on the ball, starts running, and adjusts his running speed so that the angle of gaze remains constant. The player does not need to calculate the trajectory of the ball. To select the right parabola, the player’s brain would have to estimate the ball’s initial distance, velocity, and angle, which is not a simple feat. And to make things more complicated, real-life balls do not fly in parabolas. Wind, air resistance, and spin affect their paths. Even the most sophisticated robots or computers today cannot correctly estimate a landing point during the few seconds a ball soars through the air. The gaze heuristic solves this problem by guiding the player toward the landing point, not by calculating it mathematically. That’s why players don’t know exactly where the ball will land, and often run into walls and over the stands in their pursuit.

The gaze heuristic is an example of how the mind can discover simple solutions to very complex problems.
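The same logic can be put in code. Here is a toy simulation of the pilots' windshield version of the rule; the glide ratio, distances, and step size are invented numbers, not flight data. The rule needs no trajectory math: just watch whether the fixed point rises or sinks in your view.

```python
import math

def lands_short(distance_m, altitude_m, glide_ratio=17.0, step_m=500.0):
    """Glide toward a fixed point. Return True if the point 'rises in the
    windshield' (its depression angle shrinks), meaning we will land short."""
    prev_angle = None
    while distance_m > 0 and altitude_m > 0:
        angle = math.atan2(altitude_m, distance_m)  # depression angle to the point
        if prev_angle is not None and angle < prev_angle:
            return True  # point rising in view: unreachable
        prev_angle = angle
        distance_m -= step_m                # fly forward...
        altitude_m -= step_m / glide_ratio  # ...losing height at the glide ratio
    return distance_m > 0  # ran out of altitude before reaching the point

print(lands_short(17_000, 900))  # True: the point rises; you won't make it
print(lands_short(12_000, 900))  # False: the point sinks; you can make it
```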

***

The broader point of Gigerenzer's book is that while rational thinking works well for risks, you need a combination of rational and heuristic thinking to make decisions under uncertainty.

A heuristic to figure out success

Nassim Taleb writes:

A trivial and potent heuristic to figure out success: a) you are absolutely successful if and only if you don't envy anyone; b) quite successful if those you envy you don't know in person; c) miserably unsuccessful if those you envy you encounter or think about daily.

Absolute success is mostly found among ascetic persons.

Still curious? Taleb's new book, Antifragile: Things That Gain from Disorder, comes out this fall. In the meantime, if you haven't read The Black Swan then you're missing out.

3 Things You Should Know About the Availability Heuristic

William James

There are 3 things you should know about the availability heuristic:

  1. We often misjudge the frequency and magnitude of events that have happened recently.
  2. This happens, in part, because of the limitations on memory.
  3. We remember things better when they come in a vivid narrative.

***

There are two biases emanating from the availability heuristic (a.k.a. the availability bias): ease of recall and retrievability.

Because of the availability bias, our perceptions of risk may be in error and we might worry about the wrong risks. This can have disastrous impacts.

Ease of recall suggests that if something is more easily recalled in memory it must occur with a higher probability.

The availability heuristic distorts our understanding of real risks.

When we make decisions we tend to be swayed by what we remember, and what we remember is influenced by many things: beliefs, expectations, emotions, and feelings, as well as frequency of exposure. Media coverage (e.g., Internet, radio, television) makes a big difference. When rare events occur, they become very visible to us because they receive heavy coverage, which means we are more likely to recall them, especially in the immediate aftermath. However, recalling an event and estimating its real probability are two different things. If you’re in a car accident, for example, you are likely to rate the odds of getting into another car accident much higher than base rates would indicate.

Retrievability suggests that we are biased in assessments of frequency in part because of our memory structure limitations and our search mechanisms. It's the way we remember that matters.

The retrievability and ease of recall biases indicate that the availability bias can substantially and unconsciously influence our judgment. We too easily assume that our recollections are representative and true and discount events that are outside of our immediate memory.

***

In Thinking Fast and Slow, Kahneman writes:

People tend to assess the relative importance of issues by the ease with which they are retrieved from memory—and this is largely determined by the extent of coverage in the media.

***

Nobel Prize–winning social scientist and father of artificial intelligence Herbert Simon wrote in Models of My Life:

I soon learned that one wins awards mainly for winning awards: an example of what Bob Merton calls the Matthew Effect. It is akin also to the phenomenon known in politics as “availability,” or name recognition. Once one becomes sufficiently well known, one's name surfaces automatically as soon as an award committee assembles.

* * *

According to Harvard professor Max Bazerman:

Many life decisions are affected by the vividness of information. Although most people recognize that AIDS is a devastating disease, many individuals ignore clear data about how to avoid contracting AIDS. In the fall of 1991, however, sexual behavior in Dallas was dramatically affected by one vivid piece of data that may or may not have been true. In a chilling interview, a Dallas woman calling herself C.J. claimed she had AIDS and was trying to spread the disease out of revenge against the man who had infected her. After this vivid interview made the local news, attendance at Dallas AIDS seminars increased dramatically. Although C.J.’s possible actions were a legitimate cause for concern, it is clear that most of the health risks related to AIDS are not a result of one woman’s actions. There are many more important reasons to be concerned about AIDS. However, C.J.’s vivid report had a more substantial effect on many people’s behavior than the mountains of data available. The availability heuristic describes the inferences we make about event commonness based on the ease with which we can remember instances of that event.

While this example of vividness may seem fairly benign, it is not difficult to see how the availability bias could lead managers to make potentially destructive workplace decisions. The following came from the experience of one of our MBA students: As a purchasing agent, he had to select one of several possible suppliers. He chose the firm whose name was most familiar to him. He later found out that the salience of the name resulted from recent adverse publicity concerning the firm’s extortion of funds from client companies!

Managers conducting performance appraisals often fall victim to the availability heuristic. Working from memory, vivid instances of an employee's behavior (either positive or negative) will be most easily recalled from memory, will appear more numerous than commonplace incidents, and will therefore be weighted more heavily in the performance appraisals. The recency of events is also a factor: Managers give more weight to performance during the three months prior to the evaluation than to the previous nine months of the evaluation period because it is more available in memory.

* * *

The availability bias has numerous implications for investors.

A study by Karlsson, Loewenstein, and Ariely (2008) showed that people are more likely to purchase insurance to protect themselves after a natural disaster they have just experienced than they are to purchase insurance on this type of disaster before it happens.

Bazerman adds:

This pattern may be sensible for some types of risks. After all, the experience of surviving a hurricane may offer solid evidence that your property is more vulnerable to hurricanes than you had thought or that climate change is increasing your vulnerability to hurricanes.

Robyn M. Dawes, in his book Everyday Irrationality, says:

What is a little less obvious is that people can make judgments of the ease with which instances can come to mind without actually recalling specific instances. We know, for example, whether we can recall the presidents of the United States–or rather how well we can recall their names; moreover, we know at which periods of history we are better at recalling them than at which other periods. We can make judgments without actually listing in our minds the names of the specific presidents.

This recall of ease of creating instances is not limited to actual experience, but extends to hypothetical experience as well. For example, subjects are asked to consider how many subcommittees of two people can be formed from a committee of eight, and either the same or other subjects are asked to estimate how many subcommittees of six can be formed from a committee of eight people. It is much easier to think about pairs of people than to think about sets of six people, with the result that the estimate of pairs tends to be much higher than the estimate of subsets of six. In point of logic, however, the number of subsets of two is identical to that of six; the formation of a particular subset of two people automatically involves the formation of a particular subset consisting of the remaining six. Because these unique subsets are paired together, there are the same number of each.

This availability to the imagination also creates a particularly striking irrationality, which can be termed the conjunction fallacy or compound probability fallacy. Often combinations of events or entities are easier to think about than their components, because the combination might make sense whereas the individual component does not. A classic example is that of a hypothetical woman named Linda who is said to have been a social activist majoring in philosophy as a college undergraduate. What is the probability that at age thirty she is a bank teller? Subjects judge the probability as very unlikely. But when asked whether she might be a bank teller active in a feminist movement, subjects judge this combination to be more likely than for her to be a bank teller.
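Dawes's subcommittee point is easy to verify directly, a two-line check (Python 3.8+ for math.comb):

```python
# Picking the two people to leave out is the same act as picking the six
# to keep, so the two counts must be identical.
from math import comb  # Python 3.8+

print(comb(8, 2))  # 28 two-person subcommittees
print(comb(8, 6))  # 28 six-person subcommittees, identical despite intuition
```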

* * *

Retrievability (based on memory structures)

We are better at retrieving words from memory using a word’s initial letter than using a letter in a random position, such as the third (Tversky & Kahneman, 1973).

In 1984, Tversky and Kahneman demonstrated the retrievability bias again when they asked participants in their study to estimate the frequency of seven-letter words that had the letter “n” in the sixth position. Participants estimated such words to be less common than seven-letter words ending in the more memorable “ing.” This response is incorrect: every seven-letter word ending with “ing” also has an “n” in the sixth position, but it’s much easier to recall seven-letter words ending with “ing.” As we demonstrated with Dawes above, this is another example of the conjunction fallacy.
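The subset relation is easy to check in code; the sample words below are arbitrary stand-ins for a real word list.

```python
# Every seven-letter word ending in 'ing' necessarily has an 'n' in the sixth
# position, so the 'ing' category can never be the more numerous one.
def ends_ing(word):
    return len(word) == 7 and word.endswith("ing")

def n_sixth(word):
    return len(word) == 7 and word[5] == "n"

words = ["dancing", "running", "phoning", "content", "casinos", "almonds"]
assert all(n_sixth(w) for w in words if ends_ing(w))  # the subset relation holds
print(sum(map(ends_ing, words)), "end in -ing;",
      sum(map(n_sixth, words)), "have n sixth")
```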

Retail locations are chosen based on search as well, which explains why gas stations and retail stores are often “clumped” together. Consumers learn the location of a product and organize their minds accordingly. While you may not remember the names of all three gas stations on the same corner, your mind tells you that is where to go to find gas. Each station, all else equal, then has a one-in-three shot at your business, which is much better than the odds for stations you don’t visit because their location doesn’t resonate with your mind’s search. To maximize traffic, stores must find locations that consumers associate with a product.

* * *

Exposure Effect

People tend to develop a preference for things because they are familiar with them. This is called the exposure effect. According to Titchener (1910) the exposure effect leads people to experience a “glow or warmth, a sense of ownership, a feeling of intimacy.”

The exposure effect applies only to things that are perceived as neutral to positive. If you are repeatedly exposed to something perceived as a negative stimulus, the exposure may in fact amplify negative feelings. For example, when someone is playing loud music, you tend to have a lot of patience at first; as time goes on, you get increasingly aggravated as your exposure to the stimulus increases.

The more we are exposed to something the easier it is to recall in our minds. The exposure effect influences us in many ways. Think about brands, stocks, songs, companies, and even the old saying “the devil you know.”

* * *

The Von Restorff Effect

“One of these things doesn’t belong” accurately summarizes the Von Restorff Effect (also known as the isolation effect and the novelty effect). In our minds, things that stand out are more likely to be remembered and recalled because we give increased attention to distinctive items in a set.

For example, if I asked you to remember the sequence of characters “RTASDT9RTGS,” I suspect the character most commonly remembered would be the “9,” because it stands out and thus your mind gives it more attention.

The Von Restorff Effect leads us to vivid evidence.

* * *

Vivid Evidence

According to William James in The Principles of Psychology:

An impression may be so exciting emotionally as to almost leave a scar upon the cerebral tissues; and thus originates a pathological delusion. For example: “A woman attacked by robbers takes all the men whom she sees, even her own son, for brigands bent on killing her. Another woman sees her child run over by a horse; no amount of reasoning, not even the sight of the living child, will persuade her that he is not killed.”

M. Taine wrote:

If we compare different sensations, images, or ideas, we find that their aptitudes for revival are not equal. A large number of them are obliterated, and never reappear throughout life; for instance, I drove through Paris a day or two ago, and though I saw plainly some sixty or eighty new faces, I cannot now recall any one of them; some extraordinary circumstance, a fit of delirium, or the excitement of hashish would be necessary to give me a chance at revival. On the other hand, there are sensations with a force of revival which nothing destroys or decreases. Though, as a rule, time weakens and impairs our strongest sensations, these reappear entire and intense, without having lost a particle of their detail, or any degree of their force. M. Brierre de Boismont, having suffered when a child from a disease of the scalp, asserts that “after fifty-five years have elapsed he can still feel his hair pulled out under the treatment of the ‘skull-cap.’” For my own part, after thirty years, I remember feature for feature the appearance of the theater to which I was taken for the first time. From the third row of boxes, the body of the theater appeared to me an immense well, red and flaming, swarming with heads; below, on the right, on a narrow floor, two men and a woman entered, went out, and re-entered, made gestures, and seemed to me like lively dwarfs: to my great surprise one of these dwarfs fell on his knees, kissed the lady’s hand, then hid behind a screen: the other, who was coming in, seemed angry, and raised his arm. I was then seven, I could understand nothing of what was going on; but the well of crimson velvet was so crowded, and bright, that after a quarter of an hour I was, as it were, intoxicated, and fell asleep.

Every one of us may find similar recollections in his memory, and may distinguish them in a common character. The primitive impression has been accompanied by an extraordinary degree of attention, either as being horrible or delightful, or as being new, surprising, and out of proportion to the ordinary run of life; this it is we express by saying that we have been strongly impressed; that we were absorbed, that we could not think of anything else; that our other sensations were effaced; that we were pursued all the next day by the resulting image; that it beset us, that we could not drive it away; that all distractions were feeble beside it. It is by force of this disproportion that impressions of childhood are so persistent; the mind being quite fresh, ordinary objects and events are surprising…

Whatever may be the kind of attention, voluntary or involuntary, it always acts alike; the image of an object or event is capable of revival, and of complete revival, in proportion to the degree of attention with which we have considered the object or event. We put this rule into practice at every moment in ordinary life.

An example from Freeman Dyson:

A striking example of availability bias is the fact that sharks save the lives of swimmers. Careful analysis of deaths in the ocean near San Diego shows that on average, the death of each swimmer killed by a shark saves the lives of ten others. Every time a swimmer is killed, the number of deaths by drowning goes down for a few years and then returns to the normal level. The effect occurs because reports of death by shark attack are remembered more vividly than reports of drownings.

Availability Bias is a Mental Model in the Farnam Street Mental Model Index

Future Babble: Why expert predictions fail and why we believe them anyway

Future Babble has come out to mixed reviews. I think the book would interest anyone seeking wisdom.

Here are some of my notes:

First a little background: predictions fail because the world is too complicated to be predicted with accuracy and we’re wired to avoid uncertainty. So we shouldn’t blindly believe experts. Experts come in two types: foxes and hedgehogs. The fox knows many things, whereas the hedgehog knows one big thing. Foxes beat hedgehogs when it comes to making predictions.

  • What we should ask is why, in a non-linear world, we would think oil prices can be predicted. Practically since the dawn of the oil industry in the nineteenth century, experts have been forecasting the price of oil. They’ve been wrong ever since. And yet this dismal record hasn’t caused us to give up on the enterprise of forecasting oil prices.
  • One of psychology's fundamental insights, wrote psychologist Daniel Gilbert, is that judgements are generally the products of non-conscious systems that operate quickly, on the basis scant evidence, and in a routine manner, and then pass their hurried approximations to consciousness, which slowly and deliberately adjust them. … (one consequence of this is that) Appearance equals reality. In the ancient environment in which our brains evolved, that as a good rule, which is why it became hard-wired into the brain and remains there to this day. (an example of this) as psychologists have shown, people often stereotype “baby-faced” adults as innocent, helpless, and needy.
  • We have a hard time with randomness. If we try, we can understand it intellectually, but as countless experiments have shown, we don't get it intuitively. This is why someone who plunks one coin after another into a slot machine without winning will have a strong and growing sense—the gambler's fallacy—that a jackpot is “due,” even though every result is random and therefore unconnected to what came before. … and people believe that a sequence of random coin tosses that goes “THTHHT” is far more likely than the sequence “THTHTH” even though they are equally likely.
  • People are particularly disinclined to see randomness as the explanation for an outcome when their own actions are involved. Gamblers rolling dice tend to concentrate and throw harder for higher numbers, softer for lower. Psychologists call this the “illusion of control.” … they also found the illusion is stronger when it involves prediction. In a sense, the “illusion of control” should be renamed the “illusion of prediction.”
  • Overconfidence is a universal human trait closely related to an equally widespread phenomenon known as “optimism bias.” Ask smokers about the risk of getting lung cancer from smoking and they'll say it's high. But their risk? Not so high. … The evolutionary advantage of this bias is obvious: It encourages people to take action and makes them more resilient in the face of setbacks.
  • … How could so many experts have been so wrong? … A crucial component of the answer lies in psychology. For all the statistics and reasoning involved, the experts derived their judgments, to one degree or another, from what they felt to be true. And in doing so they were fooled by a common bias. … This tendency to take current trends and project them into the future is the starting point of most attempts to predict. Very often, it’s also the end point. That’s not necessarily a bad thing. After all, tomorrow typically is like today. Current trends do tend to continue. But not always. Change happens. And the further we look into the future, the more opportunity there is for current trends to be modified, bent, or reversed. Predicting the future by projecting the present is like driving with no hands. It works while you are on a long stretch of straight road, but even a gentle curve is trouble, and a sharp turn always ends in a flaming wreck.
  • When people attempt to judge how common something is—or how likely it is to happen in the future—they attempt to think of an example of that thing. If an example is recalled easily, it must be common. If it's harder to recall, it must be less common. … Again, this is not a conscious calculation. The “availability heuristic” is a tool of the unconscious mind.
  • “Deviating too far from consensus leaves one feeling potentially ostracized from the group, with the risk that one may be terminated.” (Robert Shiller) … It’s tempting to think that only ordinary people are vulnerable to conformity, that esteemed experts could not be so easily swayed. Tempting, but wrong. As Shiller demonstrated, “groupthink” is very much a disease that can strike experts. In fact, psychologist Irving Janis coined the term “groupthink” to describe expert behavior. In his 1972 classic, Victims of Groupthink, Janis investigated four high-level disasters: the defense of Pearl Harbor, the Bay of Pigs invasion, and the escalation of the wars in Korea and Vietnam. He demonstrated that conformity among highly educated, skilled, and successful people working in their fields of expertise was a root cause in each case.
  • (On corporate use of scenario planning) … Scenarios are not predictions, emphasizes Peter Schwartz, the guru of scenario planning. “They are tools for testing choices.” The idea is to have a clever person dream up a number of very different futures, usually three or four. … Managers then consider the implications of each, forcing them out of the rut of the status quo and into thinking about what they would do if confronted with real change. The ultimate goal is to make decisions that would stand up well in a wide variety of contexts. No one denies there may be some value in such exercises. But how much value? The consultants who offer scenario-planning services are understandably bullish, but ask them for evidence and they typically point to examples of scenarios that accurately foreshadowed the future. That is silly, frankly. For one thing, it contradicts their claim that scenarios are not predictions; and all the misses would have to be considered, and the misses vastly outnumber the hits. … Consultants also cite the enormous popularity of scenario planning as proof of its enormous value… Lack of evidence aside, there are more disturbing reasons to be wary of scenarios. Remember that what drives the availability heuristic is not how many examples the mind can recall but how easily they are recalled. … And what are scenarios? Vivid, colourful, dramatic stories. Nothing could be easier to remember or recall. And so being exposed to a dramatic scenario about (whatever) … will make the depicted events feel much more likely to happen.
  • (On not having control) At its core, torture is a process of psychological destruction, and that process almost always begins with the torturer explicitly telling the victim he is powerless. “I decide when you can eat and sleep. I decide when you suffer, how you suffer, if it will end. I decide if you live or die.” … Knowing what will happen in the future is a form of control, even if we cannot change what will happen. … Uncertainty is potent… people who experienced the mild-but-unpredictable shocks experienced much more fear than those who got the strong-but-predictable shocks.
  • Our profound aversion to uncertainty helps explain what would otherwise be a riddle: Why do people pay so much attention to dark and scary predictions? Why do gloomy forecasts so often outnumber optimistic predictions, take up more media space, and sell more books? Part of this predilection for gloom is simply an outgrowth of what is sometimes called negativity bias: our attention is drawn more swiftly to bad news or images, and we are more likely to remember them than cheery information…. People whose brains gave priority to bad news were much less likely to be eaten by lions or die some other unpleasant death. … (Negative) predictions are supported by our intuitive pessimism, so they feel right to us. And that conclusion is bolstered by our attraction to certainty. As strange as it sounds, we would rather believe the expert predicting a dark future: being certain of it is less tormenting than merely suspecting it. Certainty is always preferable to uncertainty, even when what’s certain is disaster.
  • Researchers have also shown that financial advisors who express considerable confidence in their stock forecasts are more trusted than those who are less confident, even when their objective records are the same. … This “confidence heuristic,” like the availability heuristic, isn’t necessarily a conscious decision path. We may not actually say to ourselves, “She’s so sure of herself, she must be right”…
  • (on our love for stories) Confirmation bias also plays a critical role for the very simple reason that none of us is a blank slate. Every human brain is a vast warehouse of beliefs and assumptions about the world and how it works. Psychologists call these “schemas.” We love stories that fit our schemas; they're the cognitive equivalent of beautiful music. But a story that doesn't fit – a story that contradicts basic beliefs – is dissonant.
  • … What makes this mass delusion possible is the different emphasis we put on predictions that hit and those that miss. We ignore misses, even when they lie scattered by the dozen at our feet; we celebrate hits, even when we have to hunt for them and pretend there was more to them than luck.
  • By giving us the sense that we should have predicted what is now the present, or even that we actually did predict it when we did not, it strongly suggests that we can predict the future. This is an illusion, and yet it seems only logical — which makes it a particularly persuasive illusion.

If you like the notes you should buy Future Babble. Like the book summaries? Check out my notes from Adapt: Why Success Always Starts With Failure, The Ambiguities of Experience, On Leadership.

Subscribe to Farnam Street via Twitter, email, or RSS.
