Tag: Bias

Mental Model: Bias from Conjunction Fallacy

Daniel Kahneman and Amos Tversky spent decades researching the patterns behind errors in human reasoning. Over the course of their work they discovered a variety of logical fallacies we tend to commit when facing information that appears vaguely familiar. These fallacies lead to bias: irrational behavior based on beliefs that are not always grounded in reality.

In his book Thinking, Fast and Slow, which summarizes his and Tversky’s life work, Kahneman introduces biases that stem from the conjunction fallacy: the false belief that a conjunction of two events is more probable than one of the events on its own.

What is Probability?

Probability can be a difficult concept. Most of us have an intuitive understanding of what probability is, but there is little consensus on what it actually means. It is just as vague and subjective a concept as democracy, beauty or freedom. However, this is not always troublesome – we can still easily discuss the notion with others. Kahneman reflects:

In all the years I spent asking questions about the probability of events, no one ever raised a hand to ask me, “Sir, what do you mean by probability?” as they would have done if I had asked them to assess a strange concept such as globability.

Everyone acted as if they knew how to answer my questions, although we all understood that it would be unfair to ask them for an explanation of what the word means.

While logicians and statisticians might disagree, probability to most of us is simply a tool that describes our degree of belief. For instance, we know that the sun will rise tomorrow and we consider it near impossible that there will be two suns up in the sky instead of one. In addition to the extremes, there are also events which lie somewhere in the middle on the probability spectrum, such as the degree of belief that it will rain tomorrow.

Despite its vagueness, probability has its virtues. Assigning probabilities helps us make the degree of belief actionable and also communicable to others. If we believe that the probability it will rain tomorrow is 90%, we are likely to carry an umbrella and suggest our family do so as well.

Probability, Base Rates and Representativeness

Most of us are already familiar with representativeness and base rates. Consider the classic example of x number of black and y number of white colored marbles in a jar. It is a simple exercise to tell what the probabilities of drawing each color are if you know their base rates (proportion). Using base rates is the obvious approach for estimations when no other information is provided.

However, Kahneman showed that we have a tendency to ignore base rates in light of specific descriptions. He calls this phenomenon the Representativeness Bias. To illustrate it, consider the example of seeing a person reading The New York Times on the New York subway. Which do you think would be a better bet about the reading stranger?

1) She has a PhD.
2) She does not have a college degree.

Representativeness would tell you to bet on the PhD, but this is not necessarily a good idea. You should seriously consider the second alternative, because many more non-graduates than PhDs ride the New York subway. Even though a larger proportion of PhDs read The New York Times, the total number of Times readers without a college degree is likely to be much larger, simply because there are so many more of them.
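To make the base-rate argument concrete, here is a minimal sketch in Python with made-up numbers (the rider counts and reading rates below are illustrative assumptions, not data from Kahneman's example):

```python
# Hypothetical figures, chosen only to illustrate the base-rate effect.
riders = 1_000_000          # subway riders
phd_share = 0.02            # 2% of riders hold a PhD
no_degree_share = 0.50      # 50% of riders never finished college

phd_read_rate = 0.30        # 30% of PhD riders read The New York Times
no_degree_read_rate = 0.05  # only 5% of non-graduate riders do

phd_readers = riders * phd_share * phd_read_rate                    # 6,000
no_degree_readers = riders * no_degree_share * no_degree_read_rate  # 25,000

# Despite the much higher reading rate among PhDs, the enormous base rate
# of non-graduates means they still supply far more Times readers.
print(phd_readers, no_degree_readers)
```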

In a series of similar experiments, Kahneman’s subjects failed to account for base rates when given individual information. This is unsurprising. Kahneman explains:

On most occasions, people who act friendly are in fact friendly. A professional athlete who is very tall and thin is much more likely to play basketball than football. People with a PhD are more likely to subscribe to The New York Times than people who ended their education after high school. Young men are more likely than elderly women to drive aggressively.

While following representativeness might improve your overall accuracy, it will not always be the statistically optimal approach.

In his bestseller Moneyball, Michael Lewis tells the story of Billy Beane, general manager of the Oakland A’s, who recognized this fallacy and used it to his advantage. When recruiting new players for the team, instead of relying on scouts he relied heavily on statistics of past performance. This approach allowed him to build a team of great players who had been passed over by other teams because they did not look the part. Needless to say, the team achieved excellent results at a low cost.

Conjunction Fallacy

While representativeness bias occurs when we fail to account for low base rates, the conjunction fallacy occurs when we assign a higher probability to a specific event than to the more general event that contains it. This violates the laws of probability.

Consider the following study:

Participants were asked to rank four possible outcomes of the next Wimbledon tournament from most to least probable. Björn Borg was the dominant tennis player of the day when the study was conducted. These were the outcomes:

A. Borg will win the match.
B. Borg will lose the first set.
C. Borg will lose the first set but win the match.
D. Borg will win the first set but lose the match.

How would you order them?

Kahneman was surprised to see that most subjects ordered the chances by directly contradicting the laws of logic and probability. He explains:

The critical items are B and C. B is the more inclusive event and its probability must be higher than that of an event it includes. Contrary to logic, but not to representativeness or plausibility, 72% assigned B a lower probability than C.

If you thought about the problem carefully you drew the following diagram in your head. Losing the first set will always, by definition, be a more probable event than losing the first set and winning the match.
[Diagram: the event “Borg loses the first set but wins the match” (C) sits entirely inside the event “Borg loses the first set” (B).]
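A minimal sketch of the same containment, using made-up probabilities (Borg’s actual chances were not reported in the study):

```python
# Hypothetical probabilities, chosen only to illustrate the conjunction rule.
p_lose_first_set = 0.20        # P(B): Borg loses the first set
p_win_given_lost_set = 0.60    # P(win match | lost first set)

# C ("loses the first set but wins the match") is a subset of B, so
# P(C) = P(B) * P(win | lost set) can never exceed P(B).
p_lose_set_and_win = p_lose_first_set * p_win_given_lost_set

assert p_lose_set_and_win <= p_lose_first_set
print(p_lose_first_set, round(p_lose_set_and_win, 2))  # 0.2 0.12
```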

The Linda Problem

As discussed in our piece on the Narrative Fallacy, the best-known and most controversial of Kahneman and Tversky’s experiments involved a fictitious lady called Linda. The fictional character was created to illustrate the role heuristics play in our judgement and how heuristic thinking can be incompatible with logic. This is how they described Linda:

Linda is thirty-one years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.

Kahneman conducted a series of experiments in which he showed that representativeness tends to cloud our judgements and that we ignore base rates in light of stories. The Linda problem started with a set of eight scenarios that subjects were asked to rank in order of likelihood:

Linda is a teacher in elementary school.
Linda works in a bookstore and takes yoga classes.
Linda is active in the feminist movement.
Linda is a psychiatric social worker.
Linda is a member of the League of Women Voters.
Linda is a bank teller.
Linda is an insurance salesperson.
Linda is a bank teller and is active in the feminist movement.

Kahneman was startled to see that his subjects judged it more likely that Linda was a bank teller and a feminist than that she was just a bank teller. As explained earlier, this makes little sense. He went on to explore the phenomenon further:

In what we later described as “increasingly desperate” attempts to eliminate the error, we introduced large groups of people to Linda and asked them this simple question:

Which alternative is more probable?

Linda is a bank teller.
Linda is a bank teller and is active in the feminist movement.

This stark version of the problem made Linda famous in some circles, and it earned us years of controversy. About 85% to 90% of undergraduates at several major universities chose the second option, contrary to logic.

What is especially interesting about these results is that, even when aware of the biases in place, we do not discard them.

When I asked my large undergraduate class in some indignation, “Do you realize that you have violated an elementary logical rule?” someone in the back row shouted, “So what?” and a graduate student who made the same error explained herself by saying, “I thought you just asked for my opinion.”

The issue is not confined to students; it also affects professionals.

The naturalist Stephen Jay Gould described his own struggle with the Linda problem. He knew the correct answer, of course, and yet, he wrote, “a little homunculus in my head continues to jump up and down, shouting at me—‘but she can’t just be a bank teller; read the description.’”

Our brains simply seem to prefer consistency over logic.

The Role of Plausibility

Representativeness and the conjunction fallacy occur because we take a mental shortcut from the perceived plausibility of a scenario to its probability.

The most coherent stories are not necessarily the most probable, but they are plausible, and the notions of coherence, plausibility, and probability are easily confused by the unwary. Representativeness belongs to a cluster of closely related basic assessments that are likely to be generated together. The most representative outcomes combine with the personality description to produce the most coherent stories.

Kahneman warns us about the effects of these biases on our perception of expert opinion and forecasting. He explains that we are more likely to believe scenarios that are illustrative rather than probable.

The uncritical substitution of plausibility for probability has pernicious effects on judgments when scenarios are used as tools of forecasting. Consider these two scenarios, which were presented to different groups, with a request to evaluate their probability:

A massive flood somewhere in North America next year, in which more than 1,000 people drown

An earthquake in California sometime next year, causing a flood in which more than 1,000 people drown

The California earthquake scenario is more plausible than the North America scenario, although its probability is certainly smaller. As expected, probability judgments were higher for the richer and more detailed scenario, contrary to logic. This is a trap for forecasters and their clients: adding detail to scenarios makes them more persuasive, but less likely to come true.

To appreciate the role of plausibility, he suggests we look at an example that comes without an accompanying description.

Which alternative is more probable?

Jane is a teacher.
Jane is a teacher and walks to work.

In this case there is no detail to trigger a quick intuition about plausibility or coherence, so we can easily conclude that the first option is more probable. The rule is that in the absence of a competing intuition, logic prevails.

Taming our intuition

The first lesson to thinking clearly is to question how you think. We should not simply believe whatever comes to our mind – our beliefs must be constrained by logic. You don’t have to become an expert in probability to tame your intuition, but having a grasp of simple concepts will help. There are two main rules that are worth repeating in light of representativeness bias:

1) The probabilities of all possible outcomes add up to 100%.

This means that if you believe there’s a 90% chance it will rain tomorrow, there’s a 10% chance that it will not rain tomorrow.

However, since you believe there is only a 90% chance that it will rain tomorrow, you cannot be 95% certain that it will rain tomorrow morning.

We typically make this type of error when we mean to say that, if it rains, there’s a 95% probability it will happen in the morning. That is a different claim, and under those premises the probability that it rains tomorrow morning is 0.9*0.95 = 85.5%.

This also means that the probability it will rain tomorrow but not in the morning is 90.0% - 85.5% = 4.5%.
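Worked out in code, as a small sketch using the 90% and 95% figures from the example above:

```python
p_rain = 0.90                  # belief that it rains at some point tomorrow
p_morning_given_rain = 0.95    # if it rains, it rains in the morning

p_rain_in_morning = p_rain * p_morning_given_rain   # 0.855 -> 85.5%
p_rain_not_morning = p_rain - p_rain_in_morning     # 0.045 -> 4.5%
p_no_rain = 1 - p_rain                              # 0.10  -> 10%

# The three outcomes cover every possibility, so they must sum to 100%.
assert abs(p_rain_in_morning + p_rain_not_morning + p_no_rain - 1.0) < 1e-9
print(round(p_rain_in_morning, 3), round(p_rain_not_morning, 3), round(p_no_rain, 3))
```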

2) The second principle is Bayes’s rule.

It allows us to correctly adjust our beliefs given the diagnosticity of the evidence. Bayes’s rule follows the formula:

posterior odds = prior odds * likelihood ratio

or, written out: P(H | evidence) / P(not-H | evidence) = [P(H) / P(not-H)] * [P(evidence | H) / P(evidence | not-H)]

In essence the formula states that the posterior odds are proportional to prior odds times the likelihood. Kahneman crystallizes two keys to disciplined Bayesian reasoning:

• Anchor your judgment of the probability of an outcome on a plausible base rate.
• Question the diagnosticity of your evidence.

Kahneman explains it with an example:

If you believe that 3% of graduate students are enrolled in computer science (the base rate), and you also believe that the description of Tom is 4 times more likely for a graduate student in computer science than in other fields, then Bayes’s rule says you must believe that the probability that Tom is a computer science student is now 11%.

Four times as likely here means that we expect the description to fit roughly 80% of computer science students but only 20% of students in other fields. We use these proportions to obtain the adjusted odds. (The calculation goes as follows: 0.03*0.8/(0.03*0.8+((1-0.03)*(1-0.8))) = 11%.)
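Here is the same Tom calculation as a short sketch, first with the full form of Bayes’s rule and then in the odds form Kahneman describes:

```python
prior = 0.03               # base rate: 3% of graduate students are in computer science
p_desc_given_cs = 0.80     # the description fits ~80% of CS students...
p_desc_given_other = 0.20  # ...and ~20% of everyone else (a 4:1 likelihood ratio)

# Full form of Bayes's rule
posterior = (prior * p_desc_given_cs) / (
    prior * p_desc_given_cs + (1 - prior) * p_desc_given_other
)
print(round(posterior, 2))  # 0.11 -> about 11%

# Odds form: posterior odds = prior odds * likelihood ratio
prior_odds = prior / (1 - prior)                         # 3 : 97
likelihood_ratio = p_desc_given_cs / p_desc_given_other  # 4
posterior_odds = prior_odds * likelihood_ratio
print(round(posterior_odds / (1 + posterior_odds), 2))   # also about 0.11
```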

The easiest way to become better at making decisions is by making sure you question your assumptions and follow strong evidence. When evidence is anecdotal, adjust minimally and trust the base rates. Odds are, you will be pleasantly surprised.

***

Want More? Check out our ever-growing collection of mental models and biases and get to work.

Mental Model: Commitment and Consistency Bias

“The difficulty lies not in the new ideas,
but in escaping the old ones, which ramify,
for those brought up as most of us have been,
into every corner of our minds.”

— John Maynard Keynes

***

Ben Franklin tells an interesting little story in his autobiography. Facing opposition to being reelected Clerk of the General Assembly, he sought to gain favor with the member so vocally opposing him:

Having heard that he had in his library a certain very scarce and curious book, I wrote a note to him, expressing my desire of perusing that book, and requesting he would do me the favour of lending it to me for a few days. He sent it immediately, and I return'd it in about a week with another note, expressing strongly my sense of the favour.

When we next met in the House, he spoke to me (which he had never done before), and with great civility; and he ever after manifested a readiness to serve me on all occasions, so that we became great friends, and our friendship continued to his death.

This is another instance of the truth of an old maxim I had learned, which says, “He that has once done you a kindness will be more ready to do you another, than he whom you yourself have obliged.”

The man, having lent Franklin a rare and valuable book, sought to stay consistent with his past actions. He wouldn't, of course, lend a book to an unworthy man, would he?

***

Positive Self Image

Scottish philosopher and economist Adam Smith said in The Theory of Moral Sentiments:

The opinion which we entertain of our own character depends entirely on our judgments concerning our past conduct. It is so disagreeable to think ill of ourselves, that we often purposely turn away our view from those circumstances which might render that judgment unfavorable.

Even when it acts against our best interest our tendency is to be consistent with our prior commitments, ideas, thoughts, words, and actions. As a byproduct of confirmation bias, we rarely seek disconfirming evidence of what we believe. This, after all, makes it easier to maintain our positive self-image.

Part of the reason this happens is our desire to appear and feel like we’re right. We also want to show people our conviction. This shouldn’t come as a surprise. Society values consistency and conviction even when it is wrong.

We associate consistency with intellectual and personal strength, rationality, honesty, and stability. On the other hand, the person who is perceived as inconsistent is also seen as confused, two-faced, even mentally ill in certain extreme circumstances.

A politician who wavers, for example, gets labelled a flip-flopper and can lose an election over it (John Kerry). A CEO who risks everything on a successful bet, holding a conviction no one else shares, is hailed as a hero (Elon Musk).

But it’s not just our words and actions that nudge our subconscious, but also how other people see us. There is a profound truth behind Eminem’s lyrics: I am, whatever you say I am. If I wasn't, then why would I say I am?

If you think I’m talented, I become more talented in your eyes — in part because your labelling me as talented filters the way you see me. You start seeing more of my genius and less of my normal-ness, simply by way of staying consistent with your own words.

In his book Outliers, Malcolm Gladwell talks about how teachers simply identifying students as smart not only affected how the teachers saw their work but, more importantly, affected the opportunities teachers gave the students. Smarter students received better opportunities, which, we can reason, offered them better experiences. This in turn makes them better. It’s almost a self-fulfilling prophecy.

And the more we invest in our beliefs about ourselves or others (think money, effort, or pain), the more sunk costs we have and the harder it becomes to change our minds. It doesn’t matter if we’re right. It doesn’t matter if the Ikea bookshelf sucks; we’re going to love it.

In Too Much Invested to Quit, psychologist Allan Teger says something similar of the Vietnam War:

The longer the war continued, the more difficult it was to justify the additional investments in terms of the value of possible victory. On the other hand, the longer the war continued, the more difficult it became to write off the tremendous losses without having anything to show for them.

***

As a consequence, there are few rules we abide by more strictly than “Don’t make any promises that you can’t keep.” This, generally speaking, is a great rule that keeps society together by ensuring that our commitments are for the most part real and reliable.

Aside from the benefits of preserving our public image, being consistent is simply easier and leads to a more predictable and consistent life. By being consistent in our habits and with previous decisions, we significantly reduce the need to think and can go on “auto-pilot” for most of our lives.

However beneficial these biases are, they too deserve deeper understanding and caution. Sometimes our drive to appear consistent can lure us into choices we otherwise would consider against our best interests. This is the essence of a harmful bias as opposed to a benign one: We are hurting ourselves and others by committing it. 

A Slippery Slope

Part of why commitment can be so dangerous is that it works like a slippery slope: a single slip is enough to slide all the way down. Compliance with even tiny requests, which initially appear insignificant, therefore has a good chance of leading to full commitment later.

People whose job it is to persuade us know this.

Among the more blunt techniques on the spectrum are those reported by a used-car sales manager in Robert Cialdini’s book Influence. The dealer knows the power of commitment and that if we comply a little now, we are likely to comply fully later on. His advice to other sellers goes as follows:

“Put 'em on paper. Get the customer's OK on paper. Get the money up front. Control 'em. Control the deal. Ask 'em if they would buy the car right now if the price is right. Pin 'em down.”

This technique will be obvious to most of us. However, there are also more subtle ways to make us comply without us noticing.

A great example of a subtle compliance practitioner is Jo-Ellen Demitrius, the woman currently reputed to be the best consultant in the business of jury selection.

When screening potential jurors before a trial, she asks an artful question:

“If you were the only person who believed in my client's innocence, could you withstand the pressure of the rest of the jury to change your mind?”

It’s unlikely that any self-respecting prospective juror would answer negatively. And, now that the juror has made the implicit promise, it is unlikely that once selected he will give in to the pressure exerted by the rest of the jury.

Innocent questions and requests like this can be a great springboard for initiating a cycle of compliance.

The Lenient Policy

A great case study in compliance is the set of tactics that Chinese captors employed on American prisoners during the Korean War. The Chinese were particularly effective in getting Americans to inform on one another. In fact, nearly all American prisoners in the Chinese camps are said to have collaborated with the enemy in one way or another.

This was striking, since such behavior was rarely observed among American prisoners during WWII. What made the Chinese so successful?

Unlike the North Koreans, the Chinese did not treat the victims harshly. Instead they engaged in what they called “lenient policy” towards the captives, which was, in reality, a clever series of psychological assaults.

In their exploits the Chinese relied heavily on commitment and consistency tactics to receive the compliance they desired. At first, the Americans were not too collaborative, as they had been trained to provide only name, rank, and serial number, but the Chinese were patient.

They started with seemingly small but frequent requests to repeat statements like “The United States is not perfect” and “In a Communist country, unemployment is not a problem.” Once these requests had been complied with, the requests grew heavier. Someone who had just agreed that the United States was not perfect would be encouraged to expand on his thoughts about specific imperfections. Later he might be asked to write up and read out a list of these imperfections in a discussion group with other prisoners. “After all, it's what you really believe, isn't it?” The Chinese would then broadcast the essay readings not only to the whole camp, but to other camps and even the American forces in South Korea. Suddenly the soldier would find himself a “collaborator” of the enemy.

The awareness that the essays did not contradict his beliefs could even change his self-image to be consistent with the new “collaborator” label, often resulting in more cooperation with the enemy.

It is not surprising that very few American soldiers were able to avoid such “collaboration” altogether.

Foot in the Door

The small request growing into bigger requests, as applied by the Chinese to American soldiers, is also called the Foot-in-the-door Technique. It was first documented by two scientists, Freedman and Fraser, in an experiment in which a fake volunteer worker asked homeowners to allow a public-service billboard to be installed on their front lawns.

To get a better idea of how it would look, the home owners were even shown a photograph depicting an attractive house that was almost completely obscured by an ugly sign reading DRIVE CAREFULLY. While the request was quite understandably denied by 83 percent of residents, one particular group reacted favorably.

Two weeks earlier, a different “volunteer worker” had approached this group with a similar but much smaller request: to display a small sign that read BE A SAFE DRIVER. The request was so negligible that nearly all of them complied. However, its downstream effects turned out to be so powerful that 76 percent of this group complied with the bigger, much less reasonable request (the big ugly sign).

At first, even the researchers themselves were baffled by the results and repeated the experiment on similar setups. The effect persisted. Finally, they proposed that the subjects must have distorted their own views about themselves as a result of their initial actions:

What may occur is a change in the person's feelings about getting involved or taking action. Once he has agreed to a request, his attitude may change, he may become, in his own eyes, the kind of person who does this sort of thing, who agrees to requests made by strangers, who takes action on things he believes in, who cooperates with good causes.

The rule is that once someone has nudged our self-image to where they want it to be, we will naturally comply with requests that fit the new self-view. Therefore we must be very careful about agreeing to even the smallest requests. Not only can it make us comply with larger requests later on, but it can also make us more willing to do favors that are only remotely connected to the earlier ones.

Even Cialdini, someone who knows this bias inside-out, admits to his fear that his behavior will be affected by consistency bias:

It scares me enough that I am rarely willing to sign a petition anymore, even for a position I support. Such an action has the potential to influence not only my future behavior but also my self-image in ways I may not want.

Further, once a person's self-image is altered, all sorts of subtle advantages become available to someone who wants to exploit that new image.

Give it, take it away later

Have you ever come across a deal that seemed a little too good to be true, only to be disappointed later? You had already made up your mind, gotten excited, and were ready to pay or sign until a calculation error was discovered. Now, with the adjusted price, the offer did not look all that great.

It is likely that the error was not an accident – this technique, also called low-balling, is often used by compliance professionals in sales. Cialdini, having observed the phenomenon among car dealers, tested its effects on his own students.

In an experiment with colleagues, he asked two groups of students to show up at 7:00 AM for a study on “thinking processes”. When the researchers called the first group, they told the students up front that the study would start at 7:00 AM. Unsurprisingly, only 24 percent wanted to participate.

However, for the other group of students, the researchers threw a low-ball. The first question was simply whether they wanted to take part in a study of thinking processes. Fifty-six percent of them replied positively. Only then was the meeting time of 7:00 AM revealed to those who had agreed.

These students were given the opportunity to opt out, but none of them did. In fact, driven by their commitment, 95 percent of the low-balled students showed up to the Psychology Building at 7:00 AM as they had promised.

Do you recognize the similarities between the experiment and the sales situation?

The script of low-balling tends to be the same:

First, an advantage is offered that induces a favorable decision in the manipulator's direction. Then, after the decision has been made, but before the bargain is sealed, the original advantage is deftly removed (i.e., the price is raised, the time is changed, etc.).

It would seem surprising that anyone would buy under these circumstances, yet many do. Often the self-created justifications provide so many new reasons for the decision that even when the dealer pulls away the original favorable rationale, like a low price, the decision is not changed. We stick with our old decision even in the face of new information!

Of course not everyone complies, but that’s not the point. The effect is strong enough to hold for a good number of buyers, students or anyone else whose rate of compliance we may want to raise.

The Way Out

The first real defense against consistency bias is awareness of the phenomenon and of the harm that rigidity in our decisions can cause us.

Robert Cialdini suggests two approaches to recognizing when consistency biases are unduly creeping into our decision making. The first is to listen to our stomachs. Stomach signs show up when we realize that the request being pushed on us is something we don’t want to do.

He recalls a time when a beautiful young woman tried to sell him a membership he most certainly did not need by using the tactics displayed above. He writes:

I remember quite well feeling my stomach tighten as I stammered my agreement. It was a clear call to my brain, “Hey, you're being taken here!” But I couldn't see a way out. I had been cornered by my own words. To decline her offer at that point would have meant facing a pair of distasteful alternatives: If I tried to back out by protesting that I was not actually the man-about-town I had claimed to be during the interview, I would come off a liar; trying to refuse without that protest would make me come off a fool for not wanting to save $1,200. I bought the entertainment package, even though I knew I had been set up. The need to be consistent with what I had already said snared me.

But then eventually he came up with the perfect counter-attack for later episodes, which allowed him to get out of the situation gracefully.

Whenever my stomach tells me I would be a sucker to comply with a request merely because doing so would be consistent with some prior commitment I was tricked into, I relay that message to the requester. I don't try to deny the importance of consistency; I just point out the absurdity of foolish consistency. Whether, in response, the requester shrinks away guiltily or retreats in bewilderment, I am content. I have won; an exploiter has lost.

The second approach concerns the signs that are felt within our heart and is best used when it is not really clear whether the initial commitment was wrongheaded.

Imagine you have recognized that your initial assumptions about a particular deal were not correct. The car is not extraordinarily cheap and the experiment is not as fun if you have to wake up at 6 AM to make it. Here it helps to ask one simple question:

“Knowing what I know, if I could go back in time, would I make the same commitment?”

Ask it frequently enough and the answer might surprise you.

***

Want More? Check out our ever-growing library of mental models and biases.

The Many Ways Our Memory Fails Us (Part 2)

(Purchase a copy of the entire 3-part series in one sexy PDF for $3.99)

***

In part one, we began a conversation about the trappings of the human memory, using Daniel Schacter's excellent The Seven Sins of Memory as our guide. (We've also covered some reasons why our memory is pretty darn good.) We covered transience — the loss of memory due to time — and absent-mindedness — memories that were never encoded at all or were not available when needed. Let's keep going with a couple more whoppers: Blocking and Misattribution.

Blocking

Blocking is the phenomenon in which something is indeed encoded in our memory and should be easily available in the given situation, but simply will not come to mind. We're most familiar with blocking as the always frustrating “It's on the tip of my tongue!”

Unsurprisingly, blocking occurs most frequently when it comes to names and indeed occurs more frequently as we get older:

Twenty-year-olds, forty-year-olds, and seventy-year-olds kept diaries for a month in which they recorded spontaneously occurring retrieval blocks that were accompanied by the “tip of the tongue” sensation. Blocking occurred occasionally for the names of objects (for example, algae) and abstract words (for example, idiomatic). In all three groups, however, blocking occurred most frequently for proper names, with more blocks for people than for other proper names such as countries or cities. Proper name blocks occurred more frequently in the seventy-year-olds than in either of the other two groups.

This is not the worst sin our memory commits — excepting the times when we forget an important person's name (which is admittedly not fun), blocking doesn't cause the terrible practical results some of the other memory issues cause. But the reason blocking occurs does tell us something interesting about memory, something we intuitively know from other domains: We have a hard time learning things by rote or by force. We prefer associations and connections to form strong, lasting, easily available memories.

Why are names blocked from us so frequently, even more than objects, places, descriptions, and other nouns? For example, Schacter mentions experiments in which researchers show that we more easily forget a man's name than his occupation, even if they're the same word! (Baker/baker or Potter/potter, for example.)

It's because relative to a descriptive noun like “baker,” which calls to mind all sorts of connotations, images, and associations, a person's name has very little attached to it. We have no easy associations to make — it doesn't tell us anything about the person or give us much to hang our hat on. It doesn't really help us form an image or impression. And so we basically remember it by rote, which doesn't always work that well.

Most models of name retrieval hold that activation of phonological representations [sound associations] occurs only after activation of conceptual and visual representations. This idea explains why people can often retrieve conceptual information about an object or person whom they cannot name, whereas the reverse does not occur. For example, diary studies indicate that people frequently recall a person's occupation without remembering his name, but no instances have been documented in which a name is recalled without any conceptual knowledge about the person. In experiments in which people named pictures of famous individuals, participants who failed to retrieve the name “Charlton Heston” could often recall that he was an actor. Thus, when you block on the name “John Baker” you may very well recall that he is an attorney who enjoys golf, but it is highly unlikely that you would recall Baker's name and fail to recall any of his personal attributes.

A person's name is the weakest piece of information we have about them in our people-information lexicon, and thus the least available at any time, and the most susceptible to not being available as needed. It gets worse if it's a name we haven't needed to recall frequently or recently, as we all can probably attest to. (This also applies to the other types of words we block on less frequently — objects, places, etc.)

The only real way to avoid blocking problems is to create stronger associations when we learn names, or even re-encode names we already know by increasing their salience with a vivid image, even a silly one. (If you ever meet anyone named Baker…you know what to do.)

But the most important idea here is that information gains salience in our brain based on what it brings to mind. 

Schacter is non-committal about whether blocking occurs in the sense implied by Freud's idea of repressed memories; it seems the issue was not settled at the time of writing.

Misattribution

The memory sin of misattribution has fairly serious consequences. Misattribution happens all the time and is a peculiar memory sin where we do remember something, but that thing is wrong, or possibly not even our own memory at all:

Sometimes we remember events that never happened, misattributing speedy processing of incoming information or vivid images that spring to mind, to memories of past events that did not occur. Sometimes we recall correctly what happened, but misattribute it to the wrong time and place. And at other times misattribution operates in a different direction: we mistakenly credit a spontaneous image or thought to our own imagination, when in reality we are recalling it–without awareness–from something we read or heard.

The most familiar, but benign, experience we've all had with misattribution is the curious case of deja vu. As of the writing of his book, Schacter felt there was no convincing explanation for why deja vu occurs, but we know that the brain is capable of thinking it's recalling an event that happened previously, even if it hasn't.

In the case of deja vu, it's simply a bit of an annoyance. But the misattribution problem causes more serious problems elsewhere.

The major one is eyewitness testimony, which we now know is notoriously unreliable. It turns out that when eyewitnesses claim they “know what they saw!” it's unlikely they remember as well as they claim. It's not their fault and it's not a lie — you do think you recall the details of a situation perfectly well. But your brain is tricking you, just like deja vu. How bad is the eyewitness testimony problem? It used to be pretty bad.

…consider two facts. First, according to estimates made in the late 1980s, each year in the United States more than seventy-five thousand criminal trials were decided on the basis of eyewitness testimony. Second, a recent analysis of forty cases in which DNA evidence established the innocence of wrongly imprisoned individuals revealed that thirty-six of them (90 percent) involved mistaken eyewitness identification. There are no doubt other such mistakes that have not been rectified.

What happens is that, in any situation where our memory stores away information, it doesn't have the horsepower to do it with complete accuracy. There are just too many variables to sort through. So we remember the general aspects of what happened, and we remember some details, depending on how salient they were.

We recall that we met John, Jim, and Todd, who were all part of the sales team for John Deere. We might recall that John was the young one with glasses, Jim was the older bald one, and Todd talked the most. We might remember specific moments or details of the conversation which stuck out.

But we don't get it all perfectly, and if it was an unmemorable meeting, with the transience of time, we start to lose the details. The combination of the specifics and the details is a process called memory binding, and it's often the source of misattribution errors.

Let's say we remember for sure that we curled our hair this morning. All of our usual cues tell us that we did — our hair is curly, it's part of our morning routine, we remember thinking it needed to be done, etc. But…did we turn the curling iron off? We remember that we did, but is that yesterday's memory or today's?

This is a memory binding error. Our brain didn't sufficiently “link up” the curling event and the turning off of the curler, so we're left to wonder. This binding issue leads to other errors, like the memory conjunction error, where sometimes the binding process does occur, but it makes a mistake. We misattribute the strong familiarity:

Having met Mr. Wilson and Mr. Albert during your business meeting, you reply confidently the next day when an associate asks you the name of the company vice president: “Mr. Wilbert.” You remembered correctly pieces of the two surnames but mistakenly combined them into a new one. Cognitive psychologists have developed experimental procedures in which people exhibit precisely these kinds of erroneous conjunctions between features of different words, pictures, sentences, or even faces. Thus, having studied spaniel and varnish, people sometimes claim to remember Spanish.

What's happening is a misattribution. We know we saw the syllables Span- and -nish, and our memory tells us we must have seen Spanish. But we didn't.

Back to the eyewitness testimony problem, what's happening is we're combining a general familiarity with a lack of specific recall, and our brain is recombining those into a misattribution. We recall a tall-ish man with some sort of facial hair, and then we're shown 6 men in a lineup, and one is tall-ish with facial hair, and our brain tells us that must be the guy. We make a relative judgment: Which person here is closest to what I think I saw? Unfortunately, like the Spanish/varnish issue, we never actually saw the person we've identified as the perp.

None of this occurs with much conscious involvement, of course. It's happening subconsciously, which is why good procedures are needed to overcome the problem. In the case of suspect lineups, the solution is to show the witness each suspect, one after another, and have them give a thumbs up or thumbs down immediately. This takes away the relative comparison and makes us consciously compare the suspect in front of us with our memory of the perpetrator.

The good thing about this error is that people can be encouraged to search their memory more carefully. But it's far from foolproof, even if we're getting a very strong indication that we remember something.

And what helps prevent us from making too many errors is something Schacter calls the distinctiveness heuristic. If a distinctive thing supposedly happened, we usually reason we'd have a good memory of it. And usually this is a very good heuristic to have. (Remember, salience always encourages memory formation.) As we discussed in Part One, a salient artifact gives us something to tie a memory to. If I meet someone wearing a bright rainbow-colored shirt, I'm a lot more likely to recall some details about them, simply because they stuck out.

***

As an aside, misattribution allows us one other interesting insight into the human brain: Our “people information” remembering is a specific, distinct module, one that can falter on its own, without harming any other modules. Schacter discusses a man with a delusion that many of the normal people around him were film stars. He even misattributed made-up famous-sounding names (like Sharon Sugar) to famous people, although he couldn't put his finger on who they were.

But the man did not falsely recognize other things. Made up cities or made up words did not trip up his brain in the strange way people did. This (and other data) tells us that our ability to recognize people is a distinct “module” our brain uses, supporting one of Judith Rich Harris's ideas about human personality that we've discussed: The “people information lexicon” we develop throughout our lives is a uniquely important module we use.

***

One final misattribution is something called cryptomnesia — essentially the opposite of deja vu. It's when we think we recognize something as new and novel even though we've seen it before. Accidental plagiarizing can even result from cryptomnesia. (Try telling that to your school teachers!) Cryptomnesia falls into the same bucket as other misattributions in that we fail to recollect the source of information we're recalling — the information and event where we first remembered it are not bound together properly. Let's say we “invent” the melody to a song which already exists. The melody sounds wonderful and familiar, so we like it. But we mistakenly think it's new.

In the end, Schacter reminds us to think carefully about the memories we “know” are true, and to try to remember specifics when possible:

We often need to sort out ambiguous signals, such as feelings of familiarity or fleeting images, that may originate in specific past experiences, or arise from subtle influences in the present. Relying on judgment and reasoning to come up with plausible attributions, we sometimes go astray.  When misattribution combines with another of memory's sins — suggestibility — people can develop detailed and strongly held recollections of complex events that never occurred.

And with that, we will leave it here for now. Next time we'll delve into suggestibility and bias, two more memory sins with a range of practical outcomes.