Farnam Street helps you make better decisions, innovate, and avoid stupidity.



The Chessboard Fallacy

“In the great chess-board of human society,
every single piece has a principle of motion of its own.”
— Adam Smith


One of our favorite dicta, much referenced here, is Joseph Tussman’s idea about getting the world to do the work for you:

“What the pupil must learn, if he learns anything at all, is that the world will do most of the work for you, provided you cooperate with it by identifying how it really works and aligning with those realities. If we do not let the world teach us, it teaches us a lesson.”

By aligning with the world, as it really is and not as we wish it to be, we get it to do the work for us.

Tussman’s idea has at least one predecessor: Adam Smith.

In The Theory of Moral Sentiments, Smith excoriates the “Men of System” who have decided on an inflexible ideology of how the world should work, and try to fit the societies they lead into a Procrustean Bed of their choosing — the Mao Zedong-type leaders who would allow millions to die rather than sacrifice an inch of ideology (although Smith’s book predates Maoism by almost 200 years).

In his great wisdom, Smith perfectly explains the futility of swimming “against the tide” of how the world really works and the benefit of going “with the tide” whenever possible. He recognizes that people are not chess pieces, to be moved around as desired.

Instead, he encourages us to remember that everyone we deal with has their own goals, feelings, aspirations, and motivations, many of them not always immediately obvious. We must construct human systems with human nature in full view, fully harnessed, fully acknowledged.

Any system of human relations that doesn’t accept this truth will always be fighting the world, rather than getting the world to work on its behalf.

The man of system, on the contrary, is apt to be very wise in his own conceit; and is often so enamored with the supposed beauty of his own ideal plan of government, that he cannot suffer the smallest deviation from any part of it. He goes on to establish it completely and in all its parts, without any regard either to the great interests, or to the strong prejudices which may oppose it.

He seems to imagine that he can arrange the different members of a great society with as much ease as the hand arranges the different pieces upon a chess-board. He does not consider that the pieces upon the chess-board have no other principle of motion besides that which the hand impresses upon them; but that, in the great chess-board of human society, every single piece has a principle of motion of its own, altogether different from that which the legislature might choose to impress upon it.

If those two principles coincide and act in the same direction, the game of human society will go on easily and harmoniously, and is very likely to be happy and successful. If they are opposite or different, the game will go on miserably, and the society must be at all times in the highest degree of disorder.

Think of how many policies, procedures, and systems of organization forget this basic truth: systems of political control, price control, social control, and behavioral control, from bad workplaces to bad governments, that have failed so miserably because they refused to account for the underlying motivations of the people in the system and failed to do a second-step analysis of the consequences of their policies.

It’s just as true in personal relations: How often do we fail to treat others correctly because we haven’t taken their point of view, motivations, aspirations, and desires properly into account? How often is our own “system of relations” built on faulty assumptions that don’t actually work for us? (The old marriage advice “You can either be right, or be happy” is pure gold wisdom in this sense.)

Smith’s counsel offers us a nice out, though. If our own system for dealing with people coincides with their own “principles of motion,” then we are likely to get a harmonious result! If not? We get misery.

The choice is ours.

Daniel Kahneman on Human Gullibility

“The premise of this book is that it is easier to recognize other people’s mistakes than our own.”


A simple article connecting two ideas from Daniel Kahneman’s Thinking, Fast and Slow: human gullibility and the availability bias.

A reliable way to make people believe in falsehoods is frequent repetition, because familiarity is not easily distinguished from truth. Authoritarian institutions and marketers have always known this fact. But it was psychologists who discovered that you do not have to repeat the entire statement of a fact or idea to make it appear true. People who were repeatedly exposed to the phrase “the body temperature of a chicken” were more likely to accept as true the statement that “the body temperature of a chicken is 144°” (or any other arbitrary number). The familiarity of one phrase in the statement sufficed to make the whole statement feel familiar, and therefore true. If you cannot remember the source of a statement, and have no way to relate it to other things you know, you have no option but to go with the sense of cognitive ease.

This is due, in part, to the fact that repetition causes familiarity, and familiarity distorts our thinking.

People tend to assess the relative importance of issues by the ease with which they are retrieved from memory—and this is largely determined by the extent of coverage in the media. Frequently mentioned topics populate the mind even as others slip away from awareness. In turn, what the media choose to report corresponds to their view of what is currently on the public’s mind. It is no accident that authoritarian regimes exert substantial pressure on independent media. Because public interest is most easily aroused by dramatic events and by celebrities, media feeding frenzies are common. For several weeks after Michael Jackson’s death, for example, it was virtually impossible to find a television channel reporting on another topic. In contrast, there is little coverage of critical but unexciting issues that provide less drama, such as declining educational standards or overinvestment of medical resources in the last year of life. (As I write this, I notice that my choice of “little-covered” examples was guided by availability. The topics I chose as examples are mentioned often; equally important issues that are less available did not come to my mind.)

The Many Ways our Memory Fails Us (Part 3)

(Purchase a copy of the entire 3-part series in one sexy PDF for $3.99)


In the first two parts of our series on memory, we covered four major “sins” committed by our memories: Absent-Mindedness, Transience, Misattribution, and Blocking, using Daniel Schacter’s The Seven Sins of Memory as our guide.

We’re going to finish it off today with three other sins: Suggestibility, Bias, and Persistence, hopefully leaving us with a full understanding of our memory and where it fails us from time to time.



As its name suggests, the sin of suggestibility refers to our brain’s tendency to misremember the source of memories:

Suggestibility in memory refers to an individual’s tendency to incorporate misleading information from external sources — other people, written materials or pictures, even the media — into personal recollections. Suggestibility is closely related to misattribution in the sense that the conversion of suggestions into inaccurate memories must involve misattribution. However, misattribution often occurs in the absence of overt suggestion, making suggestibility a distinct sin of memory.

Suggestibility is such a difficult phenomenon because the memories we’ve pulled from outside sources seem as truly real as our own. Take the case of a “false veteran” which Schacter describes in the book:

On May 31, 2000, a front-page story in the New York Times described the baffling case of Edward Daly, a Korean War veteran who made up elaborate — but imaginary — stories about his battle exploits, including his involvement in a terrible massacre in which he had not actually participated. While weaving his delusional tale, Daly talked to veterans who had participated in the massacre and “reminded” them of his heroic deeds. His suggestions infiltrated their memories. “I know that Daly was there,” pleaded one veteran. “I know that. I know that.”

The key word here is infiltrated. This brings to mind the wonderful Christopher Nolan movie Inception, about a group of experts who seek to infiltrate the minds of sleeping targets in order to change their memories. The movie is fictional but there is a subtle reality to the idea: With enough work, an idea that is merely suggested to us in one context can seem like our own idea or our own memory.

Take suggestive questioning, a problem with criminal investigations. The investigator talks to an eyewitness and, hoping to jog their memory, asks a series of leading questions, arriving at the answer he was hoping for. But is it genuine? Not always.

Schacter describes a psychology experiment wherein participants see a video of a robbery and then are fed misleading suggestions about the robbery soon after, such as the idea that the victim of the robbery was wearing a white apron. Amazingly, even when people could recognize that the apron idea was merely suggested to them, many people still regurgitated the suggested idea!

Previous experiments had shown that suggestive questions produce memory distortion by creating source memory problems like those in the previous chapter: participants misattribute information presented only in suggestive questions to the original videotape. [The psychologist Philip] Higham’s results provide an additional twist. He found that when people took a memory test just minutes after receiving the misleading question, and thus still correctly recalled that the “white apron” was suggested by the experimenter, they sometimes insisted nevertheless that the attendant wore a white apron in the video itself. In fact, they made this mistake just as often as people who took the memory test two days after receiving misleading suggestions, and who had more time to forget that the white apron was merely suggested. The findings testify to the power of misleading suggestions: they can create false memories of an event even when people recall that the misinformation was suggested.

The problem of overconfidence also plays a role in suggestion and memory errors. Take an experiment where subjects are shown a man entering a department store and then told he murdered a security guard. After being shown a photo lineup (which did not contain the gunman), some were told they chose correctly and some were told they chose incorrectly. Guess which group was more confident and trustful of their memories afterwards?

It was, of course, the group that received reinforcement. Not only were they more confident, but they felt they had better command of the details of the gunman’s appearance, even though they were as wrong as the group that received no positive feedback. This has vast practical applications. (Consider a jury taking into account the testimony of a very confident eyewitness, reinforced by police with an agenda.)


One more interesting idea in reference to suggestibility: Like the DiCaprio-led clan in the movie Inception, psychologists have been able to successfully “implant” false memories of childhood in many subjects based on suggestion alone. This should make you think carefully about what you think you remember about the distant past:

[The psychologist Ira] Hyman asked college students about various childhood experiences that, according to their parents, had actually happened, and also asked about a false event that, their parents confirmed, had never happened. For instance, students were asked: “When you were five you were at the wedding reception of some friends of the family and you were running around with some other kids, when you bumped into the table and spilled the punch bowl on the parents of the bride.” Participants accurately remembered almost all of the true events, but initially reported no memory of the false events.

However, approximately 20 to 40 percent of participants in different experimental conditions eventually came to describe some memory of the false event in later interviews. In one experiment, more than half of the participants who produced false memories described them as “clear” recollections that included specific details of the central event, such as remembering exactly where or how one spilled the punch. Just under half reported “partial” false memories, which included some details but no specific memory of the central event.

Such is the “power of suggestion.”

The Sin of Bias

The problem of bias will be familiar to regular readers. In some form or another, we’re subject to mental biases every single day; most are benign, some are harmful, and few are hard to understand. Biases specific to memory are worth studying because they’re so easy and natural to fall into. Because we trust our memory so deeply, they often go unquestioned. But we might want to be careful:

The sin of bias refers to distorting influences of our present knowledge, beliefs, and feelings on new experiences, or our later memories of them. In the stifling psychological climate of 1984, the Ministry of Truth used memory as a pawn in the service of party rule. Much in the same manner, biases in remembering past experiences reveal how memory can serve as a pawn for the ruling masters of our cognitive systems.

There are four biases we’re subject to in this realm: Consistency and change bias, hindsight bias, egocentric bias, and stereotyping bias.

Consistency and Change Bias

The first is the consistency bias: We re-write our memories of the past based on how we feel in the present. Experiment after experiment has borne this out. It’s probably something of a coping mechanism: If we saw the past with complete accuracy, we might not be such happy individuals.


We often re-write the past so that it seems we’ve always felt like we feel now, that we always believed what we believe now:

This consistency bias has turned up in several different contexts. Recalling past experiences of pain, for instance, is powerfully influenced by current pain level. When patients afflicted by chronic pain are experiencing high levels of pain in the present, they are biased to recall similarly high levels of pain in the past; when present pain isn’t so bad, past pain experiences seem more benign, too. Attitudes towards political and social issues also reflect consistency bias. People whose views on political issues have changed over time often recall incorrectly past attitudes as highly similar to present ones. In fact, memories of past political views are sometimes more closely related to present views than what they actually believed in the past.

Think about your stance five or ten years ago on some major issue like sentencing for drug-related crime. Can you recall specifically what you believed? Most people believe they have stayed consistent on the issue. But easily performed experiments show that a large percentage of people who think “all is the same” have actually changed their tune significantly over time. Such is the bias towards consistency.

This affects relationships fairly significantly: Schacter shows that our current feelings about our partner color our memories of our past feelings.

Consider a study that followed nearly four hundred Michigan couples through the first years of their marriage. In those couples who expressed growing unhappiness over the four years of the study, men mistakenly recalled the beginnings of their marriages as negative even though they said they were happy at the time. Such biases can lead to a dangerous “downward spiral,” noted the researchers who conducted the study. “The worse your current view of your partner is, the worse your memories are, which only further confirms your negative attitudes.”

In other contexts, we sometimes lean in the other direction: We think things have changed more than they really have, remembering the past as much better, or much worse, than the present.

Schacter discusses a twenty-year study done with a group of women between 1969 and 1989, assessing how they felt about their marriages throughout. Turns out, their recollections of the past were constantly on the move, but the false recollection did seem to serve a purpose: Keeping the marriage alive.

When reflecting back on the first ten years of their marriages, wives showed a change bias: They remembered their initial assessments as worse than they actually were. The bias made their present feelings seem an improvement by comparison, even though the wives actually felt more negatively ten years into the marriage than they had at the beginning. When they had been married for twenty years and reflected back on their second ten years of marriage, the women now showed a consistency bias: they mistakenly recalled that feelings from ten years earlier were similar to their present ones. In reality, however, they felt more negatively after twenty years of marriage than after ten. Both types of bias helped women cope with their marriages. 

The purpose of all this is to reduce our cognitive dissonance: That mental discomfort we get when we have conflicting ideas. (“I need to stay married” / “My marriage isn’t working” for example.)

Hindsight Bias

We won’t go into hindsight bias too extensively, because we have covered it before and the idea is familiar to most. Simply put, once we know the outcome of an event, our memory of the past is forever altered. As with consistency bias, we use the lens of the present to see the past. It’s the idea that we “knew it all along” — when we really didn’t.

A large part of hindsight bias has to do with the narrative fallacy and our own natural wiring in favor of causality. We really like to know why things happen, and when given a clear causal link in the present (Say, we hear our neighbor shot his wife because she cheated on him), the lens of hindsight does the rest (I always knew he was a bad guy!). In the process, we forget that we must not have thought he was such a bad guy, since we let him babysit our kids every weekend. That is hindsight bias. We’re all subject to it unless we start examining our past with more detail or keeping a written record.

Egocentric Bias

The egocentric bias is our tendency to see the past in such a way that we, the rememberers, look better than we really are or really should. We are not neutral observers of our own past; we are instead highly biased and motivated to see ourselves in a certain light.

The self’s preeminent role in encoding and retrieval, combined with a powerful tendency for people to view themselves positively, creates fertile ground for memory biases that allow people to remember past experiences in a self-enhancing light. Consider, for example, college students who were led to believe that introversion is a desirable personality trait that predicts academic success, and then searched their memories for incidents in which they behaved in an introverted or extroverted manner. Compared with students who were led to believe that extroversion is a desirable trait, the introvert-success students more quickly generated memories in which they behaved like introverts than like extroverts. The memory search was biased by a desire to see the self positively, which led students to select past incidents containing the desired trait.

The egocentric bias occurs constantly, in almost any situation where it possibly can: It’s similar to what’s been called overconfidence in other arenas. We want to see ourselves in a positive light, and so we do. We mine our brains for evidence of our excellent qualities. We maintain positive illusions that keep our spirits up.

This is generally a good thing for our self-esteem, but as any divorced couple knows, it can also cause us to have a very skewed version of the past.

Bias from Stereotyping

In our series on the development of human personality, we discussed the idea of stereotyping as something human beings do constantly and automatically; the much-maligned concept is central to how we comprehend the world.

Stereotyping exists because it saves energy and space — it allows us to consolidate much of what we learn into categories with broadly accurate descriptions. As we learn new things, we either slot them into existing categories, create new categories, or slightly modify old categories (the one we like the least, because it requires the most work). This is no great insight.

But what is interesting is the degree to which stereotyping colors our memories themselves:

If I tell you that Julian, an artist, is creative, temperamental, generous, and fearless, you are more likely to recall the first two attributes, which fit the stereotype of an artist, than the latter two attributes, which do not. If I tell you that he is a skinhead, and list some of his characteristics, you’re more likely to remember that he is rebellious and aggressive than that he is lucky and modest. This congruity bias is especially likely to occur when people hold strong stereotypes about a particular group. A person with strong racial prejudices, for example, would be more likely to remember stereotypical features of an African American’s behavior than a less prejudiced person, and less likely to remember behaviors that don’t fit the stereotype.

Not only that, but when things happen which contradict our expectations, we are capable of distorting the past in such a way as to bring it into line. When we try to remember a tale after we know how it ends, we’re more likely to distort the details of the story so that the whole thing makes sense and fits our understanding of the world. This is related to the narrative fallacy and hindsight bias discussed above.


The final sin which Schacter discusses in his book is Persistence, the often difficult reality that some memories, especially negative ones, persist a lot longer than we wish. We’re not going to cover it here, but suggest you check out the book in its entirety to get the scoop.

And with that, we’re going to wrap up our series on the human memory. Take what you’ve learned, digest it, and then keep pushing deeper in your quest to understand human nature and the world around you.

The Many Ways Our Memory Fails Us (Part 2)

(Purchase a copy of the entire 3-part series in one sexy PDF for $3.99)


In part one, we began a conversation about the failings of the human memory, using Daniel Schacter’s excellent The Seven Sins of Memory as our guide. We covered transience — the loss of memory due to time — and absent-mindedness — memories that were never encoded at all or were not available when needed. Let’s keep going with a couple more whoppers: Blocking and Misattribution.


Blocking is the phenomenon in which something is indeed encoded in our memory and should be easily available in the given situation, but simply will not come to mind. We’re most familiar with blocking as the always frustrating “It’s on the tip of my tongue!”

Unsurprisingly, blocking occurs most frequently when it comes to people’s names, and it occurs more frequently as we get older:

Twenty-year-olds, forty-year-olds, and seventy-year-olds kept diaries for a month in which they recorded spontaneously occurring retrieval blocks that were accompanied by the “tip of the tongue” sensation. Blocking occurred occasionally for the names of objects (for example, algae) and abstract words (for example, idiomatic). In all three groups, however, blocking occurred most frequently for proper names, with more blocks for people than for other proper names such as countries or cities. Proper name blocks occurred more frequently in the seventy-year-olds than in either of the other two groups.

This is not the worst sin our memory commits — excepting the times when we forget an important person’s name (which is admittedly not fun), blocking doesn’t cause the terrible practical results some of the other memory issues cause. But the reason blocking occurs does tell us something interesting about memory, something we intuitively know from other domains: We have a hard time learning things by rote or by force. We prefer associations and connections to form strong, lasting, easily available memories.

Why are names blocked from us so frequently, even more than objects, places, descriptions, and other nouns? For example, Schacter mentions experiments in which researchers show that we more easily forget a man’s name than his occupation — even if they’re the same word! (Baker/baker or Potter/potter, for example.)

It’s because relative to a descriptive noun like “baker,” which calls to mind all sorts of connotations, images, and associations, a person’s name has very little attached to it. We have no easy associations to make — it doesn’t tell us anything about the person or give us much to hang our hat on. It doesn’t really help us form an image or impression. And so we basically remember it by rote, which doesn’t always work that well.

Most models of name retrieval hold that activation of phonological representations [sound associations] occurs only after activation of conceptual and visual representations. This idea explains why people can often retrieve conceptual information about an object or person whom they cannot name, whereas the reverse does not occur. For example, diary studies indicate that people frequently recall a person’s occupation without remembering his name, but no instances have been documented in which a name is recalled without any conceptual knowledge about the person. In experiments in which people named pictures of famous individuals, participants who failed to retrieve the name “Charlton Heston” could often recall that he was an actor. Thus, when you block on the name “John Baker” you may very well recall that he is an attorney who enjoys golf, but it is highly unlikely that you would recall Baker’s name and fail to recall any of his personal attributes.

A person’s name is the weakest piece of information we have about them in our people-information lexicon, and thus the least available at any time, and the most susceptible to not being available as needed. It gets worse if it’s a name we haven’t needed to recall frequently or recently, as we all can probably attest to. (This also applies to the other types of words we block on less frequently — objects, places, etc.)

The only real way to avoid blocking problems is to create stronger associations when we learn names, or even re-encode names we already know by increasing their salience with a vivid image, even a silly one. (If you ever meet anyone named Baker…you know what to do.)

But the most important idea here is that information gains salience in our brain based on what it brings to mind. 

Schacter is noncommittal about whether blocking occurs in the sense implied by Freud’s idea of repressed memories; it seems the issue was not, at the time of writing, settled.


The memory sin of misattribution has fairly serious consequences. Misattribution happens all the time and is a peculiar memory sin where we do remember something, but that thing is wrong, or possibly not even our own memory at all:

Sometimes we remember events that never happened, misattributing speedy processing of incoming information or vivid images that spring to mind, to memories of past events that did not occur. Sometimes we recall correctly what happened, but misattribute it to the wrong time and place. And at other times misattribution operates in a different direction: we mistakenly credit a spontaneous image or thought to our own imagination, when in reality we are recalling it — without awareness — from something we read or heard.

The most familiar, but benign, experience we’ve all had with misattribution is the curious case of deja vu. As of the writing of his book, Schacter felt there was no convincing explanation for why deja vu occurs, but we know that the brain is capable of thinking it’s recalling an event that happened previously, even if it hasn’t.

In the case of deja vu, it’s simply a bit of an annoyance. But the misattribution problem causes more serious problems elsewhere.

The major one is eyewitness testimony, which we now know is notoriously unreliable. It turns out that when eyewitnesses claim they “know what they saw!” it’s unlikely they remember as well as they claim. It’s not their fault and it’s not a lie — you do think you recall the details of a situation perfectly well. But your brain is tricking you, just like deja vu. How bad is the eyewitness testimony problem? It used to be pretty bad.

…consider two facts. First, according to estimates made in the late 1980s, each year in the United States more than seventy-five thousand criminal trials were decided on the basis of eyewitness testimony. Second, a recent analysis of forty cases in which DNA evidence established the innocence of wrongly imprisoned individuals revealed that thirty-six of them (90 percent) involved mistaken eyewitness identification. There are no doubt other such mistakes that have not been rectified.

What happens is that, in any situation where our memory stores away information, it doesn’t have the horsepower to do it with complete accuracy. There are just too many variables to sort through. So we remember the general aspects of what happened, and we remember some details, depending on how salient they were.

We recall that we met John, Jim, and Todd, who were all part of the sales team for John Deere. We might recall that John was the young one with glasses, Jim was the older bald one, and Todd talked the most. We might remember specific moments or details of the conversation which stuck out.

But we don’t get it all perfectly, and if it was an unmemorable meeting, with the transience of time we start to lose the details. Binding together the general aspects and the specific details is a process called memory binding, and it’s often the source of misattribution errors.

Let’s say we remember for sure that we curled our hair this morning. All of our usual cues tell us that we did — our hair is curly, it’s part of our morning routine, we remember thinking it needed to be done, etc. But…did we turn the curling iron off? We remember that we did, but is that yesterday’s memory or today’s?

This is a memory binding error. Our brain didn’t sufficiently “link up” the curling event and the turning off of the curler, so we’re left to wonder. This binding issue leads to other errors, like the memory conjunction error, where sometimes the binding process does occur, but it makes a mistake. We misattribute the strong familiarity:

Having met Mr. Wilson and Mr. Albert during your business meeting, you reply confidently the next day when an associate asks you the name of the company vice president: “Mr. Wilbert.” You remembered correctly pieces of the two surnames but mistakenly combined them into a new one. Cognitive psychologists have developed experimental procedures in which people exhibit precisely these kinds of erroneous conjunctions between features of different words, pictures, sentences, or even faces. Thus, having studied spaniel and varnish, people sometimes claim to remember Spanish.

What’s happening is a misattribution. We know we saw the syllables Span- and -nish, and our memory tells us we must have seen Spanish. But we didn’t.

Back to the eyewitness testimony problem: what’s happening is that we combine a general familiarity with a lack of specific recall, and our brain recombines those into a misattribution. We recall a tall-ish man with some sort of facial hair; then we’re shown six men in a lineup, one of whom is tall-ish with facial hair, and our brain tells us that must be the guy. We make a relative judgment: Which person here is closest to what I think I saw? Unfortunately, like the Spanish/varnish issue, we never actually saw the person we’ve identified as the perp.

None of this occurs with much conscious involvement, of course. It’s happening subconsciously, which is why good procedures are needed to overcome the problem. In the case of suspect lineups, the solution is to show the witness each member, one after another, and have them give a thumbs up or thumbs down immediately. This takes away the relative comparison and makes us consciously compare the suspect in front of us with our memory of the perpetrator.
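The difference between the two procedures comes down to relative versus absolute judgment, and it can be sketched in a few lines of code. This toy simulation is our own illustration, not anything from Schacter’s book; the “faces,” the similarity scores, and the threshold are all invented numbers.

```python
def similarity(memory, face):
    """Toy similarity score (0..1) between a hazy remembered 'gist'
    and a lineup member, both represented as numbers for simplicity."""
    return 1.0 - abs(memory - face)

def simultaneous_pick(memory, lineup):
    """Relative judgment: pick whoever is CLOSEST to the memory,
    even if nobody is actually a good match."""
    return max(range(len(lineup)), key=lambda i: similarity(memory, lineup[i]))

def sequential_pick(memory, lineup, threshold=0.9):
    """Absolute judgment: see each person one at a time and compare
    against memory alone; may legitimately identify no one."""
    for i, face in enumerate(lineup):
        if similarity(memory, face) >= threshold:
            return i
    return None  # no identification -- an outcome the simultaneous lineup can't produce

memory = 0.5                                  # hazy gist of the perpetrator
lineup = [0.1, 0.35, 0.62, 0.8, 0.95, 0.2]    # the real perp isn't in the lineup

print(simultaneous_pick(memory, lineup))  # always picks someone: index 2
print(sequential_pick(memory, lineup))    # picks no one: None
```

The point of the sketch: the simultaneous procedure is guaranteed to return somebody, which is exactly the failure mode when the real perpetrator isn’t in the lineup at all.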

The good news about this error is that people can be encouraged to search their memory more carefully. But it’s far from foolproof, even when we’re getting a very strong indication that we remember something.

What helps prevent us from making too many errors of this kind is something Schacter calls the distinctiveness heuristic: if a distinctive thing supposedly happened, we reason that we’d have a good memory of it. And usually, this is a very good heuristic to have. (Remember, salience always encourages memory formation.) As we discussed in Part One, a salient artifact gives us something to tie a memory to. If I meet someone wearing a bright rainbow-colored shirt, I’m a lot more likely to recall some details about them, simply because they stuck out.


As an aside, misattribution allows us one other interesting insight into the human brain: Our “people information” remembering is a specific, distinct module, one that can falter on its own, without harming any other modules. Schacter discusses a man with a delusion that many of the normal people around him were film stars. He even misattributed made-up famous-sounding names (like Sharon Sugar) to famous people, although he couldn’t put his finger on who they were.

But the man did not falsely recognize other things. Made up cities or made up words did not trip up his brain in the strange way people did. This (and other data) tells us that our ability to recognize people is a distinct “module” our brain uses, supporting one of Judith Rich Harris’s modules of human personality that we’ve discussed: The “people information lexicon” we develop throughout our lives.


One final misattribution is something called cryptomnesia — in a sense, the opposite of déjà vu. Instead of feeling that a new experience is familiar, we experience something we have in fact seen before as new and novel. Accidental plagiarism can result from cryptomnesia. (Try telling that to your school teachers!) Cryptomnesia falls into the same bucket as other misattributions in that we fail to recollect the source of the information we’re recalling — the information and the event where we first encountered it are not bound together properly. Say we “invent” the melody to a song that already exists. The melody sounds wonderful and familiar, so we like it. But we mistakenly think it’s new.

In the end, Schacter reminds us to think carefully about the memories we “know” are true, and to try to remember specifics when possible:

We often need to sort out ambiguous signals, such as feelings of familiarity or fleeting images, that may originate in specific past experiences, or arise from subtle influences in the present. Relying on judgment and reasoning to come up with plausible attributions, we sometimes go astray.  When misattribution combines with another of memory’s sins — suggestibility — people can develop detailed and strongly held recollections of complex events that never occurred.

And with that, we will leave it here for now. Next time we’ll delve into suggestibility and bias, two more memory sins with a range of practical outcomes.

The Many Ways Our Memory Fails Us (Part 1)

(Purchase a copy of the entire 3-part series in one sexy PDF for $3.99)


Recently, we discussed some of the net advantages of our faulty, but incredibly useful, memory system. Thanks to Harvard’s brilliant memory-focused psychologist Daniel Schacter, we know not to be too harsh in judging its flaws. The system we’ve been endowed with, on the whole, works at its intended purpose, and a different one might not be a better one.

It isn’t optimal though, and since we’ve given it a “fair shake”, it is worth discussing where the errors actually lie, so we can work to improve them, or at least be aware of them.

In his fascinating book, Schacter lays out seven broad areas in which our memory regularly fails us. Let’s take a look at them so we can better understand ourselves and others, and maybe come up with a few optimal solutions. Perhaps the most important lesson will be that we must expect our memory to be periodically faulty, and take that into account in advance.

We’re going to cover a lot of ground, so this one will be a multi-parter. Let’s dig in.


The first regular memory error is called transience. This is one we’re all quite familiar with, but sometimes forget to account for: The forgetting that occurs with the passage of time. Much of our memory is indeed transient — things we don’t regularly need to recall or use get lost with time.

Schacter gives an example of the phenomenon:

On October 3, 1995, the most sensational criminal trial of our time reached a stunning conclusion: a jury acquitted O.J. Simpson of murder. Word of the not-guilty verdict spread quickly, nearly everyone reacted with either outrage or jubilation, and many people could talk about little else for weeks or days afterward. The Simpson verdict seemed like just the sort of momentous event that most of us would always remember vividly: how we reacted to it, and where we were when we heard the news.

Can you recall how you found out that Simpson had been acquitted? Chances are that you don’t remember, or that what you remember is wrong. Several days after the verdict, a group of California undergraduates provided researchers with detailed accounts of how they learned about the jury’s decision. When the researchers probed students’ memories again fifteen months later, only half recalled accurately how they found out about the decision. When asked again nearly three years after the verdict, less than 30 percent of students’ recollections were accurate; nearly half were dotted with major errors.

Soon after something happens, particularly something meaningful or impactful, we have a pretty accurate recollection of it. But the accuracy of that recollection declines on a curve over time — quickly at first, then slowing down. We go from remembering specifics to remembering the gist of what happened. (Again, on average — some detail is often left intact.) As the Simpson trial example shows, even in the case of a very memorable event, transience is high. Less memorable events are forgotten almost entirely.
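As a rough sketch of that curve, here is a toy exponential forgetting function. The decay constant is invented — chosen only so the output loosely echoes the Simpson-study numbers quoted above — and the true shape of forgetting is debated (power laws often fit the data better than exponentials):

```python
import math

def retention(days, tau=650.0):
    """Toy exponential forgetting curve: fraction of accurate detail
    remaining after `days`. tau is NOT an empirical constant -- it is
    picked here purely for illustration."""
    return math.exp(-days / tau)

# Verdict day, one week, ~15 months, ~3 years
for days in (0, 7, 450, 1095):
    print(f"day {days:4d}: {retention(days):.0%} accurate")
```

Note the qualitative behavior the passage describes: the curve is steepest early on and flattens out, so the near-term loss of detail outpaces the later drift from “specifics” to “gist.”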

Later on, we tend to fill in the specific details of an event with what would typically happen in that situation. Schacter explains:

Try to answer in detail the following three questions: What do you do during a typical day at work? What did you do yesterday? And what did you do on that day one week earlier? When twelve employees in the engineering division of a large office-product manufacturer answered these questions, there was a dramatic difference in what they recalled from yesterday and a week earlier. The employees recalled fewer activities from a week ago than yesterday, and the ones they did recall from a week earlier tended to be part of a “typical” day. Atypical activities — departures from the daily script — were remembered much more frequently after a day than after a week. Memory after a day was close to a verbatim record of specific events; memory after a week was closer to a generic description of what usually happens.

So when we need to recall a memory, we tend to reconstruct as best as we can, starting with whatever “gist” is left over in our brains, and filling in the details by (often incorrectly) assuming that particular event was a lot like others. Generally, this is a correct assumption. There’s no reason to remember exactly what you ate last Thanksgiving, so turkey is a pretty reliable bet. Occasionally, though, transience gets us in trouble, as anyone who’s forgotten a name they should have remembered can attest.

How do we help solve the issue of transience?

Obviously, one easy solution, if it’s something we wish to remember specifically, and in an unaltered form, is to record it as specifically as possible and as soon as possible. That is the optimal solution, for time begins acting immediately to make our memories vague.

Another idea is visual imagery. Visual mnemonics are popular in the memory-improvement game: associating parts of a hoped-for memory with highly vivid imagery (an elephant squashing a clown!), which can be easily recalled later. Greek orators were famous for the technique.

The problem is that almost no one uses this on a day to day basis, because it’s very cognitively demanding. You must go through the process of making interesting and evocative associations every time you want to remember something — there’s no “general memory improvement” going on, which is what people are really interested in, where all future memories are more effectively encoded.

Another approach — associating and tying something you wish to remember with something else you already know to increase its availability later on — is also useful, but as with visual imagery, must be used each and every time.

In fact, so far as we can tell, the only “general memory improver” available to us is to create better habits of association — attaching vivid stories, images, and connections to things — the very habits we talk about frequently when we discuss the mental model approach. It won’t happen automatically.


The second memory failure is closely related to transience, but a little different in practice. Whereas transience entails remembering something that then fades, absent-mindedness is a process whereby the information is never properly encoded, or is simply overlooked at the point of recall.

Failed encoding explains phenomena like regularly misplacing our keys or glasses: The problem is not that the information faded, it’s that it never made it from our working memory into our long term memory. This often happens because we are distracted or otherwise not paying attention at the moment of encoding (e.g., when we take our glasses off).

Interestingly enough, although divided attention can prevent us from retaining particulars, we still may encode some basic familiarity: 

Familiarity entails a more primitive sense of knowing that something has happened previously, without dredging up particular details. In [a] restaurant, for example, you might have noticed at a nearby table someone you are certain you have met previously despite failing to recall such specifics as the person’s name or how you know her. Laboratory studies indicate that dividing attention during encoding has a drastic effect on subsequent recollection, and has little or no effect on familiarity.

This phenomenon probably happens because divided attention prevents us from elaborating on the particulars that are necessary for subsequent recollection, but allows us to record some rudimentary information that later gives rise to a sense of familiarity.

Schacter also points out something older people might take solace in: Aging produces a cognitive effect similar to divided attention. The reason older people feel they’re constantly misplacing their keys or checkbook is that the age-related decline in cognitive resources mirrors the “split attention” problem that causes the rest of us to do the same.

A related phenomenon to this poor-encoding problem is change blindness — failing to see differences in objects or scenes unfolding over time, similar to the “slowly boiling a frog” issue most of us are familiar with. It is a close cousin of the Invisible Gorilla problem (strictly speaking, inattentional blindness — failing to notice something in plain view), made famous through its vivid demonstration by Daniel Simons and Christopher Chabris.

In fact, in another experiment, Simons was able to show that even in a real-life conversation, he could swap out one man for another in many instances without the conversational partner even noticing! Magicians and con-men regularly use this to fool and astonish.

What’s happening is shallow encoding — similar to the transience problem, we often encode only a superficial level of information related to what’s happening in front of our face, even when talking to a real person. Thus, subtly changing details are not registered because they were never encoded in the first place! (Sherlock Holmes made a career of countering this natural tendency by being super-observant.)

Generally, this is fine; as a whole, the system serves us well. But the instances where it doesn’t can get us into trouble.


This brings up the problem of absent-mindedness in what psychologists call prospective memory — remembering something you need to do in the future. We’re all familiar with situations when we forget to do something we clearly “told ourselves” we needed to remember.

The typical antidote is using cues to help us remember. An event-based prospective memory cue goes like this: “When you see Harry today, tell him to call me.” A time-based prospective memory cue goes like this: “At 11PM, take the cookies out of the oven.”

It doesn’t always work, though. Time-based prospective memory is the worst of all: We’re not consistently good at remembering that “11PM = cookies” because other stuff will also be happening at 11PM! A time-based cue is insufficient.

For the same reason, an event-based cue will also fail to work if we’re not careful:

Consider the first event-based prospective memory. Frank has asked you to tell Harry to call him, but you have forgotten to do so. You indeed saw Harry in the office, but instead of remembering Frank’s message you were reminded of the bet you and Harry made concerning last night’s college basketball championship, gloating for several minutes over your victory before settling down to work.

“Harry” carries many associations other than “Tell him something for Frank.” Thus, we’re not guaranteed to recall it in the moment.

This knowledge allows us to construct an optimal solution to the prospective memory problem: Specific, distinctive cues that call to mind the exact action needed, at the time it is needed. All elements must be in place for the optimal solution.

Post-it notes with explicit directions put in an optimal place (somewhere a post-it note would not usually be found) tend to work well. A specific reminder on your phone that pops up exactly when needed will work.  As Schacter puts it, “The point is to transfer as many details as possible from working memory to written reminders.” Be specific, make it stand out, make it timely. Hoping for a spontaneous reminder to work means that, some percentage of the time, we will certainly commit an absent-minded error. It’s just the way our minds work.


Let’s pause there for now. In our next post on memory, we’ll cover the sins of Blocking and Misattribution, and some potential solutions. We recommend re-reading Part 1 at that time, and then Parts 1 and 2 when Part 3 comes out. One reliable fact about memory is that repeated exposure is nearly always a good idea. In the meantime, try checking out the book in its entirety, if you want to read ahead.

Daniel Pink on Incentives and the Two Types of Motivation

Motivation is a tricky, multifaceted thing. How do we motivate people to become the best they can be? How do we motivate ourselves? Sometimes when we are running toward a goal, we suddenly lose steam and peter out before we cross the finish line. Why do we lose our motivation partway to achieving our goal?

Dan Pink wrote an excellent book on motivation called Drive: The Surprising Truth About What Motivates Us. We’ve talked about the book before but it’s worth going into a bit more detail.

When Pink discusses motivation he breaks it into two specific types: extrinsic and intrinsic.

Extrinsic motivation is driven by external forces such as money or praise. Intrinsic motivation comes from within and can be as simple as the joy one feels after accomplishing a challenging task. Pink also describes two distinctly different types of tasks: algorithmic and heuristic. An algorithmic task is one where you follow a set of instructions down a defined path to a single conclusion. A heuristic task has no instructions or defined path; one must be creative and experiment with possibilities to complete it.

As you can see, the two types of motivation and the two types of tasks are quite different.

Let’s look at how they play against each other depending on what type of reward is offered.

Baseline Rewards

Money was once thought to be the best way to motivate an employee. If you wanted someone to stay with your company or to perform better you simply had to offer financial incentives. However, the issue of money as a motivator has become moot in many sectors. If you are a skilled worker you will quite easily be able to find a job in your desired salary range. Pink puts it succinctly:

Of course the starting point for any discussion of motivation in the workplace is a simple fact of life: People have to earn a living. Salary, contract payments, some benefits, a few perks are what I call “baseline rewards.” If someone’s baseline rewards aren’t adequate or equitable, her focus will be on the unfairness of her situation and the anxiety of her circumstance. You’ll get neither the predictability of extrinsic motivation nor the weirdness of intrinsic motivation. You’ll get very little motivation at all. The best use of money as a motivator is to pay people enough to take the issue of money off the table.

Once the baseline rewards have been sorted, we are often offered other “carrots and sticks” to nudge our behavior. Many of these rewards actually achieve the opposite of what was intended.

‘If, then’ Rewards

‘If, then’ rewards are those where we promise to deliver something to an individual once they complete a specific task: if you hit your sales goals this month, then I will give you a bonus. There are inherent dangers with ‘if, then’ rewards. They tend to prompt a short-term surge in motivation but actually dampen it over the long term. The very fact of offering a reward for some form of effort sends the message that the work is, well, work. This can have a large negative impact on intrinsic motivation. Additionally, rewards by their very nature narrow our focus; we tend to ignore everything but the finish line. This is fine for algorithmic tasks but hurts us with heuristic tasks.

Amabile and others have found that extrinsic rewards can be effective for algorithmic tasks – those that depend on following an existing formula to its logical conclusion. But for more right-brain undertakings – those that demand flexible problem-solving, inventiveness, or conceptual understanding – contingent rewards can be dangerous. Rewarded subjects often have a harder time seeing the periphery and crafting original solutions.


When we use goals to motivate us how does that affect how we think and behave?

Like all extrinsic motivators, goals narrow our focus. That’s one reason they can be effective; they concentrate the mind. But as we’ve seen, a narrowed focus exacts a cost. For complex or conceptual tasks, offering a reward can blinker the wide-ranging thinking necessary to come up with an innovative solution. Likewise, when an extrinsic goal is paramount – particularly a short-term, measurable one whose achievement delivers a big payoff – its presence can restrict our view of the broader dimensions of our behavior. As the cadre of business school professors write, ‘Substantial evidence demonstrates that in addition to motivating constructive effort, goal setting can induce unethical behavior.’

The examples are legion, the researchers note. Sears imposes a sales quota on its auto repair staff – and workers respond by overcharging customers and completing unnecessary repairs. Enron sets lofty revenue goals – and the race to meet them by any means possible catalyzes the company’s collapse. Ford is so intent on producing a certain car at a certain weight at a certain price by a certain date that it omits safety checks and unleashes the dangerous Ford Pinto.

The problem with making extrinsic reward the only destination that matters is that some people will choose the quickest route there, even if it means taking the low road.

Indeed, most of the scandals and misbehavior that have seemed endemic to modern life involve shortcuts. Executives game their quarterly earnings so they can snag a performance bonus. Secondary school counselors doctor student transcripts so their seniors can get into college. Athletes inject themselves with steroids to post better numbers and trigger lucrative performance bonuses.

Contrast that approach with behavior sparked by intrinsic motivation. When the reward is the activity itself – deepening learning, delighting customers, doing one’s best – there are no shortcuts. The only route to the destination is the high road. In some sense, it’s impossible to act unethically because the person who’s disadvantaged isn’t a competitor but yourself.


These same pressures that may nudge you towards unethical actions can also push you to make more risky decisions. The drive towards the goal can convince you to make decisions that in any other situation you would likely never consider. (See more about the dangers of goals.)

It’s not only the person being motivated with the reward who is hurt here. The person trying to encourage a certain type of behavior also falls into a trap and is forced to course-correct, which often leaves them worse off than if they had never offered the reward in the first place.

The Russian economist Anton Suvorov has constructed an elaborate econometric model to demonstrate this effect, configured around what’s called ‘principal-agent theory.’ Think of the principal as the motivator – the employer, the teacher, the parent. Think of the agent as the motivatee – the employee, the student, the child. A principal essentially tries to get the agent to do what the principal wants, while the agent balances his own interests with whatever the principal is offering. Using a blizzard of complicated equations that test a variety of scenarios between principal and agent, Suvorov has reached conclusions that make intuitive sense to any parent who’s tried to get her kids to empty the garbage.

By offering a reward, a principal signals to the agent that the task is undesirable. (If the task were desirable, the agent wouldn’t need a prod.) But that initial signal, and the reward that goes with it, forces the principal onto a path that’s difficult to leave. Offer too small a reward and the agent won’t comply. But offer a reward that’s enticing enough to get the agent to act the first time, and the principal ‘is doomed to give it again in the second.’ There’s no going back. Pay your son to take out the trash – and you’ve pretty much guaranteed the kid will never do it again for free. What’s more, once the initial money buzz tapers off, you’ll likely have to increase the payment to continue compliance.
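Suvorov’s actual model is a dense set of equations, but the escalation dynamic described above can be caricatured in a few lines. Everything here — the effort cost, the starting level of intrinsic motivation, the crowding-out rate — is an invented number for illustration, not anything from the model itself:

```python
def required_reward(intrinsic, cost=10.0):
    """Toy compliance rule: the agent acts when reward plus intrinsic
    motivation covers the cost of effort. Purely illustrative numbers."""
    return max(0.0, cost - intrinsic)

intrinsic = 8.0   # the kid half-liked taking out the trash to begin with
for week in range(1, 6):
    pay = required_reward(intrinsic)
    print(f"week {week}: pay {pay:.1f} (intrinsic motivation {intrinsic:.1f})")
    intrinsic *= 0.5   # each paid round crowds out some intrinsic motivation
```

Under these made-up assumptions, the required payment climbs week after week as payment itself erodes the intrinsic motivation that used to do part of the work — the “no going back” path the quote describes.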

Even if you are able to trigger the better behavior, it will often disappear once the incentives are removed.

In environments where extrinsic rewards are most salient, many people work only to the point that triggers the reward – and no further. So if students get a prize for reading three books, many won’t pick up a fourth, let alone embark on a lifetime of reading – just as executives who hit their quarterly numbers often won’t boost earnings a penny more, let alone contemplate the long-term health of their company. Likewise, several studies show that paying people to exercise, stop smoking, or take their medicines produces terrific results at first – but the healthy behavior disappears once the incentives are removed.

When Do Rewards Work?

Rewards can work for routine (algorithmic) tasks that require little creativity.

For routine tasks, which aren’t very interesting and don’t demand much creative thinking, rewards can provide a small motivational booster shot without the harmful side effects. In some ways, that’s just common sense. As Edward Deci, Richard Ryan, and Richard Koestner explain, ‘Rewards do not undermine people’s intrinsic motivation for dull tasks because there is little or no intrinsic motivation to be undermined.’

You will increase your chances for success when rewarding routine tasks using these three practices:

  1. Offer a rationale for why the task is necessary.
  2. Acknowledge that the task is boring.
  3. Allow people to complete the task their own way (think autonomy, not control).

Any extrinsic reward should be unexpected and offered only once the task is complete. In many ways this is common sense, as it is the opposite of ‘if, then’ rewards, allowing you to avoid their many failings (focus isn’t solely on the prize, motivation won’t wane if the reward isn’t present during the task, etc.). One word of caution, however: be careful that these rewards don’t become expected, because at that point they are no different from ‘if, then’ rewards.
