Over 500,000 people visited Farnam Street last month to expand their knowledge and improve their thinking. Work smarter, not harder with our free weekly newsletter that's full of time-tested knowledge you can add to your mental toolbox.
Out of all the books I read this year, here is a list of what I found most worth reading in 2016.
1. The Psychology of Man’s Possible Evolution
These lectures, originally called Six Psychological Lectures, were first privately printed in the 1940s. Of the first run of 150 copies, none were sold. The essays were published again after Ouspensky’s death, and this time they became a hit. While the book is about psychology, it’s different from what we usually think of as psychology — “for thousands of years psychology existed under the name philosophy.” Consider this a study in what man may become — by working simultaneously on knowledge and inner unity.
2. The Island of Knowledge: The Limits of Science and the Search for Meaning
Imagine the sum of our knowledge as an Island in a vast and endless ocean. This is the Island of Knowledge. The coastline represents the boundary between the known and unknown. As we grow our understanding of the world, the Island grows, and with it so do the shores of our ignorance. “We strive toward knowledge, always more knowledge,” Gleiser writes, “but must understand that we are, and will remain, surrounded by mystery.” The book is a fascinating and wide-ranging tour through scientific history. (Dig deeper into this amazing read here.)
3. When Breath Becomes Air
It’s been a while since I’ve cried reading a book. This beautifully written memoir, by a young neurosurgeon diagnosed with terminal cancer, attempts to answer the question What makes a life worth living? If you read this and don’t feel something, you’re probably a robot.
4. The Sovereign Individual: Mastering the Transition to the Information Age
The book, which argues “the information revolution will destroy the monopoly power of the nation-state as surely as the Gunpowder Revolution destroyed the Church’s monopoly,” is making the rounds in Silicon Valley, passed around like candy. Even if its forecasts are controversial, the book is a good read, full of interesting and detailed arguments. I have underlines on nearly every page. “Information societies,” the authors write, “promise to dramatically reduce the returns to violence … When the payoff for organizing violence at a large scale tumbles, the payoff from violence at a smaller scale is likely to jump. Violence will become more random and localized.” The Sovereign Individual, who for the first time “can educate and motivate himself,” will be “almost entirely free to invest their own work and realize the full benefits of their own productivity.” This unleashing of human potential will, the authors argue, shift the greatest source of wealth from physical capital to ideas — “anyone who thinks clearly will potentially be rich.” Interestingly, in this potential transition, the effects are “likely to be centered among those of the middle talent in currently rich countries. They particularly may come to feel that information technology poses a threat to their way of life.” The book also predicts the death of politics: “weakened by the challenge from technology, the state will treat increasingly autonomous individuals, its former citizens, with the same range of ruthlessness and diplomacy it has heretofore displayed in its dealings with other governments.” As technology reshapes the world, it also “antiquates laws, reshapes morals, and alters preconceptions. This book explains how.”
5. To Kill a Mockingbird
I know, I know. Hear me out. Someone I respect mentioned that he thought Atticus Finch was the perfect blend of human characteristics: tough and skilled, yet humble and understanding. He’s frequently rated as a “most admired” hero in fiction, yet he’s a lawyer competing with Jedis, detectives, spies, and superheroes. Isn’t that kind of interesting? Since it had been at least 15 years since I’d read TKM, I wanted to go back and remember what made Atticus so admired: his courage, his humility, his understanding of people. I forgot just how perceptive Finch was when it came to what we’d call “group social dynamics” — he forgives the individual members of the mob that shows up to hurt Tom Robinson simply because he understands that mob psychology is capable of overwhelming otherwise good people. How many of us would be able to do that? Atticus Finch is certainly a fictional, and perhaps unattainably moral, hero. But I will point out that not only do real-life Finches exist, but that even if we never arrive at a Finchian level of heroic integrity and calm temperament, it’s certainly a goal worth pursuing. Wise words from the book Rules for a Knight sum it up best: “To head north, a knight may use the North Star to guide him, but he will not arrive at the North Star. A knight’s duty is to proceed in that direction.” (Here are some of the lessons I took away from the book.)
6. Lee Kuan Yew: The Grand Master’s Insights on China, the United States, and the World
If you’re not familiar with Lee Kuan Yew, he’s the “Father of Modern Singapore,” the man who took a small, poor island just north of the equator in Southeast Asia with GDP per capita of ~$500 in 1965 and turned it into a modern powerhouse with GDP per capita of over $70,000 as of 2014, with some of the lowest rates of corruption and highest rates of economic freedom in the world. Finding out how he did it is worth anyone’s time. This book is a short introduction to his style of thinking: A series of excerpts of his thoughts on modern China, the modern U.S., Islamic Terrorism, economics, and a few other things. It’s a wonderful little collection. (We’ve actually posted about it before.) Consider this an appetizer (a delicious one) for the main course: From Third World to First, Yew’s full account of the rise of Singapore. (Dig deeper here.)
7. An Illustrated Book of Bad Arguments
Perfect summer reading for adults and kids alike. One friend of mine has created a family game where they all try to spot the reasoning flaws of others. The person with the most points at the end of the week gets to pick where they go for dinner. I have a suspicion his kids will turn out to be politicians or lawyers.
8. Intuition Pumps and Other Tools for Thinking
Dan Dennett is one of the best-known cognitive scientists on the planet. This book is a collection of 77 short essays on different “thinking tools,” basically thought experiments Dennett uses to slice through tough problems, including some tools for thinking about computing, thinking about meaning, and thinking about consciousness. Like Richard Feynman’s great books, this one acts as a window into a brilliant mind and how it handles interesting and difficult problems. If you only walk away with a few new mental tools, it’s well worth the time spent. (You can learn a lot more about Dennett here, here, and here.)
9. The Seven Sins of Memory (How the Mind Forgets and Remembers)
I found this in the bibliography of Judith Rich Harris’ No Two Alike. Schacter is a psychology professor at Harvard who runs the Schacter Memory Lab. The book explores the seven “issues” we tend to find with regard to our memory: Absent-mindedness, transience, blocking, misattribution, suggestibility, bias, and persistence. The fallibility of memory is fascinating: We rely on it so heavily and trust it so deeply, yet as Schacter shows, it’s extremely faulty. It’s not just about forgetting where you left your keys. Modern criminologists know that eyewitness testimony is deeply flawed. Some of our deepest and most hard-won memories — the things we know are true — are frequently wrong or distorted. Learning to calibrate our confidence in our own memory is not at all easy. A very interesting topic to explore. (We did a three-part series on this book: Introduction, and parts One, Two, and Three.)
10. Talk Lean: Shorter Meetings. Quicker Results. Better Relations
This book is full of useful tips on listening better, being candid and courteous, and learning what derails meetings, conversations, and relationships with people at work. Don’t worry. It’s not about leaving things unsaid that might be displeasing for other people. In fact, leaving things unsaid is often more detrimental to the relationship than airing them out. Rather, it’s about finding a way to say them so people will hear them and not feel defensive. If you want to get right to the point and not alienate people, this book will help you. I know because this is something, personally, I struggle with at times.
11. The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World
I recently had a fascinating multi-hour dinner with the author, Pedro Domingos, on where knowledge comes from. Historically, at least, the answer has been evolution, experience, and culture. Now, however, there is a new source of knowledge: Machine learning. The book offers an accessible overview of the different ways of machine learning and the search for a master, unifying, theory. The book also covers how machine learning works and gives Pedro’s thoughts on where we’re headed. (Dig deeper in this podcast.)
12. Why Don’t We Learn from History?
This is a short (~120pp) book by the military historian and strategist B.H. Liddell Hart, a man who not only wrote military history but surely influenced it, especially in Germany in the World War period. He wrote this short synthesis at the end of his life and didn’t have a chance to finish it, but the result is still fascinating. Hart takes a “negative” view of history; in other words, What went wrong? How can we avoid it? The result of that study, as he writes in the introduction, is that “History teaches us personal philosophy.” Those who learn vicariously as well as directly have a big leg up. Something to take to heart. I plan to read more of his works.
13. A Powerful Mind: The Self-Education of George Washington
What a great book idea by Adrienne Harrison. There are a zillion biographies of GW out there, with Chernow's getting a lot of praise recently. But Harrison zeroes in on Washington’s autodidactic nature. Why did he read so much? How did he educate himself? Any self-motivated learner is probably going to enjoy this.
14. Sapiens: A Brief History of Humankind
One of the best books I’ve come across in a long time. Sapiens is a work of “Big History” — in the style of Jared Diamond’s Guns, Germs, and Steel — that seeks to understand humanity in a deep way. Many of Professor Harari’s conclusions will be uncomfortable for some to read (there is no attempt at political correctness), but his diagnosis of human history is undeniably interesting and at least partially correct. He draws on many fields to arrive at his conclusions; a grand method of synthesis that will be familiar to long-time Farnam Street readers. The book is almost impossible to summarize given the multitude of ideas presented. But then again, most great books are. (Dig deeper into this amazing read here, here, and here.)
15. Becoming Wise: An Inquiry into the Mystery and Art of Living
A refreshing signal in a world of noise that should be read and immediately re-read. There is so much goodness in here that you will scarcely find more than a page or two in my copy without a mark, bent page, or highlight. The entire book offers texture to thoughts you knew you had but didn't know how to express.
16. The Happiness Trap: How to Stop Struggling and Start Living
The way most of us search for and attempt to hold onto fleeting moments of happiness ends up ensuring that we’re miserable. A great practical book on developing mindfulness, which is so important in many aspects of your life, including satisfaction. Might be the best self-help book I’ve read.
(Purchase a copy of the entire 3-part series in one sexy PDF for $3.99)
In the first two parts of our series on memory, we covered four major “sins” committed by our memories: Absent-Mindedness, Transience, Misattribution, and Blocking, using Daniel Schacter's The Seven Sins of Memory as our guide.
We're going to finish it off today with three other sins: Suggestibility, Bias, and Persistence, hopefully leaving us with a full understanding of our memory and where it fails us from time to time.
As its name suggests, the sin of suggestibility refers to our brain's tendency to incorporate misleading information suggested by outside sources into our own memories:
Suggestibility in memory refers to an individual's tendency to incorporate misleading information from external sources — other people, written materials or pictures, even the media — into personal recollections. Suggestibility is closely related to misattribution in the sense that the conversion of suggestions into inaccurate memories must involve misattribution. However, misattribution often occurs in the absence of overt suggestion, making suggestibility a distinct sin of memory.
Suggestibility is such a difficult phenomenon because memories pulled from outside sources feel as real as our own. Take the case of a “false veteran” that Schacter describes in the book:
On May 31, 2000, a front-page story in the New York Times described the baffling case of Edward Daly, a Korean War veteran who made up elaborate — but imaginary — stories about his battle exploits, including his involvement in a terrible massacre in which he had not actually participated. While weaving his delusional tale, Daly talked to veterans who had participated in the massacre and “reminded” them of his heroic deeds. His suggestions infiltrated their memories. “I know that Daly was there,” pleaded one veteran. “I know that. I know that.”
The key word here is infiltrated. This brings to mind the wonderful Christopher Nolan movie Inception, about a group of experts who seek to infiltrate the minds of sleeping targets in order to change their memories. The movie is fictional but there is a subtle reality to the idea: With enough work, an idea that is merely suggested to us in one context can seem like our own idea or our own memory.
Take suggestive questioning, a problem with criminal investigations. The investigator talks to an eyewitness and, hoping to jog their memory, asks a series of leading questions, arriving at the answer he was hoping for. But is it genuine? Not always.
Schacter describes a psychology experiment wherein participants see a video of a robbery and then are fed misleading suggestions about the robbery soon after, such as the idea that the victim of the robbery was wearing a white apron. Amazingly, even when people could recognize that the apron idea was merely suggested to them, many still regurgitated the suggested idea!
Previous experiments had shown that suggestive questions produce memory distortion by creating source memory problems like those in the previous chapter: participants misattribute information presented only in suggestive questions to the original videotape. [The psychologist Philip] Higham's results provide an additional twist. He found that when people took a memory test just minutes after receiving the misleading question, and thus still correctly recalled that the “white apron” was suggested by the experimenter, they sometimes insisted nevertheless that the attendant wore a white apron in the video itself. In fact, they made this mistake just as often as people who took the memory test two days after receiving misleading suggestions, and who had more time to forget that the white apron was merely suggested. The findings testify to the power of misleading suggestions: they can create false memories of an event even when people recall that the misinformation was suggested.
The problem of overconfidence also plays a role in suggestion and memory errors. Take an experiment where subjects are shown a man entering a department store and then told he murdered a security guard. After being shown a photo lineup (which did not contain the gunman), some were told they chose correctly and some were told they chose incorrectly. Guess which group was more confident and trustful of their memories afterwards?
It was, of course, the group that received reinforcement. Not only were they more confident, but they felt they had better command of the details of the gunman's appearance, even though they were as wrong as the group that received no positive feedback. This has vast practical applications. (Consider a jury taking into account the testimony of a very confident eyewitness, reinforced by police with an agenda.)
One more interesting idea in reference to suggestibility: Like the DiCaprio-led clan in the movie Inception, psychologists have been able to successfully “implant” false memories of childhood in many subjects based merely on suggestion alone. This should make you think carefully about what you think you remember about the distant past:
[The psychologist Ira] Hyman asked college students about various childhood experiences that, according to their parents, had actually happened, and also asked about a false event that, their parents confirmed, had never happened. For instance, students were asked: “When you were five you were at the wedding reception of some friends of the family and you were running around with some other kids, when you bumped into the table and spilled the punch bowl on the parents of the bride.” Participants accurately remembered almost all of the true events, but initially reported no memory of the false events.
However, approximately 20 to 40 percent of participants in different experimental conditions eventually came to describe some memory of the false event in later interviews. In one experiment, more than half of the participants who produced false memories described them as “clear” recollections that included specific details of the central event, such as remembering exactly where or how one spilled the punch. Just under half reported “partial” false memories, which included some details but no specific memory of the central event.
Such is the power of suggestion.
The Sin of Bias
The problem of bias will be familiar to regular readers. In some form or another, we're subject to mental biases every single day; many are benign, some are harmful, and most are not hard to understand. Biases specific to memory are worth studying because they're so easy and natural to fall into. Because we trust our memory so deeply, they often go unquestioned. But we might want to be careful:
The sin of bias refers to distorting influences of our present knowledge, beliefs, and feelings on new experiences, or our later memories of them. In the stifling psychological climate of 1984, the Ministry of Truth used memory as a pawn in the service of party rule. Much in the same manner, biases in remembering past experiences reveal how memory can serve as a pawn for the ruling masters of our cognitive systems.
There are four biases we're subject to in this realm: Consistency and change bias, hindsight bias, egocentric bias, and stereotyping bias.
Consistency and Change Bias
The first is a consistency bias: We re-write our memories of the past based on how we feel in the present. This has been shown in one experiment after another. It's probably something of a coping mechanism: If we saw the past with complete accuracy, we might not be such happy individuals.
We often re-write the past so that it seems we've always felt like we feel now, that we always believed what we believe now:
This consistency bias has turned up in several different contexts. Recalling past experiences of pain, for instance, is powerfully influenced by current pain level. When patients afflicted by chronic pain are experiencing high levels of pain in the present, they are biased to recall similarly high levels of pain in the past; when present pain isn't so bad, past pain experiences seem more benign, too. Attitudes towards political and social issues also reflect consistency bias. People whose views on political issues have changed over time often recall incorrectly past attitudes as highly similar to present ones. In fact, memories of past political views are sometimes more closely related to present views than what they actually believed in the past.
Think about your stance five or ten years ago on some major issue like sentencing for drug-related crime. Can you recall specifically what you believed? Most people believe they have stayed consistent on the issue. But easily performed experiments show that a large percentage of people who think “all is the same” have actually changed their tune significantly over time. Such is the bias towards consistency.
This affects relationships fairly significantly: Schacter shows that our current feelings about our partner color our memories of our past feelings.
Consider a study that followed nearly four hundred Michigan couples through the first years of their marriage. In those couples who expressed growing unhappiness over the four years of the study, men mistakenly recalled the beginnings of their marriages as negative even though they said they were happy at the time. “Such biases can lead to a dangerous ‘downward spiral,’” noted the researchers who conducted the study. “The worse your current view of your partner is, the worse your memories are, which only further confirms your negative attitudes.”
In other contexts, we sometimes lean in the other direction: We think things have changed more than they really have. We remember the past as much better than the present, or much worse.
Schacter discusses a twenty-year study done with a group of women between 1969 and 1989, assessing how they felt about their marriages throughout. It turns out their recollections of the past were constantly on the move, but the false recollections did seem to serve a purpose: Keeping the marriage alive.
When reflecting back on the first ten years of their marriages, wives showed a change bias: They remembered their initial assessments as worse than they actually were. The bias made their present feelings seem an improvement by comparison, even though the wives actually felt more negatively ten years into the marriage than they had at the beginning. When they had been married for twenty years and reflected back on their second ten years of marriage, the women now showed a consistency bias: they mistakenly recalled that feelings from ten years earlier were similar to their present ones. In reality, however, they felt more negatively after twenty years of marriage than after ten. Both types of bias helped women cope with their marriages.
The purpose of all this is to reduce our cognitive dissonance: That mental discomfort we get when we have conflicting ideas. (“I need to stay married” / “My marriage isn't working” for example.)
Hindsight Bias
We won't go into hindsight bias too extensively, because we have covered it before and the idea is familiar to most. Simply put, once we know the outcome of an event, our memory of the past is forever altered. As with consistency bias, we use the lens of the present to see the past. It's the idea that we “knew it all along” — when we really didn't.
A large part of hindsight bias has to do with the narrative fallacy and our own natural wiring in favor of causality. We really like to know why things happen, and when given a clear causal link in the present (Say, we hear our neighbor shot his wife because she cheated on him), the lens of hindsight does the rest (I always knew he was a bad guy!). In the process, we forget that we must not have thought he was such a bad guy, since we let him babysit our kids every weekend. That is hindsight bias. We're all subject to it unless we start examining our past with more detail or keeping a written record.
Egocentric Bias
The egocentric bias is our tendency to see the past in such a way that we, the rememberer, look better than we really are or really should. We are not neutral observers of our own past; we are instead highly biased and motivated to see ourselves in a certain light.
The self's preeminent role in encoding and retrieval, combined with a powerful tendency for people to view themselves positively, creates fertile ground for memory biases that allow people to remember past experiences in a self-enhancing light. Consider, for example, college students who were led to believe that introversion is a desirable personality trait that predicts academic success, and then searched their memories for incidents in which they behaved in an introverted or extroverted manner. Compared with students who were led to believe that extroversion is a desirable trait, the introvert-success students more quickly generated memories in which they behaved like introverts than like extroverts. The memory search was biased by a desire to see the self positively, which led students to select past incidents containing the desired trait.
The egocentric bias occurs constantly, in almost any situation where it can. It's similar to what's been called overconfidence in other arenas. We want to see ourselves in a positive light, and so we do. We mine our brain for evidence of our excellent qualities. We maintain positive illusions that keep our spirits up.
This is generally a good thing for our self-esteem, but as any divorced couple knows, it can also cause us to have a very skewed version of the past.
Bias from Stereotyping
In our series on the development of human personality, we discussed the idea of stereotyping as something human beings do constantly and automatically; the much-maligned concept is central to how we comprehend the world.
Stereotyping exists because it saves energy and space — it allows us to consolidate much of what we learn into categories with broadly accurate descriptions. As we learn new things, we either slot them into existing categories, create new categories, or slightly modify old categories (the one we like the least, because it requires the most work). This is no great insight.
But what is interesting is the degree to which stereotyping colors our memories themselves:
If I tell you that Julian, an artist, is creative, temperamental, generous, and fearless, you are more likely to recall the first two attributes, which fit the stereotype of an artist, than the latter two attributes, which do not. If I tell you that he is a skinhead, and list some of his characteristics, you're more likely to remember that he is rebellious and aggressive than that he is lucky and modest. This congruity bias is especially likely to occur when people hold strong stereotypes about a particular group. A person with strong racial prejudices, for example, would be more likely to remember stereotypical features of an African American's behavior than a less prejudiced person, and less likely to remember behaviors that don't fit the stereotype.
Not only that, but when things happen which contradict our expectations, we are capable of distorting the past in such a way to make it come in line. When we try to remember a tale after we know how it ends, we're more likely to distort the details of the story in such a way that the whole thing makes sense and fits our understanding of the world. This is related to the narrative fallacy and hindsight bias discussed above.
The final sin which Schacter discusses in his book is Persistence, the often difficult reality that some memories, especially negative ones, persist a lot longer than we wish. We're not going to cover it here, but suggest you check out the book in its entirety to get the scoop.
And with that, we're going to wrap up our series on the human memory. Take what you've learned, digest it, and then keep pushing deeper in your quest to understand human nature and the world around you.
In part one, we began a conversation about the trappings of the human memory, using Daniel Schacter's excellent The Seven Sins of Memory as our guide. (We've also covered some reasons why our memory is pretty darn good.) We covered transience — the loss of memory due to time — and absent-mindedness — memories that were never encoded at all or were not available when needed. Let's keep going with a couple more whoppers: Blocking and Misattribution.
Blocking is the phenomenon in which something is indeed encoded in our memory and should be easily available in a given situation, but simply will not come to mind. We know blocking best as the always frustrating “It's on the tip of my tongue!”
Unsurprisingly, blocking occurs most frequently when it comes to names and indeed occurs more frequently as we get older:
Twenty-year-olds, forty-year-olds, and seventy-year-olds kept diaries for a month in which they recorded spontaneously occurring retrieval blocks that were accompanied by the “tip of the tongue” sensation. Blocking occurred occasionally for the names of objects (for example, algae) and abstract words (for example, idiomatic). In all three groups, however, blocking occurred most frequently for proper names, with more blocks for people than for other proper names such as countries or cities. Proper name blocks occurred more frequently in the seventy-year-olds than in either of the other two groups.
This is not the worst sin our memory commits — excepting the times when we forget an important person's name (which is admittedly not fun), blocking doesn't cause the terrible practical results some of the other memory issues cause. But the reason blocking occurs does tell us something interesting about memory, something we intuitively know from other domains: We have a hard time learning things by rote or by force. We prefer associations and connections to form strong, lasting, easily available memories.
Why are names blocked from us so frequently, even more than objects, places, descriptions, and other nouns? For example, Schacter mentions experiments in which researchers show that we more easily forget a man's name than his occupation…even if they're the same word! (Baker/baker or Potter/potter, for example.)
It's because relative to a descriptive noun like “baker,” which calls to mind all sorts of connotations, images, and associations, a person's name has very little attached to it. We have no easy associations to make — it doesn't tell us anything about the person or give us much to hang our hat on. It doesn't really help us form an image or impression. And so we basically remember it by rote, which doesn't always work that well.
Most models of name retrieval hold that activation of phonological representations [sound associations] occurs only after activation of conceptual and visual representations. This idea explains why people can often retrieve conceptual information about an object or person whom they cannot name, whereas the reverse does not occur. For example, diary studies indicate that people frequently recall a person's occupation without remembering his name, but no instances have been documented in which a name is recalled without any conceptual knowledge about the person. In experiments in which people named pictures of famous individuals, participants who failed to retrieve the name “Charlton Heston” could often recall that he was an actor. Thus, when you block on the name “John Baker” you may very well recall that he is an attorney who enjoys golf, but it is highly unlikely that you would recall Baker's name and fail to recall any of his personal attributes.
A person's name is the weakest piece of information we have about them in our people-information lexicon, and thus the least available at any time, and the most susceptible to not being available as needed. It gets worse if it's a name we haven't needed to recall frequently or recently, as we all can probably attest to. (This also applies to the other types of words we block on less frequently — objects, places, etc.)
The only real way to avoid blocking problems is to create stronger associations when we learn names, or even re-encode names we already know by increasing their salience with a vivid image, even a silly one. (If you ever meet anyone named Baker…you know what to do.)
But the most important idea here is that information gains salience in our brain based on what it brings to mind.
Schacter is non-committal about whether blocking occurs in the sense implied by Freud's idea of repressed memories — it seems the issue was not settled at the time of writing.
The memory sin of misattribution has fairly serious consequences. It happens all the time, and it's a peculiar sin: we do remember something, but the memory is wrong, or possibly not even our own at all:
Sometimes we remember events that never happened, misattributing speedy processing of incoming information or vivid images that spring to mind, to memories of past events that did not occur. Sometimes we recall correctly what happened, but misattribute it to the wrong time and place. And at other times misattribution operates in a different direction: we mistakenly credit a spontaneous image or thought to our own imagination, when in reality we are recalling it — without awareness — from something we read or heard.
The most familiar, but benign, experience we've all had with misattribution is the curious case of deja vu. As of the writing of his book, Schacter felt there was no convincing explanation for why deja vu occurs, but we know that the brain is capable of thinking it's recalling an event that happened previously, even if it hasn't.
In the case of deja vu, it's simply a bit of an annoyance. But misattribution causes more serious trouble elsewhere.
The major one is eyewitness testimony, which we now know is notoriously unreliable. It turns out that when eyewitnesses claim they “know what they saw!” it's unlikely they remember as well as they claim. It's not their fault and it's not a lie — you do think you recall the details of a situation perfectly well. But your brain is tricking you, just like deja vu. How bad is the eyewitness testimony problem? It used to be pretty bad.
…consider two facts. First, according to estimates made in the late 1980s, each year in the United States more than seventy-five thousand criminal trials were decided on the basis of eyewitness testimony. Second, a recent analysis of forty cases in which DNA evidence established the innocence of wrongly imprisoned individuals revealed that thirty-six of them (90 percent) involved mistaken eyewitness identification. There are no doubt other such mistakes that have not been rectified.
What happens is that, in any situation where our memory stores away information, it doesn't have the horsepower to do it with complete accuracy. There are just too many variables to sort through. So we remember the general aspects of what happened, and we remember some details, depending on how salient they were.
We recall that we met John, Jim, and Todd, who were all part of the sales team for John Deere. We might recall that John was the young one with glasses, Jim was the older bald one, and Todd talked the most. We might remember specific moments or details of the conversation which stuck out.
But we don't get it all perfectly, and if it was an unmemorable meeting, transience sets in and we start to lose the details. The process of linking the general gist of an event with its specific details is called memory binding, and binding failures are often the source of misattribution errors.
Let's say we remember for sure that we curled our hair this morning. All of our usual cues tell us that we did — our hair is curly, it's part of our morning routine, we remember thinking it needed to be done, etc. But…did we turn the curling iron off? We remember that we did, but is that yesterday's memory or today's?
This is a memory binding error. Our brain didn't sufficiently “link up” the curling event and the turning off of the curler, so we're left to wonder. This binding issue leads to other errors, like the memory conjunction error, where sometimes the binding process does occur, but it makes a mistake. We misattribute the strong familiarity:
Having met Mr. Wilson and Mr. Albert during your business meeting, you reply confidently the next day when an associate asks you the name of the company vice president: “Mr. Wilbert.” You remembered correctly pieces of the two surnames but mistakenly combined them into a new one. Cognitive psychologists have developed experimental procedures in which people exhibit precisely these kinds of erroneous conjunctions between features of different words, pictures, sentences, or even faces. Thus, having studied spaniel and varnish, people sometimes claim to remember Spanish.
What's happening is a misattribution. We know we saw the syllables Span- and -nish, and our memory tells us we must have seen Spanish. But we didn't.
Back to the eyewitness testimony problem: what's happening is that we're combining a general familiarity with a lack of specific recall, and our brain is recombining those into a misattribution. We recall a tall-ish man with some sort of facial hair, and then we're shown six men in a lineup, and one is tall-ish with facial hair, and our brain tells us that must be the guy. We make a relative judgment: Which person here is closest to what I think I saw? Unfortunately, like the Spanish/varnish issue, we never actually saw the person we've identified as the perp.
None of this occurs with much conscious involvement, of course. It's happening subconsciously, which is why good procedures are needed to overcome the problem. In the case of suspect lineups, the solution is to show the witness each suspect, one after another, and have them give a thumbs up or thumbs down immediately. This takes away the relative comparison and makes us consciously compare the suspect in front of us with our memory of the perpetrator.
The good thing about this error is that people can be encouraged to search their memory more carefully. But it's far from foolproof, even if we're getting a very strong indication that we remember something.
And what helps prevent us from making too many errors is something Schacter calls the distinctiveness heuristic. If a distinctive thing supposedly happened, we usually reason we'd have a good memory of it. And usually this is a very good heuristic to have. (Remember, salience always encourages memory formation.) As we discussed in Part One, a salient artifact gives us something to tie a memory to. If I meet someone wearing a bright rainbow-colored shirt, I'm a lot more likely to recall some details about them, simply because they stuck out.
As an aside, misattribution allows us one other interesting insight into the human brain: Our “people information” remembering is a specific, distinct module, one that can falter on its own, without harming any other modules. Schacter discusses a man with a delusion that many of the normal people around him were film stars. He even misattributed made-up famous-sounding names (like Sharon Sugar) to famous people, although he couldn't put his finger on who they were.
But the man did not falsely recognize other things. Made-up cities or made-up words did not trip up his brain in the strange way people did. This (and other data) tells us that our ability to recognize people is a distinct “module” our brain uses, supporting one of Judith Rich Harris's ideas about human personality that we've discussed: The “people information lexicon” we develop throughout our lives is a uniquely important one.
One final misattribution is something called cryptomnesia — essentially the opposite of deja vu: we experience something as new and novel even though we've seen it before. Accidental plagiarizing can even result from cryptomnesia. (Try telling that to your school teachers!) Cryptomnesia falls into the same bucket as other misattributions in that we fail to recollect the source of information we're recalling — the information and the event where we first encountered it are not bound together properly. Let's say we “invent” the melody to a song which already exists. The melody sounds wonderful and familiar, so we like it. But we mistakenly think it's new.
In the end, Schacter reminds us to think carefully about the memories we “know” are true, and to try to remember specifics when possible:
We often need to sort out ambiguous signals, such as feelings of familiarity or fleeting images, that may originate in specific past experiences, or arise from subtle influences in the present. Relying on judgment and reasoning to come up with plausible attributions, we sometimes go astray. When misattribution combines with another of memory's sins — suggestibility — people can develop detailed and strongly held recollections of complex events that never occurred.
And with that, we will leave it here for now. Next time we'll delve into suggestibility and bias, two more memory sins with a range of practical outcomes.
(Purchase a copy of the entire 3-part series in one sexy PDF for $3.99)
Recently, we discussed some of the net advantages of our faulty, but incredibly useful, memory system. Thanks to Harvard's brilliant memory-focused psychologist Daniel Schacter, we know not to be too harsh in judging its flaws. The system we've been endowed with, on the whole, works well for its intended purpose, and a different one might not be a better one.
It isn't optimal, though, and since we've given it a “fair shake,” it's worth discussing where the errors actually lie, so we can work to correct for them, or at least be aware of them.
In his fascinating book, Schacter lays out seven broad areas in which our memory regularly fails us. Let's take a look at them so we can better understand ourselves and others, and maybe come up with a few optimal solutions. Perhaps the most important lesson will be that we must expect our memory to be periodically faulty, and take that into account in advance.
We're going to cover a lot of ground, so this one will be a multi-parter. Let's dig in.
The first regular memory error is called transience. This is one we're all quite familiar with, but sometimes forget to account for: The forgetting that occurs with the passage of time. Much of our memory is indeed transient — things we don't regularly need to recall or use get lost with time.
Schacter gives an example of the phenomenon:
On October 3, 1995, the most sensational criminal trial of our time reached a stunning conclusion: a jury acquitted O.J. Simpson of murder. Word of the not-guilty verdict spread quickly, nearly everyone reacted with either outrage or jubilation, and many people could talk about little else for weeks or days afterward. The Simpson verdict seemed like just the sort of momentous event that most of us would always remember vividly: how we reacted to it, and where we were when we heard the news.
Can you recall how you found out that Simpson had been acquitted? Chances are that you don't remember, or that what you remember is wrong. Several days after the verdict, a group of California undergraduates provided researchers with detailed accounts of how they learned about the jury's decision. When the researchers probed students' memories again fifteen months later, only half recalled accurately how they found out about the decision. When asked again nearly three years after the verdict, less than 30 percent of students' recollections were accurate; nearly half were dotted with major errors.
Soon after something happens, particularly something meaningful or impactful, we have a pretty accurate recollection of it. But the accuracy of that recollection declines on a curve over time — quickly at first, then slowing down. We go from remembering specifics to remembering the gist of what happened. (Again, on average — some detail is often left intact.) As the Simpson trial example shows, even in the case of a very memorable event, transience is high. Less memorable events are forgotten almost entirely.
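That fast-then-slow decline is often described with a simple decay curve, in the spirit of Ebbinghaus's classic forgetting curve. As a purely illustrative sketch — the exponential form and the “stability” parameter `s` below are assumptions chosen for demonstration, not figures from Schacter — the shape looks like this:

```python
import math

def retention(t_months, s=22.0):
    """Fraction of a memory's detail retained after t_months.
    Purely illustrative: exponential decay with an arbitrary
    stability s, chosen so retention is roughly 50% at 15 months."""
    return math.exp(-t_months / s)

# Steep early loss, then a slow fade toward the leftover "gist":
for t in [0, 1, 6, 15, 36]:
    print(f"{t:>2} months: {retention(t):.2f}")
```

Real forgetting data often flattens out even more slowly than an exponential suggests, which fits the observation above: the gist of an event outlasts its details.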
What we typically do later on is fill in the details of a specific event with what would usually happen in that situation. Schacter explains:
Try to answer in detail the following three questions: What do you do during a typical day at work? What did you do yesterday? And what did you do on that day one week earlier? When twelve employees in the engineering division of a large office-product manufacturer answered these questions, there was a dramatic difference in what they recalled from yesterday and a week earlier. The employees recalled fewer activities from a week ago than yesterday, and the ones they did recall from a week earlier tended to be part of a “typical” day. Atypical activities — departures from the daily script — were remembered much more frequently after a day than after a week. Memory after a day was close to a verbatim record of specific events; memory after a week was closer to a generic description of what usually happens.
So when we need to recall a memory, we tend to reconstruct as best as we can, starting with whatever “gist” is left over in our brains, and filling in the details by (often incorrectly) assuming that particular event was a lot like others. Generally, this is a correct assumption. There's no reason to remember exactly what you ate last Thanksgiving, so turkey is a pretty reliable bet. Occasionally, though, transience gets us in trouble, as anyone who's forgotten a name they should have remembered can attest.
How do we help solve the issue of transience?
Obviously, one easy solution, if it's something we wish to remember specifically, and in an unaltered form, is to record it as specifically as possible and as soon as possible. That is the optimal solution, for time begins acting immediately to make our memories vague.
Another idea is visual imagery. Visual mnemonics are popular in the memory-improvement game: you associate parts of a hoped-for memory with highly vivid imagery (an elephant squashing a clown!), which can be easily recalled later. Greek orators were famous for the technique.
The problem is that almost no one uses this on a day-to-day basis, because it's very cognitively demanding. You must go through the process of making interesting and evocative associations every time you want to remember something — there's no “general memory improvement” going on, no change that makes all future memories encode more effectively, which is what people are really after.
Another approach — associating and tying something you wish to remember with something else you already know to increase its availability later on — is also useful, but as with visual imagery, must be used each and every time.
In fact, so far as we can tell, the only “general memory improver” available to us is to create better habits of association — attaching vivid stories, images, and connections to things — the very habits we talk about frequently when we discuss the mental model approach. It won't happen automatically.
The second memory failure is closely related to transience, but a little different in practice. Whereas transience entails remembering something that then fades, absent-mindedness is a process whereby the information is never properly encoded, or is simply overlooked at the point of recall.
Failed encoding explains phenomena like regularly misplacing our keys or glasses: The problem is not that the information faded, it's that it never made it from our working memory into our long-term memory. This often happens because we are distracted or otherwise not paying attention at the moment of encoding (e.g., when we take our glasses off).
Interestingly enough, although divided attention can prevent us from retaining particulars, we still may encode some basic familiarity:
Familiarity entails a more primitive sense of knowing that something has happened previously, without dredging up particular details. In [a] restaurant, for example, you might have noticed at a nearby table someone you are certain you have met previously despite failing to recall such specifics as the person's name or how you know her. Laboratory studies indicate that dividing attention during encoding has a drastic effect on subsequent recollection, and has little or no effect on familiarity.
This phenomenon probably happens because divided attention prevents us from elaborating on the particulars that are necessary for subsequent recollection, but allows us to record some rudimentary information that later gives rise to a sense of familiarity.
Schacter also points out something older people might take solace in: aging produces a cognitive effect similar to divided attention. The reason older people feel they're constantly misplacing their keys or checkbook is that the brain's declining cognitive resources mirror the “split attention” problem that causes the same lapses in the rest of us.
A related phenomenon to this poor encoding problem is change-blindness — failing to see differences in objects or scenes as they unfold over time. Similar to the “slowly boiling a frog” issue most of us are familiar with, change-blindness causes us to fail to see subtle change. Its close cousin, inattentional blindness, is the Invisible Gorilla problem, made famous through its vivid demonstration by Daniel Simons and Christopher Chabris.
In fact, in another experiment, Simons was able to show that even in a real-life conversation, he could swap out one man for another in many instances without the conversational partner even noticing! Magicians and con-men regularly use this to fool and astonish.
What's happening is shallow encoding — similar to the transience problem, we often encode only a superficial level of information related to what's happening in front of our face, even when talking to a real person. Thus, subtly changing details are not registered because they were never encoded in the first place! (Sherlock Holmes made a career of countering this natural tendency by being super-observant.)
Generally, this is fine. As a whole, the system serves us well. But the instances where it doesn't can get us into trouble.
This brings up the problem of absent-mindedness in what psychologists call prospective memory — remembering something you need to do in the future. We're all familiar with situations when we forget to do something we clearly “told ourselves” we needed to remember.
The typical antidote is using cues to help us remember. An event-based cue goes like this: “When you see Harry today, tell him to call me.” A time-based cue goes like this: “At 11PM, take the cookies out of the oven.”
It doesn't always work, though. Time-based prospective memory is the worst of all: we're not consistently good at remembering that “11PM = cookies” because other things will also be happening at 11PM! A time-based cue alone is insufficient.
For the same reason, an event-based cue will also fail to work if we're not careful:
Consider the first event-based prospective memory. Frank has asked you to tell Harry to call him, but you have forgotten to do so. You indeed saw Harry in the office, but instead of remembering Frank's message you were reminded of the bet you and Harry made concerning last night's college basketball championship, gloating for several minutes over your victory before settling down to work.
“Harry” carries many associations other than “Tell him something for Frank.” Thus, we're not guaranteed to recall it in the moment.
This knowledge allows us to construct an optimal solution to the prospective memory problem: Specific, distinctive cues that call to mind the exact action needed, at the time it is needed. All elements must be in place for the optimal solution.
Post-it notes with explicit directions put in an optimal place (somewhere a post-it note would not usually be found) tend to work well. A specific reminder on your phone that pops up exactly when needed will work. As Schacter puts it, “The point is to transfer as many details as possible from working memory to written reminders.” Be specific, make it stand out, make it timely. Hoping for a spontaneous reminder to work means that, some percentage of the time, we will certainly commit an absent-minded error. It's just the way our minds work.
Let's pause there for now. In our next post on memory, we'll cover the sins of Blocking and Misattribution, and some potential solutions. In Part Three, we check out the sins of Suggestibility, Bias, and Persistence. In the meantime, try checking out the book in its entirety, if you want to read ahead.
“Though the behaviors…seem perverse, they reflect reliance on a type of navigation that serves the animals quite well in most situations.”
— Daniel Schacter
The Harvard psychologist Daniel Schacter has some brilliant insights into human memory.
His wonderful book The Seven Sins of Memory presents the case that our memories fail us in regular, repeated, and predictable ways. We forget things we think we should know; we think we saw things we didn't see; we can't remember where we left our keys; we can't remember _____'s name; we think Susan told us something that Steven did.
It's easy to get a little down on our poor brains. Between cognitive biases, memory problems, emotional control, drug addiction, and brain disease, it's natural to wonder how the hell our species has been so successful.
Not so fast. Schacter argues that we shouldn't be so dismissive of the imperfect system we've been endowed with:
The very pervasiveness of memory's imperfections, amply illustrated in the preceding pages, can easily lead to the conclusion that Mother Nature committed colossal blunders in burdening us with such a dysfunctional system. John Anderson, a cognitive psychologist at Carnegie-Mellon University, summarizes the prevailing perception that memory's sins reflect poorly on its design: “Over the years we have participated in many talks with artificial intelligence researchers about the prospects of using human models to guide the development of artificial intelligence programs. Invariably, the remark is made, ‘Well, of course, we would not want our system to have something so unreliable as human memory.’”
It is tempting to agree with this characterization, especially if you've just wasted valuable time looking for misplaced keys, read the statistics on wrongful imprisonment resulting from eyewitness miscalculation, or woken up in the middle of the night persistently recalling a slip-up at work. But along with Anderson, I believe that this view is misguided: It is a mistake to conceive of the seven sins as design flaws that expose memory as a fundamentally defective system. To the contrary, I suggest that the seven sins are by-products of otherwise adaptive features of memory, a price we pay for processes and functions that serve us well in many respects.
Schacter starts by pointing out that all creatures have systems running on autopilot, which researchers love to exploit:
For instance, train a rat to navigate a maze to find a food reward at the end, and then place a pile of food halfway into the maze. The rat will run right past the pile of food as if it did not even exist, continuing to the end, where it seeks its just reward! Why not stop at the halfway point and enjoy the reward then? Hauser suggests that the rat is operating in this situation on the basis of “dead reckoning” — a method of navigating in which the animal keeps a literal record of where it has gone by constantly updating the speed, distance, and direction it has traveled.
A similarly comical error occurs when a pup is taken from a gerbil nest containing several other pups and is placed in a nearby cup. The mother searches for her lost baby, and while she is away, the nest is displaced a short distance. When the mother and lost pup return, she uses dead reckoning to head straight for the nest's old location. Ignoring the screams and smells of the other pups just a short distance away, she searches for them at the old location. Hauser contends that the mother is driven by signals from her spatial system.
The reason for this bizarre behavior is that, in general, it works! Natural selection is pretty crafty and makes one simple value judgment: does the thing provide a reproductive advantage to the individual (or group), or doesn't it? In nature, a gerbil will rarely see its nest moved like that — it's the artifice of the lab experiment that exposes the “auto-pilot” nature of the gerbil's action.
It works the same way with us. The main thing to remember is that our mental systems are, by and large, working to our advantage. If we had memories that could recall all instances of the past with perfect precision, we'd be so inundated with information that we'd be paralyzed:
Consider the following experiment. Try to recall an episode from your life that involves a table. What do you remember, and how long did it take to come up with the memory? You probably had little difficulty coming up with a specific incident — perhaps a conversation at the dinner table last night, or a discussion at the conference table this morning. Now imagine that the cue “table” brought forth all the memories that you have stored away involving a table. There are probably hundreds or thousands of such incidents. What if they all sprung to mind within seconds of considering the cue? A system that operated in this manner would likely result in mass confusion produced by an incessant coming to mind of numerous competing traces. It would be a bit like using an Internet search engine, typing in a word that has many matches in a worldwide data base, and then sorting through the thousands of entries that the query elicits. We wouldn't want a memory system that produces this kind of data overload. Robert and Elizabeth Bjork have argued persuasively that the operation of inhibitory processes helps to protect us from such chaos.
The same goes for emotional experiences. We often lament that we take intensely emotional experiences hard; that we're unable to shake the feeling certain situations imprint on us. PTSD is a particularly acute case of intense experience causing long-lasting mental harm. Yet this same system probably, on average, does us great good in survival:
Although intrusive recollections of trauma can be disabling, it is critically important that emotionally arousing experiences, which sometimes occur in response to life-threatening dangers, persist over time. The amygdala and related structures contribute to the persistence of such experiences by modulating memory formation, sometimes resulting in memories we wish we could forget. But this system boosts the likelihood that we will recall easily and quickly information about threatening or traumatic events whose recollection may one day be crucial for survival. Remembering life-threatening events persistently — where the incident occurred, who or what was responsible for it — boosts our chances of avoiding future recurrences.
Our brain has limitations, and with those limitations come trade-offs. One of the trade-offs our brain makes is to prioritize which information to hold on to, and which to let go of. It must do this — as stated above, we'd be overloaded with information without this ability. The brain has evolved to prioritize information that is used frequently, used recently, and likely to be needed again.
Thus, we do forget things. The phenomenon of eyewitness testimony being unreliable can at least partially be explained by the fact that, when the event occurred, the witness probably did not know they'd need to remember it. There was no reason, in the moment, for that information to make an imprint. We have trouble recalling details of things that have not imprinted very deeply.
There are cases where people do have elements of what might seem like a “more optimal system” of memory, and generally they do not function well in the real world. Schacter gives us two in his book. The first is the famous mnemonist Shereshevski:
But what if all events were registered in elaborate detail, regardless of the level or type of processing to which they were subjected? The result would be a potentially overwhelming clutter of useless details, as happened in the famous case of the mnemonist Shereshevski. Described by Russian neuropsychologist Alexander Luria, who studied him for years, Shereshevski formed and retained highly detailed memories of virtually everything that happened to him — both the important and the trivial. Yet he was unable to function at an abstract level because he was inundated with unimportant details of his experiences — details that are best denied entry to the system in the first place. An elaboration-dependent system ensures that only those events that are important enough to warrant extensive encoding have a high likelihood of subsequent recollection.
The other case comes from more severely autistic individuals. When tested, autistic individuals make fewer conjunction errors of the type that normally functioning individuals make — fewer instances of mistakenly remembering that we heard sweet when we actually heard candy, or stool when we actually heard chair. These little misattributions are our brain working as it should, remembering the “gist” of things when the literal thing isn't terribly important.
One symptom of autism is difficulty “generalizing” the way others are able to; difficulty developing the “gist” of situations and categories that, generally speaking, is highly helpful to a normally functioning individual. Instead, autism can cause many to take things extremely literally, and to have a great memory for rote factual information. (Picture Raymond Babbitt in Rain Man.) The trade is probably not desirable for most people — our system tends to serve us pretty well on the whole.
There's at least one other way our system “saves us from ourselves” on average — our overestimation of self. Social psychologists love to demonstrate cases where humans overestimate their ability to drive, invest, make love, and so on. It even has a (correct) name: Overconfidence.
Yet without some measure of “overconfidence,” most of us would be quite depressed. In fact, when depressed individuals are studied, their tendency towards extreme realism is one thing frequently found:
On the face of it, these biases would appear to loosen our grasp on reality and thus represent a worrisome, even dangerous tendency. After all, good mental health is usually associated with accurate perceptions of reality, whereas mental disorders and madness are associated with distorted perceptions of reality.
But as the social psychologist Shelley Taylor has argued in her work on “positive illusions,” overly optimistic views of the self appear to promote mental health rather than undermine it. Far from functioning in an impaired or suboptimal manner, people who are most susceptible to positive illusions generally do well in many aspects of their lives. Depressed patients, in contrast, tend to lack the positive illusions that are characteristic of non-depressed individuals.
Remembering the past in an overly positive manner may encourage us to meet new challenges by promoting an overly optimistic view of the future, whereas remembering the past more accurately or negatively can leave us discouraged. Clearly there must be limits to such effects, because wildly distorted optimistic biases would eventually lead to trouble. But as Taylor points out, positive illusions are generally mild and are important contributors to our sense of well-being. To the extent memory bias promotes satisfaction with our lives, it can be considered an adaptive component of the cognitive system.
So here's to the human brain: Flawed, certainly, but we must not forget that it does a pretty good job of getting us through the day alive and (mostly) well.
Still Interested? Check out Daniel Schacter's fabulous The Seven Sins of Memory.