
Tag Archives: Memory

To Learn, Retrieve

Mike Ebersold is a neurosurgeon. In neurosurgery, and indeed in life, there is an essential kind of learning that comes only from reflecting on personal experience.

In the book Make It Stick: The Science of Successful Learning, the authors capture Ebersold’s description:

A lot of times something would come up in surgery that I had difficulty with, and then I’d go home that night thinking about what happened and what could I do, for example, to improve the way a suturing went. How can I take a bigger bite with my needle, or a smaller bite, or should the stitches be closer together? What if I modified it this way or that way? Then the next day back, I’d try that and see if it worked better. Or even if it wasn’t the next day, at least I’ve thought through this, and in so doing I’ve not only revisited things that I learned from lectures or from watching others performing surgery but also I’ve complemented that by adding something of my own to it that I missed during the teaching process.

“Reflection,” Ebersold says, “can involve several cognitive activities that lead to stronger learning: retrieving knowledge and earlier training from memory, connecting these to new experiences, and visualizing and mentally rehearsing what you might do differently next time.”

The authors of Make It Stick continue:

To make sure the new learning is available when it’s needed, Ebersold points out, “you memorize the list of things that you need to worry about in a given situation: steps A, B, C, and D,” and you drill on them. Then there comes a time when you get into a tight situation and it’s no longer a matter of thinking through the steps, it’s a matter of reflexively taking the correct action.

“Unless you keep recalling this maneuver,” Ebersold notes, “it will not become a reflex. Like a race car driver in a tight situation or a quarterback dodging a tackle, you’ve got to act out of reflex before you’ve even had time to think. Recalling it over and over, practicing it over and over. That’s just so important.”

The Testing Effect

The power of retrieval as a learning tool is known among psychologists as the testing effect. In its most common form, testing is used to measure learning and assign grades in school, but we’ve long known that the act of retrieving knowledge from memory has the effect of making that knowledge easier to call up again in the future.


Francis Bacon and William James also wrote about this phenomenon. Retrieval makes things stick better than re-exposure to the original material. This is the testing effect.

To be most effective, retrieval must be repeated again and again, in spaced out sessions so that the recall, rather than becoming a mindless recitation, requires some cognitive effort. Repeated recall appears to help memory consolidate into a cohesive representation in the brain and to strengthen and multiply the neural routes by which the knowledge can later be retrieved. In recent decades, studies have confirmed what Mike Ebersold and every seasoned quarterback, jet pilot, and teenaged texter knows from experience—that repeated retrieval can so embed knowledge and skills that they become reflexive: the brain acts before the mind has time to think.

Learning or Just Recalling Information?

In 2010 the New York Times reported on a scientific study that showed that students who read a passage of text and then took a test asking them to recall what they had read retained an astonishing 50 percent more of the information a week later than students who had not been tested.

This would seem like good news, but here’s how it was greeted in many online comments:

  • “Once again, another author confuses learning with recalling information.”
  • “I personally would like to avoid as many tests as possible, especially with my grade on the line. Trying to learn in a stressful environment is no way to help retain information.”
  • “Nobody should care whether memorization is enhanced by practice testing or not. Our children cannot do much of anything anymore.”

Forget memorization, many commenters argued; education should be about higher-order skills. Hmmm. If memorization is irrelevant to complex problem solving, don’t tell your neurosurgeon. The frustration many people feel toward standardized, “dipstick” tests given for the sole purpose of measuring learning is understandable, but it steers us away from appreciating one of the most potent learning tools available to us. Pitting the learning of basic knowledge against the development of creative thinking is a false choice. Both need to be cultivated. The stronger one’s knowledge about the subject at hand, the more nuanced one’s creativity can be in addressing a new problem. Just as knowledge amounts to little without the exercise of ingenuity and imagination, creativity absent a sturdy foundation of knowledge builds a shaky house.

The Takeaway

Practice at retrieving new knowledge or skill from memory is a potent tool for learning and durable retention. This is true for anything the brain is asked to remember and call up again in the future—facts, complex concepts, problem-solving techniques, motor skills.

Effortful retrieval makes for stronger learning and retention. We’re easily seduced into believing that learning is better when it’s easier, but the research shows the opposite: when the mind has to work, learning sticks better. The greater the effort to retrieve learning, provided that you succeed, the more that learning is strengthened by retrieval. After an initial test, delaying subsequent retrieval practice is more potent for reinforcing retention than immediate practice, because delayed retrieval requires more effort.

Repeated retrieval not only makes memories more durable but produces knowledge that can be retrieved more readily, in more varied settings, and applied to a wider variety of problems.

While cramming can produce better scores on an immediate exam, the advantage quickly fades because there is much greater forgetting after rereading than after retrieval practice. The benefits of retrieval practice are long-term.

Simply including one test (retrieval practice) in a class yields a large improvement in final exam scores, and gains continue to increase as the frequency of classroom testing increases.

Testing doesn’t need to be initiated by the instructor. Students can practice retrieval anywhere; no quizzes in the classroom are necessary. Think flashcards—the way second graders learn the multiplication tables can work just as well for learners at any age to quiz themselves on anatomy, mathematics, or law. Self-testing may be unappealing because it takes more effort than rereading, but as noted already, the greater the effort at retrieval, the more will be retained.
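The flashcard-style self-quizzing described above, combined with spaced retrieval, can be sketched in a few lines of code. This is only an illustration: the class, its fields, and the simple doubling rule are assumptions for the example, not a procedure from the book.

```python
from datetime import date, timedelta

class Flashcard:
    """A minimal self-quizzing card with expanding review intervals."""

    def __init__(self, prompt, answer):
        self.prompt = prompt
        self.answer = answer
        self.interval_days = 1          # wait this long before the next review
        self.due = date.today()

    def review(self, recalled_correctly, today=None):
        """Reschedule the card: successful, effortful recall earns a longer
        delay; a failed recall resets the card to daily practice."""
        today = today or date.today()
        if recalled_correctly:
            self.interval_days *= 2     # space successful reviews further apart
        else:
            self.interval_days = 1      # start over after a lapse
        self.due = today + timedelta(days=self.interval_days)

card = Flashcard("7 x 8", "56")
card.review(recalled_correctly=True)                   # due again in 2 days
card.review(recalled_correctly=True, today=card.due)   # then in 4 days
```

The doubling rule is the simplest way to capture the point in the text: each successful recall earns a longer delay, so the next retrieval requires more effort.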

Students who take practice tests have a better grasp of their progress than those who simply reread the material. Similarly, such testing enables an instructor to spot gaps and misconceptions and adapt instruction to correct them.

Giving students corrective feedback after tests keeps them from incorrectly retaining material they have misunderstood and produces better learning of the correct answers.

Students in classes that incorporate low-stakes quizzing come to embrace the practice. Students who are tested frequently rate their classes more favorably.

Make It Stick: The Science of Successful Learning is worth reading in its entirety.

A Plunge and Squish View of the Mind


How can we bring our knowledge to bear on a problem? Does this resemble how we accumulate knowledge in the first place? A thoughtful passage by David Gelernter in Mirror Worlds: or the Day Software Puts the Universe in a Shoebox…How It Will Happen and What It Will Mean explores these questions.

In your mind particulars turn into generalities gradually, imperceptibly—like snow at the bottom of a drift turning into ice. If you don’t know any general rules, if you’ve merely experienced something once, then that once will have to do. You may remember one example, or a collection of particular examples, or a general rule. These states blend together: When you’ve mastered the rule, you can still recall some individual experiences if you need to. Any respectable mind simulation must accommodate all three states. Any one of them might be the final state for some particular (perfectly respectable) mind. (Many people have been to Disneyland once, a fair number have been there a couple of times, and a few, no doubt, have been to Disneyland so often that the individual visits blend together into a single melted ice-cream puddle of a visit to Disneyland rule or script or principle or whatever. All three states are real.)

Plunge-and-squish adapts to whatever you have on hand. If there is a single relevant memory, plunge finds it. If there are several, squish constructs a modest generalization, one that captures the quirks of its particular elements. If there are many, squish constructs a sound, broad-based generalization. You may even wind up with a perma-squish abstraction, if this particular squish happens frequently enough and the elements blend smoothly together. It all happens automatically.

You need plunge and squish.

It’s worth pausing here to explain plunge and squish in a little more detail. Plunge is when you take a new case—“one attribute or many attributes, doesn’t matter”—and plunge it into the memory pool. “The plunged-in case attracts memories from all over: The ‘force fields’ inside the system get warped in such a way that every stored memory (every case in the database) is re-oriented with respect to the plunged-in ‘bait.’ The most relevant memories approach closest; and the less-relevant ones recede into the distance.” Squish, on the other hand, means “to look at the closest cases that are attracted by a plunge, and compact them together into a single ‘super case.’ We take all these nearby memories (in other words) and superimpose them.”
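Gelernter describes plunge and squish in prose, but the two operations have a natural computational reading: rank the stored cases by closeness to the new case, then superimpose the nearest ones. Here is a minimal sketch, assuming cases are simple numeric feature vectors; the distance measure and the averaging are stand-ins of my own, not Gelernter's.

```python
def plunge(memory_pool, bait):
    """Re-rank every stored case by its closeness to the plunged-in bait."""
    def distance(case):
        return sum((a - b) ** 2 for a, b in zip(case, bait)) ** 0.5
    return sorted(memory_pool, key=distance)

def squish(cases):
    """Superimpose nearby cases into a single generalized 'super case'
    by averaging them dimension by dimension."""
    n = len(cases)
    return tuple(sum(dim) / n for dim in zip(*cases))

memories = [(1.0, 2.0), (1.2, 2.1), (9.0, 9.0)]
nearest = plunge(memories, bait=(1.1, 2.0))[:2]   # the two closest cases
rule = squish(nearest)                            # a modest generalization
```

With one nearby memory, plunge alone retrieves it; with several, squish builds a generalization whose breadth grows with the number of cases averaged—which matches the "adapts to whatever you have on hand" behavior described above.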

One more point: Whatever stack of memories you have on hand, you can cut the deck in a million ways. You can reshuffle it endlessly. You can, if you need to, synthesize a general rule at a moment’s notice. You see an asphalt spreader on the next block. You develop an expectation: The next block will smell like [the smell of fresh asphalt…]. What happened—did you wrack your brain for that important general principle, squirrelled away for just such an occasion—fact number three million twenty-one thousand and seven—fresh asphalt usually smells like…? Or did you synthesize this rule by doing a plunge-and-squish on the spot?

Clearly you can cobble together an abstraction, a category or an expectation at a moment’s notice. You can create new categories to order whenever they are needed. (Unpleasant vacations? Objects that look like metal but aren’t?…) Any realistic mind simulation must know how to do this.

Gotta have plunge; gotta have squish.

And so we arrive, finally, at two radically different pictures of the mind. In the mind-map view, there is a dense intertwined superstructure of categories, rules and generalizations, with the odd specific, particular fact hanging from the branches like the occasional bird-pecked apple. In the plunge-and-squish view, there are slowly-shifting, wandering and reforming snowdrifts instead, built without superstructure out of a billion crystal flakes—a billion particular experiences. New experiences sift constantly downwards onto the snowscape and old ones settle imperceptibly into ice-clear universals, and the whole scene is alive and constantly, subtly changing.

It’s too soon to say which view is right. Both approaches need a lot more work. Both have produced interesting results. …

Real vs. Simulated Memories


Software memory is increasingly doing more and more for us. Yet it lacks one important element of human memory: emotion.

This thought-provoking excerpt comes from Mirror Worlds: or the Day Software Puts the Universe in a Shoebox…How It Will Happen and What It Will Mean, a book recommended by Marc Andreessen, the famous venture capitalist who sits on the boards of Facebook and HP.

When an expert remembers a patient, he doesn’t remember a mere list of words. He remembers an experience, a whole galaxy of related perceptions. No doubt he remembers certain words—perhaps a name, a diagnosis, maybe some others. But he also remembers what the patient looked like, sounded like; how the encounter made him feel (confident, confused?) … Clearly these unrecorded perceptions have tremendous information content. People can revisit their experiences, examine their stored perceptions in retrospect. In reducing a “memory” to mere words, and a quick-march parade step of attribute, value, attribute, value at that, we are giving up a great deal. We are reducing a vast mountaintop panorama to a grainy little black-and-white photograph.

There is, too, a huge distance between simulated remembering—pulling cases out of the database—and the real thing. To a human being, an experience means a set of coherent sensations, which are wrapped up and sent back to the storeroom for later recollection. Remembering is the reverse: A set of coherent sensations is trundled out of storage and replayed—those archived sensations are re-experienced. The experience is less vivid on tape (so to speak) than it was in person, and portions of the original may be smudged or completely missing, but nonetheless—the Rememberer gets, in essence, another dose of the original experience. For human beings, in other words, remembering isn’t merely retrieving, it is re-experiencing.

And this fact is important, because it obviously impinges (probably in a large way) on how people do their remembering. Why do you “choose” to recall something? Well for one thing, certain memories make you feel good. The original experience included a “feeling good” sensation, and so the tape has “feel good” recorded on it, and when you recall the memory—you feel good. And likewise, one reason you choose (or unconsciously decide) not to recall certain memories is that they have “feel bad” recorded on them, and so remembering them makes you feel bad. (If you don’t believe me check with Freud, who based the better part of a profoundly significant career on this observation, more or less.) It’s obvious that the emotions recorded in a memory have at least something to do with steering your solitary rambles through Memory Woods.

But obviously, the software version of remembering has no emotional compass. To some extent, that’s good: Software won’t suppress, repress or forget some illuminating case because (say) it made a complete fool of itself when the case was first presented. Objectivity is powerful.

On the other hand, we are brushing up here against a limitation that has a distinctly fundamental look. We want our Mirror Worlds to “remember” intelligently—to draw just the right precedent or two from a huge database. But human beings draw on reason and emotion when they perform all acts of remembering. An emotion can be a concise, nuanced shorthand for a whole tangle of facts and perceptions that you never bothered to sort out. How did you feel on your first day at work or school, your child’s second birthday, last year’s first snowfall? Later you might remember that scene; you might be reminded merely by the fact that you now feel the same as you did then. Why do you feel the same? If you think carefully, perhaps you can trace down the objective similarities between the two experiences. But their emotional resemblance was your original clue. And it’s quite plausible that “expertise” works this way also, at least occasionally: I’m reminded of a past case not because of any objective similarity, but rather because I now feel the same as I did then.

Remember Not to Trust Your Memory


Memories are the stories that we tell ourselves about the past. Sometimes they adjust and leave things out.

In an interesting passage in Think: Why You Should Question Everything, Guy P. Harrison talks about the fallibility of memory.

Did you know that you can’t trust even your most precious memories?

They may come to you in great detail and feel 100 percent accurate, but it doesn’t matter. They easily could be partial or total lies that your brain is telling you. Really, the personal past that your brain is supposed to be keeping safe for you is not what you think it is. Your memories are pieces and batches of information that your brain cobbles together and serves up to you, not to present the past as accurately as possible, but to provide you with information that you will likely find to be useful in the present. Functional value, not accuracy, is the priority. Your brain is like some power-crazed CIA desk jockey who feeds you memories on a need-to-know basis only. Daniel Schacter, a Harvard memory researcher, says that when the brain remembers, it does so in a way that is similar to how an archaeologist reconstructs a past scene relying on an artifact here, an artifact there. The end result might be informative and useful, but don’t expect it to be perfect. This is important because those who don’t know anything about how memory works already have one foot in fantasyland. Most people believe that our memory operates in a way that is similar to a video camera. They think that the sights, sounds, and feelings of our experiences are recorded on something like a hard drive in their heads. Totally wrong. When you remember your past, you don’t get to watch an accurately recorded replay.

To describe to people how memory really works, Harrison puts it this way:

Imagine a very tiny old man sitting by a very tiny campfire somewhere inside your head. He’s wearing a worn and raggedy hat and has a long, scruffy, gray beard. He looks a lot like one of those old California gold prospectors from the 1800s. He can be grumpy and uncooperative at times, but he’s the keeper of your memories and you are stuck with him. When you want or need to remember something from your past, you have to go through the old codger. Let’s say you want to recall that time when you scored the winning goal in a middle-school soccer match. You have to tap the old coot on the shoulder and ask him to tell you about it. He usually responds with something. But he doesn’t read from a faithfully recorded transcript, doesn’t review a comprehensive photo archive to create an accurate timeline, and doesn’t double-check his facts before speaking. He definitely doesn’t play a video recording of the game for you. Typically, he just launches into a tale about your glorious goal that won the big game. He throws up some images for you, so it’s kind of like a lecture or slideshow. Nice and useful, perhaps, but definitely not reliable.

Thinking Straight in the Age of Information Overload


The Organized Mind: Thinking Straight in the Age of Information Overload, a book by Daniel Levitin, explores “how humans have coped with information and organization from the beginning of civilization. … It’s also the story of how the most successful members of society—from successful artists, athletes, and warriors, to business executives and highly credentialed professionals—have learned to maximize their creativity, and efficiency, by organizing their lives so that they spend less time on the mundane, and more time on the inspiring, comforting, and rewarding things in life.”


Memory is fallible. More than just remembering things wrongly, “we don’t even know we’re remembering them wrongly.”

The first humans who figured out how to write things down around 5,000 years ago were in essence trying to increase the capacity of their hippocampus, part of the brain’s memory system. They effectively extended the natural limits of human memory by preserving some of their memories on clay tablets and cave walls, and later, papyrus and parchment. Later, we developed other mechanisms—such as calendars, filing cabinets, computers, and smartphones—to help us organize and store the information we’ve written down. When our computer or smartphone starts to run slowly, we might buy a larger memory card. That memory is both a metaphor and a physical reality. We are off-loading a great deal of the processing that our neurons would normally do to an external device that then becomes an extension of our own brains, a neural enhancer.

These external memory mechanisms are generally of two types, either following the brain’s own organizational system or reinventing it, sometimes overcoming its limitations. Knowing which is which can enhance the way we use these systems, and so improve our ability to cope with information overload.

And once memory became external (written down and stored) our attention systems “were freed up to focus on something else.”

But we need a place (and a system) to organize all of this information.

The indexing problem is that there are several possibilities about where you store this report, based on your needs: It could be stored with other writings about plants, or with writings about family history, or with writings about cooking, or with writings about how to poison an enemy.

This brings us to two aspects of the human brain that are not given their due: richness and associative access.

Richness refers to the theory that a large number of the things you’ve ever thought or experienced are still in there, somewhere. Associative access means that your thoughts can be accessed in a number of different ways by semantic or perceptual associations—memories can be triggered by related words, by category names, by a smell, an old song or photograph, or even seemingly random neural firings that bring them up to consciousness.

Being able to access any memory regardless of where it is stored is what computer scientists call random access. DVDs and hard drives work this way; videotapes do not. You can jump to any spot in a movie on a DVD or hard drive by “pointing” at it. But to get to a particular point in a videotape, you need to go through every previous point first (sequential access). Our ability to randomly access our memory from multiple cues is especially powerful. Computer scientists call it relational memory. You may have heard of relational databases— that’s effectively what human memory is.


Having relational memory means that if I want to get you to think of a fire truck, I can induce the memory in many different ways. I might make the sound of a siren, or give you a verbal description (“a large red truck with ladders on the side that typically responds to a certain kind of emergency”).
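That many-cues-to-one-node structure can be illustrated directly. Here is a toy sketch, with invented cue strings, of how any one of several different cues can retrieve the same stored representation:

```python
# One stored "node" reachable through many independent cues.
memory_nodes = {"fire_truck": "large red truck with ladders"}

# The cue index maps many different triggers to the same node,
# the way a siren sound, a color, or a description all point to it.
cue_index = {
    "siren sound": "fire_truck",
    "red vehicle": "fire_truck",
    "responds to fires": "fire_truck",
}

def recall(cue):
    """Any single cue is enough to activate the node it points to."""
    node = cue_index.get(cue)
    return memory_nodes.get(node)

assert recall("siren sound") == recall("red vehicle")  # different cues, same memory
```

This is the "random access from multiple cues" property in miniature: retrieval does not depend on where the memory sits, only on reaching it through some association.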

We categorize objects in a seemingly infinite number of ways. Each of those ways “has its own route to the neural node that represents fire truck in your brain.” Take a look at one way we can think of a fire truck.


Thinking about one memory or association activates more. This can be both a strength and a weakness.

If you are trying to retrieve a particular memory, the flood of activations can cause competition among different nodes, leaving you with a traffic jam of neural nodes trying to get through to consciousness, and you end up with nothing.

Organizing our Lives

The ancient Greeks came up with memory palaces and the method of loci to improve memory. The Egyptians became experts at externalizing information, inventing perhaps the biggest pre-Google repository of knowledge: the library.

We don’t know why these simultaneous explosions of intellectual activity occurred when they did (perhaps daily human experience had hit a certain level of complexity). But the human need to organize our lives, our environment, even our thoughts, remains strong. This need isn’t simply learned, it is a biological imperative— animals organize their environments instinctively.

But the odd thing about the mind is that it doesn’t, on its own, organize things the way you might want it to. It’s largely an unconscious process.

It comes preconfigured, and although it has enormous flexibility, it is built on a system that evolved over hundreds of thousands of years to deal with different kinds and different amounts of information than we have today. To be more specific: The brain isn’t organized the way you might set up your home office or bathroom medicine cabinet. You can’t just put things anywhere you want to. The evolved architecture of the brain is haphazard and disjointed, and incorporates multiple systems, each of which has a mind of its own (so to speak). Evolution doesn’t design things and it doesn’t build systems— it settles on systems that, historically, conveyed a survival benefit (and if a better way comes along, it will adopt that). There is no overarching, grand planner engineering the systems so that they work harmoniously together. The brain is more like a big, old house with piecemeal renovations done on every floor, and less like new construction.

Consider this, then, as an analogy: You have an old house and everything is a bit outdated, but you’re satisfied. You add a room air conditioner during one particularly hot summer. A few years later, when you have more money, you decide to add a central air-conditioning system. But you don’t remove that room unit in the bedroom—why would you? It might come in handy and it’s already there, bolted to the wall. Then a few years later, you have a catastrophic plumbing problem—pipes burst in the walls. The plumbers need to break open the walls and run new pipes, but your central air-conditioning system is now in the way, where some of their pipes would ideally go. So they run the pipes through the attic, the long way around. This works fine until one particularly cold winter when your uninsulated attic causes your pipes to freeze. These pipes wouldn’t have frozen if you had run them through the walls, which you couldn’t do because of the central air-conditioning. If you had planned all this from the start, you would have done things differently, but you didn’t—you added one thing at a time, as and when you needed it.

Or you can use Sherlock Holmes’ analogy of a memory attic. As Holmes tells Watson, “I consider that a man’s brain originally is like a little empty attic, and you have to stock it with such furniture as you choose.”

Levitin argues that we should learn “how our brain organizes information so that we can use what we have, rather than fight against it.” We do this primarily through the key processes of encoding and retrieval.

(Our brains are) built as a hodgepodge of different systems, each one solving a particular adaptive problem. Occasionally they work together, occasionally they’re in conflict, and occasionally they aren’t even talking to one another. Two of the key ways that we can control and improve the process are to pay special attention to the way we enter information into our memory— encoding—and the way we pull it out— retrieval.

We’re busier than ever. That’s not to say it’s information overload; there are arguments as to why that doesn’t exist. Our internal to-do list is never satisfied. We’re overwhelmed with things disguised as wisdom or even information, and we’re forced to sort through the nonsense. Levitin implies that one consequence is that we lose things: our keys, our driver’s licenses, our iPhones. And it’s not just physical things. “We also forget things we were supposed to remember, important things like the password to our e-mail or a website, the PIN for our cash cards—the cognitive equivalent of losing our keys.”

These are important and hard-to-replace things.

We don’t tend to have general memory failures; we have specific, temporary memory failures for one or two things. During those frantic few minutes when you’re searching for your lost keys, you (probably) still remember your name and address, where your television set is, and what you had for breakfast —it’s just this one memory that has been aggravatingly lost. There is evidence that some things are typically lost far more often than others: We tend to lose our car keys but not our car, we lose our wallet or cell phone more often than the stapler on our desk or soup spoons in the kitchen, we lose track of coats and sweaters and shoes more often than pants. Understanding how the brain’s attentional and memory systems interact can go a long way toward minimizing memory lapses.

These simple facts about the kinds of things we tend to lose and those that we don’t can tell us a lot about how our brains work, and a lot about why things go wrong.

The way this works is fascinating. Levitin also hits on a topic that has long interested me. “Companies,” he writes, “are like expanded brains, with individual workers functioning something like neurons.”

Companies tend to be collections of individuals united to a common set of goals, with each worker performing a specialized function. Businesses typically do better than individuals at day-to-day tasks because of distributed processing. In a large business, there is a department for paying bills on time (accounts payable), and another for keeping track of keys (physical plant or security). Although the individual workers are fallible, systems and redundancies are usually in place, or should be, to ensure that no one person’s momentary distraction or lack of organization brings everything to a grinding halt. Of course, business organizations are not always perfectly organized, and occasionally, through the same cognitive blocks that cause us to lose our car keys, businesses lose things, too—profits, clients, competitive positions in the marketplace.

In today’s world it’s hard to keep up. We have PINs, phone numbers, email addresses, multiple to-do lists, small physical objects to keep track of, kids to pick up, books to read, videos to watch, nearly infinite websites to browse, and so on. Most of us, however, are still largely using systems to organize and maintain this knowledge that were put into place in a less information-rich time.

The Organized Mind: Thinking Straight in the Age of Information Overload shows us how to organize our time better, “not just so we can be more efficient but so we can find more time for fun, for play, for meaningful relationships, and for creativity.”

Harold Macmillan: The Fragility of Memory


Harold Macmillan beautifully describes the fragility of human memory in the foreword to Geoffrey Madan’s Notebooks, a commonplace book published in the early 1980s.

Those of us who have reached extreme old age become gradually reconciled to increasing infirmities, mental and physical. The body develops, with each passing year, fresh weaknesses. Our legs no longer carry us; eyesight begins to fail, and hearing becomes feebler. Even with the mind, the process of thought seems largely to decrease in its power and intensity; and if we are wise we come to accept these frailties and develop, like all invalids, our own particular skills in avoiding or minimizing them. But there is one aspect of the mind which seems to operate in a peculiar fashion. While memory becomes gradually weaker in respect of recent happenings and even of the leading events of middle age, yet it appears to become increasingly strong as regards the years of childhood and youth. It is as if the new entries played into an ageing computer become gradually less effective while the original stores remain as strong as ever.

This phenomenon has the result that as the memory of so many much more important matters begins to fade, those of many years ago become sharper than before. The recent writings on the tablets of the mind grow quickly weak as if made by a light brush or soft pencil. Those of the earliest years become more and more deeply etched. The pictures which they recall are as fresh as ever. Indeed they seem to strengthen with each passing year.

He goes on to offer insight on how people with special temperaments often fall through the cracks of history.

In every age there have been men whose memory will always be recorded for outstanding achievements in war, in art, in politics, or in literature. There will be other figures — more shadowy, more difficult to reconstruct, and yet as important in their contribution to the social and intellectual life of their time. Those who have not known them, or heard them speak, find it difficult to realize why the memory of such figures is cherished by their contemporaries.

Geoffrey Madan’s Notebooks thus serves two purposes. To his friends, it makes his peculiarities easy to recall. To the rest of us, it offers a glimpse into the commonplace book of an interesting bibliophile with a peculiar sense of humour.
