
The Many Ways Our Memory Fails Us (Part 2)

(Purchase a copy of the entire 3-part series in one sexy PDF for $3.99)

***

In part one, we began a conversation about the trappings of the human memory, using Daniel Schacter's excellent The Seven Sins of Memory as our guide. (We've also covered some reasons why our memory is pretty darn good.) We covered transience — the loss of memory due to time — and absent-mindedness — memories that were never encoded at all or were not available when needed. Let's keep going with a couple more whoppers: Blocking and Misattribution.

Blocking

Blocking is the phenomenon in which something is indeed encoded in our memory and should be easily available in the given situation, but simply will not come to mind. We're most familiar with blocking as the always-frustrating “It's on the tip of my tongue!” moment.

Unsurprisingly, blocking occurs most often with names, and it happens more frequently as we get older:

Twenty-year-olds, forty-year-olds, and seventy-year-olds kept diaries for a month in which they recorded spontaneously occurring retrieval blocks that were accompanied by the “tip of the tongue” sensation. Blocking occurred occasionally for the names of objects (for example, algae) and abstract words (for example, idiomatic). In all three groups, however, blocking occurred most frequently for proper names, with more blocks for people than for other proper names such as countries or cities. Proper name blocks occurred more frequently in the seventy-year-olds than in either of the other two groups.

This is not the worst sin our memory commits — excepting the times when we forget an important person's name (which is admittedly not fun), blocking doesn't cause the terrible practical results some of the other memory issues do. But the reason blocking occurs does tell us something interesting about memory, something we intuitively know from other domains: We have a hard time learning things by rote or by force. We prefer associations and connections to form strong, lasting, easily available memories.

Why are names blocked from us so frequently, even more than objects, places, descriptions, and other nouns? For example, Schacter mentions experiments showing that we more easily forget a man's name than his occupation, even if they're the same word! (Baker/baker or Potter/potter, for example.)

It's because relative to a descriptive noun like “baker,” which calls to mind all sorts of connotations, images, and associations, a person's name has very little attached to it. We have no easy associations to make — it doesn't tell us anything about the person or give us much to hang our hat on. It doesn't really help us form an image or impression. And so we basically remember it by rote, which doesn't always work that well.

Most models of name retrieval hold that activation of phonological representations [sound associations] occurs only after activation of conceptual and visual representations. This idea explains why people can often retrieve conceptual information about an object or person whom they cannot name, whereas the reverse does not occur. For example, diary studies indicate that people frequently recall a person's occupation without remembering his name, but no instances have been documented in which a name is recalled without any conceptual knowledge about the person. In experiments in which people named pictures of famous individuals, participants who failed to retrieve the name “Charlton Heston” could often recall that he was an actor. Thus, when you block on the name “John Baker” you may very well recall that he is an attorney who enjoys golf, but it is highly unlikely that you would recall Baker's name and fail to recall any of his personal attributes.

A person's name is the weakest piece of information we have about them in our people-information lexicon, and thus the least available at any time and the most susceptible to being unavailable when needed. It gets worse if it's a name we haven't needed to recall frequently or recently, as we can all probably attest. (This also applies to the other types of words we block on less frequently — objects, places, etc.)

The only real way to avoid blocking problems is to create stronger associations when we learn names, or even re-encode names we already know by increasing their salience with a vivid image, even a silly one. (If you ever meet anyone named Baker…you know what to do.)

But the most important idea here is that information gains salience in our brain based on what it brings to mind. 

Schacter is non-committal on whether blocking occurs in the sense implied by Freud's idea of repressed memories; it seems the issue was not settled at the time of his writing.

Misattribution

The memory sin of misattribution has fairly serious consequences. It happens all the time, and it is a peculiar failure in which we do remember something, but what we remember is wrong, or possibly not even our own memory at all:

Sometimes we remember events that never happened, misattributing speedy processing of incoming information or vivid images that spring to mind, to memories of past events that did not occur. Sometimes we recall correctly what happened, but misattribute it to the wrong time and place. And at other times misattribution operates in a different direction: we mistakenly credit a spontaneous image or thought to our own imagination, when in reality we are recalling it–without awareness–from something we read or heard.

The most familiar, but benign, experience we've all had with misattribution is the curious case of deja vu. As of the writing of his book, Schacter felt there was no convincing explanation for why deja vu occurs, but we know that the brain is capable of thinking it's recalling an event that happened previously, even if it hasn't.

In the case of deja vu, it's simply a bit of an annoyance. But misattribution causes more serious trouble elsewhere.

The major one is eyewitness testimony, which we now know is notoriously unreliable. It turns out that when eyewitnesses claim they “know what they saw!” they are unlikely to remember as well as they think. It's not their fault and it's not a lie — the witness genuinely believes they recall the details of the situation perfectly well. But the brain is tricking them, just as with deja vu. How bad is the eyewitness testimony problem? It used to be pretty bad.

…consider two facts. First, according to estimates made in the late 1980s, each year in the United States more than seventy-five thousand criminal trials were decided on the basis of eyewitness testimony. Second, a recent analysis of forty cases in which DNA evidence established the innocence of wrongly imprisoned individuals revealed that thirty-six of them (90 percent) involved mistaken eyewitness identification. There are no doubt other such mistakes that have not been rectified.

What happens is that, in any situation where our memory stores away information, it doesn't have the horsepower to do it with complete accuracy. There are just too many variables to sort through. So we remember the general aspects of what happened, and we remember some details, depending on how salient they were.

We recall that we met John, Jim, and Todd, who were all part of the sales team for John Deere. We might recall that John was the young one with glasses, Jim was the older bald one, and Todd talked the most. We might remember specific moments or details of the conversation which stuck out.

But we don't get it all perfectly, and if it was an unmemorable meeting, the transience of time means we start to lose the details. The process of linking the general gist of an event with its specific details is called memory binding, and errors in it are often the source of misattribution.

Let's say we remember for sure that we curled our hair this morning. All of our usual cues tell us that we did — our hair is curly, it's part of our morning routine, we remember thinking it needed to be done, etc. But…did we turn the curling iron off? We remember that we did, but is that yesterday's memory or today's?

This is a memory binding error. Our brain didn't sufficiently “link up” the curling event and the turning off of the curler, so we're left to wonder. This binding issue leads to other errors, like the memory conjunction error, where sometimes the binding process does occur, but it makes a mistake. We misattribute the strong familiarity:

Having met Mr. Wilson and Mr. Albert during your business meeting, you reply confidently the next day when an associate asks you the name of the company vice president: “Mr. Wilbert.” You remembered correctly pieces of the two surnames but mistakenly combined them into a new one. Cognitive psychologists have developed experimental procedures in which people exhibit precisely these kinds of erroneous conjunctions between features of different words, pictures, sentences, or even faces. Thus, having studied spaniel and varnish, people sometimes claim to remember Spanish.

What's happening is a misattribution. We know we studied the syllables span- and -nish, and our memory tells us we must have seen Spanish. But we didn't.

Back to the eyewitness testimony problem, what's happening is we're combining a general familiarity with a lack of specific recall, and our brain is recombining those into a misattribution. We recall a tall-ish man with some sort of facial hair, and then we're shown 6 men in a lineup, and one is tall-ish with facial hair, and our brain tells us that must be the guy. We make a relative judgment: Which person here is closest to what I think I saw? Unfortunately, like the Spanish/varnish issue, we never actually saw the person we've identified as the perp.

None of this occurs with much conscious involvement, of course. It's happening subconsciously, which is why good procedures are needed to overcome the problem. In the case of suspect lineups, the solution is to show the witness each suspect, one after another, and have them give a thumbs up or thumbs down immediately. This takes away the relative comparison and makes us consciously compare the suspect in front of us with our memory of the perpetrator.
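To make the contrast concrete, here is a minimal sketch in Python of the two procedures: a simultaneous lineup invites a relative judgment (pick whoever is closest), while a sequential lineup forces an absolute yes/no comparison against memory. The similarity scores and the decision threshold are invented for illustration; they are not from Schacter's book.

```python
# Hypothetical similarity between each lineup member and the witness's
# memory of the perpetrator (0.0 = nothing alike, 1.0 = identical).
lineup = {
    "suspect_1": 0.35,
    "suspect_2": 0.62,   # tall-ish, facial hair: the closest match in the room
    "suspect_3": 0.41,
    "suspect_4": 0.28,
    "suspect_5": 0.55,
    "suspect_6": 0.30,
}

def simultaneous_lineup(candidates):
    """Relative judgment: pick whoever is closest to the memory,
    even if nobody is actually a good match."""
    return max(candidates, key=candidates.get)

def sequential_lineup(candidates, threshold=0.8):
    """Absolute judgment: compare each person to memory, one at a time,
    and only identify someone who clears a high bar."""
    for name, similarity in candidates.items():
        if similarity >= threshold:
            return name
    return None  # no identification is a legitimate outcome

print(simultaneous_lineup(lineup))  # -> "suspect_2" (best of a bad lot)
print(sequential_lineup(lineup))    # -> None (nobody actually matches)
```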

The good thing about this error is that people can be encouraged to search their memory more carefully. But it's far from foolproof, even if we're getting a very strong indication that we remember something.

And what helps prevent us from making too many errors is something Schacter calls the distinctiveness heuristic. If a distinctive thing supposedly happened, we usually reason we'd have a good memory of it. And usually this is a very good heuristic to have. (Remember, salience always encourages memory formation.) As we discussed in Part One, a salient artifact gives us something to tie a memory to. If I meet someone wearing a bright rainbow-colored shirt, I'm a lot more likely to recall some details about them, simply because they stuck out.

***

As an aside, misattribution allows us one other interesting insight into the human brain: Our “people information” remembering is a specific, distinct module, one that can falter on its own, without harming any other modules. Schacter discusses a man with a delusion that many of the normal people around him were film stars. He even falsely recognized made-up, famous-sounding names (like Sharon Sugar) as belonging to famous people, although he couldn't put his finger on who they were.

But the man did not falsely recognize other things. Made-up cities or made-up words did not trip up his brain in the strange way people did. This (and other data) tells us that our ability to recognize people is a distinct “module” our brain uses, supporting one of Judith Rich Harris's ideas about human personality that we've discussed: The “people-information lexicon” we develop throughout our lives is a uniquely important module.

***

One final misattribution is something called cryptomnesia — essentially the opposite of deja vu. It's when something strikes us as new and novel even though we've encountered it before. Accidental plagiarism can even result from cryptomnesia. (Try telling that to your schoolteachers!) Cryptomnesia falls into the same bucket as other misattributions in that we fail to recollect the source of the information we're recalling — the information and the event where we first encountered it are not bound together properly. Let's say we “invent” the melody to a song that already exists. The melody sounds wonderful and familiar, so we like it. But we mistakenly think it's new.

In the end, Schacter reminds us to think carefully about the memories we “know” are true, and to try to remember specifics when possible:

We often need to sort out ambiguous signals, such as feelings of familiarity or fleeting images, that may originate in specific past experiences, or arise from subtle influences in the present. Relying on judgment and reasoning to come up with plausible attributions, we sometimes go astray.  When misattribution combines with another of memory's sins — suggestibility — people can develop detailed and strongly held recollections of complex events that never occurred.

And with that, we will leave it here for now. Next time we'll delve into suggestibility and bias, two more memory sins with a range of practical outcomes.

The Many Ways Our Memory Fails Us (Part 1)

(Purchase a copy of the entire 3-part series in one sexy PDF for $3.99)

***

Recently, we discussed some of the net advantages of our faulty, but incredibly useful, memory system. Thanks to Harvard's brilliant memory-focused psychologist Daniel Schacter, we know not to be too harsh in judging its flaws. The system we've been endowed with works, on the whole, for its intended purpose, and a different one might not be a better one.

It isn't optimal though, and since we've given it a “fair shake”, it is worth discussing where the errors actually lie, so we can work to improve them, or at least be aware of them.

In his fascinating book, Schacter lays out seven broad areas in which our memory regularly fails us. Let's take a look at them so we can better understand ourselves and others, and maybe come up with a few optimal solutions. Perhaps the most important lesson will be that we must expect our memory to be periodically faulty, and take that into account in advance.

We're going to cover a lot of ground, so this one will be a multi-parter. Let's dig in.

Transience

The first regular memory error is called transience. This is one we're all quite familiar with, but sometimes forget to account for: The forgetting that occurs with the passage of time. Much of our memory is indeed transient — things we don't regularly need to recall or use get lost with time.

Schacter gives an example of the phenomenon:

On October 3, 1995, the most sensational criminal trial of our time reached a stunning conclusion: a jury acquitted O.J. Simpson of murder. Word of the not-guilty verdict spread quickly, nearly everyone reacted with either outrage or jubilation, and many people could talk about little else for weeks or days afterward. The Simpson verdict seemed like just the sort of momentous event that most of us would always remember vividly: how we reacted to it, and where we were when we heard the news.

Can you recall how you found out that Simpson had been acquitted? Chances are that you don't remember, or that what you remember is wrong. Several days after the verdict, a group of California undergraduates provided researchers with detailed accounts of how they learned about the jury's decision. When the researchers probed students' memories again fifteen months later, only half recalled accurately how they found out about the decision. When asked again nearly three years after the verdict, less than 30 percent of students' recollections were accurate; nearly half were dotted with major errors.

Soon after something happens, particularly something meaningful or impactful, we have a pretty accurate recollection of it. But the accuracy of that recollection declines on a curve over time — quickly at first, then slowing down. We go from remembering specifics to remembering the gist of what happened. (Again, on average — some detail is often left intact.) As the Simpson trial example shows, even in the case of a very memorable event, transience is high. Less memorable events are forgotten almost entirely.

What we typically do later on is fill in the details of a specific event with what would usually happen in that situation. Schacter explains:

Try to answer in detail the following three questions: What do you do during a typical day at work? What did you do yesterday? And what did you do on that day one week earlier? When twelve employees in the engineering division of a large office-product manufacturer answered these questions, there was a dramatic difference in what they recalled from yesterday and a week earlier. The employees recalled fewer activities from a week ago than yesterday, and the ones they did recall from a week earlier tended to be part of a “typical” day. Atypical activities — departures from the daily script — were remembered much more frequently after a day than after a week. Memory after a day was close to a verbatim record of specific events; memory after a week was closer to a generic description of what usually happens.

So when we need to recall a memory, we tend to reconstruct as best as we can, starting with whatever “gist” is left over in our brains, and filling in the details by (often incorrectly) assuming that particular event was a lot like others. Generally, this is a correct assumption. There's no reason to remember exactly what you ate last Thanksgiving, so turkey is a pretty reliable bet. Occasionally, though, transience gets us in trouble, as anyone who's forgotten a name they should have remembered can attest.

How do we help solve the issue of transience?

Obviously, one easy solution, if it's something we wish to remember specifically, and in an unaltered form, is to record it as specifically as possible and as soon as possible. That is the optimal solution, for time begins acting immediately to make our memories vague.

Another idea is visual imagery. Using visual mnemonics is popular in the memory-improvement game: associating parts of a hoped-for memory with highly vivid imagery (an elephant squashing a clown!), which can be easily recalled later. Greek orators were famous for the technique.

The problem is that almost no one uses this on a day-to-day basis, because it's very cognitively demanding. You must go through the process of making interesting and evocative associations every time you want to remember something — there's no “general memory improvement” going on, no state in which all future memories are encoded more effectively, which is what people are really after.

Another approach — associating and tying something you wish to remember with something else you already know to increase its availability later on — is also useful, but as with visual imagery, must be used each and every time.

In fact, so far as we can tell, the only “general memory improver” available to us is to create better habits of association — attaching vivid stories, images, and connections to things — the very habits we talk about frequently when we discuss the mental model approach. It won't happen automatically.

Absent-Mindedness

The second memory failure is closely related to transience, but a little different in practice. Whereas transience entails remembering something that then fades, absent-mindedness is a process whereby the information is never properly encoded, or is simply overlooked at the point of recall.

Failed encoding explains phenomena like regularly misplacing our keys or glasses: The problem is not that the information faded, it's that it never made it from our working memory into our long-term memory. This often happens because we are distracted or otherwise not paying attention at the moment of encoding (e.g., when we take our glasses off).

Interestingly enough, although divided attention can prevent us from retaining particulars, we still may encode some basic familiarity: 

Familiarity entails a more primitive sense of knowing that something has happened previously, without dredging up particular details. In [a] restaurant, for example, you might have noticed at a nearby table someone you are certain you have met previously despite failing to recall such specifics as the person's name or how you know her. Laboratory studies indicate that dividing attention during encoding has a drastic effect on subsequent recollection, and has little or no effect on familiarity.

This phenomenon probably happens because divided attention prevents us from elaborating on the particulars that are necessary for subsequent recollection, but allows us to record some rudimentary information that later gives rise to a sense of familiarity.

Schacter also points out something that older people might take solace in: Aging produces a cognitive effect similar to divided attention. The reason older people feel they're constantly misplacing their keys or checkbook is that the age-related decline in cognitive resources mirrors the “split attention” problem that causes the same lapses in all of us.

A related phenomenon to this poor encoding problem is change blindness — failing to notice differences in objects or scenes unfolding over time. Similar to the “slowly boiling a frog” issue most of us are familiar with, change blindness causes us to miss subtle changes. A close cousin is the Invisible Gorilla problem made famous by Daniel Simons and Christopher Chabris, in which people absorbed in a task fail to notice something unexpected in plain sight.

In fact, in another experiment, Simons was able to show that even in a real-life conversation, he could swap out one man for another in many instances without the conversational partner even noticing! Magicians and con-men regularly use this to fool and astonish.

What's happening is shallow encoding — similar to the transience problem, we often encode only a superficial level of information related to what's happening in front of our face, even when talking to a real person. Thus, subtly changing details are not registered because they were never encoded in the first place! (Sherlock Holmes made a career of countering this natural tendency by being super-observant.)

Generally, this is fine; as a whole, the system serves us well. But the instances where it doesn't can get us into trouble.

***

This brings up the problem of absent-mindedness in what psychologists call prospective memory — remembering something you need to do in the future. We're all familiar with situations when we forget to do something we clearly “told ourselves” we needed to remember.

The typical antidote is using cues to help us remember. An event-based prospective memory cue goes like this: “When you see Harry today, tell him to call me.” A time-based cue goes like this: “At 11 PM, take the cookies out of the oven.”

It doesn't always work, though. Time-based prospective memory is the least reliable of all: We're not consistently good at remembering that “11 PM = cookies” because other things will also be happening at 11 PM! A bare time-based cue is often insufficient.

For the same reason, an event-based cue will also fail to work if we're not careful:

Consider the first event-based prospective memory. Frank has asked you to tell Harry to call him, but you have forgotten to do so. You indeed saw Harry in the office, but instead of remembering Frank's message you were reminded of the bet you and Harry made concerning last night's college basketball championship, gloating for several minutes over your victory before settling down to work.

“Harry” carries many associations other than “Tell him something for Frank.” Thus, we're not guaranteed to recall it in the moment.

This knowledge allows us to construct an optimal solution to the prospective memory problem: Specific, distinctive cues that call to mind the exact action needed, at the time it is needed. All elements must be in place for the optimal solution.

Post-it notes with explicit directions put in an optimal place (somewhere a post-it note would not usually be found) tend to work well. A specific reminder on your phone that pops up exactly when needed will work.  As Schacter puts it, “The point is to transfer as many details as possible from working memory to written reminders.” Be specific, make it stand out, make it timely. Hoping for a spontaneous reminder to work means that, some percentage of the time, we will certainly commit an absent-minded error. It's just the way our minds work.
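As a loose illustration of that advice (be specific, make it timely), here is a sketch of what such a reminder might look like as data. The field names and the "is_due" check are hypothetical, not anything Schacter prescribes; the point is only that the full action, not a vague cue, travels with an exact trigger.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Reminder:
    """A prospective-memory aid: the action spelled out in full,
    plus an exact trigger, so nothing is left to spontaneous recall."""
    action: str           # the specific thing to do, not just "cookies"
    trigger_at: datetime  # exactly when the reminder should fire

    def is_due(self, now: datetime) -> bool:
        return now >= self.trigger_at

# Vague ("11 PM = cookies") versus specific and timely:
reminder = Reminder(
    action="Take the chocolate-chip cookies out of the oven and turn the oven off",
    trigger_at=datetime(2024, 5, 1, 23, 0),
)
print(reminder.is_due(datetime(2024, 5, 1, 23, 0)))  # True: fires exactly when needed
```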

***

Let's pause there for now. In our next post on memory, we'll cover the sins of Blocking and Misattribution, and some potential solutions. In Part Three, we check out the sins of Suggestibility, Bias, and Persistence. In the meantime, try checking out the book in its entirety, if you want to read ahead.

Is Our Faulty Memory Really So Bad?

“Though the behaviors…seem perverse, they reflect reliance on a type of navigation that serves the animals quite well in most situations.”
— Daniel Schacter

***

[This is the first of a four part series on memory. Also see Parts One, Two, and Three on the challenges of memory.]

The Harvard psychologist Daniel Schacter has some brilliant insights into the human memory.

His wonderful book The Seven Sins of Memory presents the case that our memories fail us in regular, repeated, and predictable ways. We forget things we think we should know; we think we saw things we didn't see; we can't remember where we left our keys; we can't remember _____'s name; we think Susan told us something that Steven did.

It's easy to get a little down on our poor brains. Between cognitive biases, memory problems, emotional control, drug addiction, and brain disease, it's natural to wonder how the hell our species has been so successful.

Not so fast. Schacter argues that we shouldn't be so dismissive of the imperfect system we've been endowed with:

The very pervasiveness of memory's imperfections, amply illustrated in the preceding pages, can easily lead to the conclusion that Mother Nature committed colossal blunders in burdening us with such a dysfunctional system. John Anderson, a cognitive psychologist at Carnegie-Mellon University, summarizes the prevailing perception that memory's sins reflect poorly on its design: “over the years we have participated in many talks with artificial intelligence researchers about the prospects of using human models to guide the development of artificial intelligence programs. Invariably, the remark is made, “Well, of course, we would not want our system to have something so unreliable as human memory.”

It is tempting to agree with this characterization, especially if you've just wasted valuable time looking for misplaced keys, read the statistics on wrongful imprisonment resulting from eyewitness miscalculation, or woken up in the middle of the night persistently recalling a slip-up at work. But along with Anderson, I believe that this view is misguided: It is a mistake to conceive of the seven sins as design flaws that expose memory as a fundamentally defective system. To the contrary, I suggest that the seven sins are by-products of otherwise adaptive features of memory, a price we pay for processes and functions that serve us well in many respects.

Schacter starts by pointing out that all creatures have systems running on autopilot, which researchers love to exploit:

For instance, train a rat to navigate a maze to find a food reward at the end, and then place a pile of food halfway into the maze. The rat will run right past the pile of food as if it did not even exist, continuing to the end, where it seeks its just reward! Why not stop at the halfway point and enjoy the reward then? Hauser suggests that the rat is operating in this situation on the basis of “dead reckoning” — a method of navigating in which the animal keeps a literal record of where it has gone by constantly updating the speed, distance, and direction it has traveled.

A similarly comical error occurs when a pup is taken from a gerbil nest containing several other pups and is placed in a nearby cup. The mother searches for her lost baby, and while she is away, the nest is displaced a short distance. When the mother and lost pup return, she uses dead reckoning to head straight for the nest's old location. Ignoring the screams and smells of the other pups just a short distance away, she searches for them at the old location. Hauser contends that the mother is driven by signals from her spatial system.

The reason for this bizarre behavior is that, in general, it works! Natural selection is pretty crafty and makes one simple value judgment: Does the thing provide a reproductive advantage to the individual (or group) or doesn't it? In nature, a gerbil will rarely see its nest moved like that — it's the artifice of the lab experiment that exposes the “auto-pilot” nature of the gerbil's action.

It works the same way with us. The main thing to remember is that our mental systems are, by and large, working to our advantage. If we had memories that could recall all instances of the past with perfect precision, we'd be so inundated with information that we'd be paralyzed:

Consider the following experiment. Try to recall an episode from your life that involves a table. What do you remember, and how long did it take to come up with the memory? You probably had little difficulty coming up with a specific incident — perhaps a conversation at the dinner table last night, or a discussion at the conference table this morning. Now imagine that the cue “table” brought forth all the memories that you have stored away involving a table. There are probably hundreds or thousands of such incidents. What if they all sprung to mind within seconds of considering the cue? A system that operated in this manner would likely result in mass confusion produced by an incessant coming to mind of numerous competing traces. It would be a bit like using an Internet search engine, typing in a word that has many matches in a worldwide data base, and then sorting through the thousands of entries that the query elicits. We wouldn't want a memory system that produces this kind of data overload. Robert and Elizabeth Bjork have argued persuasively that the operation of inhibitory processes helps to protect us from such chaos.

The same goes for emotional experiences. We often lament that we take intensely emotional experiences hard; that we're unable to shake the feeling certain situations imprint on us. PTSD is a particularly acute case of intense experience causing long-lasting mental harm. Yet this same system probably, on average, does us great good in survival:

Although intrusive recollections of trauma can be disabling, it is critically important that emotionally arousing experiences, which sometimes occur in response to life-threatening dangers, persist over time. The amygdala and related structures contribute to the persistence of such experiences by modulating memory formation, sometimes resulting in memories we wish we could forget. But this system boosts the likelihood that we will recall easily and quickly information about threatening or traumatic events whose recollection may one day be crucial for survival. Remembering life-threatening events persistently — where the incident occurred, who or what was responsible for it — boosts our chances of avoiding future recurrences.

Our brain has limitations, and with those limitations come trade-offs. One of the trade-offs our brain makes is to prioritize which information to hold on to, and which to let go of. It must do this — as stated above, we'd be overloaded with information without this ability. The brain has evolved to prioritize information which is:

  1. Used frequently
  2. Used recently
  3. Likely to be needed

Thus, we do forget things. The phenomenon of eyewitness testimony being unreliable can at least partially be explained by the fact that, when the event occurred, the witness probably did not know they'd need to remember it. There was no reason, in the moment, for that information to make an imprint. We have trouble recalling details of things that have not imprinted very deeply.
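By way of analogy only, this triage resembles the scoring a software cache might use to decide what to keep. Here is a minimal sketch with invented weights; nothing about it comes from Schacter, it simply illustrates the frequency/recency/expected-need trade-off described in the list above.

```python
import math
from dataclasses import dataclass

@dataclass
class MemoryTrace:
    label: str
    use_count: int          # how often this memory has been retrieved
    hours_since_use: float  # how recently it was last retrieved
    predicted_need: float   # rough 0..1 guess that it will be needed again

def retention_priority(m: MemoryTrace) -> float:
    """Toy scoring rule: frequent, recent, likely-to-be-needed traces score highest.
    The weights and functional forms are arbitrary; only the shape of the
    trade-off matters for the analogy."""
    frequency = math.log1p(m.use_count)
    recency = 1.0 / (1.0 + m.hours_since_use)
    return frequency + recency + m.predicted_need

traces = [
    MemoryTrace("a coworker's name", use_count=40, hours_since_use=2, predicted_need=0.9),
    MemoryTrace("a stranger glimpsed on the street", use_count=1, hours_since_use=200, predicted_need=0.05),
]

for t in sorted(traces, key=retention_priority, reverse=True):
    print(f"{t.label}: {retention_priority(t):.2f}")
```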

There are cases where people do have elements of what might seem like a “more optimal” memory system, and generally such people do not function well in the real world. Schacter gives us two examples in his book. The first is the famous mnemonist Shereshevski:

But what if all events were registered in elaborate detail, regardless of the level or type of processing to which they were subjected? The result would be a potentially overwhelming clutter of useless details, as happened in the famous case of the mnemonist Shereshevski. Described by Russian neuropsychologist Alexander Luria, who studied him for years, Shereshevski formed and retained highly detailed memories of virtually everything that happened to him — both the important and the trivial. Yet he was unable to function at an abstract level because he was inundated with unimportant details of his experiences — details that are best denied entry to the system in the first place. An elaboration-dependent system ensures that only those events that are important enough to warrant extensive encoding have a high likelihood of subsequent recollection.

The other case comes from more severely autistic individuals. When tested, autistic individuals make fewer conflations of the type that normally functioning individuals make, fewer instances of mistakenly remembering that we heard sweet when we actually heard candy, or stool when we actually heard chair. These little misattributions are our brain working as it should, remembering the “gist” of things when the literal detail isn't terribly important.

One symptom of autism is difficulty “generalizing” the way others are able to; difficulty developing the “gist” of situations and categories that, generally speaking, is highly helpful to a normally functioning individual. Instead, autism can cause many to take things extremely literally, and to have a great memory for rote factual information. (Picture Raymond Babbitt in Rain Man.) The trade-off is probably not desirable for most people — our system tends to serve us pretty well on the whole.

***

There's at least one other way our system “saves us from ourselves” on average — our overestimation of self. Social psychologists love to demonstrate cases where humans overestimate their ability to drive, invest, make love, and so on. It even has a (correct) name: Overconfidence.

Yet without some measure of “overconfidence,” most of us would be quite depressed. In fact, when depressed individuals are studied, their tendency towards extreme realism is one thing frequently found:

On the face of it, these biases would appear to loosen our grasp on reality and thus represent a worrisome, even dangerous tendency. After all, good mental health is usually associated with accurate perceptions of reality, whereas mental disorders and madness are associated with distorted perceptions of reality.

But as the social psychologist Shelley Taylor has argued in her work on “positive illusions,” overly optimistic views of the self appear to promote mental health rather than undermine it. Far from functioning in an impaired or suboptimal manner, people who are most susceptible to positive illusions generally do well in many aspects of their lives. Depressed patients, in contrast, tend to lack the positive illusions that are characteristic of non-depressed individuals.

Remembering the past in an overly positive manner may encourage us to meet new challenges by promoting an overly optimistic view of the future, whereas remembering the past more accurately or negatively can leave us discouraged. Clearly there must be limits to such effects, because wildly distorted optimistic biases would eventually lead to trouble. But as Taylor points out, positive illusions are generally mild and are important contributors to our sense of well-being. To the extent memory bias promotes satisfaction with our lives, it can be considered an adaptive component of the cognitive system.

So here's to the human brain: Flawed, certainly, but we must not forget that it does a pretty good job of getting us through the day alive and (mostly) well.

This is the first of a four part series on memory. Now check out Parts One, Two, and Three on the challenges of memory.

***

Still Interested? Check out Daniel Schacter's fabulous The Seven Sins of Memory.

To Learn, Retrieve

Mike Ebersold is a neurosurgeon. In neurosurgery, and indeed in life, there is an essential kind of learning that comes only from reflection on personal experience.

In the book Make It Stick: The Science of Successful Learning, the authors capture Ebersold's description:

A lot of times something would come up in surgery that I had difficulty with, and then I’d go home that night thinking about what happened and what could I do, for example, to improve the way a suturing went. How can I take a bigger bite with my needle, or a smaller bite, or should the stitches be closer together? What if I modified it this way or that way? Then the next day back, I’d try that and see if it worked better. Or even if it wasn’t the next day, at least I’ve thought through this, and in so doing I’ve not only revisited things that I learned from lectures or from watching others performing surgery but also I’ve complemented that by adding something of my own to it that I missed during the teaching process.

“Reflection,” Ebersold says, “can involve several cognitive activities that lead to stronger learning: retrieving knowledge and earlier training from memory, connecting these to new experiences, and visualizing and mentally rehearsing what you might do differently next time.”

The authors of Make It Stick continue:

To make sure the new learning is available when it’s needed, Ebersold points out, “you memorize the list of things that you need to worry about in a given situation: steps A, B, C, and D,” and you drill on them. Then there comes a time when you get into a tight situation and it’s no longer a matter of thinking through the steps, it’s a matter of reflexively taking the correct action.

“Unless you keep recalling this maneuver,” Ebersold notes, “it will not become a reflex. Like a race car driver in a tight situation or a quarterback dodging a tackle, you’ve got to act out of reflex before you’ve even had time to think. Recalling it over and over, practicing it over and over. That’s just so important.”

The Testing Effect

The power of retrieval as a learning tool is known among psychologists as the testing effect. In its most common form, testing is used to measure learning and assign grades in school, but we’ve long known that the act of retrieving knowledge from memory has the effect of making that knowledge easier to call up again in the future.


Francis Bacon and William James also wrote about this phenomenon. Retrieval makes things stick better than re-exposure to the original material. This is the testing effect.

To be most effective, retrieval must be repeated again and again, in spaced out sessions so that the recall, rather than becoming a mindless recitation, requires some cognitive effort. Repeated recall appears to help memory consolidate into a cohesive representation in the brain and to strengthen and multiply the neural routes by which the knowledge can later be retrieved. In recent decades, studies have confirmed what Mike Ebersold and every seasoned quarterback, jet pilot, and teenaged texter knows from experience—that repeated retrieval can so embed knowledge and skills that they become reflexive: the brain acts before the mind has time to think.

Learning or Just Recalling Information?

In 2010 the New York Times reported on a scientific study that showed that students who read a passage of text and then took a test asking them to recall what they had read retained an astonishing 50 percent more of the information a week later than students who had not been tested.

This would seem like good news, but here’s how it was greeted in many online comments:

  • “Once again, another author confuses learning with recalling information.”
  • “I personally would like to avoid as many tests as possible, especially with my grade on the line. Trying to learn in a stressful environment is no way to help retain information.”
  • “Nobody should care whether memorization is enhanced by practice testing or not. Our children cannot do much of anything anymore.”

Forget memorization, many commenters argued; education should be about high-order skills. Hmmm. If memorization is irrelevant to complex problem solving, don’t tell your neurosurgeon. The frustration many people feel toward standardized, “dipstick” tests given for the sole purpose of measuring learning is understandable, but it steers us away from appreciating one of the most potent learning tools available to us. Pitting the learning of basic knowledge against the development of creative thinking is a false choice. Both need to be cultivated. The stronger one’s knowledge about the subject at hand, the more nuanced one’s creativity can be in addressing a new problem. Just as knowledge amounts to little without the exercise of ingenuity and imagination, creativity absent a sturdy foundation of knowledge builds a shaky house.

The Takeaway

Practice at retrieving new knowledge or skill from memory is a potent tool for learning and durable retention. This is true for anything the brain is asked to remember and call up again in the future—facts, complex concepts, problem-solving techniques, motor skills.

Effortful retrieval makes for stronger learning and retention. We’re easily seduced into believing that learning is better when it’s easier, but the research shows the opposite: when the mind has to work, learning sticks better. The greater the effort to retrieve learning, provided that you succeed, the more that learning is strengthened by retrieval. After an initial test, delaying subsequent retrieval practice is more potent for reinforcing retention than immediate practice, because delayed retrieval requires more effort.

Repeated retrieval not only makes memories more durable but produces knowledge that can be retrieved more readily, in more varied settings, and applied to a wider variety of problems.

While cramming can produce better scores on an immediate exam, the advantage quickly fades because there is much greater forgetting after rereading than after retrieval practice. The benefits of retrieval practice are long-term.

Simply including one test (retrieval practice) in a class yields a large improvement in final exam scores, and gains continue to increase as the frequency of classroom testing increases.

Testing doesn’t need to be initiated by the instructor. Students can practice retrieval anywhere; no quizzes in the classroom are necessary. Think flashcards—the way second graders learn the multiplication tables can work just as well for learners at any age to quiz themselves on anatomy, mathematics, or law. Self-testing may be unappealing because it takes more effort than rereading, but as noted already, the greater the effort at retrieval, the more will be retained.
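To make the self-quizzing idea concrete, here is a minimal sketch of a flashcard drill with expanding intervals: retrieval comes first, and the answer is shown only as feedback. The doubling rule is a common simplification for spacing out reviews, not something prescribed by Make It Stick.

```python
from datetime import date, timedelta

# Each card carries its own schedule: when it is next due and the current gap.
cards = [
    {"prompt": "7 x 8", "answer": "56", "due": date.today(), "interval_days": 1},
    {"prompt": "Capital of Australia", "answer": "Canberra", "due": date.today(), "interval_days": 1},
]

def review(cards, today):
    """Quiz only the cards that are due; force recall before showing the answer."""
    for card in cards:
        if card["due"] > today:
            continue  # not due yet: spacing at work
        guess = input(f"{card['prompt']}? ").strip()
        correct = guess.lower() == card["answer"].lower()
        print("correct" if correct else f"no, the answer is {card['answer']}")
        # Expanding schedule: success doubles the gap, failure resets it to one day.
        card["interval_days"] = card["interval_days"] * 2 if correct else 1
        card["due"] = today + timedelta(days=card["interval_days"])

review(cards, date.today())
```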

Students who take practice tests have a better grasp of their progress than those who simply reread the material. Similarly, such testing enables an instructor to spot gaps and misconceptions and adapt instruction to correct them.

Giving students corrective feedback after tests keeps them from incorrectly retaining material they have misunderstood and produces better learning of the correct answers.

Students in classes that incorporate low-stakes quizzing come to embrace the practice. Students who are tested frequently rate their classes more favorably.

Make It Stick: The Science of Successful Learning is worth reading in its entirety.

A Plunge and Squish View of the Mind


How can we bring our knowledge to bear on a problem? Does this resemble how we accumulate knowledge in the first place? A thoughtful passage by David Gelernter in Mirror Worlds: or the Day Software Puts the Universe in a Shoebox…How It Will Happen and What It Will Mean explores these questions.

In your mind particulars turn into generalities gradually, imperceptibly—like snow at the bottom of a drift turning into ice. If you don't know any general rules, if you've merely experienced something once, then that once will have to do. You may remember one example, or a collection of particular examples, or a general rule. These states blend together: When you've mastered the rule, you can still recall some individual experiences if you need to. Any respectable mind simulation must accommodate all three states. Any one of them might be the final state for some particular (perfectly respectable) mind. (Many people have been to Disneyland once, a fair number have been there a couple of times, and a few, no doubt, have been to Disneyland so often that the individual visits blend together into a single melted ice-cream puddle of a visit to Disneyland rule or script or principle or whatever. All three states are real.)

Plunge-and-squish adapts to whatever you have on hand. If there is a single relevant memory, plunge finds it. If there are several, squish constructs a modest generalization, one that captures the quirks of its particular elements. If there are many, squish constructs a sound, broad-based generalization. You may even wind up with a perma-squish abstraction, if this particular squish happens frequently enough and the elements blend smoothly together. It all happens automatically.

You need plunge and squish.

It's worth pausing here to explain plunge and squish in a little more detail. Plunge is when you take a new case—“one attribute or many attributes, doesn't matter”—and plunge it into the memory pool. “The plunged-in case attracts memories from all over: The ‘force fields’ inside the system get warped in such a way that every stored memory (every case in the database) is re-oriented with respect to the plunged-in ‘bait.’ The most relevant memories approach closest; and the less-relevant ones recede into the distance.” Squish, on the other hand, means “to look at the closest cases that are attracted by a plunge, and compact them together into a single ‘super case.’ We take all these nearby memories (in other words) and superimpose them.”

One more point: Whatever stack of memories you have on hand, you can cut the deck in a million ways. You can reshuffle it endlessly. You can, if you need to, synthesize a general rule at a moment's notice. You see an asphalt spreader on the next block. You develop an expectation: The next block will smell like [the smell of fresh asphalt…]. What happened—did you wrack your brain for that important general principle, squirrelled away for just such an occasion—fact number three million twenty-one thousand and seven—fresh asphalt usually smells like…? Or did you synthesize this rule by doing a plunge-and-squish on the spot?

Clearly you can cobble together an abstraction, a category or an expectation at a moment's notice. You can create new categories to order whenever they are needed. (Unpleasant vacations? Objects that look like metal but aren't?…) Any realistic mind simulation must know how to do this.

Gotta have plunge; gotta have squish.
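In software terms, plunge reads a lot like a nearest-neighbor query and squish like averaging the retrieved cases into a prototype. Here is a minimal sketch under that interpretation; the feature vectors and the distance measure are stand-ins, not Gelernter's actual design.

```python
import math

# Each stored "case" is just a point in a tiny made-up feature space.
memory_pool = {
    "disneyland_visit_1990": [0.9, 0.1, 0.4],
    "disneyland_visit_1994": [0.8, 0.2, 0.5],
    "ski_trip_1998":         [0.1, 0.9, 0.9],
}

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def plunge(bait, pool, k=2):
    """Drop a new case into the pool; the k most similar memories 'approach closest'."""
    ranked = sorted(pool.items(), key=lambda item: distance(bait, item[1]))
    return ranked[:k]

def squish(cases):
    """Superimpose the retrieved cases into a single 'super case' (here, their mean)."""
    vectors = [v for _, v in cases]
    return [sum(dim) / len(vectors) for dim in zip(*vectors)]

nearest = plunge([0.85, 0.15, 0.45], memory_pool)
print(nearest)          # the closest individual experiences
print(squish(nearest))  # a modest generalization built from them
```

With one relevant memory the plunge alone does the work; with several, the squish produces the kind of blended "Disneyland rule" Gelernter describes.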

And so we arrive, finally, at two radically different pictures of the mind. In the mind-map view, there is a dense intertwined superstructure of categories, rules and generalizations, with the odd specific, particular fact hanging from the branches like the occasional bird-pecked apple. In the plunge-and-squish view, there are slowly-shifting, wandering and reforming snowdrifts instead, built without superstructure out of a billion crystal flakes—a billion particular experiences. New experiences sift constantly downwards onto the snowscape and old ones settle imperceptibly into ice-clear universals, and the whole scene is alive and constantly, subtly changing.

It's too soon to say which view is right. Both approaches need a lot more work. Both have produced interesting results. …

Real vs. Simulated Memories


Software memory is increasingly doing more and more for us. Yet it lacks one important element of human memory: emotion.

This thought-provoking excerpt comes from Mirror Worlds: or the Day Software Puts the Universe in a Shoebox…How It Will Happen and What It Will Mean, a book recommended by Marc Andreessen.

When an expert remembers a patient, he doesn't remember a mere list of words. He remembers an experience, a whole galaxy of related perceptions. No doubt he remembers certain words—perhaps a name, a diagnosis, maybe some others. But he also remembers what the patient looked like, sounded like; how the encounter made him feel (confident, confused?) … Clearly these unrecorded perceptions have tremendous information content. People can revisit their experiences, examine their stored perceptions in retrospect. In reducing a “memory” to mere words, and a quick-march parade step of attribute, value, attribute, value at that, we are giving up a great deal. We are reducing a vast mountaintop panorama to a grainy little black-and-white photograph.

There is, too, a huge distance between simulated remembering—pulling cases out of the database—and the real thing. To a human being, an experience means a set of coherent sensations, which are wrapped up and sent back to the storeroom for later recollection. Remembering is the reverse: A set of coherent sensations is trundled out of storage and replayed—those archived sensations are re-experienced. The experience is less vivid on tape (so to speak) than it was in person, and portions of the original may be smudged or completely missing, but nonetheless—the Rememberer gets, in essence, another dose of the original experience. For human beings, in other words, remembering isn't merely retrieving, it is re-experiencing.

And this fact is important because it obviously impinges (probably in a large way) on how people do their remembering. Why do you “choose” to recall something? Well for one thing, certain memories make you feel good. The original experience included a “feeling good” sensation, and so the tape has “feel good” recorded on it, and when you recall the memory—you feel good. And likewise, one reason you choose (or unconsciously decide) not to recall certain memories is that they have “feel bad” recorded on them, and so remembering them makes you feel bad. (If you don't believe me check with Freud, who based the better part of a profoundly significant career on this observation, more or less.) It's obvious that the emotions recorded in a memory have at least something to do with steering your solitary rambles through Memory Woods.

But obviously, the software version of remembering has no emotional compass. To some extent, that's good: Software won't suppress, repress or forget some illuminating case because (say) it made a complete fool of itself when the case was first presented. Objectivity is powerful.

On the other hand, we are brushing up here against a limitation that has a distinctly fundamental look. We want our Mirror Worlds to “remember” intelligently—to draw just the right precedent or two from a huge database. But human beings draw on reason and emotion when they perform all acts of remembering. An emotion can be a concise, nuanced shorthand for a whole tangle of facts and perceptions that you never bothered to sort out. How did you feel on your first day at work or school, your child's second birthday, last year's first snowfall? Later you might remember that scene; you might be reminded merely by the fact that you now feel the same as you did then. Why do you feel the same? If you think carefully, perhaps you can trace down the objective similarities between the two experiences. But their emotional resemblance was your original clue. And it's quite plausible that “expertise” works this way also, at least occasionally: I'm reminded of a past case, not because of any objective similarity, but rather because I now feel the same as I did then.