


The Many Ways Our Memory Fails Us (Part 1)

(Purchase a copy of the entire 3-part series in one sexy PDF for $3.99)

***

Recently, we discussed some of the net advantages of our faulty, but incredibly useful, memory system. Thanks to Harvard's brilliant memory-focused psychologist Daniel Schacter, we know not to be too harsh in judging its flaws. The system we've been endowed with, on the whole, works at its intended purpose, and a different one might not be a better one.

It isn't optimal though, and since we've given it a “fair shake”, it is worth discussing where the errors actually lie, so we can work to improve them, or at least be aware of them.

In his fascinating book, Schacter lays out seven broad areas in which our memory regularly fails us. Let's take a look at them so we can better understand ourselves and others, and maybe come up with a few optimal solutions. Perhaps the most important lesson will be that we must expect our memory to be periodically faulty, and take that into account in advance.

We're going to cover a lot of ground, so this one will be a multi-parter. Let's dig in.

Transience

The first regular memory error is called transience. This is one we're all quite familiar with, but sometimes forget to account for: The forgetting that occurs with the passage of time. Much of our memory is indeed transient — things we don't regularly need to recall or use get lost with time.

Schacter gives an example of the phenomenon:

On October 3, 1995, the most sensational criminal trial of our time reached a stunning conclusion: a jury acquitted O.J. Simpson of murder. Word of the not-guilty verdict spread quickly, nearly everyone reacted with either outrage or jubilation, and many people could talk about little else for weeks or days afterward. The Simpson verdict seemed like just the sort of momentous event that most of us would always remember vividly: how we reacted to it, and where we were when we heard the news.

Can you recall how you found out that Simpson had been acquitted? Chances are that you don't remember, or that what you remember is wrong. Several days after the verdict, a group of California undergraduates provided researchers with detailed accounts of how they learned about the jury's decision. When the researchers probed students' memories again fifteen months later, only half recalled accurately how they found out about the decision. When asked again nearly three years after the verdict, less than 30 percent of students' recollections were accurate; nearly half were dotted with major errors.

Soon after something happens, particularly something meaningful or impactful, we have a pretty accurate recollection of it. But the accuracy of that recollection declines on a curve over time — quickly at first, then slowing down. We go from remembering specifics to remembering the gist of what happened. (Again, on average — some detail is often left intact.) As the Simpson trial example shows, even in the case of a very memorable event, transience is high. Less memorable events are forgotten almost entirely.
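
The shape of that curve can be sketched as simple exponential decay. This is only an illustration: the `stability` constant below is an invented parameter, tuned so the numbers loosely echo the Simpson-verdict study, and real forgetting curves are messier.

```python
import math

def retention(days, stability=650.0):
    """Ebbinghaus-style retention: recall probability decays
    exponentially, dropping fast at first and then flattening out."""
    return math.exp(-days / stability)

# Loosely echoing the study's numbers: near-perfect recall at first,
# about half at ~15 months, about a fifth at ~3 years.
for days in (1, 30, 450, 1000):
    print(f"day {days:>4}: {retention(days):.0%}")
```

The point of the sketch is the shape, not the constants: each day that passes erases a roughly constant fraction of what remains, so specifics vanish early while the gist lingers.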

What we typically do later on is fill in specific details of a specific event with what typically would happen in that situation. Schacter explains:

Try to answer in detail the following three questions: What do you do during a typical day at work? What did you do yesterday? And what did you do on that day one week earlier? When twelve employees in the engineering division of a large office-product manufacturer answered these questions, there was a dramatic difference in what they recalled from yesterday and a week earlier. The employees recalled fewer activities from a week ago than yesterday, and the ones they did recall from a week earlier tended to be part of a “typical” day. Atypical activities — departures from the daily script — were remembered much more frequently after a day than after a week. Memory after a day was close to a verbatim record of specific events; memory after a week was closer to a generic description of what usually happens.

So when we need to recall a memory, we tend to reconstruct as best as we can, starting with whatever “gist” is left over in our brains, and filling in the details by (often incorrectly) assuming that particular event was a lot like others. Generally, this is a correct assumption. There's no reason to remember exactly what you ate last Thanksgiving, so turkey is a pretty reliable bet. Occasionally, though, transience gets us in trouble, as anyone who's forgotten a name they should have remembered can attest.

How do we help solve the issue of transience?

Obviously, one easy solution, if it's something we wish to remember specifically, and in an unaltered form, is to record it as specifically as possible and as soon as possible. That is the optimal solution, for time begins acting immediately to make our memories vague.

Another idea is visual imagery. Using visual mnemonics is popular in the memory-improvement game: associating parts of a hoped-for memory with highly vivid imagery (an elephant squashing a clown!), which can be easily recalled later. Greek orators were famous for the technique.

The problem is that almost no one uses this on a day-to-day basis, because it's very cognitively demanding. You must go through the process of making interesting and evocative associations every time you want to remember something — there's no “general memory improvement” going on, no state in which all future memories are more effectively encoded, which is what people are really after.

Another approach — associating and tying something you wish to remember with something else you already know to increase its availability later on — is also useful, but as with visual imagery, must be used each and every time.

In fact, so far as we can tell, the only “general memory improver” available to us is to create better habits of association — attaching vivid stories, images, and connections to things — the very habits we talk about frequently when we discuss the mental model approach. It won't happen automatically.

Absent-Mindedness

The second memory failure is closely related to transience, but a little different in practice. Whereas transience entails remembering something that then fades, absent-mindedness is a process whereby the information is never properly encoded, or is simply overlooked at the point of recall.

Failed encoding explains phenomena like regularly misplacing our keys or glasses: The problem is not that the information faded, it's that it never made it from our working memory into our long-term memory. This often happens because we are distracted or otherwise not paying attention at the moment of encoding (e.g., when we take our glasses off).

Interestingly enough, although divided attention can prevent us from retaining particulars, we still may encode some basic familiarity: 

Familiarity entails a more primitive sense of knowing that something has happened previously, without dredging up particular details. In [a] restaurant, for example, you might have noticed at a nearby table someone you are certain you have met previously despite failing to recall such specifics as the person's name or how you know her. Laboratory studies indicate that dividing attention during encoding has a drastic effect on subsequent recollection, and has little or no effect on familiarity.

This phenomenon probably happens because divided attention prevents us from elaborating on the particulars that are necessary for subsequent recollection, but allows us to record some rudimentary information that later gives rise to a sense of familiarity.

Schacter also points out something older people might take solace in: Aging produces a cognitive effect similar to divided attention. Older people start feeling they constantly misplace their keys or checkbook because the age-related decline in cognitive resources mirrors the “split attention” problem that causes the same slips in the rest of us.

A related phenomenon to this poor encoding problem is change-blindness — failing to see differences in objects or scenes unfolding over time. Similar to the “slowly boiling a frog” issue most of us are familiar with, change-blindness causes us to miss subtle change. It is a close cousin of inattentional blindness, the Invisible Gorilla problem made famous through its vivid demonstration by Daniel Simons and Christopher Chabris.

In fact, in another experiment, Simons was able to show that even in a real-life conversation, he could swap out one man for another in many instances without the conversational partner even noticing! Magicians and con-men regularly use this to fool and astonish.

What's happening is shallow encoding — similar to the transience problem, we often encode only a superficial level of information related to what's happening in front of our face, even when talking to a real person. Thus, subtly changing details are not registered because they were never encoded in the first place! (Sherlock Holmes made a career of countering this natural tendency by being super-observant.)

Generally, this is totally fine and OK. As a whole, the system serves us well. But the instances where it doesn't can get us into trouble.

***

This brings up the problem of absent-mindedness in what psychologists call prospective memory — remembering something you need to do in the future. We're all familiar with situations when we forget to do something we clearly “told ourselves” we needed to remember.

The typical antidote is using cues to help us remember. An event-based cue looks like this: “When you see Harry today, tell him to call me.” A time-based cue looks like this: “At 11 PM, take the cookies out of the oven.”

It doesn't always work, though. Time-based cues are the least reliable of all: we're not consistently good at remembering that “11 PM = cookies” because plenty of other things will also be happening at 11 PM. On its own, a time-based cue is insufficient.

For the same reason, an event-based cue will also fail to work if we're not careful:

Consider the first event-based prospective memory. Frank has asked you to tell Harry to call him, but you have forgotten to do so. You indeed saw Harry in the office, but instead of remembering Frank's message you were reminded of the bet you and Harry made concerning last night's college basketball championship, gloating for several minutes over your victory before settling down to work.

“Harry” carries many associations other than “Tell him something for Frank.” Thus, we're not guaranteed to recall it in the moment.

This knowledge allows us to construct an optimal solution to the prospective memory problem: Specific, distinctive cues that call to mind the exact action needed, at the time it is needed. All elements must be in place for the optimal solution.

Post-it notes with explicit directions put in an optimal place (somewhere a post-it note would not usually be found) tend to work well. A specific reminder on your phone that pops up exactly when needed will work.  As Schacter puts it, “The point is to transfer as many details as possible from working memory to written reminders.” Be specific, make it stand out, make it timely. Hoping for a spontaneous reminder to work means that, some percentage of the time, we will certainly commit an absent-minded error. It's just the way our minds work.

***

Let's pause there for now. In our next post on memory, we'll cover the sins of Blocking and Misattribution, and some potential solutions. In Part Three, we check out the sins of Suggestibility, Bias, and Persistence. In the meantime, try checking out the book in its entirety, if you want to read ahead.

Is Our Faulty Memory Really So Bad?

“Though the behaviors…seem perverse, they reflect reliance on a type of navigation that serves the animals quite well in most situations.”
— Daniel Schacter

***

[This is the first of a four-part series on memory. Also see Parts One, Two, and Three on the challenges of memory.]

The Harvard psychologist Daniel Schacter has some brilliant insights into the human memory.

His wonderful book The Seven Sins of Memory presents the case that our memories fail us in regular, repeated, and predictable ways. We forget things we think we should know; we think we saw things we didn't see; we can't remember where we left our keys; we can't remember _____'s name; we think Susan told us something that Steven did.

It's easy to get a little down on our poor brains. Between cognitive biases, memory problems, emotional control, drug addiction, and brain disease, it's natural to wonder how the hell our species has been so successful.

Not so fast. Schacter argues that we shouldn't be so dismissive of the imperfect system we've been endowed with:

The very pervasiveness of memory's imperfections, amply illustrated in the preceding pages, can easily lead to the conclusion that Mother Nature committed colossal blunders in burdening us with such a dysfunctional system. John Anderson, a cognitive psychologist at Carnegie-Mellon University, summarizes the prevailing perception that memory's sins reflect poorly on its design: “Over the years we have participated in many talks with artificial intelligence researchers about the prospects of using human models to guide the development of artificial intelligence programs. Invariably, the remark is made, ‘Well, of course, we would not want our system to have something so unreliable as human memory.’”

It is tempting to agree with this characterization, especially if you've just wasted valuable time looking for misplaced keys, read the statistics on wrongful imprisonment resulting from eyewitness miscalculation, or woken up in the middle of the night persistently recalling a slip-up at work. But along with Anderson, I believe that this view is misguided: It is a mistake to conceive of the seven sins as design flaws that expose memory as a fundamentally defective system. To the contrary, I suggest that the seven sins are by-products of otherwise adaptive features of memory, a price we pay for processes and functions that serve us well in many respects.

Schacter starts by pointing out that all creatures have systems running on autopilot, which researchers love to exploit:

For instance, train a rat to navigate a maze to find a food reward at the end, and then place a pile of food halfway into the maze. The rat will run right past the pile of food as if it did not even exist, continuing to the end, where it seeks its just reward! Why not stop at the halfway point and enjoy the reward then? Hauser suggests that the rat is operating in this situation on the basis of “dead reckoning” — a method of navigating in which the animal keeps a literal record of where it has gone by constantly updating the speed, distance, and direction it has traveled.

A similarly comical error occurs when a pup is taken from a gerbil nest containing several other pups and is placed in a nearby cup. The mother searches for her lost baby, and while she is away, the nest is displaced a short distance. When the mother and lost pup return, she uses dead reckoning to head straight for the nest's old location. Ignoring the screams and smells of the other pups just a short distance away, she searches for them at the old location. Hauser contends that the mother is driven by signals from her spatial system.

The reason for this bizarre behavior is that, in general, it works! Natural selection is pretty crafty and makes one simple value judgement: Does the thing provide a reproductive advantage to the individual (or group) or doesn't it? In nature, a gerbil will rarely see its nest moved like that — it's the artifice of the lab experiment that exposes the “auto-pilot” nature of the gerbil's action.

It works the same way with us. The main thing to remember is that our mental systems are, by and large, working to our advantage. If we had memories that could recall all instances of the past with perfect precision, we'd be so inundated with information that we'd be paralyzed:

Consider the following experiment. Try to recall an episode from your life that involves a table. What do you remember, and how long did it take to come up with the memory? You probably had little difficulty coming up with a specific incident — perhaps a conversation at the dinner table last night, or a discussion at the conference table this morning. Now imagine that the cue “table” brought forth all the memories that you have stored away involving a table. There are probably hundreds or thousands of such incidents. What if they all sprung to mind within seconds of considering the cue? A system that operated in this manner would likely result in mass confusion produced by an incessant coming to mind of numerous competing traces. It would be a bit like using an Internet search engine, typing in a word that has many matches in a worldwide data base, and then sorting through the thousands of entries that the query elicits. We wouldn't want a memory system that produces this kind of data overload. Robert and Elizabeth Bjork have argued persuasively that the operation of inhibitory processes helps to protect us from such chaos.

The same goes for emotional experiences. We often lament that we take intensely emotional experiences hard; that we're unable to shake the feeling certain situations imprint on us. PTSD is a particularly acute case of intense experience causing long-lasting mental harm. Yet this same system probably, on average, does us great good in survival:

Although intrusive recollections of trauma can be disabling, it is critically important that emotionally arousing experiences, which sometimes occur in response to life-threatening dangers, persist over time. The amygdala and related structures contribute to the persistence of such experiences by modulating memory formation, sometimes resulting in memories we wish we could forget. But this system boosts the likelihood that we will recall easily and quickly information about threatening or traumatic events whose recollection may one day be crucial for survival. Remembering life-threatening events persistently — where the incident occurred, who or what was responsible for it — boosts our chances of avoiding future recurrences.

Our brain has limitations, and with those limitations come trade-offs. One of the trade-offs our brain makes is to prioritize which information to hold on to, and which to let go of. It must do this — as stated above, we'd be overloaded with information without this ability. The brain has evolved to prioritize information which is:

  1. Used frequently
  2. Used recently
  3. Likely to be needed
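
That triage resembles the scoring a software cache uses to decide what to keep and what to evict. The sketch below is only an analogy, with every weight, field name, and example invented for illustration:

```python
from collections import namedtuple

Memory = namedtuple("Memory", "label uses days_since_use expected_need")

def keep_score(m):
    """Toy priority score: frequently used, recently used, and
    likely-to-be-needed items score higher (weights are arbitrary)."""
    return m.uses + 10.0 / (1 + m.days_since_use) + 5.0 * m.expected_need

memories = [
    Memory("own phone number", uses=500, days_since_use=1, expected_need=0.9),
    Memory("last Thursday's lunch", uses=1, days_since_use=7, expected_need=0.0),
    Memory("where the car is parked", uses=1, days_since_use=0, expected_need=1.0),
]

# Sort so the most "keepable" items come first; the forgettable lunch
# lands at the bottom, just as transience would predict.
for m in sorted(memories, key=keep_score, reverse=True):
    print(m.label, round(keep_score(m), 1))
```

The design choice mirrors the argument in the text: with finite capacity, *some* scoring rule must decide what survives, and anything that scores low on all three criteria is exactly the material we later fail to recall.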

Thus, we do forget things. The phenomenon of eyewitness testimony being unreliable can at least partially be explained by the fact that, when the event occurred, the witness probably did not know they'd need to remember it. There was no reason, in the moment, for that information to make an imprint. We have trouble recalling details of things that have not imprinted very deeply.

There are cases where people do have elements of what might seem like a “more optimal system” of memory, and generally they do not function well in the real world. Schacter gives us two in his book. The first is the famous mnemonist Shereshevski:

But what if all events were registered in elaborate detail, regardless of the level or type of processing to which they were subjected? The result would be a potentially overwhelming clutter of useless details, as happened in the famous case of the mnemonist Shereshevski. Described by Russian neuropsychologist Alexander Luria, who studied him for years, Shereshevski formed and retained highly detailed memories of virtually everything that happened to him — both the important and the trivial. Yet he was unable to function at an abstract level because he was inundated with unimportant details of his experiences — details that are best denied entry to the system in the first place. An elaboration-dependent system ensures that only those events that are important enough to warrant extensive encoding have a high likelihood of subsequent recollection.

The other case comes from more severely autistic individuals. When tested, autistic individuals make fewer of the conflations that normally functioning individuals make — fewer instances of mistakenly remembering that we heard sweet when we actually heard candy, or stool when we actually heard chair. These little misattributions are our brain working as it should, remembering the “gist” of things when the literal detail isn't terribly important.

One symptom of autism is difficulty “generalizing” the way others are able to; difficulty developing the “gist” of situations and categories that, generally speaking, is highly helpful to a normally functioning individual. Instead, autism can cause many to take things extremely literally, and to have a great memory for rote factual information. (Picture Raymond Babbitt in Rain Man.) The trade-off is probably not desirable for most people — our system tends to serve us pretty well on the whole.

***

There's at least one other way our system “saves us from ourselves” on average — our overestimation of self. Social psychologists love to demonstrate cases where humans overestimate their ability to drive, invest, make love, and so on. It even has a (correct) name: Overconfidence.

Yet without some measure of “overconfidence,” most of us would be quite depressed. In fact, when depressed individuals are studied, one frequent finding is a tendency toward extreme realism:

On the face of it, these biases would appear to loosen our grasp on reality and thus represent a worrisome, even dangerous tendency. After all, good mental health is usually associated with accurate perceptions of reality, whereas mental disorders and madness are associated with distorted perceptions of reality.

But as the social psychologist Shelley Taylor has argued in her work on “positive illusions,” overly optimistic views of the self appear to promote mental health rather than undermine it. Far from functioning in an impaired or suboptimal manner, people who are most susceptible to positive illusions generally do well in many aspects of their lives. Depressed patients, in contrast, tend to lack the positive illusions that are characteristic of non-depressed individuals.

Remembering the past in an overly positive manner may encourage us to meet new challenges by promoting an overly optimistic view of the future, whereas remembering the past more accurately or negatively can leave us discouraged. Clearly there must be limits to such effects, because wildly distorted optimistic biases would eventually lead to trouble. But as Taylor points out, positive illusions are generally mild and are important contributors to our sense of well-being. To the extent memory bias promotes satisfaction with our lives, it can be considered an adaptive component of the cognitive system.

So here's to the human brain: Flawed, certainly, but we must not forget that it does a pretty good job of getting us through the day alive and (mostly) well.

This is the first of a four-part series on memory. Now check out Parts One, Two, and Three on the challenges of memory.

***

Still Interested? Check out Daniel Schacter's fabulous The Seven Sins of Memory.

To Learn, Retrieve

Mike Ebersold is a neurosurgeon. In neurosurgery, and indeed in life, there is an essential kind of learning that comes only from reflection on personal experience.

In the book Make It Stick: The Science of Successful Learning, the authors capture Ebersold's description:

A lot of times something would come up in surgery that I had difficulty with, and then I’d go home that night thinking about what happened and what could I do, for example, to improve the way a suturing went. How can I take a bigger bite with my needle, or a smaller bite, or should the stitches be closer together? What if I modified it this way or that way? Then the next day back, I’d try that and see if it worked better. Or even if it wasn’t the next day, at least I’ve thought through this, and in so doing I’ve not only revisited things that I learned from lectures or from watching others performing surgery but also I’ve complemented that by adding something of my own to it that I missed during the teaching process.

“Reflection,” Ebersold says, “can involve several cognitive activities that lead to stronger learning: retrieving knowledge and earlier training from memory, connecting these to new experiences, and visualizing and mentally rehearsing what you might do differently next time.”

The authors of Make It Stick continue:

To make sure the new learning is available when it’s needed, Ebersold points out, “you memorize the list of things that you need to worry about in a given situation: steps A, B, C, and D,” and you drill on them. Then there comes a time when you get into a tight situation and it’s no longer a matter of thinking through the steps, it’s a matter of reflexively taking the correct action.

“Unless you keep recalling this maneuver,” Ebersold notes, “it will not become a reflex. Like a race car driver in a tight situation or a quarterback dodging a tackle, you’ve got to act out of reflex before you’ve even had time to think. Recalling it over and over, practicing it over and over. That’s just so important.”

The Testing Effect

The power of retrieval as a learning tool is known among psychologists as the testing effect. In its most common form, testing is used to measure learning and assign grades in school, but we’ve long known that the act of retrieving knowledge from memory has the effect of making that knowledge easier to call up again in the future.

Aristotle, Francis Bacon, and William James all wrote about this phenomenon. Retrieval makes things stick better than re-exposure to the original material. This is the testing effect.

To be most effective, retrieval must be repeated again and again, in spaced out sessions so that the recall, rather than becoming a mindless recitation, requires some cognitive effort. Repeated recall appears to help memory consolidate into a cohesive representation in the brain and to strengthen and multiply the neural routes by which the knowledge can later be retrieved. In recent decades, studies have confirmed what Mike Ebersold and every seasoned quarterback, jet pilot, and teenaged texter knows from experience—that repeated retrieval can so embed knowledge and skills that they become reflexive: the brain acts before the mind has time to think.
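
The book doesn't prescribe an algorithm here, but a Leitner-box flashcard scheme is one classic way to put “spaced, effortful retrieval” into practice. A minimal sketch — the box count and doubling intervals are illustrative choices, not from the book:

```python
def review(card, correct, max_box=5):
    """Leitner-style scheduling: a correct recall promotes the card to a
    box reviewed less often; a miss sends it back to box 1 (daily review).
    The interval doubles per box, so each recall demands more effort."""
    card["box"] = min(card["box"] + 1, max_box) if correct else 1
    card["interval_days"] = 2 ** (card["box"] - 1)  # 1, 2, 4, 8, 16 days
    return card

card = {"front": "7 x 8", "back": "56", "box": 1, "interval_days": 1}
# Two promotions, a relapse to box 1, then a climb back to box 2.
for recalled in (True, True, False, True):
    card = review(card, recalled)
    print(card["box"], card["interval_days"])
```

The widening gaps are the whole point: by the time a card comes around again, recalling it takes real cognitive effort, which is precisely the condition under which retrieval strengthens memory.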

Learning or Just Recalling Information?

In 2010 the New York Times reported on a scientific study that showed that students who read a passage of text and then took a test asking them to recall what they had read retained an astonishing 50 percent more of the information a week later than students who had not been tested.

This would seem like good news, but here’s how it was greeted in many online comments:

  • “Once again, another author confuses learning with recalling information.”
  • “I personally would like to avoid as many tests as possible, especially with my grade on the line. Trying to learn in a stressful environment is no way to help retain information.”
  • “Nobody should care whether memorization is enhanced by practice testing or not. Our children cannot do much of anything anymore.”

Forget memorization, many commenters argued; education should be about high-order skills. Hmmm. If memorization is irrelevant to complex problem solving, don’t tell your neurosurgeon. The frustration many people feel toward standardized, “dipstick” tests given for the sole purpose of measuring learning is understandable, but it steers us away from appreciating one of the most potent learning tools available to us. Pitting the learning of basic knowledge against the development of creative thinking is a false choice. Both need to be cultivated. The stronger one’s knowledge about the subject at hand, the more nuanced one’s creativity can be in addressing a new problem. Just as knowledge amounts to little without the exercise of ingenuity and imagination, creativity absent a sturdy foundation of knowledge builds a shaky house.

The Takeaway

Practice at retrieving new knowledge or skill from memory is a potent tool for learning and durable retention. This is true for anything the brain is asked to remember and call up again in the future—facts, complex concepts, problem-solving techniques, motor skills.

Effortful retrieval makes for stronger learning and retention. We’re easily seduced into believing that learning is better when it’s easier, but the research shows the opposite: when the mind has to work, learning sticks better. The greater the effort to retrieve learning, provided that you succeed, the more that learning is strengthened by retrieval. After an initial test, delaying subsequent retrieval practice is more potent for reinforcing retention than immediate practice, because delayed retrieval requires more effort.

Repeated retrieval not only makes memories more durable but produces knowledge that can be retrieved more readily, in more varied settings, and applied to a wider variety of problems.

While cramming can produce better scores on an immediate exam, the advantage quickly fades because there is much greater forgetting after rereading than after retrieval practice. The benefits of retrieval practice are long-term.

Simply including one test (retrieval practice) in a class yields a large improvement in final exam scores, and gains continue to increase as the frequency of classroom testing increases.

Testing doesn’t need to be initiated by the instructor. Students can practice retrieval anywhere; no quizzes in the classroom are necessary. Think flashcards—the way second graders learn the multiplication tables can work just as well for learners at any age to quiz themselves on anatomy, mathematics, or law. Self-testing may be unappealing because it takes more effort than rereading, but as noted already, the greater the effort at retrieval, the more will be retained.

Students who take practice tests have a better grasp of their progress than those who simply reread the material. Similarly, such testing enables an instructor to spot gaps and misconceptions and adapt instruction to correct them.

Giving students corrective feedback after tests keeps them from incorrectly retaining material they have misunderstood and produces better learning of the correct answers.

Students in classes that incorporate low-stakes quizzing come to embrace the practice. Students who are tested frequently rate their classes more favorably.

Make It Stick: The Science of Successful Learning is worth reading in its entirety.

A Plunge and Squish View of the Mind


How can we bring our knowledge to bear on a problem? Does this resemble how we accumulate knowledge in the first place? A thoughtful passage by David Gelernter in Mirror Worlds: or the Day Software Puts the Universe in a Shoebox…How It Will Happen and What It Will Mean explores these questions.

In your mind particulars turn into generalities gradually, imperceptibly—like snow at the bottom of a drift turning into ice. If you don't know any general rules, if you've merely experienced something once, then that once will have to do. You may remember one example, or a collection of particular examples, or a general rule. These states blend together: When you've mastered the rule, you can still recall some individual experiences if you need to. Any respectable mind simulation must accommodate all three states. Any one of them might be the final state for some particular (perfectly respectable) mind. (Many people have been to Disneyland once, a fair number have been there a couple of times, and a few, no doubt, have been to Disneyland so often that the individual visits blend together into a single melted ice-cream puddle of a visit to Disneyland rule or script or principle or whatever. All three states are real.)

Plunge-and-squish adapts to whatever you have on hand. If there is a single relevant memory, plunge finds it. If there are several, squish constructs a modest generalization, one that captures the quirks of its particular elements. If there are many, squish constructs a sound, broad-based generalization. You may even wind up with a perma-squish abstraction, if this particular squish happens frequently enough and the elements blend smoothly together. It all happens automatically.

You need plunge and squish.

It's worth pausing here to explain plunge and squish in a little more detail. Plunge is when you take a new case—“one attribute or many attributes, doesn't matter”—and plunge it into the memory pool. “The plunged-in case attracts memories from all over: The ‘force fields’ inside the system get warped in such a way that every stored memory (every case in the database) is re-oriented with respect to the plunged-in ‘bait.’ The most relevant memories approach closest; and the less-relevant ones recede into the distance.” Squish, on the other hand, means “to look at the closest cases that are attracted by a plunge, and compact them together into a single ‘super case.’ We take all these nearby memories (in other words) and superimpose them.”

One more point: Whatever stack of memories you have on hand, you can cut the deck in a million ways. You can reshuffle it endlessly. You can, if you need to, synthesize a general rule at a moment's notice. You see an asphalt spreader on the next block. You develop an expectation: The next block will smell like [the smell of fresh asphalt…]. What happened—did you wrack your brain for that important general principle, squirrelled away for just such an occasion—fact number three million twenty-one thousand and seven—fresh asphalt usually smells like…? Or did you synthesize this rule by doing a plunge-and-squish on the spot?

Clearly you can cobble together an abstraction, a category or an expectation at a moment's notice. You can create new categories to order whenever they are needed. (Unpleasant vacations? Objects that look like metal but aren't?…) Any realistic mind simulation must know how to do this.

Gotta have plunge; gotta have squish.

And so we arrive, finally, at two radically different pictures of the mind. In the mind-map view, there is a dense intertwined superstructure of categories, rules and generalizations, with the odd specific, particular fact hanging from the branches like the occasional bird-pecked apple. In the plunge-and-squish view, there are slowly-shifting, wandering and reforming snowdrifts instead, built without superstructure out of a billion crystal flakes—a billion particular experiences. New experiences sift constantly downwards onto the snowscape and old ones settle imperceptibly into ice-clear universals, and the whole scene is alive and constantly, subtly changing.

It's too soon to say which view is right. Both approaches need a lot more work. Both have produced interesting results. …
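Gelernter's plunge and squish can be sketched as a toy nearest-neighbor retrieval over a small case database. The attributes, numbers, and distance measure below are all illustrative assumptions of mine, not anything from Mirror Worlds—just one minimal way the mechanism could work:

```python
import math

# A toy "memory pool": each case is a dict of numeric attributes.
# The cases and attribute names are hypothetical.
memory_pool = [
    {"roughness": 0.9, "smell": 0.8, "noise": 0.7},  # a paving job seen years ago
    {"roughness": 0.8, "smell": 0.9, "noise": 0.6},  # another asphalt crew
    {"roughness": 0.1, "smell": 0.2, "noise": 0.9},  # an unrelated memory
]

def distance(a, b):
    """How far apart two cases sit, attribute by attribute."""
    keys = set(a) | set(b)
    return math.sqrt(sum((a.get(k, 0) - b.get(k, 0)) ** 2 for k in keys))

def plunge(pool, bait, k=2):
    """Drop a new case into the pool: the most relevant memories
    approach closest; less relevant ones recede into the distance."""
    return sorted(pool, key=lambda case: distance(case, bait))[:k]

def squish(cases):
    """Superimpose the attracted cases into a single 'super case'—
    a generalization averaged over its particular elements."""
    keys = {k for case in cases for k in case}
    return {k: sum(case.get(k, 0) for case in cases) / len(cases) for k in keys}

# Seeing an asphalt spreader: synthesize an expectation on the spot.
bait = {"roughness": 0.85, "smell": 0.85, "noise": 0.65}
expectation = squish(plunge(memory_pool, bait))
```

Note how the sketch reproduces the gradient the passage describes: with one nearby memory, plunge-and-squish hands that memory back; with several, the squish yields a modest generalization; with many, a broad one. No stored rule is needed—the "rule" is synthesized at the moment of recall.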

Real vs. Simulated Memories


Software is doing more and more of our remembering for us. Yet it lacks one important element of human memory: emotion.

This thought-provoking excerpt comes from Mirror Worlds: or the Day Software Puts the Universe in a Shoebox…How It Will Happen and What It Will Mean, a book recommended by Marc Andreessen, the famed VC who sits on the boards of Facebook and HP.

When an expert remembers a patient, he doesn't remember a mere list of words. He remembers an experience, a whole galaxy of related perceptions. No doubt he remembers certain words—perhaps a name, a diagnosis, maybe some others. But he also remembers what the patient looked like, sounded like; how the encounter made him feel (confident, confused?) … Clearly these unrecorded perceptions have tremendous information content. People can revisit their experiences, examine their stored perceptions in retrospect. In reducing a “memory” to mere words, and a quick-march parade step of attribute, value, attribute, value at that, we are giving up a great deal. We are reducing a vast mountaintop panorama to a grainy little black-and-white photograph.

There is, too, a huge distance between simulated remembering—pulling cases out of the database—and the real thing. To a human being, an experience means a set of coherent sensations, which are wrapped up and sent back to the storeroom for later recollection. Remembering is the reverse: A set of coherent sensations is trundled out of storage and replayed—those archived sensations are re-experienced. The experience is less vivid on tape (so to speak) than it was in person, and portions of the original may be smudged or completely missing, but nonetheless—the Rememberer gets, in essence, another dose of the original experience. For human beings, in other words, remembering isn't merely retrieving, it is re-experiencing.

And this fact is important, because it obviously impinges (probably in a large way) on how people do their remembering. Why do you “choose” to recall something? Well for one thing, certain memories make you feel good. The original experience included a “feeling good” sensation, and so the tape has “feel good” recorded on it, and when you recall the memory—you feel good. And likewise, one reason you choose (or unconsciously decide) not to recall certain memories is that they have “feel bad” recorded on them, and so remembering them makes you feel bad. (If you don't believe me check with Freud, who based the better part of a profoundly significant career on this observation, more or less.) It's obvious that the emotions recorded in a memory have at least something to do with steering your solitary rambles through Memory Woods.

But obviously, the software version of remembering has no emotional compass. To some extent, that's good: Software won't suppress, repress or forget some illuminating case because (say) it made a complete fool of itself when the case was first presented. Objectivity is powerful.

On the other hand, we are brushing up here against a limitation that has a distinctly fundamental look. We want our Mirror Worlds to “remember” intelligently—to draw just the right precedent or two from a huge database. But human beings draw on reason and emotion when they perform all acts of remembering. An emotion can be a concise, nuanced shorthand for a whole tangle of facts and perceptions that you never bothered to sort out. How did you feel on your first day at work or school, your child's second birthday, last year's first snowfall? Later you might remember that scene; you might be reminded merely by the fact that you now feel the same as you did then. Why do you feel the same? If you think carefully, perhaps you can trace down the objective similarities between the two experiences. But their emotional resemblance was your original clue. And it's quite plausible that “expertise” works this way also, at least occasionally: I'm reminded of a past case not because of any objective similarity, but rather because I now feel the same as I did then.

Remember Not to Trust Your Memory


Memories are the stories we tell ourselves about the past. Sometimes those stories get edited, and sometimes they leave things out entirely.

In an interesting passage in Think: Why You Should Question Everything, Guy P. Harrison talks about the fallibility of memory.

Did you know that you can't trust even your most precious memories?

They may come to you in great detail and feel 100 percent accurate, but it doesn't matter. They easily could be partial or total lies that your brain is telling you. Really, the personal past that your brain is supposed to be keeping safe for you is not what you think it is. Your memories are pieces and batches of information that your brain cobbles together and serves up to you, not to present the past as accurately as possible, but to provide you with information that you will likely find to be useful in the present. Functional value, not accuracy, is the priority. Your brain is like some power-crazed CIA desk jockey who feeds you memories on a need-to-know basis only. Daniel Schacter, a Harvard memory researcher, says that when the brain remembers, it does so in a way that is similar to how an archaeologist reconstructs a past scene relying on an artifact here, an artifact there. The end result might be informative and useful, but don't expect it to be perfect. This is important because those who don't know anything about how memory works already have one foot in fantasyland. Most people believe that our memory operates in a way that is similar to a video camera. They think that the sights, sounds, and feelings of our experiences are recorded on something like a hard drive in their heads. Totally wrong. When you remember your past, you don't get to watch an accurately recorded replay.

To describe to people how memory really works, Harrison puts it this way:

Imagine a very tiny old man sitting by a very tiny campfire somewhere inside your head. He's wearing a worn and raggedy hat and has a long, scruffy, gray beard. He looks a lot like one of those old California gold prospectors from the 1800s. He can be grumpy and uncooperative at times, but he's the keeper of your memories and you are stuck with him. When you want or need to remember something from your past, you have to go through the old codger. Let's say you want to recall that time when you scored the winning goal in a middle-school soccer match. You have to tap the old coot on the shoulder and ask him to tell you about it. He usually responds with something. But he doesn't read from a faithfully recorded transcript, doesn't review a comprehensive photo archive to create an accurate timeline, and doesn't double-check his facts before speaking. He definitely doesn't play a video recording of the game for you. Typically, he just launches into a tale about your glorious goal that won the big game. He throws up some images for you, so it's kind of like a lecture or slideshow. Nice and useful, perhaps, but definitely not reliable