
The Need for Biological Thinking to Solve Complex Problems

“Biological thinking and physics thinking are distinct, and often complementary, approaches to the world, and ones that are appropriate for different kinds of systems.”

***

How should we think about complexity? Should we use a biological or a physics approach? The answer, of course, is that it depends. It’s important to have both tools at your disposal.

These are the questions that Samuel Arbesman explores in his fascinating book Overcomplicated: Technology at the Limits of Comprehension.

[B]iological systems are generally more complicated than those in physics. In physics, the components are often identical—think of a system of nothing but gas particles, for example, or a single monolithic material, like a diamond. Beyond that, the types of interactions can often be uniform throughout an entire system, such as satellites orbiting a planet.

Biology is different and there is something meaningful to be learned from a biological approach to thinking.

In biology, there are a huge number of types of components, such as the diversity of proteins in a cell or the distinct types of tissues within a single creature; when studying, say, the mating behavior of blue whales, marine biologists may have to consider everything from their DNA to the temperature of the oceans. Not only is each component in a biological system distinctive, but it is also a lot harder to disentangle from the whole. For example, you can look at the nucleus of an amoeba and try to understand it on its own, but you generally need the rest of the organism to have a sense of how the nucleus fits into the operation of the amoeba, how it provides the core genetic information involved in the many functions of the entire cell.

Arbesman makes an interesting point here about how we should look at technology. As the interconnections and complexity of technology increase, it increasingly resembles a biological system rather than a physical one. There is another difference.

[B]iological systems are distinct from many physical systems in that they have a history. Living things evolve over time. While the objects of physics clearly do not emerge from thin air—astrophysicists even talk about the evolution of stars—biological systems are especially subject to evolutionary pressures; in fact, that is one of their defining features. The complicated structures of biology have the forms they do because of these complex historical paths, ones that have been affected by numerous factors over huge amounts of time. And often, because of the complex forms of living things, where any small change can create unexpected effects, the changes that have happened over time have been through tinkering: modifying a system in small ways to adapt to a new environment.

Biological systems are generally hacks that evolved to be good enough for a certain environment. They are far from tidy, top-down designed systems. And to accommodate an ever-changing environment, they are rarely optimal at the micro level, preferring to optimize for survival over any one particular attribute. And it’s not the survival of the individual that’s optimized; it’s the survival of the species.

Technologies can appear robust until they are confronted with some minor disturbance, causing a catastrophe. The same thing can happen to living things. For example, humans can adapt incredibly well to a large array of environments, but a tiny change in a person’s genome can cause dwarfism, and two copies of that mutation invariably cause death. We are of a different scale and material from a particle accelerator or a computer network, and yet these systems have profound similarities in their complexity and fragility.

Biological thinking, with a focus on details and diversity, is a necessary tool to deal with complexity.

The way biologists, particularly field biologists, study the massively complex diversity of organisms, taking into account their evolutionary trajectories, is therefore particularly appropriate for understanding our technologies. Field biologists often act as naturalists—collecting, recording, and cataloging what they find around them—but even more than that, when confronted with an enormously complex ecosystem, they don’t immediately try to understand it all in its totality. Instead, they recognize that they can study only a tiny part of such a system at a time, even if imperfectly. They’ll look at the interactions of a handful of species, for example, rather than examine the complete web of species within a single region. Field biologists are supremely aware of the assumptions they are making, and know they are looking at only a sliver of the complexity around them at any one moment.

[…]

When we’re dealing with different interacting levels of a system, seemingly minor details can rise to the top and become important to the system as a whole. We need “field biologists” to catalog and study the details and portions of our complex systems, including their failures and bugs. This kind of biological thinking not only leads to new insights, but might also be the primary way forward in a world of increasingly interconnected and incomprehensible technologies.

Waiting and observing isn’t enough.

Biologists will often be proactive, and inject the unexpected into a system to see how it reacts. For example, when biologists are trying to grow a specific type of bacteria, such as a variant that might produce a particular chemical, they will resort to a process known as mutagenesis. Mutagenesis is what it sounds like: actively trying to generate mutations, for example by irradiating the organisms or exposing them to toxic chemicals.

When systems are too complex for human understanding, often we need to insert randomness to discover the tolerances and limits of the system. One plus one doesn’t always equal two when you’re dealing with non-linear systems. For biologists, tinkering is the way to go.
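The spirit of mutagenesis carries over to software: randomized probing can map the hidden limits of a system no one fully understands. A minimal sketch in Python, where the toy system, its hidden tolerance, and all names are invented for illustration:

```python
import random

def fragile_system(x):
    """A toy non-linear system: smooth near zero, but it fails
    once the input crosses a hidden tolerance (here, 2.5)."""
    if abs(x) > 2.5:  # the hidden limit we pretend not to know
        raise ValueError("system failure")
    return x ** 2     # non-linear: f(1) + f(1) != f(2)

def probe(system, trials=10_000, spread=5.0, seed=42):
    """Mutagenesis for software: inject random inputs and record
    where the system breaks, mapping its tolerance empirically."""
    rng = random.Random(seed)
    failures = []
    for _ in range(trials):
        x = rng.uniform(-spread, spread)
        try:
            system(x)
        except ValueError:
            failures.append(x)
    # the smallest failing magnitude approximates the hidden limit
    return min(abs(x) for x in failures)

print(probe(fragile_system))  # close to 2.5, the hidden tolerance
```

Note that the toy system is also non-linear: the response to two small inputs tells you little about the response to one large one, which is exactly why tinkering at the edges beats reasoning from first principles here.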


As Stewart Brand noted about legacy systems, “Teasing a new function out of a legacy system is not done by command but by conducting a series of cautious experiments that with luck might converge toward the desired outcome.”

When Physics and Biology Meet

This doesn’t mean we should abandon the physics approach of searching for underlying regularities in complexity. The two approaches complement one another rather than compete.

Arbesman recommends asking the following questions:

When attempting to understand a complex system, we must determine the proper resolution, or level of detail, at which to look at it. How fine-grained a level of detail are we focusing on? Do we focus on the individual enzyme molecules in a cell of a large organism, or do we focus on the organs and blood vessels? Do we focus on the binary signals winding their way through circuitry, or do we examine the overall shape and function of a computer program? At a larger scale, do we look at the general properties of a computer network, and ignore the individual machines and decisions that make up this structure?

When we need to abstract away a lot of the details, we lean more on physics thinking. Think about it from an organizational perspective. The new employee at the lowest level is focused on the specific details of their job, whereas the executive is focused on systems, strategy, culture, and flow — how things interact and reinforce one another. The details of the new employee’s job are lost on the executive.

We can’t use one system, whether biological or physics, exclusively. That’s a sure way to fragile thinking. Rather, we need to combine them.

In his novel Cryptonomicon, Neal Stephenson makes exactly this point in describing the structure of the pantheon of Greek gods:

And yet there is something about the motley asymmetry of this pantheon that makes it more credible. Like the Periodic Table of the Elements or the family tree of the elementary particles, or just about any anatomical structure that you might pull up out of a cadaver, it has enough of a pattern to give our minds something to work on and yet an irregularity that indicates some kind of organic provenance—you have a sun god and a moon goddess, for example, which is all clean and symmetrical, and yet over here is Hera, who has no role whatsoever except to be a literal bitch goddess, and then there is Dionysus who isn’t even fully a god—he’s half human—but gets to be in the Pantheon anyway and sit on Olympus with the Gods, as if you went to the Supreme Court and found Bozo the Clown planted among the justices.

There is a balance and we need to find it.

Gradually Getting Closer to the Truth

You can use a big idea without a physics-like need for exact precision. The key is to move closer to reality by continually updating.

Consider this excerpt from Philip Tetlock and Dan Gardner in Superforecasting:

The superforecasters are a numerate bunch: many know about Bayes’ theorem and could deploy it if they felt it was worth the trouble. But they rarely crunch the numbers so explicitly. What matters far more to the superforecasters than Bayes’ theorem is Bayes’ core insight of gradually getting closer to the truth by constantly updating in proportion to the weight of the evidence.
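The “updating in proportion to the weight of the evidence” that Tetlock describes can be sketched in a few lines of arithmetic. The prior and likelihoods below are hypothetical, chosen only to show the mechanics:

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """One Bayesian update: reweight the prior by how strongly
    the new evidence discriminates between the hypotheses."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

# Hypothetical forecast: start at 50-50, then observe three independent
# clues, each four times as likely if the hypothesis is true.
belief = 0.5
for clue in range(3):
    belief = bayes_update(belief, likelihood_if_true=0.8, likelihood_if_false=0.2)
    print(round(belief, 3))  # climbs toward 1.0, one update at a time
```

Strong evidence moves the belief a lot; weak evidence moves it a little. That proportionality, not the formula itself, is the superforecasters’ habit.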

So they know the numbers. This numerate filter is the second of Garrett Hardin’s three filters we need to think about problems.

Hardin writes:

The numerate temperament is one that habitually looks for approximate dimensions, ratios, proportions, and rates of change in trying to grasp what is going on in the world.

[…]

Just as “literacy” is used here to mean more than merely reading and writing, so also will “numeracy” be used to mean more than measuring and counting. Examination of the origins of the sciences shows that many major discoveries were made with very little measuring and counting. The attitude science requires of its practitioners is respect, bordering on reverence, for ratios, proportions, and rates of change.

Rough and ready back-of-the-envelope calculations are often sufficient to reveal the outline of a new and important scientific discovery … In truth, the essence of many of the major insights of science can be grasped with no more than a child’s ability to measure, count, and calculate.
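Hardin’s back-of-the-envelope spirit is easy to demonstrate with a classic Fermi-style estimate. Every input below is a rough, invented guess; the point is that coarse arithmetic reveals the order of magnitude:

```python
# Fermi-style estimate: roughly how many piano tuners serve a city of
# one million people? Every input below is a rough, invented guess.
population = 1_000_000
people_per_household = 2          # guess
pianos_per_household = 1 / 20     # guess: one household in twenty owns one
tunings_per_piano_per_year = 1    # guess
tunings_per_tuner_per_year = 4 * 50 * 5  # ~4 a day, 5 days a week, 50 weeks

pianos = population / people_per_household * pianos_per_household
tuners = pianos * tunings_per_piano_per_year / tunings_per_tuner_per_year
print(round(tuners))  # a few dozen, not thousands
```

No single guess is right, but the errors tend to cancel, and the answer lands in the right order of magnitude.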

 

We can find another example in investing. Charlie Munger, commenting at the 1996 Berkshire Hathaway Annual Meeting, said: “Warren often talks about these discounted cash flows, but I’ve never seen him do one. If it isn’t perfectly obvious that it’s going to work out well if you do the calculation, then he tends to go on to the next idea.” Buffett retorted: “It’s true. If (the value of a company) doesn’t just scream out at you, it’s too close.”
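The calculation Munger refers to is simple enough to write in a few lines; his point is that if the opportunity is real, the answer should be obvious at any reasonable set of inputs. A minimal sketch with hypothetical cash flows and a hypothetical discount rate:

```python
def discounted_cash_flow(cash_flows, rate):
    """Present value of a stream of future cash flows,
    discounted back to today at the given rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# Hypothetical business: $10 of owner earnings a year for ten years,
# discounted at 10%. If such a business trades far below this value,
# the bargain "screams out"; if the price is close, move on.
value = discounted_cash_flow([10] * 10, rate=0.10)
print(round(value, 1))  # about 61.4
```

If the computed value only barely exceeds the asking price, Buffett’s “too close” heuristic says to pass and look at the next idea.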

Precision is easy to teach, but it misses the point.

The Many Ways our Memory Fails Us (Part 3)


***

In the first two parts of our series on memory, we covered four major “sins” committed by our memories: Absent-Mindedness, Transience, Misattribution, and Blocking, using Daniel Schacter’s The Seven Sins of Memory as our guide.

We’re going to finish it off today with three other sins: Suggestibility, Bias, and Persistence, hopefully leaving us with a full understanding of our memory and where it fails us from time to time.

***

Suggestibility

As its name suggests, the sin of suggestibility refers to our brain’s tendency to incorporate misleading information from outside sources into our own memories:

Suggestibility in memory refers to an individual’s tendency to incorporate misleading information from external sources — other people, written materials or pictures, even the media — into personal recollections. Suggestibility is closely related to misattribution in the sense that the conversion of suggestions into inaccurate memories must involve misattribution. However, misattribution often occurs in the absence of overt suggestion, making suggestibility a distinct sin of memory.

Suggestibility is such a difficult phenomenon because the memories we’ve pulled from outside sources seem as truly real as our own. Take the case of a “false veteran” which Schacter describes in the book:

On May 31, 2000, a front-page story in the New York Times described the baffling case of Edward Daly, a Korean War veteran who made up elaborate — but imaginary — stories about his battle exploits, including his involvement in a terrible massacre in which he had not actually participated. While weaving his delusional tale, Daly talked to veterans who had participated in the massacre and “reminded” them of his heroic deeds. His suggestions infiltrated their memories. “I know that Daly was there,” pleaded one veteran. “I know that. I know that.”

The key word here is infiltrated. This brings to mind the wonderful Christopher Nolan movie Inception, about a group of experts who seek to infiltrate the minds of sleeping targets in order to change their memories. The movie is fictional but there is a subtle reality to the idea: With enough work, an idea that is merely suggested to us in one context can seem like our own idea or our own memory.

Take suggestive questioning, a recurring problem in criminal investigations. The investigator talks to an eyewitness and, hoping to jog their memory, asks a series of leading questions, arriving at the answer he was hoping for. But is the resulting memory genuine? Not always.

Schacter describes a psychology experiment wherein participants see a video of a robbery and then are fed misleading suggestions about the robbery soon after, such as the idea that the victim of the robbery was wearing a white apron. Amazingly, even when people could recognize that the apron idea was merely suggested to them, many people still regurgitated the suggested idea!

Previous experiments had shown that suggestive questions produce memory distortion by creating source memory problems like those in the previous chapter: participants misattribute information presented only in suggestive questions to the original videotape. [The psychologist Philip] Higham’s results provide an additional twist. He found that when people took a memory test just minutes after receiving the misleading question, and thus still correctly recalled that the “white apron” was suggested by the experimenter, they sometimes insisted nevertheless that the attendant wore a white apron in the video itself. In fact, they made this mistake just as often as people who took the memory test two days after receiving misleading suggestions, and who had more time to forget that the white apron was merely suggested. The findings testify to the power of misleading suggestions: they can create false memories of an event even when people recall that the misinformation was suggested.

The problem of overconfidence also plays a role in suggestion and memory errors. Take an experiment where subjects are shown a man entering a department store and then told he murdered a security guard. After being shown a photo lineup (which did not contain the gunman), some were told they chose correctly and some were told they chose incorrectly. Guess which group was more confident and trustful of their memories afterwards?

It was, of course, the group that received reinforcement. Not only were they more confident, but they felt they had better command of the details of the gunman’s appearance, even though they were as wrong as the group that received no positive feedback. This has vast practical applications. (Consider a jury taking into account the testimony of a very confident eyewitness, reinforced by police with an agenda.)

***

One more interesting idea in reference to suggestibility: like the DiCaprio-led clan in the movie Inception, psychologists have been able to successfully “implant” false memories of childhood in many subjects through suggestion alone. This should make you think carefully about what you think you remember about the distant past:

[The psychologist Ira] Hyman asked college students about various childhood experiences that, according to their parents, had actually happened, and also asked about a false event that, their parents confirmed, had never happened. For instance, students were asked: “When you were five you were at the wedding reception of some friends of the family and you were running around with some other kids, when you bumped into the table and spilled the punch bowl on the parents of the bride.” Participants accurately remembered almost all of the true events, but initially reported no memory of the false events.

However, approximately 20 to 40 percent of participants in different experimental conditions eventually came to describe some memory of the false event in later interviews. In one experiment, more than half of the participants who produced false memories described them as “clear” recollections that included specific details of the central event, such as remembering exactly where or how one spilled the punch. Just under half reported “partial” false memories, which included some details but no specific memory of the central event.

Such is the “power of suggestion.”

The Sin of Bias

The problem of bias will be familiar to regular readers. In some form or another, we’re subject to mental biases every single day, most of them benign, some of them harmful, and few of them hard to understand. Biases specific to memory are worth studying because they’re so easy and natural to fall into. Because we trust our memory so deeply, they often go unquestioned. But we might want to be careful:

The sin of bias refers to the distorting influences of our present knowledge, beliefs, and feelings on new experiences, or our later memories of them. In the stifling psychological climate of 1984, the Ministry of Truth used memory as a pawn in the service of party rule. Much in the same manner, biases in remembering past experiences reveal how memory can serve as a pawn for the ruling masters of our cognitive systems.

There are four biases we’re subject to in this realm: Consistency and change bias, hindsight bias, egocentric bias, and stereotyping bias.

Consistency and Change Bias

The first is consistency bias: we rewrite our memories of the past based on how we feel in the present. One experiment after another has shown this to be true. It’s probably something of a coping mechanism: if we saw the past with complete accuracy, we might not be such happy individuals.


We often re-write the past so that it seems we’ve always felt like we feel now, that we always believed what we believe now:

This consistency bias has turned up in several different contexts. Recalling past experiences of pain, for instance, is powerfully influenced by current pain level. When patients afflicted by chronic pain are experiencing high levels of pain in the present, they are biased to recall similarly high levels of pain in the past; when present pain isn’t so bad, past pain experiences seem more benign, too. Attitudes towards political and social issues also reflect consistency bias. People whose views on political issues have changed over time often recall incorrectly past attitudes as highly similar to present ones. In fact, memories of past political views are sometimes more closely related to present views than what they actually believed in the past.

Think about your stance five or ten years ago on some major issue, like sentencing for drug-related crime. Can you recall specifically what you believed? Most people believe they have stayed consistent on the issue. But easily performed experiments show that a large percentage of people who think “all is the same” have actually changed their tune significantly over time. Such is the bias toward consistency.

This affects relationships fairly significantly: Schacter shows that our current feelings about our partner color our memories of our past feelings.

Consider a study that followed nearly four hundred Michigan couples through the first years of their marriage. In those couples who expressed growing unhappiness over the four years of the study, men mistakenly recalled the beginnings of their marriages as negative even though they said they were happy at the time. Such biases can lead to a dangerous “downward spiral,” noted the researchers who conducted the study. “The worse your current view of your partner is, the worse your memories are, which only further confirms your negative attitudes.”

In other contexts, we sometimes lean in the other direction: we think things have changed more than they really have. We remember the past as much better than the present, or much worse.

Schacter discusses a twenty-year study done with a group of women between 1969 and 1989, assessing how they felt about their marriages throughout. It turns out their recollections of the past were constantly on the move, but the false recollections did seem to serve a purpose: keeping the marriage alive.

When reflecting back on the first ten years of their marriages, wives showed a change bias: They remembered their initial assessments as worse than they actually were. The bias made their present feelings seem an improvement by comparison, even though the wives actually felt more negatively ten years into the marriage than they had at the beginning. When they had been married for twenty years and reflected back on their second ten years of marriage, the women now showed a consistency bias: they mistakenly recalled that feelings from ten years earlier were similar to their present ones. In reality, however, they felt more negatively after twenty years of marriage than after ten. Both types of bias helped women cope with their marriages. 

The purpose of all this is to reduce our cognitive dissonance: That mental discomfort we get when we have conflicting ideas. (“I need to stay married” / “My marriage isn’t working” for example.)

Hindsight Bias

We won’t go into hindsight bias too extensively, because we have covered it before and the idea is familiar to most. Simply put, once we know the outcome of an event, our memory of the past is forever altered. As with consistency bias, we use the lens of the present to see the past. It’s the idea that we “knew it all along” — when we really didn’t.

A large part of hindsight bias has to do with the narrative fallacy and our own natural wiring in favor of causality. We really like to know why things happen, and when given a clear causal link in the present (Say, we hear our neighbor shot his wife because she cheated on him), the lens of hindsight does the rest (I always knew he was a bad guy!). In the process, we forget that we must not have thought he was such a bad guy, since we let him babysit our kids every weekend. That is hindsight bias. We’re all subject to it unless we start examining our past with more detail or keeping a written record.

Egocentric Bias

The egocentric bias is our tendency to remember the past in such a way that we, the rememberer, look better than we really are or really should. We are not neutral observers of our own past; we are highly biased and motivated to see ourselves in a certain light.

The self’s preeminent role in encoding and retrieval, combined with a powerful tendency for people to view themselves positively, creates fertile ground for memory biases that allow people to remember past experiences in a self-enhancing light. Consider, for example, college students who were led to believe that introversion is a desirable personality trait that predicts academic success, and then searched their memories for incidents in which they behaved in an introverted or extroverted manner. Compared with students who were led to believe that extroversion is a desirable trait, the introvert-success students more quickly generated memories in which they behaved like introverts than like extroverts. The memory search was biased by a desire to see the self positively, which led students to select past incidents containing the desired trait.

The egocentric bias occurs constantly, in almost any situation where it possibly can: it’s similar to what’s been called overconfidence in other arenas. We want to see ourselves in a positive light, and so we do. We mine our brains for evidence of our excellent qualities. We have positive maintaining illusions that keep our spirits up.

This is generally a good thing for our self-esteem, but as any divorced couple knows, it can also cause us to have a very skewed version of the past.

Bias from Stereotyping

In our series on the development of human personality, we discussed the idea of stereotyping as something human beings do constantly and automatically; the much-maligned concept is central to how we comprehend the world.

Stereotyping exists because it saves energy and space — it allows us to consolidate much of what we learn into categories with broadly accurate descriptions. As we learn new things, we either slot them into existing categories, create new categories, or slightly modify old categories (the one we like the least, because it requires the most work). This is no great insight.

But what is interesting is the degree to which stereotyping colors our memories themselves:

If I tell you that Julian, an artist, is creative, temperamental, generous, and fearless, you are more likely to recall the first two attributes, which fit the stereotype of an artist, than the latter two attributes, which do not. If I tell you that he is a skinhead, and list some of his characteristics, you’re more likely to remember that he is rebellious and aggressive than that he is lucky and modest. This congruity bias is especially likely to occur when people hold strong stereotypes about a particular group. A person with strong racial prejudices, for example, would be more likely to remember stereotypical features of an African American’s behavior than a less prejudiced person, and less likely to remember behaviors that don’t fit the stereotype.

Not only that, but when things happen that contradict our expectations, we are capable of distorting the past so as to bring it into line. When we try to remember a tale after we know how it ends, we’re more likely to distort the details of the story so that the whole thing makes sense and fits our understanding of the world. This is related to the narrative fallacy and the hindsight bias discussed above.

***

The final sin which Schacter discusses in his book is Persistence, the often difficult reality that some memories, especially negative ones, persist a lot longer than we wish. We’re not going to cover it here, but suggest you check out the book in its entirety to get the scoop.

And with that, we’re going to wrap up our series on the human memory. Take what you’ve learned, digest it, and then keep pushing deeper in your quest to understand human nature and the world around you.

20 Rules for a Knight: A Timeless Guide from 1483

“Often we imagine that we will work hard until we arrive at some distant goal, and then we will be happy. This is a delusion. Happiness is the result of a life lived with purpose. Happiness is not an objective. It is the movement of life itself, a process, and an activity. It arises from curiosity and discovery. Seek pleasure and you will quickly discover the shortest path to suffering.”

***

The quest to become a knight has occupied many over the years.

In 1483, Sir Thomas Lemuel Hawke of Cornwall was among 323 killed at the Battle of Slaughter Bridge. Foreseeing this outcome, Sir Thomas wrote a letter to his children in Cornish outlining the Rules for a Knight — the life lessons Sir Thomas wished to pass along to his four children.

The severely damaged letter was adapted and reconstructed by Ethan Hawke after the family discovered it in the early 1970s in the basement of the family farm near Waynesville, Ohio, following his great-grandmother’s passing.

Or, so the story goes.

The resulting book, Rules for a Knight — in reality a work of fiction — began as an idea over a decade ago. Why a book about knights? Hawke explains:

“I’ve just always loved the idea of knighthood,” he said. “It makes being a good person cool. Or, aspiring to be a good person cool.”

And so Hawke started applying chivalry to his own household:

My wife was reading a book about step-parenting, and this book was talking about the value of rules, so we started saying, well, what are the rules of our house? And you start with the really mundane, like eight-o’clock bedtime, all that kind of stuff. And then, invariably, you start asking yourself, well, what do we really believe in? So I started riffing on this idea of ‘rules for a knight.’ Like, what does the king decree, you know? I wrote it out—the idea was we were going to put it on the wall, in calligraphy. Like, these are the rules.

The work stands alone as a blueprint of civilized growth and self-improvement, the path to becoming a humble, strong, and reliable gentleman (or lady). The ideas mostly come from “other knights,” including Muhammad Ali, Emily Dickinson, Dwight D. Eisenhower, and Mother Teresa, as Hawke credits them on the acknowledgment page.


“Tonight,” Sir Thomas Lemuel Hawke of Cornwall begins, “I will share with you some of the more valuable stories, events, and moments of my life so that somewhere deep in the recesses of your imagination these lessons might continue on and my experiences will live to serve a purpose for you.”

20 Rules for a Knight

1. Solitude

Create time alone with yourself. When seeking the wisdom and clarity of your own mind, silence is a helpful tool. The voice of our spirit is gentle and cannot be heard when it has to compete with others. Just as it is impossible to see your reflection in troubled water, so too is it with the soul. In silence, we can sense eternity sleeping inside us.

2. Humility

Never announce that you are a knight, simply behave as one. You are better than no one, and no one is better than you.

3. Gratitude

The only intelligent response to the ongoing gift of life is gratitude. For all that has been, a knight says, “Thank you.” For all that is to come, a knight says, “Yes!”

4. Pride

Never pretend you are not a knight or attempt to diminish yourself because you deem it will make others more comfortable. We show others the most respect by offering the best of ourselves.

5. Cooperation

Each one of us is walking our own road. We are born at specific times, in specific places, and our challenges are unique. As knights, understanding and respecting our distinctiveness is vital to our ability to harness our collective strength. The use of force may be necessary to protect in an emergency, but only justice, fairness, and cooperation can truly succeed in leading men. We must live and work together as brothers or perish together as fools.

6. Friendship

The quality of your life will, to a large extent, be decided by with whom you elect to spend your time.

7. Forgiveness

Those who cannot easily forgive will not collect many friends. Look for the best in others.

8. Honesty

A dishonest tongue and a dishonest mind waste time, and therefore waste our lives. We are here to grow and the truth is the water, the light, and the soil from which we rise. The armor of falsehood is subtly wrought out of the darkness and hides us not only from others but from our own soul.

9. Courage

Anything that gives light must endure burning.

10. Grace

Grace is the ability to accept change. Be open and supple; the brittle break.

11. Patience

There is no such thing as a once-in-a-lifetime opportunity. A hurried mind is an addled mind; it cannot see clearly or hear precisely; it sees what it wants to see, or hears what it is afraid to hear, and misses much. A knight makes time his ally. There is a moment for action, and with a clear mind that moment is obvious.

12. Justice

There is only one thing for which a knight has no patience: injustice. Every true knight fights for human dignity at all times.

13. Generosity

You were born owning nothing and with nothing you will pass out of this life. Be frugal and you can be generous.

14. Discipline

In the field of battle, as in all things, you will perform as you practice. With practice, you build the road to accomplish your goals. Excellence lives in attention to detail. Give your all, all the time. Don’t save anything for the walk home. The better a knight prepares, the less willing he will be to surrender.

15. Dedication

Ordinary effort, ordinary result. Take steps each day to better follow these rules. Luck is the residue of design. Be steadfast. The anvil outlasts the hammer.

16. Speech

Do not speak ill of others. A knight does not spread news that he does not know to be certain, or condemn things that he does not understand.

17. Faith

Sometimes to understand more, you need to know less.

18. Equality

Every knight holds human equality as an unwavering truth. A knight is never present when men or women are being degraded or compromised in any way, because if a knight were present, those committing the hurtful acts or words would be made to stop.

19. Love

Love is the end goal. It is the music of our lives. There is no obstacle that enough love cannot move.

20. Death

Life is a long series of farewells; only the circumstances should surprise us. A knight concerns himself with gratitude for the life he has been given. He does not fear death, for the work one knight begins, others may finish.

The rest of Rules For a Knight goes on to explore these ideas in greater detail. Though it is fiction, the book is a timeless meditation on self-improvement and what it means to be a parent.


The Many Ways Our Memory Fails Us (Part 2)


***

In part one, we began a conversation about the trappings of the human memory, using Daniel Schacter’s excellent The Seven Sins of Memory as our guide. We covered transience — the loss of memory due to time — and absent-mindedness — memories that were never encoded at all or were not available when needed. Let’s keep going with a couple more whoppers: Blocking and Misattribution.

Blocking

Blocking is the phenomenon when something is indeed encoded in our memory and should be easily available in the given situation, but simply will not come to mind. We’re most familiar with blocking as the always frustrating “It’s on the tip of my tongue!”

Unsurprisingly, blocking occurs most frequently when it comes to people’s names, and it occurs more often as we get older:

Twenty-year-olds, forty-year-olds, and seventy-year-olds kept diaries for a month in which they recorded spontaneously occurring retrieval blocks that were accompanied by the “tip of the tongue” sensation. Blocking occurred occasionally for the names of objects (for example, algae) and abstract words (for example, idiomatic). In all three groups, however, blocking occurred most frequently for proper names, with more blocks for people than for other proper names such as countries or cities. Proper name blocks occurred more frequently in the seventy-year-olds than in either of the other two groups.

This is not the worst sin our memory commits — excepting the times when we forget an important person’s name (which is admittedly not fun), blocking doesn’t cause the terrible practical results some of the other memory issues cause. But the reason blocking occurs does tell us something interesting about memory, something we intuitively know from other domains: We have a hard time learning things by rote or by force. We prefer associations and connections to form strong, lasting, easily available memories.

Why are names blocked from us so frequently, even more than objects, places, descriptions, and other nouns? For example, Schacter mentions experiments in which researchers show that we more easily forget a man’s name than his occupation, even if they’re the same word! (Baker/baker or Potter/potter, for example.)

It’s because relative to a descriptive noun like “baker,” which calls to mind all sorts of connotations, images, and associations, a person’s name has very little attached to it. We have no easy associations to make — it doesn’t tell us anything about the person or give us much to hang our hat on. It doesn’t really help us form an image or impression. And so we basically remember it by rote, which doesn’t always work that well.

Most models of name retrieval hold that activation of phonological representations [sound associations] occurs only after activation of conceptual and visual representations. This idea explains why people can often retrieve conceptual information about an object or person whom they cannot name, whereas the reverse does not occur. For example, diary studies indicate that people frequently recall a person’s occupation without remembering his name, but no instances have been documented in which a name is recalled without any conceptual knowledge about the person. In experiments in which people named pictures of famous individuals, participants who failed to retrieve the name “Charlton Heston” could often recall that he was an actor. Thus, when you block on the name “John Baker” you may very well recall that he is an attorney who enjoys golf, but it is highly unlikely that you would recall Baker’s name and fail to recall any of his personal attributes.

A person’s name is the weakest piece of information we have about them in our people-information lexicon, and thus the least available at any time, and the most susceptible to not being available as needed. It gets worse if it’s a name we haven’t needed to recall frequently or recently, as we all can probably attest to. (This also applies to the other types of words we block on less frequently — objects, places, etc.)

The only real way to avoid blocking problems is to create stronger associations when we learn names, or even re-encode names we already know by increasing their salience with a vivid image, even a silly one. (If you ever meet anyone named Baker…you know what to do.)

But the most important idea here is that information gains salience in our brain based on what it brings to mind. 

Schacter is non-committal about whether blocking occurs in the sense implied by Freud’s idea of repressed memories; it seems the issue was not settled at the time of writing.

Misattribution

The memory sin of misattribution has fairly serious consequences. Misattribution happens all the time and is a peculiar memory sin where we do remember something, but that thing is wrong, or possibly not even our own memory at all:

Sometimes we remember events that never happened, misattributing speedy processing of incoming information or vivid images that spring to mind, to memories of past events that did not occur. Sometimes we recall correctly what happened, but misattribute it to the wrong time and place. And at other times misattribution operates in a different direction: we mistakenly credit a spontaneous image or thought to our own imagination, when in reality we are recalling it–without awareness–from something we read or heard.

The most familiar, but benign, experience we’ve all had with misattribution is the curious case of deja vu. As of the writing of his book, Schacter felt there was no convincing explanation for why deja vu occurs, but we know that the brain is capable of thinking it’s recalling an event that happened previously, even if it hasn’t.

In the case of deja vu, it’s simply a bit of an annoyance. But the misattribution problem causes more serious problems elsewhere.

The major one is eyewitness testimony, which we now know is notoriously unreliable. It turns out that when eyewitnesses claim they “know what they saw!” it’s unlikely they remember as well as they claim. It’s not their fault and it’s not a lie — you do think you recall the details of a situation perfectly well. But your brain is tricking you, just like deja vu. How bad is the eyewitness testimony problem? It used to be pretty bad.

…consider two facts. First, according to estimates made in the late 1980s, each year in the United States more than seventy-five thousand criminal trials were decided on the basis of eyewitness testimony. Second, a recent analysis of forty cases in which DNA evidence established the innocence of wrongly imprisoned individuals revealed that thirty-six of them (90 percent) involved mistaken eyewitness identification. There are no doubt other such mistakes that have not been rectified.

What happens is that, in any situation where our memory stores away information, it doesn’t have the horsepower to do it with complete accuracy. There are just too many variables to sort through. So we remember the general aspects of what happened, and we remember some details, depending on how salient they were.

We recall that we met John, Jim, and Todd, who were all part of the sales team for John Deere. We might recall that John was the young one with glasses, Jim was the older bald one, and Todd talked the most. We might remember specific moments or details of the conversation which stuck out.

But we don’t get it all perfectly, and if it was an unmemorable meeting, with the transience of time, we start to lose the details. The combination of the specifics and the details is a process called memory binding, and it’s often the source of misattribution errors.

Let’s say we remember for sure that we curled our hair this morning. All of our usual cues tell us that we did — our hair is curly, it’s part of our morning routine, we remember thinking it needed to be done, etc. But…did we turn the curling iron off? We remember that we did, but is that yesterday’s memory or today’s?

This is a memory binding error. Our brain didn’t sufficiently “link up” the curling event and the turning off of the curler, so we’re left to wonder. This binding issue leads to other errors, like the memory conjunction error, where sometimes the binding process does occur, but it makes a mistake. We misattribute the strong familiarity:

Having met Mr. Wilson and Mr. Albert during your business meeting, you reply confidently the next day when an associate asks you the name of the company vice president: “Mr. Wilbert.” You remembered correctly pieces of the two surnames but mistakenly combined them into a new one. Cognitive psychologists have developed experimental procedures in which people exhibit precisely these kinds of erroneous conjunctions between features of different words, pictures, sentences, or even faces. Thus, having studied spaniel and varnish, people sometimes claim to remember Spanish.

What’s happening is a misattribution. We know we saw the syllables Span- and –nish and our memory tells us we must have heard Spanish. But we didn’t.

Back to the eyewitness testimony problem, what’s happening is we’re combining a general familiarity with a lack of specific recall, and our brain is recombining those into a misattribution. We recall a tall-ish man with some sort of facial hair, and then we’re shown 6 men in a lineup, and one is tall-ish with facial hair, and our brain tells us that must be the guy. We make a relative judgment: Which person here is closest to what I think I saw? Unfortunately, like the Spanish/varnish issue, we never actually saw the person we’ve identified as the perp.

None of this occurs with much conscious involvement, of course. It’s happening subconsciously, which is why good procedures are needed to overcome the problem. In the case of suspect lineups, the solution is to show the witness each member, one after another, and have them give a thumbs up or thumbs down immediately. This takes away the relative comparison and makes us consciously compare the suspect in front of us with our memory of the perpetrator.

The good thing about solving this error is that people can be encouraged to search their memory more carefully. But it’s far from foolproof, even if we’re getting a very strong indication that we remember something.

And what helps prevent us from making too many errors is something Schacter calls the distinctiveness heuristic. If a distinctive thing supposedly happened, we usually reason we’d have a good memory of it. And usually, this is a very good heuristic to have. (Remember, salience always encourages memory formation.) As we discussed in Part One, a salient artifact gives us something to tie a memory to. If I meet someone wearing a bright rainbow-colored shirt, I’m a lot more likely to recall some details about them, simply because they stuck out.

***

As an aside, misattribution allows us one other interesting insight into the human brain: Our “people information” remembering is a specific, distinct module, one that can falter on its own, without harming any other modules. Schacter discusses a man with a delusion that many of the normal people around him were film stars. He even misattributed made-up famous-sounding names (like Sharon Sugar) to famous people, although he couldn’t put his finger on who they were.

But the man did not falsely recognize other things. Made up cities or made up words did not trip up his brain in the strange way people did. This (and other data) tells us that our ability to recognize people is a distinct “module” our brain uses, supporting one of Judith Rich Harris’s modules of human personality that we’ve discussed: The “people information lexicon” we develop throughout our lives.

***

One final misattribution is something called cryptomnesia — the opposite of deja vu. It’s when we think we recognize something as new and novel when we have indeed seen it before. Accidental plagiarizing can even result from cryptomnesia. (Try telling that to your school teachers!) Cryptomnesia falls into the same bucket as other misattributions in that we fail to recollect the source of information we’re recalling — the information and event where we first remembered it are not bound together properly. Let’s say we “invent” the melody to a song which already exists. The melody sounds wonderful and familiar, so we like it. But we mistakenly think it’s new.

In the end, Schacter reminds us to think carefully about the memories we “know” are true, and to try to remember specifics when possible:

We often need to sort out ambiguous signals, such as feelings of familiarity or fleeting images, that may originate in specific past experiences, or arise from subtle influences in the present. Relying on judgment and reasoning to come up with plausible attributions, we sometimes go astray.  When misattribution combines with another of memory’s sins — suggestibility — people can develop detailed and strongly held recollections of complex events that never occurred.

And with that, we will leave it here for now. Next time we’ll delve into suggestibility and bias, two more memory sins with a range of practical outcomes.

What’s So Significant About Significance?

How Not to Be Wrong

One of my favorite studies of all time took the 50 most common ingredients from a cookbook and searched the literature for a connection to cancer: 72% had a study linking them to increased or decreased risk of cancer. (Here’s the link for the interested.)

Meta-analyses (studies examining multiple studies) quashed the effect pretty seriously, but how many of those single studies were probably reported on in multiple media outlets, permanently causing changes in readers’ dietary habits? (We know from studying juries that people are often unable to “forget” things that are subsequently proven false or misleading — misleading data is sticky.)

The phrase “statistically significant” is one of the more unfortunately misleading ones of our time. The word significant in the statistical sense — meaning distinguishable from random chance — does not carry the same meaning in common parlance, in which we mean distinguishable from something that does not matter. We’ll get to what that means.

Confusing the two gets at the heart of a lot of misleading headlines and it’s worth a brief look into why they don’t mean the same thing, so you can stop being scared that everything you eat or do is giving you cancer.

***

The term statistical significance is used to denote when an effect is found to be extremely unlikely to have occurred by chance. In order to make that determination, we have to propose a null hypothesis to be rejected. Let’s say we propose that eating an apple a day reduces the incidence of colon cancer. The “null hypothesis” here would be that eating an apple a day does nothing to the incidence of colon cancer — that we’d be equally likely to get colon cancer if we ate that daily apple.

When we analyze the data of our study, we’re technically not looking to say “Eating an apple a day prevents colon cancer” — that’s a bit of a misconception. What we’re actually doing is an inversion: we want the data to provide us with sufficient weight to reject the idea that apples have no effect on colon cancer.

And even when that happens, it’s not an all-or-nothing determination. What we’re actually saying is “It would be extremely unlikely for the data we have, which shows a daily apple reduces colon cancer by 50%, to have popped up by chance. Not impossible, but very unlikely.” The world does not quite allow us to have absolute conviction.

How unlikely? The currently accepted standard in many fields is 5% — there is a less than 5% chance the data would come up this way randomly. That threshold immediately tells you that roughly 1 in 20 tests of a true null hypothesis will clear the bar by luck alone, but alas that is where we’re at. (The problem with the 5% p-value, and the associated problem of p-hacking, has been subject to some intense debate, but we won’t deal with that here.)

We’ll get to why “significance can be insignificant,” and why that’s so important, in a moment. But let’s make sure we’re fully on board with the importance of sorting chance events from real ones with another illustration, this one outlined by Jordan Ellenberg in his wonderful book How Not to Be Wrong. Pay close attention:

Suppose we’re in null hypothesis land, where the chance of death is exactly the same (say, 10%) for the fifty patients who got your drug and the fifty who got [a] placebo. But that doesn’t mean that five of the drug patients die and five of the placebo patients die. In fact, the chance that exactly five of the drug patients die is about 18.5%; not very likely, just as it’s not very likely that a long series of coin tosses would yield precisely as many heads as tails. In the same way, it’s not very likely that exactly the same number of drug patients and placebo patients expire during the course of the trial. I computed:

13.3% chance equally many drug and placebo patients die
43.3% chance fewer placebo patients than drug patients die
43.3% chance fewer drug patients than placebo patients die

Seeing better results among the drug patients than the placebo patients says very little, since this isn’t at all unlikely, even under the null hypothesis that your drug doesn’t work.

But things are different if the drug patients do a lot better. Suppose five of the placebo patients die during the trial, but none of the drug patients do. If the null hypothesis is right, both classes of patients should have a 90% chance of survival. But in that case, it’s highly unlikely that all fifty of the drug patients would survive. The first of the drug patients has a 90% chance; now the chance that not only the first but also the second patient survives is 90% of that 90%, or 81%–and if you want the third patient to survive as well, the chance of that happening is only 90% of that 81%, or 72.9%. Each new patient whose survival you stipulate shaves a little off the chances, and by the end of the process, where you’re asking about the probability that all fifty will survive, the slice of probability that remains is pretty slim:

(0.9) x (0.9) x (0.9) x … fifty times! … x (0.9) x (0.9) = 0.00515 …

Under the null hypothesis, there’s only one chance in two hundred of getting results this good. That’s much more compelling. If I claim I can make the sun come up with my mind, and it does, you shouldn’t be impressed by my powers; but if I claim I can make the sun not come up, and it doesn’t, then I’ve demonstrated an outcome very unlikely under the null hypothesis, and you’d best take notice.
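Ellenberg’s numbers are easy to check for yourself. Here is a short sketch using only Python’s standard library (the 13.3% figure comes from summing, over every possible death count, the chance that both arms land on that same count):

```python
import math

def binom_pmf(k, n, p):
    """P(X = k) for X ~ Binomial(n, p)."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

n, p_death = 50, 0.10  # fifty patients per arm, 10% chance of death under the null

# Chance that exactly five of the drug patients die (Ellenberg's ~18.5%)
p_five = binom_pmf(5, n, p_death)

# Chance both arms see equally many deaths: sum over all possible counts
p_equal = sum(binom_pmf(k, n, p_death) ** 2 for k in range(n + 1))

# By symmetry, the remainder splits evenly between the two arms doing better
p_drug_better = (1 - p_equal) / 2

# Chance all fifty drug patients survive under the null: 0.9^50
p_all_survive = 0.9 ** 50
```

Running this reproduces the quoted split — roughly 13.3% for a tie, 43.3% each way, and about 1 in 200 for all fifty surviving.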

So you see, all this null hypothesis stuff is pretty important because what you want to know is if an effect is really “showing up” or if it just popped up by chance.

A final illustration should make it clear:

Imagine you were flipping coins with a particular strategy of getting more heads, and after 30 flips you had 18 heads and 12 tails. Would you call it a miracle? Probably not — you’d realize immediately that it’s perfectly possible for an 18/12 ratio to happen by chance. You wouldn’t write an article in U.S. News and World Report proclaiming you’d figured out coin flipping.

Now let’s say instead you flipped the coin 30,000 times and you got 18,000 heads and 12,000 tails… well, then your case for statistical significance would be pretty tight. It would be next to impossible to get that result by chance — your strategy must have something to it. The null hypothesis of “My coin flipping technique is no better than the usual one” would be easy to reject! (The p-value here would be orders of magnitude less than 5%, by the way.)
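The arithmetic behind that intuition can be sketched in a few lines of standard-library Python. An exact binomial tail handles the 30-flip case; for 30,000 flips the exact terms underflow, so a normal approximation (a standard substitute at that scale) stands in:

```python
import math

def binom_tail_exact(k, n, p=0.5):
    """Exact one-sided p-value: P(X >= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def binom_tail_normal(k, n, p=0.5):
    """Normal approximation (with continuity correction) for large n."""
    mu, sigma = n * p, math.sqrt(n * p * (1 - p))
    z = (k - 0.5 - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2))

# 18 heads in 30 flips: unremarkable -- the p-value is well above 0.05
p_small = binom_tail_exact(18, 30)

# 18,000 heads in 30,000 flips: the same 60/40 split, now astronomically unlikely
p_large = binom_tail_normal(18000, 30000)
```

The first p-value comes out around 0.18 — nothing to write U.S. News about — while the second is vanishingly small.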

That’s what this whole business is about.

***

Now that we’ve got this idea down, we come to the big question that statistical significance cannot answer: Even if the result is distinguishable from chance, does it actually matter?

Statistical significance cannot tell you whether the result is worth paying attention to — even if you get the p-value down to a minuscule number, increasing your confidence that what you saw was not due to chance. 

In How Not to be Wrong, Ellenberg provides a perfect example:

A 1995 study published in a British journal indicated that a new birth control pill doubled the risk of venous thrombosis (a potentially fatal blood clot) in its users. Predictably, 1.5 million British women freaked out, and some meaningfully large percentage of them stopped taking the pill. In 1996, 26,000 more babies were born than the previous year and there were 13,600 more abortions. Whoops!

So what, right? Lots of mothers’ lives were saved, right?

Not really. The initial probability of a woman getting a venous thrombosis with any old birth control pill was about 1 in 7,000, or about 0.014%. That means the “Killer Pill,” even if it was indeed increasing “thrombosis risk,” only raised that risk to 2 in 7,000, or about 0.029%! Is that worth rearranging your life for? Probably not.

Ellenberg makes the excellent point that, at least in the case of health, the null hypothesis is unlikely to be right in most cases! The body is a complex system — of course what we put in it affects how it functions in some direction or another. It’s unlikely to be absolute zero.

But numerical and scale-based thinking, indispensable for anyone looking to not be a sucker, tells us that we must distinguish between small and meaningless effects (like the connection between almost all individual foods and cancer so far) and real ones (like the connection between smoking and lung cancer).

And now we arrive at the problem of “significance” — even if an effect is really happening, it still may not matter!  We must learn to be wary of “relative” statistics (i.e., “the risk has doubled”), and look to favor “absolute” statistics, which tell us whether the thing is worth worrying about at all.
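The relative-versus-absolute distinction is just arithmetic. A toy sketch using the pill numbers above makes it concrete:

```python
baseline_risk = 1 / 7000  # risk of venous thrombosis on the old pill
new_risk = 2 / 7000       # risk on the "Killer Pill"

# The headline number: "risk has doubled!"
relative_increase = new_risk / baseline_risk

# The number that actually matters: ~0.014 percentage points
absolute_increase = new_risk - baseline_risk
```

A doubled relative risk sounds alarming; an absolute increase of roughly 1 in 7,000 does not.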

So we have two important ideas:

A. Just like coin flips, many results are perfectly possible by chance. We use the concept of “statistical significance” to figure out how likely it is that the effect we’re seeing is real and not just a random illusion, like seeing 18 heads in 30 coin tosses.

B. Even if it is really happening, it still may be unimportant – an effect so insignificant in real terms that it’s not worth our attention.

These two ideas should combine to raise our level of skepticism when hearing about groundbreaking new studies! (A third and equally important problem is the fact that correlation is not causation, a common problem in many fields of science including nutritional epidemiology. Just because x is associated with y does not mean that x is causing y.)

Tread carefully and keep your thinking cap on.

***

Still Interested? Read Ellenberg’s great book to get your head working correctly, and check out our posts on Bayesian thinking, another very useful statistical tool, and learn a little about how we distinguish science from pseudoscience.