Category: Science

Warnings From Sleep: Nightmares and Protecting The Self

“All of this is evidence that the mind, although asleep, is constantly concerned about the safety and integrity of the self.”

***

Rosalind Cartwright — also known as the Queen of Dreams — is a leading sleep researcher. In The Twenty-four Hour Mind: The Role of Sleep and Dreaming in Our Emotional Lives, she explores the role of nightmares and how we use sleep to protect ourselves.

When our time awake is frightening or remains unprocessed, the sleeping brain “may process horrible images with enough raw fear attached to awaken a sleeper with a horrendous nightmare.” The more trauma we have in our lives, the more likely we are to experience anxiety and nightmares after a horrific event.

The common feature is a threat of harm, accompanied by an inability to control the circumstances of the threat and a failure to develop protective behaviors.

The strategies we use for coping effectively with extreme stress and fear are controversial. Is it better to deny the threatening event and avoid thinking about it, or to confront it and risk becoming sensitized to it?

One clear principle that comes out of this work is that the effects of trauma on sleep and dreaming depend on the nature of the threat. If direct action against the threat is irrelevant or impossible (as it would be if the trauma was well in the past), then denial may be helpful in reducing stress so that the person can get on with living as best they can. However, if the threat will be encountered over and over (such as with spousal abuse), and direct action would be helpful in addressing the threat, then denial by avoiding thinking about the danger (which helps in the short-term) will undermine problem-solving efforts and mastery in the long run. In other words, if nothing can be done, emotion-coping efforts to regulate the distress (dreaming) are a good strategy; but if constructive actions can be taken, waking problem-solving action is more adaptive.

What about nightmares?

Nightmares are defined as frightening dreams that wake the sleeper into full consciousness and with a clear memory of the dream imagery. These are not to be confused with sleep terrors. There are three main differences between these two. First, nightmare arousals are more often from late in the night’s sleep, when dreams are longest and the content is most bizarre and affect-laden (emotional); sleep terrors occur early in sleep. Second, nightmares are REM sleep-related, while sleep terrors come out of non-REM (NREM) slow-wave sleep (SWS). Third, sleepers experience vivid recall of nightmares, whereas with sleep terrors the experience is of full or partial amnesia for the episode itself, and only rarely is a single image recalled.

Nightmares abort REM sleep, a critical component of our always-on brain. Cartwright explains:

If we are right that the mind is continuously active throughout sleep—reviewing emotion-evoking new experiences from the day, scanning memory networks for similar experiences (which will defuse immediate emotional impact), revising by updating our organized sense of ourselves, and rehearsing new coping behaviors—nightmares are an exception and fail to perform these functions.

The impact is to temporarily relieve the negative emotion. The example Cartwright gives is “I am not about to be eaten by a monster. I am safe in my own bed.” But because the nightmare has woken the sleeper, it is of no help in regulating emotion (a critical role of sleep). As we learn to manage negative emotions while we are awake, that is, as we grow up, nightmares decrease in frequency and we develop skills for resolving fears.

It's not always fear that wakes us from a nightmare. We can also be woken by anger, disgust, and grief.

Cartwright concludes with an interesting insight on the role of sleep in consolidating and protecting “the self”:

[N]ightmares appear to be more common in those who have intense reactions to stress. The criteria cited for nightmare disorder in the diagnostic manual for psychiatric disorders, the Diagnostic and Statistical Manual IV-TR (DSM IV-TR), include this phrase “frightening dreams usually involving threats to survival, security, or self-esteem.” This theme may sound familiar: Remember that threats to self-esteem seem to precede NREM parasomnia awakenings. All of this is evidence that the mind, although asleep, is constantly concerned about the safety and integrity of the self.

The Twenty-four Hour Mind goes on to explore the history of sleep research through case studies and synthesis.

The Science of Sleep: Regulating Emotions and the Twenty-four Hour Mind

“Memory is never a precise duplicate of the original; instead, it is a continuing act of creation.”

***

Rosalind Cartwright is one of the leading sleep researchers in the world. Her unofficial title is Queen of Dreams.

In The Twenty-four Hour Mind: The Role of Sleep and Dreaming in Our Emotional Lives, she looks back on the progress of sleep research and reminds us there is much left in the black box of sleep that we have yet to shine light on.

In the introduction she underscores the elusive nature of sleep:

The idea that sleep is good for us, beneficial to both mind and body, lies behind the classic advice from the busy physician: “Take two aspirins and call me in the morning.” But the meaning of this message is somewhat ambiguous. Will a night’s sleep plus the aspirin be of help no matter what ails us, or does the doctor himself need a night’s sleep before he is able to dispense more specific advice? In either case, the presumption is that there is some healing power in sleep for the patient or better insight into the diagnosis for the doctor, and that the overnight delay allows time for one or both of these natural processes to take place. Sometimes this happens, but unfortunately sometimes it does not. Sometimes it is sleep itself that is the problem.

Cartwright underscores that our brains like to run on “automatic pilot” mode, which is one of the reasons that getting better at things requires concentrated and focused effort. She explains:

We do not always use our highest mental abilities, but instead run on what we could call “automatic pilot”; once learned, many of our daily cognitive behaviors are directed by habit, those already-formed points of view, attitudes, and schemas that in part make us who we are. The formation of these habits frees us to use our highest mental processes for those special instances when a prepared response will not do, when circumstances change and attention must be paid, choices made or a new response developed. The result is that much of our baseline thoughts and behavior operate unconsciously.

Relating this back to dreams, and one of the more fascinating parts of Cartwright's research, is the role sleep and dreams play in regulating emotions. She explains:

When emotions evoked by a waking experience are strong, or more often were under-attended at the time they occurred, they may not be fully resolved by nighttime. In other words, it may take us a while to come to terms with strong or neglected emotions. If, during the day, some event challenges a basic, habitual way in which we think about ourselves (such as the comment from a friend, “Aren’t you putting on weight?”) it may be a threat to our self-concepts. It will probably be brushed off at the time, but that question, along with its emotional baggage, will be carried forward in our minds into sleep. Nowadays, researchers do not stop our investigations at the border of sleep but continue to trace mental activity from the beginning of sleep on into dreaming. All day, the conscious mind goes about its work planning, remembering, and choosing, or just keeping the shop running as usual. On balance, we humans are more action oriented by day. We stay busy doing, but in the inaction of sleep we turn inward to review and evaluate the implications of our day, and the input of those new perceptions, learnings, and—most important—emotions about what we have experienced.

What we experience as a dream is the result of our brain’s effort to match recent, emotion-evoking events to other similar experiences already stored in long-term memory. One purpose of this sleep-related matching process, this putting of similar memory experiences together, is to defuse the impact of those feelings that might otherwise linger and disrupt our moods and behaviors the next day. The various ways in which this extraordinary mind of ours works—the top-level rational thinking and executive deciding functions, the middle management of routine habits of thought, and the emotional relating and updating of the organized schemas of our self-concept—are not isolated from each other. They interact. The emotional aspect, which is often not consciously recognized, drives the not-conscious mental activity of sleep.

Later in the book, she writes more about how dreams regulate emotions:

Despite differences in terminology, all the contemporary theories of dreaming have a common thread — they all emphasize that dreams are not about prosaic themes, not about reading, writing, and arithmetic, but about emotion, or what psychologists refer to as affect. What is carried forward from waking hours into sleep are recent experiences that have an emotional component, often those that were negative in tone but not noticed at the time or not fully resolved. One proposed purpose of dreaming, of what dreaming accomplishes (known as the mood regulatory function of dreams theory) is that dreaming modulates disturbances in emotion, regulating those that are troublesome. My research, as well as that of other investigators in this country and abroad, supports this theory. Studies show that negative mood is down-regulated overnight. How this is accomplished has had less attention.

I propose that when some disturbing waking experience is reactivated in sleep and carried forward into REM, where it is matched by similarity in feeling to earlier memories, a network of older associations is stimulated and is displayed as a sequence of compound images that we experience as dreams. This melding of new and old memory fragments modifies the network of emotional self-defining memories, and thus updates the organizational picture we hold of “who I am and what is good for me and what is not.” In this way, dreaming diffuses the emotional charge of the event and so prepares the sleeper to wake ready to see things in a more positive light, to make a fresh start. This does not always happen over a single night; sometimes a big reorganization of the emotional perspective of our self-concept must be made—from wife to widow or married to single, say, and this may take many nights. We must look for dream changes within the night and over time across nights to detect whether a productive change is under way. In very broad strokes, this is the definition of the mood-regulatory function of dreaming, one basic to the new model of the twenty-four hour mind I am proposing.

In another fascinating part of her research, Cartwright outlines the role of sleep in skill enhancement. In short, “sleeping on it” is wise advice.

Think back to “take two aspirins and call me in the morning.” Want to improve your golf stroke? Concentrate on it before sleeping. An interval of sleep has been proven to bestow a real benefit for both laboratory animals and humans when they are tested on many different types of newly learned tasks. You will remember more items or make fewer mistakes if you have had a period of sleep between learning something new and the test of your ability to recall it later than you would if you spent the same amount of time awake.

Most researchers agree “with the overall conclusion that one of the ways sleep works is by enhancing the memory of important bits of new information and clearing out unnecessary or competing bits, and then passing the good bits on to be integrated into existing memory circuits.” This happens in two steps.

The first is in early NREM sleep when the brain circuits that were active while we were learning something new, a motor skill, say, or a new language, are reactivated and stay active until REM sleep occurs. In REM sleep, these new bits of information are then matched to older related memories already stored in long-term memory networks. This causes the new learning to stick (to be consolidated) and to remain accessible for when we need it later in waking.

As for the effect of alcohol before sleep, Carlyle Smith, a Canadian psychologist, found that it reduces memory formation by “reducing the number of rapid eye movements” in REM sleep. These eye movements, similar to the ones we make while reading, are how we scan visual information.

The mind is active 24 hours a day:

If the mind is truly working continuously, during all 24 hours of the day, it is not in its conscious mode during the time spent asleep. That time belongs to the unconscious. In waking, the two types of cognition, conscious and unconscious, are working sometimes in parallel, but also often interacting. They may alternate, depending on our focus of attention and the presence of an explicit goal. If we get bored or sleepy, we can slip into a third mode of thought, daydreaming. These thoughts can be recalled when we return to conscious thinking, which is not generally true of unconscious cognition unless we are caught in the act in the sleep lab. This third in-between state is variously called the preconscious or subconscious, and has been studied in a few investigations of what is going on in the mind during the transition before sleep onset.

Toward the end, Cartwright sums up the role of sleep:

[I]n good sleepers, the mind is continuously active, reviewing experience from yesterday, sorting which new information is relevant and important to save due to its emotional saliency. Dreams are not without sense, nor are they best understood to be expressions of infantile wishes. They are the result of the interconnectedness of new experience with that already stored in memory networks. But memory is never a precise duplicate of the original; instead, it is a continuing act of creation. Dream images are the product of that creation. They are formed by pattern recognition between some current emotionally valued experience matching the condensed representation of similarly toned memories. Networks of these become our familiar style of thinking, which gives our behavior continuity and us a coherent sense of who we are. Thus, dream dimensions are elements of the schemas, and both represent accumulated experience and serve to filter and evaluate the new day’s input.

Sleep is a busy time, interweaving streams of thought with emotional values attached, as they fit or challenge the organizational structure that represents our identity. One function of all this action, I believe, is to regulate disturbing emotion in order to keep it from disrupting our sleep and subsequent waking functioning. In this book, I have offered some tests of that hypothesis by considering what happens to this process of down-regulation within the night when sleep is disordered in various ways.

Cartwright develops several themes throughout The Twenty-four Hour Mind. First is that the mind is continuously active. Second is the role of emotion in “carrying out the collaboration of the waking and sleeping mind.” This includes exploring whether the sleeping mind “contributes to resolving emotional turmoil stirred up by some real anxiety inducing circumstance.” Third is how sleeping contributes to how new learning is retained. Accumulated experiences serve to filter and evaluate the new day’s input.

Competition, Cooperation, and the Selfish Gene

Richard Dawkins wrote one of the best-selling books of all time for a serious piece of scientific writing.

Often labeled “pop science”, The Selfish Gene pulls together the “gene-centered” view of evolution: It is not really individuals being selected for in the competition for life, but their genes. The individual bodies (phenotypes) are simply carrying out the instructions of the genes. This leads most people to a very “competition focused” view of life. But is that all?

***

More than 100 years before The Selfish Gene, Charles Darwin had famously outlined his Theory of Natural Selection in The Origin of Species.

We’re all hopefully familiar with this concept: Species evolve over long periods of time through a process of heredity, variation, competition, and differential survival.

The mechanism of heredity was invisible to Darwin, but a series of scientists, not without a little argument, had figured it out by the 1970s: Stretches of the DNA molecule (“genes”) encoded instructions for the building of physical structures. These genes were passed on to offspring in a particular way – the process of heredity. Advantageous genes were propagated in greater numbers. Disadvantageous genes, vice versa.

The Selfish Gene makes a particular kind of case: Specific gene variants grow in proportion to a gene pool by, on average, creating advantaged physical bodies and brains. The genes do their work through “phenotypes” – the physical representation of their information. As Helena Cronin would put it in her book The Ant and the Peacock, “It is the net selective value of a gene's phenotypic effect that determines the fate of the gene.”

This take on the evolutionary process became influential because of the range of hard-to-explain behavior it illuminated.

Why do we see altruistic behavior? Because copies of genes are present throughout a population, not just in single individuals, and altruism can cause great advantages in those gene variants surviving and thriving. (In other words, genes that cause individuals to sacrifice themselves for other copies of those same genes will tend to thrive.)

Why do we see more altruistic behavior among family members? Because they are closely related, and share more genes!

Many problems seemed to be solved here, and the Selfish Gene model became one for the ages, worth having in your head.

However, buried in the logic of the gene-centered view of evolution is a statistical argument. Gene variants rapidly grow in proportion to the rest of the gene pool because they provide survival advantages in the average environment that the gene will experience over its existence. Thus, advantageous genes “selfishly” dominate their environment before long. It's all about gene competition.

This has led many people, some biologists especially, to view evolution solely through the lens of competition. Unsurprisingly, this also led to some false paradigms about a strictly “dog eat dog” world where unrestricted and ruthless individual competition is deemed “natural”.

But what about cooperation?

***

The complex systems researcher Yaneer Bar-Yam argues that not only is the Selfish Gene a limiting concept biologically and possibly wrong mathematically (too complex to address here, but if you want to read about it, check out these pieces), but that there are more nuanced ways to understand the way competition and cooperation comfortably coexist. Not only that, but Bar-Yam argues that this has implications for optimal team formation.

In his book Making Things Work, Bar-Yam lays out a basic message: Even in the biological world, competition is a limited lens through which to see evolution. There’s always a counterbalance of cooperation.

Counter to the traditional perspective, the basic message of this and the following chapter is that competition and cooperation always coexist. People see them as opposing and incompatible forces. I think that this is a result of an outdated and one-sided understanding of evolution…This is extremely useful in describing nature and society; the basic insight that “what works, works” still holds. It turns out, however, that what works is a combination of competition and cooperation.

Bar-Yam uses the analogy of a sports team which exists in context of a sports league – let’s say the NBA. Through this lens we can see why players, teams, and leagues compete and cooperate. (The obvious analogy is that genes, individuals, and groups compete and cooperate in the biological world.)

In general, when we think about the conflict between cooperation and competition in team sports, we tend to think about the relationships between the players on a team. We care deeply about their willingness to cooperate and we distinguish cooperative “team players” from selfish non-team players, complaining about the latter even when their individual skill is formidable.

The reason we want players to cooperate is so that they can compete better as a team. Cooperation at the level of the individual enables effective competition at the level of the group, and conversely, the competition between teams motivates cooperation between players. There is a constructive relationship between cooperation and competition when they operate at different levels of organization.

The interplay between levels is a kind of evolutionary process where competition at the team level improves the cooperation between players. Just as in biological evolution, in organized team sports there is a process of selection of winners through competition of teams. Over time, the teams will change how they behave; the less successful teams will emulate strategies of teams that are doing well.

At every level then, there is an interplay between cooperation and competition. Players compete for playing time, and yet must be intensively cooperative on the court to compete with other teams. At the next level up, teams compete with each other for victories, and yet must cooperate intensively to sustain a league at all.

They create agreed upon rules, schedule times to play, negotiate television contracts, and so on. This allows the league itself to compete with other leagues for scarce attention from sports fans. And so on, up and down the ladder.

Competition among players, teams, and leagues is certainly a crucial dynamic. But it isn’t all that’s going on: They’re cooperating intensely at every level, because a group of selfish individuals loses to a group of cooperative ones.

And it is the same among biological species. Genes are competing with each other, as are individuals, tribes, and species. Yet at every level, they are also cooperating. The success of the human species is clearly due to its ability to cooperate in large numbers; and yet any student of war can attest to its deadly competitive nature. Similar dynamics are at play with ants, rats, and chimpanzees, among other species of insect and animal. It’s a yin and yang world.
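The logic behind “a group of selfish individuals loses to a group of cooperative ones” is exactly the structure of the classic public-goods game from game theory. Here is a minimal sketch in Python; the game, its multiplier, and the group sizes are our illustrative choices, not Bar-Yam's:

```python
def payoffs(cooperators, group_size, r=3.0):
    """One round of a public-goods game: each cooperator pays 1 into a pot,
    the pot is multiplied by r, and the total is split equally among everyone."""
    share = r * cooperators / group_size
    return share - 1, share   # (cooperator's net, defector's net)

# Within a mixed group, the selfish player does better...
coop_net, defect_net = payoffs(cooperators=4, group_size=5)
print(coop_net, defect_net)        # the free-rider out-earns each cooperator

# ...but a group of cooperators out-earns a group of defectors.
all_coop_net, _ = payoffs(5, 5)    # every member nets r - 1 = 2.0
_, all_defect_net = payoffs(0, 5)  # every member nets 0.0
print(all_coop_net, all_defect_net)
```

Defection wins within a group while cooperation wins between groups, which is why, as Bar-Yam argues, the two forces coexist at every level.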

Bar-Yam thinks this has great implications for how to build successful teams.

Teams will improve naturally – in any organization – when they are involved in a competition that is structured to select those teams that are better at cooperation. Winners of a competition become successful models of behavior for less successful teams, who emulate their success by learning their strategies and by selecting and trading team members.

For a business, a society, or any other complex system made up of many individuals, this means that improvement will come when the system’s structure involves a competition that rewards successful groups. The idea here is not a cutthroat competition of teams (or individuals) but a competition with rules that incorporate some cooperative activity with a mutual goal.

The dictum that “politics is the art of marshaling hatreds” would seem to reflect this notion: a non-violent way for cooperative groups to compete for dominance. As would the incentive systems of majorly successful corporations like Nucor and the best hospital systems, like the Mayo Clinic. Even modern business books are picking up on it.

Individual competition is important and drives excellence. Yet, as Bar-Yam points out, it’s ultimately not a complete formula. Having teams compete is more effective: You need to harness competition and cooperation at every level. You want groups pulling together, creating emergent effects where the whole is greater than the sum of the parts (a recurrent theme throughout nature).

You should read his book for more details on both this idea and the concept of complex systems in general. Bar-Yam also elaborated on his sports analogy in a white-paper here. If you're interested in complex systems, check out this post on frozen accidents. Also, for more on creating better groups, check out how Steve Jobs did it.

Scientific Concepts We All Ought To Know

John Brockman's online scientific roundtable Edge.org does something fantastic every year: It asks all of its contributors (hundreds of them) to answer one meaningful question. Questions like What Have You Changed Your Mind About? and What is Your Dangerous Idea?

This year's was particularly awesome for our purposes: What Scientific Term or Concept Ought To Be More Known?

The answers give us a window into over 200 brilliant minds, with the simple filtering mechanism that there's something they know that we should probably know, too. We wanted to highlight a few of our favorites for you.

***

From Steven Pinker, a very interesting thought on The Second Law of Thermodynamics (Entropy). This reminded me of the central thesis of The Origin of Wealth by Eric Beinhocker. (Which we'll cover in more depth in the future: We referenced his work in the past.)


The Second Law of Thermodynamics states that in an isolated system (one that is not taking in energy), entropy never decreases. (The First Law is that energy is conserved; the Third, that a temperature of absolute zero is unreachable.) Closed systems inexorably become less structured, less organized, less able to accomplish interesting and useful outcomes, until they slide into an equilibrium of gray, tepid, homogeneous monotony and stay there.

In its original formulation the Second Law referred to the process in which usable energy in the form of a difference in temperature between two bodies is dissipated as heat flows from the warmer to the cooler body. Once it was appreciated that heat is not an invisible fluid but the motion of molecules, a more general, statistical version of the Second Law took shape. Now order could be characterized in terms of the set of all microscopically distinct states of a system: Of all these states, the ones that we find useful make up a tiny sliver of the possibilities, while the disorderly or useless states make up the vast majority. It follows that any perturbation of the system, whether it is a random jiggling of its parts or a whack from the outside, will, by the laws of probability, nudge the system toward disorder or uselessness. If you walk away from a sand castle, it won’t be there tomorrow, because as the wind, waves, seagulls, and small children push the grains of sand around, they’re more likely to arrange them into one of the vast number of configurations that don’t look like a castle than into the tiny few that do.

The Second Law of Thermodynamics is acknowledged in everyday life, in sayings such as “Ashes to ashes,” “Things fall apart,” “Rust never sleeps,” “Shit happens,” “You can’t unscramble an egg,” “What can go wrong will go wrong,” and (from the Texas lawmaker Sam Rayburn), “Any jackass can kick down a barn, but it takes a carpenter to build one.”

Scientists appreciate that the Second Law is far more than an explanation for everyday nuisances; it is a foundation of our understanding of the universe and our place in it. In 1915 the physicist Arthur Eddington wrote:

[…]

Why the awe for the Second Law? The Second Law defines the ultimate purpose of life, mind, and human striving: to deploy energy and information to fight back the tide of entropy and carve out refuges of beneficial order. An underappreciation of the inherent tendency toward disorder, and a failure to appreciate the precious niches of order we carve out, are a major source of human folly.

To start with, the Second Law implies that misfortune may be no one’s fault. The biggest breakthrough of the scientific revolution was to nullify the intuition that the universe is saturated with purpose: that everything happens for a reason. In this primitive understanding, when bad things happen—accidents, disease, famine—someone or something must have wanted them to happen. This in turn impels people to find a defendant, demon, scapegoat, or witch to punish. Galileo and Newton replaced this cosmic morality play with a clockwork universe in which events are caused by conditions in the present, not goals for the future. The Second Law deepens that discovery: Not only does the universe not care about our desires, but in the natural course of events it will appear to thwart them, because there are so many more ways for things to go wrong than to go right. Houses burn down, ships sink, battles are lost for the want of a horseshoe nail.

Poverty, too, needs no explanation. In a world governed by entropy and evolution, it is the default state of humankind. Matter does not just arrange itself into shelter or clothing, and living things do everything they can not to become our food. What needs to be explained is wealth. Yet most discussions of poverty consist of arguments about whom to blame for it.

More generally, an underappreciation of the Second Law lures people into seeing every unsolved social problem as a sign that their country is being driven off a cliff. It’s in the very nature of the universe that life has problems. But it’s better to figure out how to solve them—to apply information and energy to expand our refuge of beneficial order—than to start a conflagration and hope for the best.
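Pinker's sand-castle point, that random perturbation almost always nudges a system toward one of the vastly more numerous disordered states, can be seen in a toy simulation. Counting out-of-order pairs in a list is our crude stand-in for entropy here, not Pinker's:

```python
import random

random.seed(1)

def disorder(seq):
    """Count out-of-order pairs (inversions), a simple stand-in for entropy."""
    return sum(a > b for i, a in enumerate(seq) for b in seq[i + 1:])

castle = list(range(20))      # the "sand castle": one of the tiny few ordered states
print(disorder(castle))       # 0
for _ in range(1000):         # random jiggling of its parts
    i, j = random.sample(range(20), 2)
    castle[i], castle[j] = castle[j], castle[i]
print(disorder(castle))       # typically near 95, the average over all arrangements
```

There is exactly one arrangement with zero inversions but an enormous number with many, so undirected shuffling drifts toward disorder for purely probabilistic reasons.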

Richard Nisbett (a social psychologist) has a great one — a concept we've hit on before but is totally underappreciated by most people: The Fundamental Attribution Error.

Modern scientific psychology insists that explanation of the behavior of humans always requires reference to the situation the person is in. The failure to do so sufficiently is known as the Fundamental Attribution Error. In Milgram’s famous obedience experiment, two-thirds of his subjects proved willing to deliver a great deal of electric shock to a pleasant-faced middle-aged man, well beyond the point where he became silent after begging them to stop on account of his heart condition. When I teach about this experiment to undergraduates, I’m quite sure I’ve never convinced a single one that their best friend might have delivered that amount of shock to the kindly gentleman, let alone that they themselves might have done so. They are protected by their armor of virtue from such wicked behavior. No amount of explanation about the power of the unique situation into which Milgram’s subject was placed is sufficient to convince them that their armor could have been breached.

My students, and everyone else in Western society, are confident that people behave honestly because they have the virtue of honesty, conscientiously because they have the virtue of conscientiousness. (In general, non-Westerners are less susceptible to the fundamental attribution error, lacking as they do sufficient knowledge of Aristotle!) People are believed to behave in an open and friendly way because they have the trait of extroversion, in an aggressive way because they have the trait of hostility. When they observe a single instance of honest or extroverted behavior they are confident that, in a different situation, the person would behave in a similarly honest or extroverted way.

In actual fact, when large numbers of people are observed in a wide range of situations, the correlation for trait-related behavior runs about .20 or less. People think the correlation is around .80. In reality, seeing Carlos behave more honestly than Bill in a given situation increases the likelihood that he will behave more honestly in another situation from the chance level of 50 percent to the vicinity of 55-57 percent. People think that if Carlos behaves more honestly than Bill in one situation the likelihood that he will behave more honestly than Bill in another situation is 80 percent!

How could we be so hopelessly miscalibrated? There are many reasons, but one of the most important is that we don’t normally get trait-related information in a form that facilitates comparison and calculation. I observe Carlos in one situation when he might display honesty or the lack of it, and then not in another for perhaps a few weeks or months. I observe Bill in a different situation tapping honesty, and then not in another for many months.

This implies that if people received behavioral data in such a form that many people are observed over the same time course in a given fixed situation, our calibration might be better. And indeed it is. People are quite well calibrated for abilities of various kinds, especially sports. The likelihood that Bill will score more points than Carlos in one basketball game given that he did in another is about 67 percent—and people think it’s about 67 percent.

Our susceptibility to the fundamental attribution error—overestimating the role of traits and underestimating the importance of situations—has implications for everything from how to select employees to how to teach moral behavior.

Cesar Hidalgo, author of what looks like an awesome book, Why Information Grows, wrote about criticality, a concept central to understanding complex systems:

In physics we say a system is in a critical state when it is ripe for a phase transition. Consider water turning into ice, or a cloud that is pregnant with rain. Both of these are examples of physical systems in a critical state.

The dynamics of criticality, however, are not very intuitive. Consider the abruptness of freezing water. For an outside observer, there is no difference between cold water and water that is just about to freeze. This is because water that is just about to freeze is still liquid. Yet, microscopically, cold water and water that is about to freeze are not the same.

When close to freezing, water is populated by gazillions of tiny ice crystals, crystals that are so small that water remains liquid. But this is water in a critical state, a state in which any additional freezing will result in these crystals touching each other, generating the solid mesh we know as ice. Yet, the ice crystals that formed during the transition are infinitesimal. They are just the last straw. So, freezing cannot be considered the result of these last crystals. They only represent the instability needed to trigger the transition; the real cause of the transition is the criticality of the state.

But why should anyone outside statistical physics care about criticality?

The reason is that history is full of individual narratives that maybe should be interpreted in terms of critical phenomena.

Did Rosa Parks start the civil rights movement? Or was the movement already running in the minds of those who had been promised equality and were instead handed discrimination? Was the collapse of Lehman Brothers an essential trigger for the Great Recession? Or was the financial system so critical that any disturbance could have done the trick?

As humans, we love individual narratives. We evolved to learn from stories and communicate almost exclusively in terms of them. But as Richard Feynman said repeatedly: The imagination of nature is often larger than that of man. So, maybe our obsession with individual narratives is nothing but a reflection of our limited imagination. Going forward we need to remember that systems often make individuals irrelevant. Just like none of your cells can claim to control your body, society also works in systemic ways.

So, the next time the house of cards collapses, remember to focus on why we were building a house of cards in the first place, instead of focusing on whether the last card was the queen of diamonds or a two of clubs.

The psychologist Adam Alter has another good one on a concept we all naturally miss from time to time, due to the structure of our minds: the Law of Small Numbers.

In 1832, a Prussian military analyst named Carl von Clausewitz explained that “three quarters of the factors on which action in war is based are wrapped in a fog of . . . uncertainty.” The best military commanders seemed to see through this “fog of war,” predicting how their opponents would behave on the basis of limited information. Sometimes, though, even the wisest generals made mistakes, divining a signal through the fog when no such signal existed. Often, their mistake was endorsing the law of small numbers—too readily concluding that the patterns they saw in a small sample of information would also hold for a much larger sample.

Both the Allies and Axis powers fell prey to the law of small numbers during World War II. In June 1944, Germany flew several raids on London. War experts plotted the position of each bomb as it fell, and noticed one cluster near Regent’s Park, and another along the banks of the Thames. This clustering concerned them, because it implied that the German military had designed a new bomb that was more accurate than any existing bomb. In fact, the Luftwaffe was dropping bombs randomly, aiming generally at the heart of London but not at any particular location over others. What the experts had seen were clusters that occur naturally through random processes—misleading noise masquerading as a useful signal.

That same month, German commanders made a similar mistake. Anticipating the raid later known as D-Day, they assumed the Allies would attack—but they weren’t sure precisely when. Combing old military records, a weather expert named Karl Sonntag noticed that the Allies had never launched a major attack when there was even a small chance of bad weather. Late May and much of June were forecast to be cloudy and rainy, which “acted like a tranquilizer all along the chain of German command,” according to Irish journalist Cornelius Ryan. “The various headquarters were quite confident that there would be no attack in the immediate future. . . . In each case conditions had varied, but meteorologists had noted that the Allies had never attempted a landing unless the prospects of favorable weather were almost certain.” The German command was mistaken, and on Tuesday, June 6, the Allied forces launched a devastating attack amidst strong winds and rain.

The British and German forces erred because they had taken a small sample of data too seriously: The British forces had mistaken the natural clustering that comes from relatively small samples of random data for a useful signal, while the German forces had mistaken an illusory pattern from a limited set of data for evidence of an ongoing, stable military policy. To illustrate their error, imagine a fair coin tossed three times. You’ll have a one-in-four chance of turning up a string of three heads or tails, which, if you make too much of that small sample, might lead you to conclude that the coin is biased to reveal one particular outcome all or almost all of the time. If you continue to toss the fair coin, say, a thousand times, you’re far more likely to turn up a distribution that approaches five hundred heads and five hundred tails. As the sample grows, your chance of turning up an unbroken string shrinks rapidly (to roughly one-in-sixteen after five tosses; one-in-five-hundred after ten tosses; and one-in-five-hundred-thousand after twenty tosses). A string is far better evidence of bias after twenty tosses than it is after three tosses—but if you succumb to the law of small numbers, you might draw sweeping conclusions from even tiny samples of data, just as the British and Germans did about their opponents’ tactics in World War II.

Of course, the law of small numbers applies to more than military tactics. It explains the rise of stereotypes (concluding that all people with a particular trait behave the same way); the dangers of relying on a single interview when deciding among job or college applicants (concluding that interview performance is a reliable guide to job or college performance at large); and the tendency to see short-term patterns in financial stock charts when in fact short-term stock movements almost never follow predictable patterns. The solution is to pay attention not just to the pattern of data, but also to how much data you have. Small samples aren’t just limited in value; they can be counterproductive because the stories they tell are often misleading.
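The coin arithmetic in the excerpt is easy to verify: an unbroken string of n heads or n tails is 2 favorable sequences out of 2^n equally likely ones, so the odds are 1 in 2^(n-1). A quick check (the excerpt rounds the larger values):

```python
def p_unbroken(n: int) -> float:
    # An all-heads or all-tails run: 2 favorable sequences out of 2**n.
    return 2 / 2 ** n

for n in (3, 5, 10, 20):
    print(f"{n:2d} tosses: 1 in {2 ** (n - 1):,}")
# 3 tosses: 1 in 4; 5 tosses: 1 in 16; 10 tosses: 1 in 512; 20 tosses: 1 in 524,288
```

So the quoted "one-in-five-hundred" and "one-in-five-hundred-thousand" are rounded from 1 in 512 and 1 in 524,288, and the evidence for bias really does grow by orders of magnitude with the sample.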

There are many, many more worth reading. Here's a great chance to build your multidisciplinary skill-set.

Who’s in Charge of Our Minds? The Interpreter

One of the most fascinating discoveries of modern neuroscience is that the brain is a collection of distinct modules (grouped, highly connected neurons) performing specific functions rather than a unified system.

We'll get to why this is so important when we introduce The Interpreter later on.

This modular organization of the human brain is considered one of the key properties that sets us apart from animals. So much so that it has displaced the theory that our uniqueness stems from disproportionately bigger brains for our body size.

As neuroscientist Dr. Michael Gazzaniga points out in his wonderful book Who's In Charge? Free Will and the Science of the Brain, in terms of numbers of cells, the human brain is a proportionately scaled-up primate brain: It is what is expected for a primate of our size and does not possess relatively more neurons. Researchers also found that the ratio between nonneuronal brain cells and neurons in human brain structures is similar to those found in other primates.

So it's not the size of our brains or the number of neurons, it's about the patterns of connectivity. As brains scaled up from insect to small mammal to larger mammal, they had to re-organize, for the simple reason that billions of neurons cannot all be connected to one another — some neurons would be way too far apart and too slow to communicate. Our brains would be gigantic and require a massive amount of energy to function.

Instead, our brain specializes and localizes. As Dr. Gazzaniga puts it, “Small local circuits, made of an interconnected group of neurons, are created to perform specific processing jobs and become automatic.” This is an important advance in our efforts to understand the mind.

Dr. Gazzaniga is most famous for his work studying split-brain patients, where many of the discoveries we're talking about were refined and explored. Split-brain patients give us a natural controlled experiment to find out “what the brain is up to” — and more importantly, how it does its work. What Gazzaniga and his co-researchers found was fascinating.

Emergence

We experience our conscious mind as a single unified thing. But if Gazzaniga & company are right, it most certainly isn't. How could a “specialized and localized” modular brain give rise to the feeling of “oneness” we feel so strongly about? It would seem there are too many things going on separately and locally:

Our conscious awareness is the mere tip of the iceberg of nonconscious processing. Below our level of awareness is the very busy nonconscious brain hard at work. Not hard for us to imagine are the housekeeping jobs the brain performs, constantly struggling to keep homeostatic mechanisms up and running, such as our heart beating, our lungs breathing, and our temperature just right. Less easy to imagine, but being discovered left and right over the past fifty years, are the myriads of nonconscious processes smoothly putt-putting along. Think about it.

To begin with there are all the automatic visual and other sensory processing we have talked about. In addition, our minds are always being unconsciously biased by positive and negative priming processes, and influenced by category identification processes. In our social world, coalitionary bonding processes, cheater detection processes, and even moral judgment processes (to name only a few) are cranking away below our conscious mechanisms. With increasingly sophisticated testing methods, the number and diversity of identified processes is only going to multiply.

So what's going on? Who's controlling all this stuff? The idea is that the brain works more like traffic than a car. No one is controlling it!

It's due to a principle of complex systems called emergence, and it explains why all of these “specialized and localized” processes can give rise to what seems like a unified mind.

The key to understanding emergence is to understand that there are different levels of organization. My favorite analogy is that of the car, which I have mentioned before. If you look at an isolated car part, such as a cam shaft, you cannot predict that the freeway will be full of traffic at 5:15 PM, Monday through Friday. In fact, you could not even predict that the phenomenon of traffic would occur if you just looked at a brake pad. You cannot analyze traffic at the level of car parts. Did the guy who invented the wheel ever visualize the 405 in Los Angeles on Friday evening? You cannot even analyze traffic at the level of the individual car. When you get a bunch of cars and drivers together, with the variables of location, time, weather, and society, all in the mix, then at that level you can predict traffic. A new set of laws emerges that isn’t predicted from the parts alone.

Emergence, Gazzaniga goes on, is how to understand the brain. Sub-atomic particles, atoms, molecules, cells, neurons, modules, the mind, and a collection of minds (a society) are all different levels of organization, with their own laws that cannot necessarily be predicted from the properties of the level below.

The unified mind we feel present emerges from the thousands of lower-level processes operating in parallel. Most of it is so automatic that we have no idea it's going on. (Not only does the mind work bottom-up, but top-down processes also influence it. In other words, what you think influences what you see and hear.)

And when we do start consciously explaining what's going on — or trying to — we start getting very interesting results. The part of our brain that seeks explanations and infers causality turns out to be a quirky little beast.

The Interpreter

Let's say you were to see a snake and jump back, automatically and quickly. Did you choose that action? If asked, you'd almost certainly say so, but the truth is more complicated.

If you were to have asked me why I jumped, I would have replied that I thought I’d seen a snake. That answer certainly makes sense, but the truth is I jumped before I was conscious of the snake: I had seen it, but I didn’t know I had seen it. My explanation is from post hoc information I have in my conscious system: The facts are that I jumped and that I saw a snake. The reality, however, is that I jumped way before (in a world of milliseconds) I was conscious of the snake. I did not make a conscious decision to jump and then consciously execute it. When I answered that question, I was, in a sense, confabulating: giving a fictitious account of a past event, believing it to be true. The real reason I jumped was an automatic nonconscious reaction to the fear response set into play by the amygdala. The reason I would have confabulated is that our human brains are driven to infer causality. They are driven to explain events that make sense out of the scattered facts. The facts that my conscious brain had to work with were that I saw a snake, and I jumped. It did not register that I jumped before I was consciously aware of the snake.

Here's how it works: A thing happens, we react, we feel something about it, and then we go on explaining it. Sensory information is fed into an explanatory module which Gazzaniga calls The Interpreter, and studying split-brain patients showed him that it resides in the left hemisphere of the brain.

With that knowledge, Gazzaniga and his team were able to do all kinds of clever things to show how ridiculous our Interpreter can often be, especially in split-brain patients.

Take this case of a split-brain patient unconsciously making up a nonsense story when his two hemispheres are shown different images and he is instructed to choose a related image from a group of pictures. Read carefully:

We showed a split-brain patient two pictures: A chicken claw was shown to his right visual field, so the left hemisphere only saw the claw picture, and a snow scene was shown to the left visual field, so the right hemisphere saw only that. He was then asked to choose a picture from an array of pictures placed in full view in front of him, which both hemispheres could see.

The left hand pointed to a shovel (which was the most appropriate answer for the snow scene) and the right hand pointed to a chicken (the most appropriate answer for the chicken claw). Then we asked why he chose those items. His left-hemisphere speech center replied, “Oh, that's simple. The chicken claw goes with the chicken,” easily explaining what it knew. It had seen the chicken claw.

Then, looking down at his left hand pointing to the shovel, without missing a beat, he said, “And you need a shovel to clean out the chicken shed.” Immediately, the left brain, observing the left hand's response without the knowledge of why it had picked that item, put it into a context that would explain it. It interpreted the response in a context consistent with what it knew, and all it knew was: Chicken claw. It knew nothing about the snow scene, but it had to explain the shovel in his left hand. Well, chickens do make a mess, and you have to clean it up. Ah, that's it! Makes sense.

What was interesting was that the left hemisphere did not say, “I don't know,” which truly was the correct answer. It made up a post hoc answer that fit the situation. It confabulated, taking cues from what it knew and putting them together in an answer that made sense.

The left hand, responding to the snow scene Gazzaniga covertly showed the left visual field, pointed to the snow shovel. This all took place in the right hemisphere of the brain (think of it like an “X” — the right hemisphere controls the left side of the body and vice versa). But since it was a split-brain patient, the left hemisphere was not given any of the information about snow.

And yet, the left hemisphere is where the Interpreter resides! So what did the Interpreter do when asked to explain why the shovel was chosen, having no information about snow, only about chickens? It made up a story about shoveling chicken coops!

Gazzaniga goes on to explain several cases of being able to fool the left brain Interpreter over and over, and in often subtle ways.

***

This left-brain module is what we use to explain causality, seeking it for its own sake. The Interpreter, like all of our mental modules, is a wonderful adaptation that's led us to understand and explain causality and the world around us, to our great advantage. But as any good student of social psychology knows, if we have nothing solid to go on, we'll simply make up a plausible story — leading to a narrative fallacy.

This leads to odd results that seem pretty maladaptive, like our tendency to gamble like idiots. (Charlie Munger calls this mis-gambling compulsion.) But outside of the artifice of the casino, the Interpreter works quite well.

But here's the catch. In the words of Gazzaniga, “The interpreter is only as good as the information it gets.”

The interpreter receives the results of the computations of a multitude of modules. It does not receive the information that there are multitudes of modules. It does not receive the information about how the modules work. It does not receive the information that there is a pattern-recognition system in the right hemisphere. The interpreter is a module that explains events from the information it does receive.

[…]

The interpreter is receiving data from the domains that monitor the visual system, the somatosensory system, the emotions, and cognitive representations. But as we just saw above, the interpreter is only as good as the information it receives. Lesions or malfunctions in any one of these domain-monitoring systems leads to an array of peculiar neurological conditions that involve the formation of either incomplete or delusional understandings about oneself, other individuals, objects, and the surrounding environment, manifesting in what appears to be bizarre behavior. It no longer seems bizarre, however, once you understand that such behaviors are the result of the interpreter getting no, or bad, information.

This can account for a lot of the ridiculous behavior and ridiculous narratives we see around us. The Interpreter must deal with what it's given, and as Gazzaniga's work shows, it can be manipulated and tricked. He calls it “hijacking” — and when the Interpreter is hijacked, it makes pretty bad decisions and generates strange explanations.

Anyone who's watched a friend acting hilariously when wearing a modern VR headset can see how easy it is to “hijack” one's sensory perceptions even if the conscious brain “knows” that it's not real. And of course, Robert Cialdini once famously described this hijacking process as a “click, whirr” reaction to social stimuli. It's a powerful phenomenon.

***

What can we learn from this?

The story of the multi-modular mind and the Interpreter module shows us that the brain does not have a rational “central command station” — your mind is at the mercy of what it's fed. The Interpreter is constantly weaving a story of what's going on around us, applying causal explanations to the data it's being fed; doing the best job it can with what it's got.

This is generally useful: a few thousand generations of data have honed our modules to understand the world well enough to keep us surviving and thriving. The job of the brain is to pass on our genes. But that doesn't mean it's always making optimal decisions in the modern world.

We must realize that our brain can be fooled; it can be tricked, played with, and we won't always realize it immediately. Our Interpreter will weave a plausible story — that's its job.

For this reason, Charlie Munger employs a “two track” analysis: What are the facts; and where is my brain fooling me? We're wise to follow suit.

A Cascade of Sand: Complex Systems in a Complex Time

We live in a world filled with rapid change: governments topple, people rise and fall, and technology has created a connectedness the world has never experienced before. Joshua Cooper Ramo believes this environment has created an “avalanche of ceaseless change.”

In his book, The Age of the Unthinkable: Why the New World Disorder Constantly Surprises Us And What We Can Do About It he outlines what this new world looks like and gives us prescriptions on how best to deal with the disorder around us.

Ramo believes that we are entering a revolutionary age that will render seemingly fortified institutions weak, and weak movements strong. He feels we aren’t well prepared for these radical shifts, as those in positions of power tend to hold antiquated ideologies when dealing with issues. Generally, they treat anything complex as one-dimensional.

Unfortunately, whether they are running corporations or foreign ministries or central banks, some of the best minds of our era are still in thrall to an older way of seeing and thinking. They are making repeated misjudgments about the world. In a way, it’s hard to blame them. Mostly they grew up at a time when the global order could largely be understood in simpler terms, when only nations really mattered, when you could think there was a predictable relationship between what you wanted and what you got. They came of age as part of a tradition that believed all international crises had beginnings and, if managed well, ends.

This is one of the main flaws of traditional thinking about managing conflict/change: we identify a problem, decide on a path forward, and implement that solution. We think in linear terms and see a finish line once the specific problem we have discovered is ‘solved.’

In this day and age (and probably in all days and ages, whether they realized it or not) we have to accept that the finish line is constantly moving and that, in fact, there never will be a finish line. Solving one problem may fix an issue for a time but it tends to also illuminate a litany of new problems. (Many of which were likely already present but hiding under the old problem you just “fixed”.)

In fact, our actions in trying to solve X will sometimes have a cascade effect because the world is actually a series of complex and interconnected systems.

Some great thinkers have spoken about these problems in the past. Ramo highlights some interesting quotes from the Nobel Prize speech that Austrian economist Friedrich August von Hayek gave in 1974, entitled The Pretence of Knowledge.

To treat complex phenomena as if they were simple, to pretend that you could hold the unknowable in the cleverly crafted structure of your ideas —he could think of nothing that was more dangerous. “There is much reason,” Hayek said, “to be apprehensive about the long-run dangers created in a much wider field by the uncritical acceptance of assertions which have the appearance of being scientific.”

Concluding his Nobel speech, Hayek warned, “If man is not to do more harm than good in his efforts to improve the social order, he will have to learn that in this, as in all other fields where essential complexity of an organized kind prevails, he cannot acquire the full knowledge which would make mastery of the events possible.” Politicians and thinkers would be wise not to try to bend history as “the craftsman shapes his handiwork, but rather to cultivate growth by providing the appropriate environment, in the manner a gardener does for his plants.”

This is an important distinction: the idea that we need to be gardeners instead of craftsmen. When we are merely creating something we have a sense of control; we have a plan and an end state. When the shelf is built, it's built.

Being a gardener is different. You have to prepare the environment; you have to nurture the plants and know when to leave them alone. You have to make sure the environment is hospitable to everything you want to grow (different plants have different needs), and after the harvest you aren’t done. You need to turn the earth and, in essence, start again. There is no end state if you want something to grow.

* * *

So, if most of the threats we face today are so multifaceted and complex that we can’t use the majority of the strategies that have worked historically, how do we approach the problem? A Danish theoretical physicist named Per Bak had an interesting view of this, which he termed self-organized criticality, and it comes with an excellent experiment/metaphor that helps to explain the concept.

Bak’s research focused on answering the following question: if you created a cone of sand grain by grain, at what point would you create a little sand avalanche? This breakdown of the cone was inevitable but he wanted to know if he could somehow predict at what point this would happen.

Much like there is a precise temperature at which water starts to boil, Bak hypothesized there was a specific point where the stack became unstable, and at this point adding a single grain of sand could trigger the avalanche.

In his work, Bak came to realize that the sandpile was inherently unpredictable. He discovered that there were times, even when the pile had reached a critical state, that an additional grain of sand would have no effect:

“Complex behavior in nature,” Bak explained, “reflects the tendency of large systems to evolve into a poised ‘critical’ state, way out of balance, where minor disturbances may lead to events, called avalanches, of all sizes.” What Bak was trying to study wasn’t simply stacks of sand, but rather the underlying physics of the world. And this was where the sandpile got interesting. He believed that sandpile energy, the energy of systems constantly poised on the edge of unpredictable change, was one of the fundamental forces of nature. He saw it everywhere, from physics (in the way tiny particles amassed and released energy) to the weather (in the assembly of clouds and the hard-to-predict onset of rainstorms) to biology (in the stutter-step evolution of mammals). Bak’s sandpile universe was violent —and history-making. It wasn’t that he didn’t see stability in the world, but that he saw stability as a passing phase, as a pause in a system of incredible —and unmappable —dynamism. Bak’s world was like a constantly spinning revolver in a game of Russian roulette, one random trigger-pull away from explosion.

Traditionally our thinking is very linear; if we start thinking of systems as more like sandpiles, we shift into nonlinear thinking. This means we can no longer assume that a given action will produce a given reaction: it may or may not, depending on the precise initial conditions.
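Bak's sandpile is simple enough to simulate. The sketch below uses the standard Bak–Tang–Wiesenfeld rules (an assumption on my part; Ramo doesn't spell out the algorithm): drop grains one at a time onto a grid, and any cell reaching four grains topples, sending one grain to each neighbor, which can set off a chain reaction. Grid size, grain count, and seed are arbitrary choices:

```python
import random

def run_sandpile(size: int = 20, grains: int = 20000, seed: int = 1) -> list[int]:
    """Drop grains one at a time; return the avalanche size (number of
    topplings) caused by each drop. Grains pushed off the edge are lost."""
    random.seed(seed)
    grid = [[0] * size for _ in range(size)]
    avalanches = []
    for _ in range(grains):
        x, y = random.randrange(size), random.randrange(size)
        grid[x][y] += 1
        toppled = 0
        unstable = [(x, y)]
        while unstable:
            i, j = unstable.pop()
            if grid[i][j] < 4:
                continue
            grid[i][j] -= 4              # topple: shed four grains...
            toppled += 1
            for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                if 0 <= ni < size and 0 <= nj < size:
                    grid[ni][nj] += 1    # ...one to each in-bounds neighbor
                    unstable.append((ni, nj))
        avalanches.append(toppled)
    return avalanches

sizes = run_sandpile()
print("drops causing no avalanche:", sum(s == 0 for s in sizes))
print("largest avalanche:", max(sizes), "topplings")
```

Run it and the nonlinearity is visible: once the pile reaches its critical state, most drops still do nothing, while a rare drop triggers an avalanche spanning much of the grid. The same action, wildly different reactions, which is exactly the unpredictability Bak described.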

This dynamic sandpile energy demands that we accept the basic unpredictability of the global order —one of those intellectual leaps that sounds simple but that immediately junks a great deal of traditional thinking. It also produces (or should produce) a profound psychological shift in what we can and can’t expect from the world. Constant surprise and new ideas? Yes. Stable political order, less complexity, the survival of institutions built for an older world? No.

Ramo isn’t arguing that complex systems are incomprehensible and fundamentally flawed. These systems are manageable; they just require a divergence from the old ways of thinking, the linear way that didn’t account for all the invisible connections in the sand.

Look at something like the Internet; it’s a perfect example of a complex system with a seemingly infinite number of connections, but it thrives. This system is constantly bombarded with unexpected risk, but it is so malleable that it has yet to feel the force of an avalanche. The Internet was designed to thrive in a hostile environment and its complexity was embraced. Unfortunately, for every adaptive system like the Internet there seems to be a maladaptive one, so rigid it will surely break in a world of complexity.

The Age of the Unthinkable goes on to show us historical examples of systems that did indeed break; this helps to frame where we have been particularly fragile in the past and where the mistakes in our thinking may have been. In the back half of the book, Ramo outlines strategies he believes will help us become more Antifragile; he calls this “Deep Security.”

Implementing these strategies will likely be met with considerable resistance, as many people in positions of power benefit from the systems staying as they are. Revolutions are never easy but, as we’ve shown, even one grain of sand can have a huge impact.