

Who’s in Charge of Our Minds? The Interpreter

One of the most fascinating discoveries of modern neuroscience is that the brain is a collection of distinct modules (grouped, highly connected neurons) performing specific functions rather than a unified system.

We'll get to why this is so important when we introduce The Interpreter later on.

This modular organization of the human brain is considered one of the key properties that set us apart from animals. So much so, in fact, that it has displaced the older idea that our distinctiveness stems from having disproportionately bigger brains for our body size.

As neuroscientist Dr. Michael Gazzaniga points out in his wonderful book Who's In Charge? Free Will and the Science of the Brain, in terms of numbers of cells the human brain is a proportionately scaled-up primate brain: it is what is expected for a primate of our size and does not possess relatively more neurons. Researchers have also found that the ratio of nonneuronal brain cells to neurons in human brain structures is similar to the ratios found in other primates.

So it's not the size of our brains or the number of neurons that matters; it's the patterns of connectivity. As brains scaled up from insect to small mammal to larger mammal, they had to reorganize, for the simple reason that billions of neurons cannot all be connected to one another: some neurons would be too far apart and too slow to communicate, and our brains would have to be gigantic and require a massive amount of energy to function.

Instead, our brain specializes and localizes. As Dr. Gazzaniga puts it, “Small local circuits, made of an interconnected group of neurons, are created to perform specific processing jobs and become automatic.” This is an important advance in our efforts to understand the mind.

Dr. Gazzaniga is most famous for his work studying split-brain patients, where many of the discoveries we're talking about were refined and explored. Split-brain patients give us a natural controlled experiment to find out “what the brain is up to” — and more importantly, how it does its work. What Gazzaniga and his co-researchers found was fascinating.

Emergence

We experience our conscious mind as a single unified thing. But if Gazzaniga and company are right, it most certainly isn't. How could a “specialized and localized” modular brain give rise to the strong sense of “oneness” we experience? It would seem there are too many things going on separately and locally:

Our conscious awareness is the mere tip of the iceberg of nonconscious processing. Below our level of awareness is the very busy nonconscious brain hard at work. Not hard for us to imagine are the housekeeping jobs the brain performs as it constantly struggles to keep homeostatic mechanisms up and running, such as our heart beating, our lungs breathing, and our temperature just right. Less easy to imagine, but being discovered left and right over the past fifty years, are the myriads of nonconscious processes smoothly putt-putting along. Think about it.

To begin with there are all the automatic visual and other sensory processing we have talked about. In addition, our minds are always being unconsciously biased by positive and negative priming processes, and influenced by category identification processes. In our social world, coalitionary bonding processes, cheater detection processes, and even moral judgment processes (to name only a few) are cranking away below our conscious mechanisms. With increasingly sophisticated testing methods, the number and diversity of identified processes is only going to multiply.

So what's going on? Who's controlling all this stuff? The idea is that the brain works more like traffic than a car. No one is controlling it!

The answer lies in a principle of complex systems called emergence, and it explains why all of these “specialized and localized” processes can give rise to what seems like a unified mind.

The key to understanding emergence is to understand that there are different levels of organization. My favorite analogy is that of the car, which I have mentioned before. If you look at an isolated car part, such as a cam shaft, you cannot predict that the freeway will be full of traffic at 5:15 PM, Monday through Friday. In fact, you could not even predict the phenomenon of traffic would even occur if you just looked at a brake pad. You cannot analyze traffic at the level of car parts. Did the guy who invented the wheel ever visualize the 405 in Los Angeles on Friday evening? You cannot even analyze traffic at the level of the individual car. When you get a bunch of cars and drivers together, with the variables of location, time, weather, and society, all in the mix, then at that level you can predict traffic. A new set of laws emerge that aren't predicted from the parts alone.
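To make the traffic analogy concrete, here is a minimal sketch (our illustration, not from Gazzaniga's book) of a toy traffic simulation in the spirit of the Nagel-Schreckenberg model. Every rule is local to a single car, yet jams, a property of the whole road, appear on their own; the road size, car count, and braking probability are arbitrary choices for illustration.

```python
import random

# Toy traffic model (a simplified Nagel-Schreckenberg cellular automaton).
# Each car follows simple local rules; "traffic jams" are an emergent,
# system-level pattern that no individual rule mentions.

ROAD_LENGTH = 100   # cells on a circular road
NUM_CARS = 35
V_MAX = 5           # maximum speed (cells per step)
P_SLOW = 0.3        # chance a driver randomly brakes

random.seed(1)
positions = sorted(random.sample(range(ROAD_LENGTH), NUM_CARS))
speeds = [0] * NUM_CARS

def step(positions, speeds):
    n = len(positions)
    new_speeds = []
    for i in range(n):
        # Gap to the car ahead on the circular road (positions are sorted).
        gap = (positions[(i + 1) % n] - positions[i] - 1) % ROAD_LENGTH
        v = min(speeds[i] + 1, V_MAX)      # accelerate toward the speed limit
        v = min(v, gap)                    # but never hit the car ahead
        if v > 0 and random.random() < P_SLOW:
            v -= 1                         # occasional random braking
        new_speeds.append(v)
    new_positions = [(p + v) % ROAD_LENGTH for p, v in zip(positions, new_speeds)]
    order = sorted(range(n), key=lambda i: new_positions[i])
    return [new_positions[i] for i in order], [new_speeds[i] for i in order]

for t in range(50):
    positions, speeds = step(positions, speeds)
    if t % 10 == 0:
        stopped = sum(1 for v in speeds if v == 0)
        print(f"step {t:2d}: {stopped} cars are stopped in a jam")
```

No line of this code mentions a jam; the jam is a pattern you only see at the level of the whole road, which is the sense in which traffic, and perhaps mind, is emergent.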

Emergence, Gazzaniga goes on, is how to understand the brain. Sub-atomic particles, atoms, molecules, cells, neurons, modules, the mind, and a collection of minds (a society) are all different levels of organization, with their own laws that cannot necessarily be predicted from the properties of the level below.

The unified mind we experience emerges from the thousands of lower-level processes operating in parallel. Most of it is so automatic that we have no idea it's going on. (Not only does the mind work bottom-up, but top-down processes also influence it. In other words, what you think influences what you see and hear.)

And when we do start consciously explaining what's going on — or trying to — we start getting very interesting results. The part of our brain that seeks explanations and infers causality turns out to be a quirky little beast.

The Interpreter

Let's say you were to see a snake and jump back, automatically and quickly. Did you choose that action? If asked, you'd almost certainly say so, but the truth is more complicated.

If you were to have asked me why I jumped, I would have replied that I thought I'd seen a snake. That answer certainly makes sense, but the truth is I jumped before I was conscious of the snake: I had seen it, but I didn't know I had seen it. My explanation is from post hoc information I have in my conscious system: The facts are that I jumped and that I saw a snake. The reality, however, is that I jumped way before (in a world of milliseconds) I was conscious of the snake. I did not make a conscious decision to jump and then consciously execute it. When I answered that question, I was, in a sense, confabulating: giving a fictitious account of a past event, believing it to be true. The real reason I jumped was an automatic nonconscious reaction to the fear response set into play by the amygdala. The reason I would have confabulated is that our human brains are driven to infer causality. They are driven to explain events that make sense out of the scattered facts. The facts that my conscious brain had to work with were that I saw a snake, and I jumped. It did not register that I jumped before I was consciously aware of the snake.

Here's how it works: A thing happens, we react, we feel something about it, and then we go on explaining it. Sensory information is fed into an explanatory module which Gazzaniga calls The Interpreter, and studying split-brain patients showed him that it resides in the left hemisphere of the brain.

With that knowledge, Gazzaniga and his team were able to do all kinds of clever things to show how ridiculous our Interpreter can often be, especially in split-brain patients.

Take this case of a split-brain patient unconsciously making up a nonsense story when his two hemispheres are shown different images and he is instructed to choose a related image from a group of pictures. Read carefully:

We showed a split-brain patient two pictures: A chicken claw was shown to his right visual field, so the left hemisphere only saw the claw picture, and a snow scene was shown to the left visual field, so the right hemisphere saw only that. He was then asked to choose a picture from an array of pictures placed in full view in front of him, which both hemispheres could see.

The left hand pointed to a shovel (which was the most appropriate answer for the snow scene) and the right hand pointed to a chicken (the most appropriate answer for the chicken claw). Then we asked why he chose those items. His left-hemisphere speech center replied, “Oh, that's simple. The chicken claw goes with the chicken,” easily explaining what it knew. It had seen the chicken claw.

Then, looking down at his left hand pointing to the shovel, without missing a beat, he said, “And you need a shovel to clean out the chicken shed.” Immediately, the left brain, observing the left hand's response without the knowledge of why it had picked that item, put it into a context that would explain it. It interpreted the response in a context consistent with what it knew, and all it knew was: Chicken claw. It knew nothing about the snow scene, but it had to explain the shovel in his left hand. Well, chickens do make a mess, and you have to clean it up. Ah, that's it! Makes sense.

What was interesting was that the left hemisphere did not say, “I don't know,” which truly was the correct answer. It made up a post hoc answer that fit the situation. It confabulated, taking cues from what it knew and putting them together in an answer that made sense.

The left hand, responding to the snow scene Gazzaniga covertly showed the left visual field, pointed to the snow shovel. This all took place in the right hemisphere of the brain (think of it like an “X” — the right hemisphere controls the left side of the body and vice versa). But since this was a split-brain patient, the left hemisphere was not given any of the information about snow.

And yet the left hemisphere is where the Interpreter resides! So what did the Interpreter do when asked to explain why the shovel was chosen, having no information about snow, only about chickens? It made up a story about shoveling chicken coops!

Gazzaniga goes on to describe several cases in which the left-brain Interpreter was fooled over and over, often in subtle ways.

***

This left-brain module is what we use to explain causality, and it seeks causes for their own sake. The Interpreter, like all of our mental modules, is a wonderful adaptation: it has led us to understand and explain causality and the world around us, to our great advantage. But as any good student of social psychology knows, if we have nothing solid to go on, we'll simply make up a plausible story — leading to a narrative fallacy.

This leads to odd results that seem pretty maladaptive, like our tendency to gamble like idiots. (Charlie Munger calls this the mis-gambling compulsion.) But outside the artifice of the casino, the Interpreter works quite well.

But here's the catch. In the words of Gazzaniga, “The interpreter is only as good as the information it gets.”

The interpreter receives the results of the computations of a multitude of modules. It does not receive the information that there are multitudes of modules. It does not receive the information about how the modules work. It does not receive the information that there is a pattern-recognition system in the right hemisphere. The interpreter is a module that explains events from the information it does receive.

[…]

The interpreter is receiving data from the domains that monitor the visual system, the somatosensory system, the emotions, and cognitive representations. But as we just saw above, the interpreter is only as good as the information it receives. Lesions or malfunctions in any one of these domain-monitoring systems leads to an array of peculiar neurological conditions that involve the formation of either incomplete or delusional understandings about oneself, other individuals, objects, and the surrounding environment, manifesting in what appears to be bizarre behavior. It no longer seems bizarre, however, once you understand that such behaviors are the result of the interpreter getting no, or bad, information.

This can account for a lot of the ridiculous behavior and ridiculous narratives we see around us. The Interpreter must deal with what it's given, and as Gazzaniga's work shows, it can be manipulated and tricked. He calls it “hijacking” — and when the Interpreter is hijacked, it makes pretty bad decisions and generates strange explanations.

Anyone who's watched a friend acting hilariously when wearing a modern VR headset can see how easy it is to “hijack” one's sensory perceptions even if the conscious brain “knows” that it's not real. And of course, Robert Cialdini once famously described this hijacking process as a “click, whirr” reaction to social stimuli. It's a powerful phenomenon.

***

What can we learn from this?

The story of the multi-modular mind and the Interpreter module shows us that the brain does not have a rational “central command station” — your mind is at the mercy of what it's fed. The Interpreter is constantly weaving a story of what's going on around us, applying causal explanations to the data it receives and doing the best job it can with what it's got.

This is generally useful: a few thousand generations of data has honed our modules to understand the world well enough to keep us surviving and thriving. The job of the brain is to pass on our genes. But that doesn't mean that it's always making optimal decisions in the modern world.

We must realize that our brain can be fooled; it can be tricked and played with, and we won't always realize it immediately. Our Interpreter will weave a plausible story — that's its job.

For this reason, Charlie Munger employs a “two track” analysis: What are the facts? And where is my brain fooling me? We're wise to follow suit.

A Cascade of Sand: Complex Systems in a Complex Time

We live in a world filled with rapid change: governments topple, people rise and fall, and technology has created a connectedness the world has never experienced before. Joshua Cooper Ramo believes this environment has created an “avalanche of ceaseless change.”

In his book The Age of the Unthinkable: Why the New World Disorder Constantly Surprises Us And What We Can Do About It, he outlines what this new world looks like and offers prescriptions for how best to deal with the disorder around us.

Ramo believes that we are entering a revolutionary age that will render seemingly fortified institutions weak, and weak movements strong. He feels we aren’t well prepared for these radical shifts, because those in positions of power tend to approach new issues with antiquated ideologies. Generally, they treat anything complex as one-dimensional.

Unfortunately, whether they are running corporations or foreign ministries or central banks, some of the best minds of our era are still in thrall to an older way of seeing and thinking. They are making repeated misjudgments about the world. In a way, it’s hard to blame them. Mostly they grew up at a time when the global order could largely be understood in simpler terms, when only nations really mattered, when you could think there was a predictable relationship between what you wanted and what you got. They came of age as part of a tradition that believed all international crises had beginnings and, if managed well, ends.

This is one of the main flaws of traditional thinking about managing conflict/change: we identify a problem, decide on a path forward, and implement that solution. We think in linear terms and see a finish line once the specific problem we have discovered is ‘solved.’

In this day and age (and probably in all days and ages, whether they realized it or not) we have to accept that the finish line is constantly moving and that, in fact, there never will be a finish line. Solving one problem may fix an issue for a time but it tends to also illuminate a litany of new problems. (Many of which were likely already present but hiding under the old problem you just “fixed”.)

In fact, our actions in trying to solve X will sometimes have a cascade effect because the world is actually a series of complex and interconnected systems.

Some great thinkers have spoken about these problems in the past. Ramo highlights some interesting quotes from the Nobel Prize speech that Austrian economist Friedrich August von Hayek gave in 1974, entitled The Pretence of Knowledge.

To treat complex phenomena as if they were simple, to pretend that you could hold the unknowable in the cleverly crafted structure of your ideas — he could think of nothing that was more dangerous. “There is much reason,” Hayek said, “to be apprehensive about the long-run dangers created in a much wider field by the uncritical acceptance of assertions which have the appearance of being scientific.”

Concluding his Nobel speech, Hayek warned, “If man is not to do more harm than good in his efforts to improve the social order, he will have to learn that in this, as in all other fields where essential complexity of an organized kind prevails, he cannot acquire the full knowledge which would make mastery of the events possible.” Politicians and thinkers would be wise not to try to bend history as “the craftsman shapes his handiwork, but rather to cultivate growth by providing the appropriate environment, in the manner a gardener does for his plants.”

This is an important distinction: the idea that we need to be gardeners instead of craftsmen. When we are merely creating something we have a sense of control; we have a plan and an end state. When the shelf is built, it's built.

Being a gardener is different. You have to prepare the environment; you have to nurture the plants and know when to leave them alone. You have to make sure the environment is hospitable to everything you want to grow (different plants have different needs), and after the harvest you aren’t done. You need to turn the earth and, in essence, start again. There is no end state if you want something to grow.

* * *

So, if most of the threats we face today are so multifaceted and complex that we can’t use the majority of the strategies that have worked historically, how do we approach the problem? A Danish theoretical physicist named Per Bak had an interesting view of this, which he termed self-organized criticality, and it comes with an excellent experiment/metaphor that helps to explain the concept.

Bak’s research focused on answering the following question: if you created a cone of sand grain by grain, at what point would you trigger a little sand avalanche? This breakdown of the cone was inevitable, but he wanted to know whether he could somehow predict when it would happen.

Much as there is a precise temperature at which water starts to boil, Bak hypothesized that there was a specific point at which the pile became unstable, and that at this point adding a single grain of sand could trigger an avalanche.

In his work, Bak came to realize that the sandpile was inherently unpredictable. He discovered that there were times, even when the pile had reached a critical state, when an additional grain of sand would have no effect:

“Complex behavior in nature,” Bak explained, “reflects the tendency of large systems to evolve into a poised ‘critical’ state, way out of balance, where minor disturbances may lead to events, called avalanches, of all sizes.” What Bak was trying to study wasn’t simply stacks of sand, but rather the underlying physics of the world. And this was where the sandpile got interesting. He believed that sandpile energy, the energy of systems constantly poised on the edge of unpredictable change, was one of the fundamental forces of nature. He saw it everywhere, from physics (in the way tiny particles amassed and released energy) to the weather (in the assembly of clouds and the hard-to-predict onset of rainstorms) to biology (in the stutter-step evolution of mammals). Bak’s sandpile universe was violent — and history-making. It wasn’t that he didn’t see stability in the world, but that he saw stability as a passing phase, as a pause in a system of incredible — and unmappable — dynamism. Bak’s world was like a constantly spinning revolver in a game of Russian roulette, one random trigger-pull away from explosion.

Traditionally, our thinking is very linear. If we start thinking of systems as more like sandpiles, we shift into nonlinear thinking: we can no longer assume that a given action will produce a given reaction; it may or may not, depending on the precise initial conditions.
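To make the sandpile idea concrete, here is a minimal sketch (our illustration, not Bak's own code) of a Bak-Tang-Wiesenfeld-style sandpile: grains are dropped one at a time, and any cell holding four or more grains topples onto its neighbors, sometimes setting off a cascade. The grid size and number of drops are arbitrary; the point is that most grains do nothing while an occasional, identical grain triggers a large avalanche.

```python
import random

# Minimal Bak-Tang-Wiesenfeld-style sandpile: drop grains one at a time onto
# a grid; any cell with 4 or more grains "topples", sending one grain to each
# neighbor, which can trigger further topplings. The avalanche size caused by
# a single grain is wildly variable, even though every rule is simple.

N = 20                      # grid is N x N
grid = [[0] * N for _ in range(N)]
random.seed(42)

def drop_grain(grid):
    """Add one grain at a random cell and return the avalanche size."""
    x, y = random.randrange(N), random.randrange(N)
    grid[x][y] += 1
    topples = 0
    unstable = [(x, y)] if grid[x][y] >= 4 else []
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < 4:
            continue
        grid[i][j] -= 4
        topples += 1
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < N and 0 <= nj < N:   # grains falling off the edge are lost
                grid[ni][nj] += 1
                if grid[ni][nj] >= 4:
                    unstable.append((ni, nj))
    return topples

sizes = [drop_grain(grid) for _ in range(20000)]
quiet = sum(1 for s in sizes if s == 0)
print(f"{quiet} of {len(sizes)} grains caused no avalanche at all")
print("largest avalanche:", max(sizes), "topplings from a single grain")
```

In the full model the avalanche sizes follow a power law with no typical scale, which is exactly why Bak's pile became a metaphor for systems where identical small pushes can have wildly different consequences.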

This dynamic sandpile energy demands that we accept the basic unpredictability of the global order — one of those intellectual leaps that sounds simple but that immediately junks a great deal of traditional thinking. It also produces (or should produce) a profound psychological shift in what we can and can’t expect from the world. Constant surprise and new ideas? Yes. Stable political order, less complexity, the survival of institutions built for an older world? No.

Ramo isn’t arguing that complex systems are incomprehensible or fundamentally flawed. These systems are manageable; they just require a departure from the old, linear way of thinking that didn’t account for all the invisible connections in the sand.

Look at something like the Internet: it’s a perfect example of a complex system with a seemingly infinite number of connections, and yet it thrives. The system is constantly bombarded with unexpected risk, but it is so malleable that it has yet to feel the force of an avalanche. The Internet was designed to thrive in a hostile environment, and its complexity was embraced. Unfortunately, for every adaptive system like the Internet there seems to be a maladaptive one, so rigid it will surely break in a world of complexity.

The Age of the Unthinkable goes on to show us historical examples of systems that did indeed break; this helps to frame where we have been particularly fragile in the past and where the mistakes in our thinking may have been. In the back half of the book, Ramo outlines strategies he believes will help us become more antifragile, an approach he calls “Deep Security.”

Implementing these strategies will likely be met with considerable resistance; many people in positions of power benefit from the systems staying as they are. Revolutions are never easy, but, as we’ve shown, even one grain of sand can have a huge impact.

Survival of the Fittest: Groups versus Individuals

If ‘survival of the fittest’ is the prime evolutionary tenet, then why do some behaviors that lead to winning or success, seemingly justified by this concept, ultimately leave us cold?

Taken from Darwin’s theory of evolution, survival of the fittest is often conceptualized as the advantage that accrues with certain traits, allowing an individual to both thrive and survive in their environment by out-competing others for limited resources. Qualities such as strength and speed were beneficial to our ancestors, allowing them to survive in demanding environments, and thus our general admiration for these qualities is now understood through this evolutionary lens.

However, in humans this evolutionary concept is often co-opted to defend a wide range of behaviors, not all of them good: winning by cheating, say, or stepping on others to achieve our goals.

Why is this?

One answer is that we humans are concerned not only with our individual survival but also with the survival of our group. (Which, of course, leads to improved individual survival, on average.) This relationship between individual and group survival is subject to intense debate among biologists.

Selecting for Unselfishness?

Humans display a wide range of behavior that seems counter-intuitive to the survival of the fittest mentality until you consider that we are an inherently social species, and that keeping our group fit is a wise investment of our time and energy.

One behavior humans display a great deal of is “indirect reciprocity.” Distinguished from “direct reciprocity,” in which I help you and you help me, indirect reciprocity confers no immediate benefit on the one doing the helping. Either I help you and you later help someone else, or I help you and then someone else, some time in the future, helps me.

Martin A. Nowak and Karl Sigmund have studied this phenomenon in humans for many years. Essentially, they ask the question “How can natural selection promote unselfish behavior?”

Many of their studies have shown that “propensity for indirect reciprocity is widespread. A lot of people choose to do it.”

Furthermore:

Humans are the champions of reciprocity. Experiments and everyday experience alike show that what Adam Smith called ‘our instinct to trade, barter and truck' relies to a considerable extent on the widespread tendency to return helpful and harmful acts in kind. We do so even if these acts have been directed not to us but to others.

We care about what happens to others, even if the entire event is one we have no part in. If you consider evolution in terms of the survival of the fittest group, rather than the fittest individual, this makes sense.
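To see how indirect reciprocity can pay for itself, here is a toy simulation in the spirit of the donation games Nowak and Sigmund studied. It is a simplified sketch rather than their actual model: we assume a public reputation score, a "standing"-style rule in which refusing to help someone with a bad reputation carries no penalty, and arbitrary cost and benefit numbers.

```python
import random

# Toy "donation game" with reputation scoring (an illustrative sketch of
# indirect reciprocity, not Nowak and Sigmund's exact model). Helping is
# costly but raises your public reputation, and discriminators help only
# players with a good reputation, so help given to one person tends to come
# back later from third parties.

random.seed(0)
N = 100                      # population size
COST, BENEFIT = 1, 3         # cost to the helper, benefit to the helped
ROUNDS = 20000

# Half the population are discriminators, half are unconditional defectors.
strategy = ["discriminator"] * (N // 2) + ["defector"] * (N // 2)
reputation = [0] * N         # public image score, visible to everyone
payoff = [0.0] * N

for _ in range(ROUNDS):
    donor, recipient = random.sample(range(N), 2)
    if strategy[donor] == "discriminator" and reputation[recipient] >= 0:
        payoff[donor] -= COST            # helping is costly...
        payoff[recipient] += BENEFIT     # ...but worth more to the recipient
        reputation[donor] = min(reputation[donor] + 1, 5)
    elif reputation[recipient] >= 0:
        # Refusing to help someone in good standing damages your reputation;
        # refusing someone with a bad reputation is treated as justified here.
        reputation[donor] = max(reputation[donor] - 1, -5)

def average(indices):
    return sum(payoff[i] for i in indices) / len(indices)

discriminators = [i for i in range(N) if strategy[i] == "discriminator"]
defectors = [i for i in range(N) if strategy[i] == "defector"]
print("average payoff, discriminators:", round(average(discriminators), 2))
print("average payoff, defectors:     ", round(average(defectors), 2))
```

In runs of this sketch the discriminators come out well ahead: the cost of helping is repaid by help received later from third parties who can see their good reputation, while the defectors' reputations collapse and the help they receive dries up.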

Supporting those who harm others can breed mistrust and instability. And if we don’t trust each other, day to day transactions in our world will be completely undermined. Sending your kids to school, banking, online shopping: We place a huge amount of trust in our fellow humans every day.

If we consider this idea of group survival, we can also see value in a wider range of human attributes and behaviors. It is no longer “I have to be the fittest in every possible way in order to survive,” but rather “I want fit people in my group.”

In her excellent book Quiet: The Power of Introverts in a World That Can’t Stop Talking, author Susan Cain explores, among other things, the relevance of introverts to social function and how their contributions benefit the group as a whole. Introverts are people who “like to focus on one task at a time, … listen more than they talk, think before they speak, … [and] tend to dislike conflict.”

Though out of step with the culture of “the extrovert ideal” we are currently living in, introverts contribute significantly to our group fitness. Without them we would be deprived of much of our art and scientific progress.

Cain argues:

Among evolutionary biologists, who tend to subscribe to the vision of lone individuals hell-bent on reproducing their own DNA, the idea that species include individuals whose traits promote group survival is hotly debated and, not long ago, could practically get you kicked out of the academy.

But the idea makes sense. If personality types such as introverts aren’t the fittest for survival, then why did they persist? Possibly because of their value to the group.

Cain looks at the work of Dr. Elaine Aron, who has spent years studying introverts, and is one herself. In explaining the idea of different personality traits as part of group selection in evolution, Aron offers this story in an article posted on her website:

I used to joke that when a group of prehistoric humans were sitting around the campfire and a lion was creeping up on them all, the sensitive ones [introverts] would alert the others to the lion's prowling and insist that something be done. But the non-sensitive ones [extroverts] would be the ones more likely to go out and face the lion. Hence there are more of them than there are of us, since they are willing and even happy to do impulsive, dangerous things that will kill many of us. But also, they are willing to protect us and hunt for us, if we are not as good at killing large animals, because the group needs us. We have been the healers, trackers, shamans, strategists, and of course the first to sense danger. So together the two types survive better than a group of just one type or the other.

The lesson is this: Groups survive better if they have individuals with different strengths to draw on. The more tools you have, the more likely you are to complete a job. The more different kinds of people you have, the more likely you are to survive the unexpected.

Which Group?

How then, does one define the group? Who am I willing to help? Arguably, I’m most willing to sacrifice for my children, or family. My immediate little group. But history is full of examples of those who sacrificed significantly for their tribes or sports teams or countries.

We can’t argue that it is just about the survival of our own DNA. That may explain why I will throw myself in front of a speeding car to protect my child, but the beaches of Normandy were stormed by thousands of young, childless men. When soldiers from World War I were interviewed about why they would jump out of a trench to try to take a slice of no man’s land, they most often said they did it “for the guy next to them.” They had initially joined the military out of a sense of “national pride” or other very non-DNA reasons.

Clearly, human culture is capable of defining “groups” very broadly through a complex system of mythology, creating deep loyalty to “imaginary” groups like sports teams, corporations, nations, or religions.

As technology shrinks our world, our group expands. Technological advancement pushes us into higher degrees of specialization, so that individual survival becomes clearly linked with group survival.

I know that I have a vested interest in doing my part to maintain the health of my group. I am very attached to indoor plumbing and grocery stores, yet don’t participate at all in the giant webs that allow those things to exist in my life. I don’t know anything about the configuration of the municipal sewer system or how to grow raspberries. (Of course, Adam Smith called this process of the individual benefitting the group through specialization the Invisible Hand.)

When we see ourselves as part of a group, we want the group to survive and even thrive. Yet how big can our group be? Is there always an us versus them? Does our group's survival always have to come at the expense of others? We leave you to speculate.

 

Principles for an Age of Acceleration

The MIT Media Lab is a creative nerve center where great ideas like One Laptop per Child, LEGO Mindstorms, and the Scratch programming language have emerged.

Its director, Joi Ito, has done a lot of thinking about how prevailing systems of thought will not be the ones to see us through the coming decades. In his book Whiplash: How to Survive our Faster Future, he notes that sometime late in the last century, technology began to outpace our ability to understand it.

We are blessed (or cursed) to live in interesting times, where high school students regularly use gene editing techniques to invent new life forms, and where advancements in artificial intelligence force policymakers to contemplate widespread, permanent unemployment. Small wonder our old habits of mind—forged in an era of coal, steel, and easy prosperity—fall short. The strong no longer necessarily survive; not all risk needs to be mitigated; and the firm is no longer the optimum organizational unit for our scarce resources.

Ito's ideas are not specific to our moment in history but are adaptive responses to a world with certain characteristics:

1. Asymmetry
In our era, effects are no longer proportional to the size of their source. The biggest change-makers of the future are the small players: “start-ups and rogues, breakaways and indie labs.”

2. Complexity
The level of complexity is shaped by four inputs, all of which are extraordinarily high in today’s world: heterogeneity, interconnection, interdependency and adaptation.

3. Uncertainty
Not knowing is okay. In fact, we’ve entered an age where the admission of ignorance offers strategic advantages over expending resources — subcommittees and think tanks and sales forecasts — toward the increasingly futile goal of forecasting future events.

When these three conditions are in place, certain guiding principles serve us best. In his book, Ito shares some of the maxims that organize his “anti-disciplinary” Media Lab in a complex and uncertain world.

Emergence over Authority

Complex systems show properties that their individual parts don’t possess, and we call this process “emergence”. For example, life is an emergent property of chemistry. Groups of people also produce a wondrous variety of emergent behaviors—languages, economies, scientific revolutions—when each intellect contributes to a whole that is beyond the abilities of any one person.

Some organizational structures encourage this kind of creativity more than others. Authoritarian systems allow only for incremental changes, whereas nonlinear innovation emerges from decentralized networks with a low barrier to entry. As Steven Johnson describes in Emergence, when you plug more minds into the system, “isolated hunches and private obsessions coalesce into a new way of looking at the world, shared by thousands of individuals.”

Synthetic biology best exemplifies the type of new field that can arise from emergence. Not to be confused with genetic engineering, which modifies existing organisms, synthetic biology aims to create entirely new forms of life.

Having emerged in the era of open-source software, synthetic biology is becoming an exercise in radical collaboration between students, professors, and a legion of citizen scientists who call themselves biohackers. Emergence has made its way into the lab.

As a result, the cost of sequencing DNA is plummeting at six times the rate of Moore’s Law, and a large Registry of Standard Biological Parts, or BioBricks, now offers genetic components that perform well-understood functions in whatever organism is being created, like a block of Lego.

There is still a place for leaders in an organization that fosters emergence, but the role may feel unfamiliar to a manager from a traditional hierarchy. The new leader spends less time leading and more time “gardening”—pruning the hedges, watering the flowers, and otherwise getting out of the way. (As biologist Lewis Thomas puts it, a great leader must get the air right.)

Pull over Push

“Push” strategies involve directing resources from a central source to sites where, in the leader’s estimation, they are likely to be needed or useful. In contrast, projects that use “pull” strategies attract intellectual, financial and physical resources to themselves just as they are needed, rather than stockpiling them.

Ito is a proponent of the sharing economy, through which a startup might tap into the global community of freelancers and volunteers for a custom-made task force instead of hiring permanent teams of designers, programmers or engineers.

Here's a great example:

When the Fukushima nuclear meltdown happened, Ito was living just outside of Tokyo. The Japanese government took a command-and-control (“push”) approach to the disaster, in which information would slowly climb up the hierarchy, and decisions would then be passed down stepwise to the ground-level workers.

It soon became clear that the government was not equipped to assess or communicate the radioactivity levels of each neighborhood, so Ito and his friends took the problem into their own hands. Pulling in expertise and money from far-flung scientists and entrepreneurs, they formed a citizen science group called Safecast, which built its own GPS-equipped Geiger counters and strapped them to cars for faster monitoring. They launched a website that continues to share data – more than 50 million data points so far – about local environments.

To benefit from these kinds of “pull” strategies, it pays to foster an environment that is rich with weak ties – a wide network of acquaintances from which to draw just-in-time knowledge and resources, as Ito did with Safecast.

Compasses over Maps

Detailed maps can be more misleading than useful in a fast-changing world, where a compass is the tool of choice. In the same way, organizations that plan exhaustively will be outpaced in an accelerating world by ones that are guided by a more encompassing mission.

A map implies a straightforward knowledge of the terrain, and the existence of an optimum route; the compass is a far more flexible tool and requires the user to employ creativity and autonomy in discovering his or her own path.

One advantage to the compass approach is that when a roadblock inevitably crops up, there is no need to go back to the beginning to form another plan or draw up multiple plans for each contingency. You simply navigate around the obstacle and continue in your chosen direction.

It is impossible, in any case, to make detailed plans for a complex and creative organization. The way to set a compass direction for a company is by creating a culture—or set of mythologies—that animates the parts in a common worldview.

In the case of the MIT Media Lab, that compass heading is described in three values: “Uniqueness, Impact, and Magic”. Uniqueness means that if someone is working on a similar project elsewhere, the lab moves on.

Rather than working to discover knowledge for its own sake, the lab works in the service of Impact, through start-ups and physical creations. This was once expressed in the lab’s motto, “Deploy or die,” but after Barack Obama suggested they work on their messaging, Ito shortened it to “Deploy.”

The Magic element, though hard to define, speaks to the delight that playful originality so often awakens.

Both students and faculty at the lab are there to learn, but not necessarily to be “educated”. Learning is something you pursue for yourself, after all, whereas education is something that’s done to you. The result is “agile, scrappy, permissionless innovation”.

The new job landscape requires more creativity from everybody. The people who will be most successful in this environment will be the ones who ask questions, trust their instincts, and refuse to follow the rules when the rules get in their way.

Other principles discussed in Whiplash include Risk over Safety, Disobedience over Compliance, Practice over Theory, Diversity over Ability, Resilience over Strength, and Systems over Objects.

The Founder Principle: A Wonderful Idea from Biology

We've all been taught natural selection, the mechanism by which species evolve through differential reproductive success. Most of us are familiar with the idea that random mutations in DNA cause variation among offspring, some of whom survive and reproduce more successfully than others. However, this is only part of the story.

Sometimes other situations cause massive changes in species populations, and they're often more nuanced and tough to spot.

One such concept comes from one of the most influential biologists in history, Ernst Mayr. He called it the Founder Principle: a mechanism by which new species are created by a splintered population, often with lower genetic diversity and an increased risk of extinction.

In the brilliant The Song of the Dodo: Island Biogeography in an Age of Extinction, David Quammen gives us not only the stories of many brilliant naturalists, including Mayr, but also a deep dive into the core concepts of evolution and extinction, including the founder principle.

Quammen begins by outlining the basic idea:

When a new population is founded in an isolated place, the founders usually constitute a numerically tiny group – a handful of lonely pioneers, or just a pair, or maybe no more than one pregnant female. Descending from such a small number of founders, the new population will carry only a minuscule and to some extent random sample of the gene pool of the base population. The sample will most likely be unrepresentative, encompassing less genetic diversity than the larger pool. This effect shows itself whenever a small sample is taken from a large aggregation of diversity; whether the aggregation consists of genes, colored gum balls, M&M’s, the cards of a deck, or any other collection of varied items, a small sample will usually contain less diversity than the whole.

Why does the founder principle happen? It's basically applied probability. Perhaps an example will help illuminate the concept.

Think of yourself playing a game of poker (five-card draw) with a friend. The deck is separated into four suits: diamonds, hearts, clubs, and spades, each suit having 13 cards, for a total of 52 cards.

Now look at your hand of five cards. Do you have one card from each suit? Maybe. Are all five cards from the same suit? Probably not, but it is possible. Will you get the ace of spades? Maybe, but not likely.

This is a good metaphor for how the founder principle works. The gene pool carried by a small group of founders is unlikely to be precisely representative of the gene pool of the larger group. In some rare cases it will be very unrepresentative, like you getting dealt a straight flush.
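To put rough numbers on the card metaphor, here is a small sketch that draws "founder" samples of different sizes from a large, made-up gene pool and counts how many allele types survive the sampling. The allele frequencies are invented purely for illustration; the pattern to notice is that small samples routinely lose the rare variants.

```python
import random
from collections import Counter

# A sketch of the sampling idea behind the founder effect: draw a small
# "founder" group from a large, diverse gene pool and see how much of the
# original variety it carries. Allele labels and frequencies are made up.

random.seed(7)

# A base population of 100,000 gene copies: a few common alleles, many rare ones.
alleles = ["A"] * 40000 + ["B"] * 30000 + ["C"] * 20000 + ["D"] * 8000 + \
          ["E"] * 1500 + ["F"] * 400 + ["G"] * 90 + ["H"] * 10
random.shuffle(alleles)

for founders in (10000, 100, 10):
    sample = random.sample(alleles, founders)       # the splintered-off group
    kept = Counter(sample)
    lost = sorted(set(alleles) - set(kept))
    print(f"{founders:>5} founders carry {len(kept)} of 8 alleles; lost: {lost or 'none'}")
```

The smaller the founding group, the more of the original population's rare alleles simply never make the trip.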

It starts to get interesting when this founder population starts to reproduce, and genetic drift causes the new population to diverge significantly from its ancestors. Quammen explains:

Already isolated geographically from its base population, the pioneer population now starts drifting away genetically. Over the course of generations, its gene pool becomes more and more different from the gene pool of the base population – different both as to the array of alleles (that is, the variant forms of a given gene) and as to the commonness of each allele.

The founder population, in some cases, will become so different that it can no longer mate with the original population. This new species may even be a competitor for resources if the two populations are ever reintroduced. (Say, if a land bridge is created between two islands, or humans bring two species back in contact.)

Going back to our card metaphor, let’s pretend that you and your friend are playing with four decks of cards — 208 cards in total. Say you randomly pull out forty cards from those decks and play only with those. If there happen to be no kings among the forty cards, you will never be able to make a royal flush (ace + king + queen + jack + 10 of the same suit). It doesn’t matter how the cards are dealt; you can never make a royal flush without a king.

Thus it is with species: If a splintered-off population isn’t carrying a specific gene variant (allele), that variant can never be represented in the newly created population, no matter how prolific that gene may have been in the original population. It's gone. And as the rarest variants disappear, the new population becomes increasingly unlike the old one, especially if the new population is small.

Some alleles are common within a population, some are rare. If the population is large, with thousands or millions of parents producing thousands or millions of offspring, the rare alleles as well as the common ones will usually be passed along. Chance operation at high numbers tends to produce stable results, and the proportions of rarity and commonness will hold steady. If the population is small, though, the rare alleles will most likely disappear […] As it loses its rare alleles by the wayside, a small pioneer population will become increasingly unlike the base population from which it derived.

Some of this genetic loss may be positive (a gene that causes a rare disease may be missing), some may be negative (a gene for a useful attribute may be missing) and some may be neutral.

The neutral ones are the most interesting: a neutral gene at one point in time may become a useful gene at another point. It's like playing a round of poker in which 8s are suddenly declared “wild,” and that card becomes much more important than it was the hand before. The same goes for animal traits.

Take a mammal population living on an island that has lost all of its ability to swim. That won’t mean much if all is well and the animals are never required to swim. But the moment there is a natural disaster such as a fire, the ability to swim the short distance to the mainland could be the difference between survival and extinction.

That's why the founder principle is so dangerous: The loss of genetic diversity often means losing valuable survival traits. Quammen explains:

Genetic drift compounds the founder-effect problem, stripping a small population of the genetic variation that it needs to continue evolving. Without that variation, the population stiffens toward uniformity. It becomes less capable of adaptive response. There may be no manifest disadvantages in uniformity so long as environmental circumstances remain stable; but when circumstances are disrupted, the population won’t be capable of evolutionary adjustment. If the disruption is drastic, the population may go extinct.
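To see drift in action, here is a small Wright-Fisher-style sketch (our illustration, with arbitrary numbers): one allele starts at 10 percent frequency, and each new generation is a random resampling of the last. In small populations the allele tends to be lost (or fixed) quickly; in large ones its frequency wanders far more slowly, so variation persists.

```python
import random

# A sketch of genetic drift compounding the founder effect: track one allele's
# frequency over generations in small and large populations using a standard
# Wright-Fisher-style resampling model. Population sizes are arbitrary.

random.seed(11)

def drift_outcome(pop_size, start_freq=0.1, max_gen=2000):
    """Resample the allele each generation; return (generations, outcome)."""
    count = int(pop_size * start_freq)
    for gen in range(1, max_gen + 1):
        freq = count / pop_size
        # Each of the pop_size gene copies in the next generation is drawn
        # at random from the current generation's pool.
        count = sum(1 for _ in range(pop_size) if random.random() < freq)
        if count == 0:
            return gen, "lost"
        if count == pop_size:
            return gen, "fixed"
    return max_gen, "still varying"

for pop_size in (20, 200, 2000):
    gen, outcome = drift_outcome(pop_size)
    print(f"population {pop_size:>4}: allele {outcome} after {gen} generations")
```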

This loss of adaptability is one of the two major issues caused by the founder principle, the second being inbreeding depression. A founder population may have no choice but to breed within itself, and a symptom of too much inbreeding is the expression of harmful genetic variants among inbred individuals. (This is one reason humans consider incest dangerous.) This, too, increases the fragility of a species and decreases its ability to evolve.

The founder principle is just one of many amazing ideas in The Song of the Dodo. In fact, we at Farnam Street feel the book is so important that it made our list of books we recommend to improve your general knowledge of the world and it was the first book we picked for our learning community reading group.

If you have already read this book and want more, we suggest Quammen’s The Reluctant Mr. Darwin or his equally thought-provoking Spillover: Animal Infections and the Next Human Pandemic. Another wonderful and readable book on species evolution is The Beak of the Finch, by Jonathan Weiner.

The Island of Knowledge: Science and the Meaning of Life

“As the Island of Knowledge grows, so do the shores of our ignorance—the boundary between the known and unknown. Learning more about the world doesn't lead to a point closer to a final destination—whose existence is nothing but a hopeful assumption anyways—but to more questions and mysteries. The more we know, the more exposed we are to our ignorance, and the more we know to ask.”

***

Common across human history is our longing to better understand the world we live in, and how it works. But how much can we actually know about the world?

In his book The Island of Knowledge: The Limits of Science and the Search for Meaning, physicist Marcelo Gleiser traces the progress of modern science in its pursuit of the most fundamental questions about existence, the origin of the universe, and the limits of knowledge.

What we know of the world is limited by what we can see and what we can describe, but our tools have evolved over the years to reveal ever more pleats in our fabric of knowledge. Gleiser celebrates this persistent struggle to understand our place in the world and travels our history from ancient knowledge to our current understanding.

While science is not the only way to see and describe the world we live in, it is a response to the questions of who we are, where we are, and how we got here. “Science speaks directly to our humanity, to our quest for light, ever more light.”

To move forward, science needs to fail, which runs counter to our human desire for certainty. “We are surrounded by horizons, by incompleteness.” Rather than give up, we struggle along a scale of progress. What makes us human is this journey to understand more about the mysteries of the world and explain them with reason. This is the core of our nature.

While the pursuit is never ending, the curious journey offers insight not just into the natural world, but insight into ourselves.

“What I see in Nature is a magnificent structure that we can comprehend only very imperfectly, and that must fill a thinking person with a feeling of humility.”
— Albert Einstein

We tend to think that what we see is all there is — that there is nothing we cannot see. We know it isn't true when we stop and think, yet we still get lulled into a trap of omniscience.

Science is thus limited, offering only part of the story — the part we can see and measure. The other part remains beyond our immediate reach.

“What we see of the world,” Gleiser begins, “is only a sliver of what's out there.”

There is much that is invisible to the eye, even when we augment our sensorial perception with telescopes, microscopes, and other tools of exploration. Like our senses, every instrument has a range. Because much of Nature remains hidden from us, our view of the world is based only on the fraction of reality that we can measure and analyze. Science, as our narrative describing what we see and what we conjecture exists in the natural world, is thus necessarily limited, telling only part of the story. … We strive toward knowledge, always more knowledge, but must understand that we are, and will remain, surrounded by mystery. This view is neither antiscientific nor defeatist. … Quite the contrary, it is the flirting with this mystery, the urge to go beyond the boundaries of the known, that feeds our creative impulse, that makes us want to know more.

While we may broadly understand the map of what we call reality, we fail to understand its terrain. Reality, Gleiser argues, “is an ever-shifting mosaic of ideas.”

However…

The incompleteness of knowledge and the limits of our scientific worldview only add to the richness of our search for meaning, as they align science with our human fallibility and aspirations.

What we call reality is a (necessarily) limited synthesis. It is certainly our reality, as it must be, but it is not the entire reality itself:

My perception of the world around me, as cognitive neuroscience teaches us, is synthesized within different regions of my brain. What I call reality results from the integrated sum of countless stimuli collected through my five senses, brought from the outside into my head via my nervous system. Cognition, the awareness of being here now, is a fabrication of a vast set of chemicals flowing through myriad synaptic connections between my neurons. … We have little understanding as to how exactly this neuronal choreography engenders us with a sense of being. We go on with our everyday activities convinced that we can separate ourselves from our surroundings and construct an objective view of reality.

The brain is a great filtering tool, deaf and blind to vast amounts of information around us that offer no evolutionary advantage. Part of it we can see and simply ignore. Other parts, like dust particles and bacteria, go unseen because of limitations of our sensory tools.

As the Fox said to the Little Prince in Antoine de Saint-Exupery's fable, “What is essential is invisible to the eye.” There is no better example than oxygen.

Science has increased our view. Our measurement tools and instruments can see bacteria and radiation, subatomic particles and more. However precise these tools have become, their view is still limited.

There is no such thing as an exact measurement. Every measurement must be stated within its precision and quoted together with “error bars” estimating the magnitude of errors. High-precision measurements are simply measurements with small error bars or high confidence levels; there are no perfect, zero-error measurements.

[…]

Technology limits how deeply experiments can probe into physical reality. That is to say, machines determine what we can measure and thus what scientists can learn about the Universe and ourselves. Being human inventions, machines depend on our creativity and available resources. When successful, they measure with ever-higher accuracy and on occasion may also reveal the unexpected.
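To make the "error bars" point above concrete, here is a tiny sketch of how a measurement is typically reported: repeat a noisy reading several times, then quote the mean together with its standard error. The "true" value and the noise level are invented for illustration.

```python
import math
import random

# A tiny illustration of reporting a measurement with error bars: repeat a
# noisy reading, then quote the mean together with its standard error.
# The "true" value and instrument scatter below are made up.

random.seed(3)
TRUE_VALUE = 9.81            # pretend this is the quantity being measured
NOISE = 0.05                 # instrument scatter (standard deviation)

readings = [random.gauss(TRUE_VALUE, NOISE) for _ in range(25)]

mean = sum(readings) / len(readings)
variance = sum((r - mean) ** 2 for r in readings) / (len(readings) - 1)
std_error = math.sqrt(variance / len(readings))

# More readings (or a better instrument) shrink the error bar, but it never
# reaches zero: there is no perfect, zero-error measurement.
print(f"measured value: {mean:.3f} +/- {std_error:.3f}")
```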

“All models are wrong, some are useful.”
— George Box

What we know about the world is only what we can detect and measure — even if we improve our “detecting and measuring” as time goes along. And thus we base our conclusions about reality on what we can currently “see.”

We see much more than Galileo, but we can't see it all. And this restriction is not limited to measurements: speculative theories and models that extrapolate into unknown realms of physical reality must also rely on current knowledge. When there is no data to guide intuition, scientists impose a “compatibility” criterion: any new theory attempting to extrapolate beyond tested ground should, in the proper limit, reproduce current knowledge.

[…]

If large portions of the world remain unseen or inaccessible to us, we must consider the meaning of the word “reality” with great care. We must consider whether there is such a thing as an “ultimate reality” out there — the final substrate of all there is — and, if so, whether we can ever hope to grasp it in its totality.

[…]

We thus must ask whether grasping reality's most fundamental nature is just a matter of pushing the limits of science or whether we are being quite naive about what science can and can't do.

Here is another way of thinking about this: if someone perceives the world through her senses only (as most people do), and another amplifies her perception through the use of instrumentation, who can legitimately claim to have a truer sense of reality? One “sees” microscopic bacteria, faraway galaxies, and subatomic particles, while the other is completely blind to such entities. Clearly they “see” different things and—if they take what they see literally—will conclude that the world, or at least the nature of physical reality, is very different.

Asking who is right misses the point, although surely the person using tools can see further into the nature of things. Indeed, to see more clearly what makes up the world and, in the process, to make more sense of it and ourselves is the main motivation to push the boundaries of knowledge. … What we call “real” is contingent on how deeply we are able to probe reality. Even if there is such a thing as the true or ultimate nature of reality, all we have is what we can know of it.

[…]

Our perception of what is real evolves with the instruments we use to probe Nature. Gradually, some of what was unknown becomes known. For this reason, what we call “reality” is always changing. … The version of reality we might call “true” at one time will not remain true at another. … Given that our instruments will always evolve, tomorrow's reality will necessarily include entities not known to exist today. … More to the point, as long as technology advances — and there is no reason to suppose that it will ever stop advancing for as long as we are around — we cannot foresee an end to this quest. The ultimate truth is elusive, a phantom.

Gleiser makes his point with a beautiful metaphor: the Island of Knowledge.

Consider, then, the sum total of our accumulated knowledge as constituting an island, which I call the “Island of Knowledge.” … A vast ocean surrounds the Island of Knowledge, the unexplored ocean of the unknown, hiding countless tantalizing mysteries.

The Island of Knowledge grows as we learn more about the world and ourselves. And as the island grows, so too “do the shores of our ignorance—the boundary between the known and unknown.”

Learning more about the world doesn't lead to a point closer to a final destination—whose existence is nothing but a hopeful assumption anyways—but to more questions and mysteries. The more we know, the more exposed we are to our ignorance, and the more we know to ask.

As we move forward we must remember that despite our quest, the shores of our ignorance grow as the Island of Knowledge grows. And while we will struggle with the fact that not all questions will have answers, we will continue to progress. “It is also good to remember,” Gleiser writes, “that science only covers part of the Island.”

Richard Feynman pointed out that science can only answer the subset of questions that go, roughly, “If I do this, what will happen?” Questions like Why do the rules operate that way? and Should I do it? are not really scientific questions — they are moral, human questions, if they are knowable at all.

There are many ways of understanding and knowing that should, ideally, feed each other. “We are,” Gleiser concludes, “multidimensional creatures and search for answers in many, complementary ways. Each serves a purpose and we need them all.”

“The quest must go on. The quest is what makes us matter: to search for more answers, knowing that the significant ones will often generate surprising new questions.”

The Island of Knowledge is a wide-ranging tour through scientific history, from planetary motions to modern scientific theories, and a look at how they affect our ideas about what is knowable.