Tag: Complex Adaptive Systems

Complex Adaptive Cities

Complex adaptive systems are hard to understand. Messy and complicated, they cannot be broken down into smaller bits. It would be easier to ignore them, or simply leave them as mysteries. But given that we are living in one such system, it might be more useful to buckle down and sort it out. That way, we can make choices that are aligned with how the world actually operates.

In his book Diversity and Complexity, Scott E. Page explains, “Complexity can be loosely thought of as interesting structures and patterns that are not easily described or predicted. Systems that produce complexity consist of diverse rule-following entities whose behaviors are interdependent. Those entities interact over a contact structure or network. In addition, the entities often adapt.”

Understanding complexity is important because some things cannot be reduced further. While Occam’s Razor counsels that explanations should be made as simple as possible, but no simpler, some systems have an irreducible minimum: they can be properly contemplated only in all their complicated, interconnected glory.

Take, for example, cities.

Cities cannot be created for success from the top down by the imposition of simple rules.

Those of us who live in cities know what makes a particular neighborhood great. We can get what we need and have the interactions we want, and that’s ultimately because we feel safe there.

But how is this achieved? What magic combination of people and locations, uses and destinations, makes a vibrant, safe neighborhood? Is there a formula for, say, the ratio of houses to businesses, or of children to workers?

No. Cities are complex adaptive systems. They cannot be created for success from the top down by the imposition of simple rules.

In her seminal book The Death and Life of Great American Cities, Jane Jacobs approached the city as a complex adaptive system, turned city planning on its head, and likely saved many North American cities by showing that they cannot be reduced to a series of simple behavioral interactions.

Cities fall squarely within the definition of complexity given above by Page. They are full of rule-following humans, cars, and wildlife, whose behaviors are interdependent and respond to feedback.

These components of a city interact over multiple interfaces in a city network and will adapt easily, changing their behavior based on food availability, road closures, or perceived safety. But the city itself cannot be understood by looking at just one of these behaviors.

Jacobs starts with “the kind of problem which cities pose — a problem in handling organized complexity” — and a series of observations about that common, almost innocuous, part of all cities: the sidewalk.

What makes a particular neighborhood safe?

Jacobs argues that there is no one factor but rather a series of them. In order to understand how a city street can be safe, you must examine the full scope of interactions that occur on its sidewalk. “The trust of a city street is formed over time from many, many little public sidewalk contacts.” Nodding to people you know, noticing people you don’t. Recognizing which parent goes with which kid, or whose business seems to be thriving. People create safety.

Given that most of them are strangers to each other, how do they do this? How come these strangers are not all perceived as threats?

Safe streets are streets that are used by many different types of people throughout the 24-hour day. Children, workers, caregivers, tourists, diners — the more people who use the sidewalk, the more eyes that participate in the safety of the street.

Safety on city streets is “kept primarily by an intricate, almost unconscious, network of voluntary controls and standards among the people themselves, and enforced by the people themselves.” Essentially, we all contribute to safety because we all want safety. It increases our chances of survival.

Jacobs brings an amazing eye for observational detail in describing neighborhoods that work and those that don’t. In describing sidewalks, she explains that successful, safe neighborhoods are orderly. “But there is nothing simple about that order itself, or the bewildering number of components that go into it. Most of those components are specialized in one way or another. They unite in their joint effect upon the sidewalk, which is not specialized in the least. That is its strength.” For example, restaurant patrons, shopkeepers, loitering teenagers, etc. — some of whom belong to the area and some of whom are transient — all use the sidewalk and in doing so contribute to the interconnected and interdependent relationships that produce the perception of safety on that street. And real safety will follow perceived safety.

To get people participating in this unorganized street safety, you have to have streets that are desirable. “You can’t make people use streets they have no reason to use. You can’t make people watch streets they do not want to watch.” But Jacobs points out time and again that there is no predictable prescription for how to achieve this mixed use where people are unconsciously invested in the maintenance of safety.

This is where considering the city as a complex adaptive system is most useful.

Each individual component has a part to play, so a top-down imposition of theory that doesn’t allow for the unpredictable behavior of each individual is doomed to fail. “Orthodox planning is much imbued with puritanical and Utopian conceptions of how people should spend their free time, and in planning, these moralisms on people’s private lives are deeply confused with concepts about the workings of cities.” A large, diverse group of people is not going to conform to only one way of living. And it’s the diversity that offers the protection.

For example, a city planner might decide not to allow bars in residential neighborhoods. The noise might keep people up, or there might be a negative moral impact on children exposed to the behavior of loud, obnoxious drunks. But as Jacobs reveals, safe city areas can’t be built on the basis of this type of simplistic assumption.

By stretching the use of a street through as many hours of the day as possible, you might create a safer neighborhood. I say “might” because in this complex system, other factors might connect to manifest a different reality.

Planning that doesn’t respect the spectrum of diverse behavior and instead aims to insist on an ideal based on a few simple concepts will hinder the natural ability of a system to adapt.

As Scott Page explains, “Creating a complex system from scratch takes skill (or evolution). Therefore, when we see diverse complex systems in the real world, we should not assume that they’ve been assembled from whole cloth. Far more likely, they’ve been constructed bit by bit.”

Urban planning that doesn’t respect the spectrum of diverse behavior and instead aims to insist on an ideal based on a few simple concepts (fresh air, more public space, large private space) will hinder the natural ability of a city system to adapt in a way that suits the residents. And it is this ability to adapt that is the cornerstone requirement of this type of complex system. Inhibit the adaptive property and you all but ensure the collapse of the system.

As Jacobs articulates:

Under the seeming disorder of the old city, wherever the old city is working successfully, is a marvelous order for maintaining the safety of the streets and the freedom of the city. It is a complex order. Its essence is intricacy of sidewalk use, bringing with it a constant succession of eyes. This order is all composed of movement and change, and although it is life, not art, we may fancifully call it the art form of the city and liken it to the dance — … to an intricate ballet in which the individual dancers and ensembles all have distinctive parts which miraculously reinforce each other and compose an orderly whole. The ballet of the good city sidewalk never repeats itself from place to place, and in any one place is always replete with new improvisations.

This is the essence of complexity. As Scott Page argues, “Adaptation occurs at the level of individuals or of types. The system itself doesn’t adapt. The parts do; they alter their behaviors leading to system level adaptation.”

Jacobs maintains that “the sight of people attracts still other people.” We feel more secure when we know there are multiple eyes on us, eyes concerned only with what immediately affects them and therefore not invasive.

Our complex behavior as individuals in cities, interacting with various components in any given day, is multiplied by everyone, so a city that produces a safe environment seems to be almost miraculous. But ultimately our behavior is governed by certain rules — not rules that are imposed by theory or external forces, but rules that we all feel are critical to our well-being and success in our city.

Thus, the workings of a desirable city are produced by a multitude of small interactions that have evolved and adapted over time to promote what individuals most desire.

“The look of things and the way they work are inextricably bound together, and in no place more so than cities,” claims Jacobs. Use is not independent of form. That is why we must understand the system as a whole. No matter how many components and unpredictable potential interactions there are, they are all part of what makes the city function.

As Jacobs concludes, “There is no use wishing it were a simpler problem, because in real life it is not a simpler problem. No matter what you try to do to it, a city park behaves like a problem in organized complexity, and that is what it is. The same is true of all other parts or features of cities. Although the inter-relations of their many factors are complex, there is nothing accidental or irrational about the ways in which these factors affect each other.”

Samuel Arbesman on Complex Adaptive Systems and the Difference between Biological and Physics-Based Thinking

Samuel Arbesman (@arbesman) is a complexity scientist whose work focuses on the nature of scientific and technological change. Sam's also written two books that I love, The Half-Life of Facts and Overcomplicated.

In this episode, Sam talks about:

  • Our relationship with technology
  • Whether art or science is more fundamental to humanity
  • How he defines success for himself
  • The difference between physics thinking and biological thinking
  • Why it's better to learn things that change slowly
  • And much, much more!




Show Notes

A complete transcript is available for members of the learning community.

Books mentioned:

The Need for Biological Thinking to Solve Complex Problems

“Biological thinking and physics thinking are distinct, and often complementary, approaches to the world, and ones that are appropriate for different kinds of systems.”


How should we think about complexity? Should we use biological thinking or physics thinking? The answer, of course, is that it depends. It's important to have both tools at your disposal.

These are the questions that Samuel Arbesman explores in his fascinating book Overcomplicated: Technology at the Limits of Comprehension.

[B]iological systems are generally more complicated than those in physics. In physics, the components are often identical—think of a system of nothing but gas particles, for example, or a single monolithic material, like a diamond. Beyond that, the types of interactions can often be uniform throughout an entire system, such as satellites orbiting a planet.

Biology is different and there is something meaningful to be learned from a biological approach to thinking.

In biology, there are a huge number of types of components, such as the diversity of proteins in a cell or the distinct types of tissues within a single creature; when studying, say, the mating behavior of blue whales, marine biologists may have to consider everything from their DNA to the temperature of the oceans. Not only is each component in a biological system distinctive, but it is also a lot harder to disentangle from the whole. For example, you can look at the nucleus of an amoeba and try to understand it on its own, but you generally need the rest of the organism to have a sense of how the nucleus fits into the operation of the amoeba, how it provides the core genetic information involved in the many functions of the entire cell.

Arbesman makes an interesting point here about how we should look at technology. As the interconnections and complexity of technology increase, it increasingly resembles a biological system rather than a physical one. There is another difference.

[B]iological systems are distinct from many physical systems in that they have a history. Living things evolve over time. While the objects of physics clearly do not emerge from thin air—astrophysicists even talk about the evolution of stars—biological systems are especially subject to evolutionary pressures; in fact, that is one of their defining features. The complicated structures of biology have the forms they do because of these complex historical paths, ones that have been affected by numerous factors over huge amounts of time. And often, because of the complex forms of living things, where any small change can create unexpected effects, the changes that have happened over time have been through tinkering: modifying a system in small ways to adapt to a new environment.

Biological systems are generally hacks that evolved to be good enough for a certain environment. They are far from neat, top-down designed systems. To accommodate an ever-changing environment, they are rarely optimal at the micro level, preferring to optimize for survival over any one particular attribute. And it's not the survival of the individual that's optimized, it's the survival of the species.

Technologies can appear robust until they are confronted with some minor disturbance, causing a catastrophe. The same thing can happen to living things. For example, humans can adapt incredibly well to a large array of environments, but a tiny change in a person’s genome can cause dwarfism, and two copies of that mutation invariably cause death. We are of a different scale and material from a particle accelerator or a computer network, and yet these systems have profound similarities in their complexity and fragility.

Biological thinking, with a focus on details and diversity, is a necessary tool to deal with complexity.

The way biologists, particularly field biologists, study the massively complex diversity of organisms, taking into account their evolutionary trajectories, is therefore particularly appropriate for understanding our technologies. Field biologists often act as naturalists— collecting, recording, and cataloging what they find around them—but even more than that, when confronted with an enormously complex ecosystem, they don’t immediately try to understand it all in its totality. Instead, they recognize that they can study only a tiny part of such a system at a time, even if imperfectly. They’ll look at the interactions of a handful of species, for example, rather than examine the complete web of species within a single region. Field biologists are supremely aware of the assumptions they are making, and know they are looking at only a sliver of the complexity around them at any one moment.


When we’re dealing with different interacting levels of a system, seemingly minor details can rise to the top and become important to the system as a whole. We need “Field biologists” to catalog and study details and portions of our complex systems, including their failures and bugs. This kind of biological thinking not only leads to new insights, but might also be the primary way forward in a world of increasingly interconnected and incomprehensible technologies.

Waiting and observing isn't enough.

Biologists will often be proactive, and inject the unexpected into a system to see how it reacts. For example, when biologists are trying to grow a specific type of bacteria, such as a variant that might produce a particular chemical, they will resort to a process known as mutagenesis. Mutagenesis is what it sounds like: actively trying to generate mutations, for example by irradiating the organisms or exposing them to toxic chemicals.

When systems are too complex for human understanding, often we need to insert randomness to discover the tolerances and limits of the system. One plus one doesn't always equal two when you're dealing with non-linear systems. For biologists, tinkering is the way to go.

As Stewart Brand noted about legacy systems, “Teasing a new function out of a legacy system is not done by command but by conducting a series of cautious experiments that with luck might converge toward the desired outcome.”
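The mutagenesis idea translates directly into how we might probe engineered systems. Here is a minimal sketch in Python; `fragile_system` and its hidden failure threshold are invented for illustration, standing in for any black box whose tolerances we don't know in advance.

```python
import random

def fragile_system(load: float) -> float:
    """A toy 'system': behaves linearly under normal load but breaks
    down sharply past a threshold the operator does not know about."""
    if load > 0.8:
        raise RuntimeError("cascade failure")
    return load * 100

def probe_tolerance(system, trials: int = 10_000, seed: int = 42) -> float:
    """Mutagenesis-style probing: throw random inputs at the system
    and record the lowest load that triggered a failure."""
    rng = random.Random(seed)
    worst = float("inf")
    for _ in range(trials):
        load = rng.random()  # a random perturbation in [0, 1)
        try:
            system(load)
        except RuntimeError:
            worst = min(worst, load)
    return worst

# The probe recovers the hidden ~0.8 threshold without any model
# of the system's internals.
print(round(probe_tolerance(fragile_system), 2))
```

Like irradiating bacteria, random probing doesn't explain why the system fails at that load, but it maps the boundary of failure far faster than reasoning from first principles would.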

When Physics and Biology Meet

This doesn't mean we should abandon the physics approach of searching for underlying regularities in complexity. The two approaches complement one another rather than compete.

Arbesman recommends asking the following questions:

When attempting to understand a complex system, we must determine the proper resolution, or level of detail, at which to look at it. How fine-grained a level of detail are we focusing on? Do we focus on the individual enzyme molecules in a cell of a large organism, or do we focus on the organs and blood vessels? Do we focus on the binary signals winding their way through circuitry, or do we examine the overall shape and function of a computer program? At a larger scale, do we look at the general properties of a computer network, and ignore the individual machines and decisions that make up this structure?

When we need to abstract away a lot of the details, we lean more on physics thinking. Think about it from an organizational perspective. The new employee at the lowest level is focused on the specific details of their job, whereas the executive is focused on systems, strategy, culture, and flow: how things interact and reinforce one another. The details of the new employee's job are lost on the executive.

We can't use one system, whether biological or physics, exclusively. That's a sure way to fragile thinking. Rather, we need to combine them.

In his novel Cryptonomicon, Neal Stephenson makes exactly this point when discussing the structure of the pantheon of Greek gods:

And yet there is something about the motley asymmetry of this pantheon that makes it more credible. Like the Periodic Table of the Elements or the family tree of the elementary particles, or just about any anatomical structure that you might pull up out of a cadaver, it has enough of a pattern to give our minds something to work on and yet an irregularity that indicates some kind of organic provenance—you have a sun god and a moon goddess, for example, which is all clean and symmetrical, and yet over here is Hera, who has no role whatsoever except to be a literal bitch goddess, and then there is Dionysus who isn’t even fully a god—he’s half human—but gets to be in the Pantheon anyway and sit on Olympus with the Gods, as if you went to the Supreme Court and found Bozo the Clown planted among the justices.

There is a balance and we need to find it.

Making Decisions in a Complex Adaptive System


In Think Twice: Harnessing the Power of Counterintuition, Michael Mauboussin adds usefully to the work we've already done on complex adaptive systems:

You can think of a complex adaptive system in three parts (see the image at the top of this post). First, there is a group of heterogeneous agents. These agents can be neurons in your brain, bees in a hive, investors in a market, or people in a city. Heterogeneity means each agent has different and evolving decision rules that both reflect the environment and attempt to anticipate change in it. Second, these agents interact with one another, and their interactions create structure— scientists often call this emergence. Finally, the structure that emerges behaves like a higher-level system and has properties and characteristics that are distinct from those of the underlying agents themselves. … The whole is greater than the sum of the parts.
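Mauboussin's three parts (heterogeneous agents, local interaction, emergent structure) can be made concrete with a toy model. The sketch below is a minimal one-dimensional variant of Thomas Schelling's famous segregation model; it is my illustration, not Mauboussin's, and every parameter is arbitrary. No agent wants clustering; each merely prefers not to be badly outnumbered by the other type in its own small neighborhood. Yet clustering emerges at the level of the whole.

```python
import random

def schelling_ring(n=200, window=2, threshold=0.5, steps=50_000, seed=0):
    """1-D Schelling model on a ring: two agent types; an 'unhappy'
    agent (fewer than `threshold` same-type neighbours within
    `window`) swaps places with a randomly chosen other agent.
    Purely local rules, no global plan."""
    rng = random.Random(seed)
    agents = [rng.choice((0, 1)) for _ in range(n)]

    def same_frac(i):
        nbrs = [agents[(i + d) % n]
                for d in range(-window, window + 1) if d != 0]
        return sum(a == agents[i] for a in nbrs) / len(nbrs)

    def avg_same():  # system-level measure of clustering
        return sum(same_frac(i) for i in range(n)) / n

    before = avg_same()
    for _ in range(steps):
        i = rng.randrange(n)
        if same_frac(i) < threshold:          # unhappy: relocate
            j = rng.randrange(n)
            agents[i], agents[j] = agents[j], agents[i]
    return before, avg_same()

before, after = schelling_ring()
print(f"avg same-type neighbours: {before:.2f} -> {after:.2f}")
```

The whole being greater than the sum of the parts shows up in the numbers: the average same-type neighborhood typically rises well above what any individual's rule demands, a structure none of the agents chose.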


The inability to understand the system based on its components prompted Nobel Prize-winning physicist Philip Anderson to write the essay “More Is Different.” Anderson wrote, “The behavior of large and complex aggregates of elementary particles, it turns out, is not to be understood in terms of the simple extrapolation of the properties of a few particles. Instead, at each level of complexity entirely new properties appear.”

Mauboussin comments that we are fooled by randomness:

The problem goes beyond the inscrutable nature of complex adaptive systems. Humans have a deep desire to understand cause and effect, as such links probably conferred humans with evolutionary advantage. In complex adaptive systems, there is no simple method for understanding the whole by studying the parts, so searching for simple agent-level causes of system-level effects is useless. Yet our minds are not beyond making up a cause to relieve the itch of an unexplained effect. When a mind seeking links between cause and effect meets a system that conceals them, accidents will happen.

Misplaced Focus on the Individual

One mistake we make is extrapolating the behavior of an individual component, say a single person, to explain the entire system. Yet when we have to solve a problem in a complex system, we often address an individual component. In doing so, we ignore Garrett Hardin's first law of ecology, that you can never do merely one thing, and become fragilistas.

That unintended system-level consequences arise from even the best-intentioned individual-level actions has long been recognized. But the decision-making challenge remains for a couple of reasons. First, our modern world has more interconnected systems than before. So we encounter these systems with greater frequency and, most likely, with greater consequence. Second, we still attempt to cure problems in complex systems with a naïve understanding of cause and effect.


When I speak with executives from around the world going through a period of poor performance, it doesn't take long for them to mention they want to hire a star from another company. “If only we had Kate,” they'll say, “we could smash the competition and regain our footing.”

At first, poaching stars from competitors or even teams within the same organization seems like a winning strategy. But once the star comes over the results often fail to materialize.

What we fail to grasp is that a star's performance is part of an ecosystem, and isolating that performance by removing the star from the ecosystem rarely works without properly considering the ecosystem as a whole. (Reversion to the mean likely accounts for some of the star's fading as well.)

Three Harvard professors concluded, “When a company hires a star, the star’s performance plunges, there is a sharp decline in the functioning of the group or team the person works with, and the company’s market value falls.”

If it sounds like a lot of work to think this through at many levels, it should be. Why should it be easy?

Another example of this at an organizational level has to do with innovation. Most people want to solve the innovation problem. Ignoring for a second that that is the improper framing, how do most organizations go about this? They copy what the most successful organizations do. I can't count the number of times the solution to an organization's “innovation problem” is to be more like Google. Well-intentioned executives blindly copy approaches by others such as 20% innovation time, without giving an ounce of thought to the role the ecosystem plays.

Isolating and focusing on an individual part of a complex adaptive system without an appreciation and understanding of that system itself is sure to lead to disaster.

What Should We Do?

This raises the question: what should we do when we find ourselves dealing with a complex adaptive system? Mauboussin provides three pieces of advice:

1. Consider the system at the correct level.

Remember the phrase “more is different.” The most prevalent trap is extrapolating the behavior of individual agents to gain a sense of system behavior. If you want to understand the stock market, study it at the market level. Consider what you see and read from individuals as entertainment, not as education. Similarly, be aware that the function of an individual agent outside the system may be very different from that function within the system. For instance, mammalian cells have the same metabolic rates in vitro, whether they are from shrews or elephants. But the metabolic rate of cells in small mammals is much higher than the rate of those in large mammals. The same structural cells work at different rates, depending on the animals they find themselves in.

2. Watch for tightly coupled systems.

A system is tightly coupled when there is no slack between items, allowing a process to go from one stage to the next without any opportunity to intervene. Aircraft, space missions, and nuclear power plants are classic examples of complex, tightly coupled systems. Engineers try to build in buffers or redundancies to avoid failure, but frequently don’t anticipate all possible contingencies. Most complex adaptive systems are loosely coupled, where removing or incapacitating one or a few agents has little impact on the system’s performance. For example, if you randomly remove some investors, the stock market will continue to function fine. But when the agents lose diversity and behave in a coordinated fashion, a complex adaptive system can behave in a tightly coupled fashion. Booms and crashes in financial markets are an illustration.

3. Use simulations to create virtual worlds.

Dealing with complex systems is inherently tricky because the feedback is equivocal, information is limited, and there is no clear link between cause and effect. Simulation is a tool that can help our learning process. Simulations are low cost, provide feedback, and have proved their value in other domains like military planning and pilot training.

Still Curious? Think Twice: Harnessing the Power of Counterintuition.

An Introduction to Complex Adaptive Systems

Let’s explore the concept of complex adaptive systems and see how this model might apply in various walks of life.

To illustrate what a complex adaptive system is, and just as importantly, what it is not, let’s take the example of a “driving system” – or as we usually refer to it, a car. (I have cribbed some parts of this example from Complex Adaptive Systems, the excellent book by John Miller and Scott Page.)

The interior of a car, at first glance, is complicated. There are seats, belts, buttons, levers, knobs, a wheel, etc. Removing the passenger seats would make this system less complicated. However, the system would remain essentially functional. Thus, we would not call the car interior complex.

The mechanical workings of a car, however, are complex. The system has interdependent components that must all simultaneously serve their function in order for the system to work. The higher order function, driving, derives from the interaction of the parts in a very specific way.

Let’s say instead of the passenger seats, we remove the timing belt. Unlike the seats, the timing belt is a necessary node for the system to function properly. Our “driving system” is now useless. The system has complexities, but they are not what we would call adaptive.

To understand complex adaptive systems, let’s put hundreds of “driving systems” on the same road, each with the goal of reaching its destination within an expected amount of time. We call this traffic. Traffic is a complex system whose participants adapt to each other’s actions. Let’s see it in action.


On a popular route into a major city, we observe a car in flames on the side of the road, with firefighters working to put out the fire. Naturally, cars will slow to observe the wreck. As the first cars slow, the cars behind them slow in turn. The cars behind them must slow as well. With everyone becoming increasingly agitated, we’ve got a traffic jam. The jam emerges from the interaction of the parts of the system.

With the traffic jam formed, potential entrants to the jam—let’s call them Group #2—get on their smartphones and learn that there is an accident ahead which may take hours to clear. Upon learning of the accident, they predictably begin to adapt by finding another route. Suppose there is only one alternate route into the city. What happens now? The alternate route forms a second jam! (I’m stressed out just writing about this.)

Now let’s introduce a third group of participants, which must choose between jams. Predicting the actions of this third group is very hard to do. Perhaps so many people in group #2 have altered their route that the second jam is worse than the first, causing the majority of the third group to choose jam #1. Perhaps, anticipating that others will follow that same line of reasoning, they instead choose jam #2. Perhaps they stay home!

What we see here are emergent properties of the complex adaptive system called traffic. By the time we hit this third layer of participants, predicting the behavior of the system has become extremely difficult, if not impossible.

The key element to complex adaptive systems is the social element. The belts and pulleys inside a car do not communicate with one another and adapt their behavior to the behavior of the other parts in an infinite loop. Drivers, on the other hand, do exactly that.
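This adaptive feedback among drivers is what the Nagel-Schreckenberg cellular automaton, a standard toy model of highway traffic, captures. The sketch below is my addition, with all parameters chosen for illustration. Each driver follows four local rules: accelerate, brake to avoid the car ahead, occasionally dawdle at random, and move. From these alone, jams emerge with no central cause.

```python
import random

def nagel_schreckenberg(road_len=100, n_cars=35, v_max=5,
                        p_slow=0.3, steps=100, seed=1):
    """Nagel-Schreckenberg traffic model on a circular road.
    Returns average throughput (cars passing the origin per step)."""
    rng = random.Random(seed)
    pos = sorted(rng.sample(range(road_len), n_cars))
    vel = [0] * n_cars
    flow = 0
    for _ in range(steps):
        for i in range(n_cars):
            # empty cells to the car ahead (ring wraps around)
            gap = (pos[(i + 1) % n_cars] - pos[i] - 1) % road_len
            vel[i] = min(vel[i] + 1, v_max, gap)       # accelerate, brake
            if vel[i] > 0 and rng.random() < p_slow:   # random dawdling
                vel[i] -= 1
        new_pos = [(p + v) % road_len for p, v in zip(pos, vel)]
        flow += sum(1 for p, v in zip(pos, vel) if p + v >= road_len)
        order = sorted(range(n_cars), key=lambda k: new_pos[k])
        pos = [new_pos[k] for k in order]               # keep cars ordered
        vel = [vel[k] for k in order]
    return flow / steps

print(f"throughput: {nagel_schreckenberg():.2f} cars/step")
```

At this density the random dawdling of single drivers is enough to nucleate stop-and-go waves that propagate backwards through the whole system, which is exactly the kind of emergent, system-level behavior no individual driver intends.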


Where else do we see this phenomenon? The stock market is a great example. Instead of describing it myself, let’s use the words of John Maynard Keynes, who brilliantly related the nature of the market’s complex adaptive parts to that of a beauty contest in chapter 12 of The General Theory.

Or, to change the metaphor slightly, professional investment may be likened to those newspaper competitions in which the competitors have to pick out the six prettiest faces from a hundred photographs, the prize being awarded to the competitor whose choice most nearly corresponds to the average preferences of the competitors as a whole; so that each competitor has to pick, not those faces which he himself finds prettiest, but those which he thinks likeliest to catch the fancy of the other competitors, all of whom are looking at the problem from the same point of view. It is not a case of choosing those which, to the best of one’s judgment, are really the prettiest, nor even those which average opinion genuinely thinks the prettiest. We have reached the third degree where we devote our intelligences to anticipating what average opinion expects the average opinion to be. And there are some, I believe, who practice the fourth, fifth and higher degrees.

Like traffic, the complex, adaptive nature of the market is very clear. The participants in the market are interacting with one another constantly and adapting their behavior to what they know about others’ behavior. Stock prices jiggle all day long in this fashion. Forecasting outcomes in this system is extremely challenging.
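Keynes's contest has a standard game-theoretic formalisation, the p-beauty contest: everyone picks a number, and the winner is whoever comes closest to p times the average pick. The sketch below is my illustration of "level-k" reasoning in that game: a level-0 player guesses naively, and each higher level best-responds to the level below.

```python
def level_k_guesses(p=2/3, baseline=50.0, levels=5):
    """Level-k reasoning in the p-beauty-contest game: a level-0
    player guesses the midpoint of [0, 100]; each higher level
    assumes everyone else reasons one level down and so guesses
    p times that guess."""
    guesses = [baseline]
    for _ in range(levels):
        guesses.append(p * guesses[-1])
    return guesses

for k, g in enumerate(level_k_guesses()):
    print(f"level {k}: guess {g:.1f}")
```

Each extra level of "anticipating what average opinion expects the average opinion to be" pushes the guess lower; perfectly rational players iterating forever would converge on zero, yet real contests settle somewhere in between because participants reason to different depths.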

To illustrate, suppose that a very skilled, influential, and perhaps lucky, market forecaster successfully calls a market crash. (There were a few in 2008, for example.) Five years later, he publicly calls for a second crash. Given his prescience in the prior crash, market participants might decide to sell their stocks rapidly, causing a crash for no other reason than the fact that it was predicted! Like traffic reports on the radio, the very act of observing and predicting has a crucial impact on the behavior of the system.

Thus, although we know that over the long term, stock prices roughly track the value of their underlying businesses, in the short run almost anything can occur due to the highly adaptive nature of market participants.


This understanding also helps us identify things that are not complex adaptive systems. Take the local weather. If the Doppler 3000 forecast on the local news predicts rain on Thursday, is the rain any less likely to occur? No. The act of predicting has not influenced the outcome. Although near-term weather is extremely complex, with many interacting parts leading to higher-order outcomes, it does have an element of predictability.

On the other hand, we might call the Earth’s climate partially adaptive, due to the influence of human beings. (Have the cries of global warming and predictions of its worsening not begun affecting the very behavior causing the warming?)

Thus, behavioral dynamics indicate a key difference between weather and climate, and between systems that are simply complex and those that are also adaptive. Failure to use higher-order thinking when considering outcomes in complex adaptive systems is a common cause of overconfidence in prediction making.


Complex Adaptive Systems are part of the Farnam Street latticework of Mental Models.