
The Humpty-Dumpty Problem

In 1637, René Descartes changed the course of science forever with the publication of Discourse on the Method of Rightly Conducting One’s Reason and of Seeking Truth in the Sciences. That work laid the foundation of modern science by putting forth two enduring ideas: reductionism as a way of knowing (“divide each of the difficulties under examination into as many parts as possible, and as might be necessary for its adequate solution”) and the resolve to “conduct my thoughts in such order that, beginning with those objects that are simplest and most readily understood, I might ascend little by little, and, as it were, step by step, to the knowledge of the more complex.”


Robert Dorit, in the article excerpted below, argues that the days of blind faith in the power of reductionist deconstruction are over and that a new approach is taking shape. This new, interactionist perspective on living systems emphasizes the interplay between parts (aided by advances in technology that make modelling that interplay possible). Dorit writes: “Whole new subfields in the life sciences, as well as productive interactions among existing disciplines, have emerged. Systems biologists, complexity theorists and newly minted biologists now attend as carefully to the ways in which parts come together as they do to the parts themselves.” He goes on to say that we are beginning to understand that modularity and redundancy are inherent features of every level of biological organization (something Nassim Taleb has long argued). These features, he argues, make living systems simultaneously resilient and capable of evolving.


The core of Dorit’s argument is aimed not at reductionism itself but at the notion that it represents the only viable strategy for understanding the living world.


If anything, living systems consistently violate all of the criteria for reducibility. The number of elements that compose any living system—an ecosystem, an organism, an organ or a cell—is enormous. In living systems, the specific identities of these component parts matter. Unlike chemistry, for instance, in which an electron in a lithium atom is identical to an electron in a gold atom, all proteins in a cell are not equivalent or interchangeable. Each protein is the result of its own evolutionary trajectory. We understand and exploit their similarities, but their differences matter to us just as much. Perhaps most importantly, the relations between the components of living systems are complex, context-dependent and weak. In mechanical machines, the conversation taking place between the parts involves clear and unambiguous interactions. These interactions result in simple causes and effects: They are instructions barked down a simple chain of command.

In living systems, by contrast, virtually every interesting bit of biological machinery is embedded in a very large web of weak interactions. And this network of interactions gives rise to a discussion among the parts that is less like a chain of command and more like a complex court intrigue: ambiguous whispers against a noisy and distracting background. As a result, the same interaction between a regulatory protein and a segment of DNA can lead to different (and sometimes opposite) outcomes depending on which other proteins are present in the vicinity. The firing of a neuron can act to amplify the signal coming from other neurons or act instead to suppress it, based solely on the network in which the neuron is embedded. The disappearance of a single species can stabilize an ecosystem or send it spinning into chaos, depending on (you guessed it) the network of interactions that surrounds that species. This extensive and subtle connectivity, which gives meaning to the behavior of the underlying components, turns out to be a consistent feature of living systems.

The recurrent evolution of these networks of weak interactions suggests that they may allow biological systems to incorporate information from the environment while also maintaining stability in the face of constant perturbation. This general feature of living systems also has clear methodological consequences for modern biology. Once this gossamer web is taken apart in search of the smallest components we can study, the process of putting it back together bears no resemblance at all to reconstructing a clock. Thus we find ourselves, early in the 21st century, with extraordinarily detailed descriptions of the components of many biological systems. But reconstructing those systems is proving to be a monumental and consistently surprising enterprise.

The promise of reductionism rested on the belief that an intelligent dissection of complex phenomena would not only yield progress, but would eventually reduce any problem to its component parts. Complexity, we naively hoped, was simply a by-product of incomplete understanding, an illusion that would fall away once the parts were fully understood. But this is the dirty little secret of contemporary biology: Despite our reductionist successes, the central conceptual problems of biology have not yielded to study. We have revealed the elegant workings of neurons in exquisite detail, but the material understanding of consciousness remains elusive. We have sequenced human genomes in their entirety, but the process that leads from a genome to an organism is still poorly understood. We have captured the intricacies of photosynthesis, and yet the consequences of rising carbon-dioxide levels for the future of the rain forests remain frustratingly hazy. We are, in short, the king’s horses and the king’s men: We stare at the pieces, knowing what Humpty should look like, but unable to put him together again.
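Dorit’s neuron example above can be made concrete with a toy sketch (our illustration, not his): a node that sends an identical signal into a network can amplify or suppress the output depending solely on which other connections are active. The weights and the inhibitory-interneuron wiring below are invented purely for illustration.

```python
# Toy model (illustrative only): the same firing from node A has opposite
# net effects on the output, depending on the surrounding network.

def net_effect(signal_a: float, interneuron_active: bool) -> float:
    """Net contribution of node A to the output node.

    A excites the output directly (weight +1.0). A also excites an
    inhibitory interneuron B, which in turn suppresses the output
    (combined weight -2.0 along that path). Whether B participates is
    a property of the network context, not of A's signal.
    """
    direct = 1.0 * signal_a                                      # A -> output
    indirect = -2.0 * signal_a if interneuron_active else 0.0    # A -> B -| output
    return direct + indirect

print(net_effect(1.0, interneuron_active=False))  # +1.0: A amplifies the output
print(net_effect(1.0, interneuron_active=True))   # -1.0: the same A suppresses it
```

Nothing about the part itself changes between the two calls; only the web of interactions around it does, and it is that web which gives A’s signal its meaning.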
