
The Butterfly Effect: Everything You Need to Know About This Powerful Mental Model

“You could not remove a single grain of sand from its place without thereby … changing something throughout all parts of the immeasurable whole.”

— Fichte, The Vocation of Man (1800)
***

The Basics

In one of Stephen King’s greatest works, 11/22/63, a young man named Jake discovers a portal in a diner’s pantry which leads back to 1958. After a few visits and some experiments, Jake deduces that altering history is possible. However long he stays in the past, only two minutes go by in the present. He decides to live in the past until 1963 so he can prevent the assassination of President John F. Kennedy, believing that this change will greatly benefit humanity. After years of stalking Lee Harvey Oswald, Jake manages to prevent him from shooting Kennedy.

Upon returning to the present, he expects to find the world improved as a result. Instead, the opposite has happened. Earthquakes occur everywhere, his old home is in ruins, and nuclear war has destroyed much of the world. (As King wrote in an article for Marvel Spotlight, “Not good to fool with Father Time.”) Distraught, Jake returns to 1958 once again and resets history.

In addition to being a masterful work of speculative fiction, 11/22/63 is a classic example of how everything in the world is connected.

The butterfly effect is the idea that small things can have non-linear impacts on a complex system. The concept is usually illustrated with the image of a butterfly flapping its wings and, eventually, causing a typhoon.

Of course, a single act like the butterfly flapping its wings cannot cause a typhoon. Small events can, however, serve as catalysts that act on starting conditions.

And as John Gribbin writes in his cult-classic work Deep Simplicity, “some systems … are very sensitive to their starting conditions, so that a tiny difference in the initial ‘push’ you give them causes a big difference in where they end up, and there is feedback, so that what a system does affects its own behavior.”
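Gribbin's two ingredients, sensitivity to starting conditions and feedback, show up in even a one-line system. The sketch below is a minimal illustration using the logistic map, a standard textbook example of chaos (it is not drawn from Gribbin's book): each output is fed back in as the next input, and two starting values that differ by one part in a million soon bear no resemblance to each other.

r = 4.0  # fully chaotic regime of the logistic map

def iterate(x, steps):
    # Feed each output back in as the next input (the "feedback" Gribbin describes).
    history = [x]
    for _ in range(steps):
        x = r * x * (1 - x)
        history.append(x)
    return history

a = iterate(0.200000, 50)
b = iterate(0.200001, 50)  # a one-in-a-million change to the starting condition

for step in (0, 10, 20, 30, 40, 50):
    print(f"step {step:2d}: {a[step]:.6f} vs {b[step]:.6f} (gap {abs(a[step] - b[step]):.6f})")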

In the foreword to The Butterfly Effect in Competitive Markets by Dr. Rajagopal, Tom Breuer writes:

Simple systems, with few variables, can nonetheless show unpredictable and sometimes chaotic behavior…[Albert] Libchaber conducted a series of seminal experiments. He created a small system in his lab to study convection (chaotic system behavior) in a cubic millimeter of helium. By gradually warming this up from the bottom, he could create a state of controlled turbulence. Even this tightly controlled environment displayed chaotic behavior: complex unpredictable disorder that is paradoxically governed by “orderly” rules.

… [A] seemingly stable system (as in Libchaber’s 1 ccm cell of helium) can be exposed to very small influences (like heating it up a mere 0.001 degree), and can transform from orderly convection into wild chaos. Although [such systems are] governed by deterministic phenomena, we are nonetheless unable to predict how [they] will behave over time.

What the Butterfly Effect Is Not

The point of the butterfly effect is not to get leverage. As General Stanley McChrystal writes in Team of Teams:

In popular culture, the term “butterfly effect” is almost always misused. It has become synonymous with “leverage”—the idea of a small thing that has a big impact, with the implication that, like a lever, it can be manipulated to a desired end. This misses the point of Lorenz’s insight. The reality is that small things in a complex system may have no effect or a massive one, and it is virtually impossible to know which will turn out to be the case.

Benjamin Franklin offered a poetic perspective in his variation of a proverb that’s been around since the 14th century in English and the 13th century in German, long before the identification of the butterfly effect:

For want of a nail the shoe was lost,
For want of a shoe the horse was lost,
For want of a horse the rider was lost,
For want of a rider the battle was lost,
For want of a battle the kingdom was lost,
And all for the want of a horseshoe nail.

The lack of one horseshoe nail could be inconsequential, or it could indirectly cause the loss of a war. There is no way to predict which outcome will occur. (If you want an excellent kids' book to start teaching this to your children, check out If You Give a Mouse a Cookie.)

In this post, we will seek to unravel the butterfly effect from its many incorrect connotations, and build an understanding of how it affects our individual lives and the world in general.

Edward Lorenz and the Discovery of the Butterfly Effect

“It used to be thought that the events that changed the world were things like big bombs, maniac politicians, huge earthquakes, or vast population movements, but it has now been realized that this is a very old-fashioned view held by people totally out of touch with modern thought. The things that change the world, according to Chaos theory, are the tiny things. A butterfly flaps its wings in the Amazonian jungle, and subsequently a storm ravages half of Europe.”

— from Good Omens, by Terry Pratchett and Neil Gaiman
***

Although the concept of the butterfly effect has long been debated, the identification of it as a distinct effect is credited to Edward Lorenz (1917–2008). Lorenz was a meteorologist and mathematician who successfully combined the two disciplines to create chaos theory. During the 1950s, Lorenz searched for a means of predicting the weather, as he found linear models to be ineffective.

While rerunning a weather simulation, he entered one initial condition as 0.506 instead of the full 0.506127. The result was startling: the rerun soon bore no resemblance to the original forecast. A tiny change in the initial conditions had enormous long-term implications. By 1963, he had developed his ideas enough to publish an award-winning paper entitled Deterministic Nonperiodic Flow. In it, Lorenz writes:

Subject to the conditions of uniqueness, continuity, and boundedness … a central trajectory, which in a certain sense is free of transient properties, is unstable if it is nonperiodic. A noncentral trajectory … is not uniformly stable if it is nonperiodic, and if it is stable at all, its very stability is one of its transient properties, which tends to die out as time progresses. In view of the impossibility of measuring initial conditions precisely, and thereby distinguishing between a central trajectory and a nearby noncentral trajectory, all nonperiodic trajectories are effectively unstable from the point of view of practical prediction.

In simpler language, he theorized that weather prediction models are inaccurate because knowing the precise starting conditions is impossible, and a tiny change can throw off the results. In order to make the concept understandable to non-scientific audiences, Lorenz began to use the butterfly analogy.

A small error in the initial data magnifies over time.

In speeches and interviews, he explained that a butterfly has the potential to create tiny changes which, while not creating a typhoon, could alter its trajectory. A flapping wing represents the minuscule changes in atmospheric pressure, and these changes compound as a model progresses. Given that small, nearly imperceptible changes can have massive implications in complex systems, Lorenz concluded that precise long-range weather prediction was impossible. Elsewhere in the paper, he writes:

If, then, there is any error whatever in observing the present state—and in any real system such errors seem inevitable—an acceptable prediction of an instantaneous state in the distant future may well be impossible.

… In view of the inevitable inaccuracy and incompleteness of weather observations, precise very-long-range forecasting would seem to be nonexistent.

Lorenz always stressed that there is no way of knowing what exactly tipped a system. The butterfly is a symbolic representation of an unknowable quantity.

Furthermore, he aimed to contest the use of predictive models that assume a linear, deterministic progression and ignore the potential for derailment. Even the smallest error in the initial setup renders such a model useless as inaccuracies compound over time. This exponential growth of errors in a fully deterministic system is known as deterministic chaos, and it occurs in most systems, regardless of their simplicity or complexity.
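Lorenz's 1963 paper studied a simplified convection model of three differential equations, now known as the Lorenz system. The sketch below is a rough numerical illustration (a crude Euler integration using the commonly cited parameters, not a reconstruction of Lorenz's own computation): two runs that start with a rounding-sized difference stay close for a while, then separate until they are unrelated.

def lorenz_step(x, y, z, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One small Euler step of the three Lorenz equations.
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

a = (1.0, 1.0, 1.0)
b = (1.000001, 1.0, 1.0)  # the same state with a rounding-sized error in x

for step in range(1, 40001):
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    if step % 5000 == 0:
        gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"t = {step * 0.001:4.1f}  separation = {gap:.6f}")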

The butterfly effect is somewhat humbling—a model that exposes the flaws in other models. It shows science to be less accurate than we assume, as we have no means of making accurate predictions due to the exponential growth of errors.

Prior to the work of Lorenz, people assumed that an approximate idea of initial conditions would lead to an approximate prediction of the outcome. In Chaos: Making a New Science, James Gleick writes:

The models would churn through complicated, somewhat arbitrary webs of equations, meant to turn measurements of initial conditions … into a simulation of future trends. The programmers hoped the results were not too grossly distorted by the many unavoidable simplifying assumptions. If a model did anything too bizarre … the programmers would revise the equations to bring the output back in line with expectation… Models proved dismally blind to what the future would bring, but many people who should have known better acted as though they believed the results.

One theoretician declared, “The basic idea of Western science is that you don’t have to take into account the falling of a leaf on some planet in another galaxy when you’re trying to account for the motion of a billiard ball on a pool table on earth.”

An illustration of two weather conditions with very slightly different initial conditions. The trajectories are similar at first, before deviating further and further.

Lorenz’s findings were revolutionary because they proved this assumption to be entirely false. He found that without a perfect idea of initial conditions, predictions are useless—a shocking revelation at the time.

During the early days of computers, many people believed they would enable us to understand complex systems and make accurate predictions. People had been at the mercy of the weather for millennia, and now they wanted to take control. With one innocent mistake, Lorenz shook the forecasting world, sending ripples which (appropriately) spread far beyond meteorology.

Ray Bradbury, the Butterfly Effect, and the Arrow of Time

Ray Bradbury’s classic science fiction story A Sound of Thunder predates the identification of chaos theory and the butterfly effect. Set in 2055, it tells of a man named Eckels who travels back 65 million years to shoot a dinosaur. Warned not to stray from the path laid out by the tour guide, Eckels (along with his guide and the guide’s assistant) heads off to kill a Tyrannosaurus rex that was about to die anyway, crushed by a falling tree. Eckels panics at the sight of the creature and steps off the path, leaving his guide to kill the T. rex. The guide is enraged and orders Eckels to remove the bullets before the trio returns to 2055. Upon arrival, they are confused to find that the world has changed: language is altered, and an evil dictator is now in charge. A bewildered Eckels notices a crushed butterfly stuck to his boot and realizes that in stepping off the path, he killed the insect and changed the future. Bradbury writes:

Eckels felt himself fall into a chair. He fumbled crazily at the thick slime on his boots. He held up a clod of dirt, trembling, “No, it cannot be. Not a little thing like that. No!”

Embedded in the mud, glistening green and gold and black, was a butterfly, very beautiful and very dead.

“Not a little thing like that! Not a butterfly!” cried Eckels.

It fell to the floor, an exquisite thing, a small thing that could upset balances and knock down a line of small dominoes and then big dominoes and then gigantic dominoes, all down the years across Time. Eckels' mind whirled. It couldn't change things. Killing one butterfly couldn't be that important! Could it?

Bradbury envisioned the passage of time as fragile and liable to be disturbed by minor changes. In the decades since the publication of A Sound of Thunder, physicists have examined its accuracy. Obviously, we cannot time travel, so there is no way of knowing how plausible the story is, beyond predictive models. Bradbury’s work raises the questions of what time is and whether it is deterministic.

Physicists refer to the Arrow of Time—the non-reversible progression of entropy (disorder). As time moves forward, matter becomes more and more chaotic and does not spontaneously return to its original state. If you break an egg, it remains broken and cannot spontaneously re-form, for example. The Arrow of Time gives us a sense of past, present, and future. Arthur Eddington (the astronomer and physicist who coined the term) explained:

Let us draw an arrow arbitrarily. If as we follow the arrow we find more and more of the random element in the state of the world, then the arrow is pointing towards the future; if the random element decreases the arrow points towards the past. That is the only distinction known to physics. This follows at once if our fundamental contention is admitted that the introduction of randomness is the only thing which cannot be undone.

In short, the passage of time as we perceive it does exist, conditional on the existence of entropy. As long as entropy is non-reversible, time can be said to exist. The closest thing we have to a true measurement of time is a measurement of entropy. If the progression of time is nothing but a journey towards chaos, it makes sense for small changes to affect the future by amplifying chaos.

We do not yet know if entropy creates time or is a byproduct of it. Consequently, we cannot know if changing the past would change the future. Would stepping on a butterfly shift the path of entropy? Did Eckels step off the path of his own free will, or was that event predetermined? Was the dictatorial future he returned to always meant to be?

These interconnected concepts — the butterfly effect, chaos theory, determinism, free will, time travel — have captured many imaginations since their discovery. Films ranging from It’s a Wonderful Life to Donnie Darko and the aptly named The Butterfly Effect have explored the complexities of cause and effect. Once again, it is important to note that works of fiction tend to view the symbolic butterfly as the cause of an effect. According to Lorenz’s original writing, though, the point is that small details can tip the balance without being identifiable.

The Butterfly Effect in Business

Marketplaces are, in essence, chaotic systems that are influenced by tiny changes. This makes it difficult to predict the future, as the successes and failures of businesses can appear random. Periods of economic growth and decline sprout from nowhere. This is the result of the exponential impact of subtle stimuli—the economic equivalent of the butterfly effect. Breuer explains:

We live in an interconnected, or rather a hyper-connected society. Organizations and markets “behave” like networks. This triggers chaotic (complex) rather than linear behavior.

Preparing for the future and seeing logic in the chaos of consumer behaviour is not easy. Once-powerful giants collapse as they fall behind the times. Tiny start-ups rise from the ashes and take over industries. Small alterations in existing technology transform how people live their lives. Fads capture everyone’s imagination, then disappear.

Businesses have two options in this situation: build a timeless product or service, or race to keep up with change. Many businesses opt for a combination of the two. For example, Doc Martens continues selling the classic 1460 boot, while bringing out new designs each season. This approach requires extreme vigilance and attention to consumer desires, in an attempt to both remain relevant and appear timeless. Businesses try to harness the compounding impact of small tweaks, each intended to spark fresh interest in everything else they offer.

In The Butterfly Effect in Competitive Markets, Dr. Rajagopal writes that

most global firms are penetrating bottom-of-the-pyramid market segments by introducing small changes in technology, value perceptions, [and] marketing-mix strategies, and driving production on an unimagined scale of magnitude to derive a major effect on markets. …Procter & Gamble, Kellogg’s, Unilever, Nestlé, Apple, and Samsung, have experienced this effect in their business growth…. Well-managed companies drive small changes in their business strategies by nipping the pulse of consumers….

Most firms use such effect by making a small change in their strategy in reference to produce, price, place, promotion, … posture (developing corporate image), and proliferation…to gain higher market share and profit in a short span.

For most businesses, incessant small changes are the most effective way to produce the metaphorical typhoon. These iterations keep consumers engaged while preserving brand identity. If these small tweaks fail, the impact is hopefully not too great. But if they succeed and compound, the rewards can be monumental.

By nature, all markets are chaotic, and what seem like inconsequential alterations can propel a business up or down. Rajagopal explains how the butterfly effect connects to business:

Globalization and frequent shifts in consumer preferences toward products and services have accelerated chaos in the market due to the rush of firms, products, and business strategies. Chaos theory in markets addresses the behavior of strategic and dynamic moves of competing firms that are highly sensitive to existing market conditions triggering the butterfly effect.

The initial conditions (economic, social, cultural, political) in which a business sets up are vital influences on its success or failure. Lorenz found that the smallest change in the preliminary conditions created a different outcome in weather predictions, and we can consider the same to be true for businesses. The first few months and years are a crucial time, when rates of failure are highest and the basic brand identity forms. Any of the early decisions, achievements, or mistakes have the potential to be the wing flap that creates a storm.

Benoit Mandelbrot on the Butterfly Effect in Economics

International economies can be thought of as a single system, wherein each part influences the others. Much like the atmosphere, the economy is a complex system in which we see only the visible outcomes—rain or shine, boom or bust. With the advent of globalization and improved communication technology, the economy is even more interconnected than in the past. One episode of market volatility can cause problems for the entire system. The butterfly effect in economics refers to the compounding impact of small changes. As a consequence, it is nearly impossible to make accurate predictions for the future or to identify the precise cause of an inexplicable change. Long periods of stability are followed by sudden declines, and vice versa.

Benoit Mandelbrot (the “father of fractals”) began applying the butterfly effect to economics several decades ago. In a 1999 article for Scientific American, he explained his findings. Mandelbrot saw how unstable markets could be, and he cited the example of a company whose stock dropped 40% in one day, then another 6%, before rising by 10%—the typhoon created by an unseen butterfly. When Mandelbrot looked at traditional economic models, he found that they did not even allow for the occurrence of such events. Standard models denied the existence of dramatic market shifts. He writes in Scientific American:

According to portfolio theory, the probability of these large fluctuations would be a few millionths of a millionth of a millionth of a millionth. (The fluctuations are greater than 10 standard deviations.) But in fact, one observes spikes on a regular basis—as often as every month—and their probability amounts to a few hundredths.
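Mandelbrot’s “few millionths of a millionth of a millionth of a millionth” is simply the tail probability of a ten-standard-deviation move under the bell curve that standard portfolio theory assumes. A quick back-of-the-envelope check (our own arithmetic, not Mandelbrot’s):

from math import erfc, sqrt

# One-sided tail probability of a move beyond 10 standard deviations,
# assuming daily changes really followed a normal distribution.
p = 0.5 * erfc(10 / sqrt(2))
print(f"P(move > 10 sigma) = {p:.1e}")              # roughly 8e-24
print(f"i.e. once every {1 / p:.1e} trading days")  # vastly longer than the age of the universe
# Mandelbrot's point: real markets deliver such moves every month or so,
# not once in an unimaginable number of lifetimes.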

If these changes are unpredictable, what causes them? Mandelbrot’s answer lay in his work on fractals. To explain fractals would require a whole separate post, so we will go with Mandelbrot’s own simplified description: “A fractal is a geometric shape that can be separated into parts, each of which is a reduced-scale version of the whole.” He goes on to explain the connection:

In finance, this concept is not a rootless abstraction but a theoretical reformulation of a down-to-earth bit of market folklore—namely that movements of a stock or currency all look alike when a market chart is enlarged or reduced so that it fits the same time and price scale. An observer then cannot tell which of the data concern prices that change from week to week, day to day or hour to hour. This quality defines the charts as fractal curves and makes available many powerful tools of mathematical and computer analysis.
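The “looks alike when rescaled” property can be seen even in a toy series. The sketch below uses an ordinary random walk, which is far simpler than the multifractal models Mandelbrot actually proposed: changes measured over longer stretches have the same statistical shape as one-step changes once you shrink them by the square root of the stretch, so the chart looks similar at different zoom levels.

import random

random.seed(0)

# A toy "price" series: a plain random walk of 100,000 steps.
walk = [0.0]
for _ in range(100_000):
    walk.append(walk[-1] + random.gauss(0, 1))

def increment_std(series, k):
    # Standard deviation of changes measured over k steps at a time.
    diffs = [series[i + k] - series[i] for i in range(0, len(series) - k, k)]
    mean = sum(diffs) / len(diffs)
    return (sum((d - mean) ** 2 for d in diffs) / len(diffs)) ** 0.5

for k in (1, 4, 16, 64):
    std = increment_std(walk, k)
    print(f"lag {k:3d}: std of changes = {std:6.2f}, rescaled by sqrt(k) = {std / k ** 0.5:.2f}")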

In a talk, Mandelbrot held up his coffee and declared that predicting its temperature in a minute is impossible, but in an hour is perfectly possible. He applied the same concept to markets that change in dramatic ways in the short term. Even if a long-term pattern can be deduced, it has little use for those who trade on a shorter timescale.

Mandelbrot explains how his fractals can be used to create a more useful model of the chaotic nature of the economy:

Instead, multifractals can be put to work to “stress-test” a portfolio. In this technique, the rules underlying multifractals attempt to create the same patterns of variability as do the unknown rules that govern actual markets. Multifractals describe accurately the relation between the shape of the generator and the patterns of up-and-down swings of prices to be found on charts of real market data… They provide estimates of the probability of what the market might do and allow one to prepare for inevitable sea changes. The new modeling techniques are designed to cast a light of order into the seemingly impenetrable thicket of the financial markets. They also recognize the mariner’s warning that, as recent events demonstrate, deserves to be heeded: On even the calmest sea, a gale may be just over the horizon.

In The Misbehaviour of Markets, Mandelbrot and Richard Hudson expand upon the topic of financial chaos. They begin with a discussion of the infamous 2008 crash and its implications:

The worldwide market crash of autumn 2008 had many causes: greedy bankers, lax regulators and gullible investors, to name a few. But there is also a less-obvious cause: our all-too-limited understanding of how markets work, how prices move and how risks evolve. …

Markets are complex, and treacherous. The bone-chilling fall of September 29, 2008—a 7 percent, 777 point plunge in the Dow Jones Industrial Average—was, in historical terms, just a particularly dramatic demonstration of that fact. In just a few hours, more than $1.6 trillion was wiped off the value of American industry—$5 trillion worldwide.

Mandelbrot and Hudson believe that the 2008 credit crisis can be attributed in part to the increasing confidence in financial predictions. People who created computer models designed to guess the future failed to take into account the butterfly effect. No matter how complex the models became, they could not create a perfect picture of initial conditions or account for the compounding impact of small changes. Just as people believed they could predict and therefore control the weather before Lorenz published his work, people thought they could do the same for markets until the 2008 crash proved otherwise. Wall Street banks trusted their models of the future so much that they felt safe borrowing growing sums of money for what was, in essence, gambling. After all, their predictions said such a crash was impossible. Impossible or not, it happened.

According to Mandelbrot and Hudson, predictive models view markets as “a risky but ultimately … manageable world.” As with meteorology, economic predictions are based on approximate ideas of initial conditions—ideas that, as we know, are close to useless. As Mandelbrot and Hudson write:

[C]auses are usually obscure. … The precise market mechanism that links news to price, cause to effect, is mysterious and seems inconsistent. Threat of war: Dollar falls. Threat of war: Dollar rises. Which of the two will actually happen? After the fact, it seems obvious; in hindsight, fundamental analysis can be reconstituted and is always brilliant. But before the fact, both outcomes may seem equally likely.

In the same way that apparently similar weather conditions can create drastically different outcomes, apparently similar market conditions can create drastically different outcomes. We cannot see the extent to which the economy is interconnected and we cannot identify where the butterfly lies. Mandelbrot and Hudson disagree with the view of the economy as separate from other parts of our world. Everything connects:

No one is alone in this world. No act is without consequences for others. It is a tenet of chaos theory that, in dynamical systems, the outcome of any process is sensitive to its starting point—or in the famous cliché, the flap of a butterfly’s wings in the Amazon can cause a tornado in Texas. I do not assert that markets are chaotic…. But clearly, the global economy is an unfathomably complicated machine. To all the complexity of the physical world… you add the psychological complexity of men acting on their fleeting expectations….

Why do people prefer to blame crashes (such as the 2008 credit crisis) on the folly of those in the financial industry? Jonathan Cainer provides a succinct explanation:

Why do we love the idea that people might be secretly working together to control and organise the world? Because we do not like to face the fact that our world runs on a combination of chaos, incompetence, and confusion.

Historic Examples of the Butterfly Effect

“A very small cause which escapes our notice determines a considerable effect that we cannot fail to see, and then we say the effect is due to chance. If we knew exactly the laws of nature and the situation of the universe at the initial moment, we could predict exactly the situation of that same universe at a succeeding moment. But even if it were the case that the natural laws had no longer any secret for us, we could still only know the initial situation *approximately*. If that enabled us to predict the succeeding situation with *the same approximation*, that is all we require, and we should say that the phenomenon had been predicted, that it is governed by laws. But it is not always so; it may happen that small differences in the initial conditions produce very great ones in the final phenomena. A small error in the former will produce an enormous error in the latter. Prediction becomes impossible, and we have the fortuitous phenomenon.”

— Jules Henri Poincaré (1854–1912)

***

There are many instances in which a tiny detail led to a dramatic change. In each case, the world we live in could look very different had events unfolded even slightly differently. Here are some examples of how the butterfly effect has shaped our lives.

  • The bombing of Nagasaki. The US initially intended to bomb the Japanese city of Kokura, with its munitions factory as the target. On the day of the planned attack, cloudy conditions prevented the factory from being seen by military personnel as they flew overhead. The airplane passed over the city three times before the pilots gave up. Locals huddled in shelters heard the hum of the airplane preparing to drop the nuclear bomb and prepared for their destruction. Except Kokura was never bombed. Military personnel decided on Nagasaki as the target due to improved visibility. The implications of that split-second decision were monumental. We cannot even begin to comprehend how different history might have been if that day had not been cloudy. Kokura is sometimes referred to as the luckiest city in Japan, and those who lived there during the war are still shaken by the near miss.
  • The Academy of Fine Arts in Vienna rejecting Adolf Hitler’s application, twice. In the early 1900s, a young Hitler applied for art school and was rejected, possibly by a Jewish professor. By his own estimation and that of scholars, this rejection went on to shape his metamorphosis from a bohemian aspiring artist into the human manifestation of evil. We can only speculate as to how history would have been different. But it is safe to assume that a great deal of tragedy could have been avoided if Hitler had applied himself to watercolors, not to genocide.
  • The assassination of Archduke Franz Ferdinand. A little-known fact about the event considered to be the catalyst for both world wars is that it almost didn’t happen. On the 28th of June, 1914, a teenage Bosnian-Serb named Gavrilo Princip went to Sarajevo with two other nationalists in order to assassinate the Archduke. The initial assassination attempt failed; a bomb or grenade exploded beneath the car behind the Archduke’s and wounded its occupants. The route was supposed to have been changed after that, but the Archduke’s driver didn’t get the message. Had he actually taken the alternate route, Princip would not have been on the same street as the car and would not have had the chance to shoot the Archduke and his wife that day. Were it not for a failure of communication, both world wars might never have happened.
  • The Chernobyl disaster. In 1986, a test at the Chernobyl nuclear plant went awry and released 400 times the radiation produced by the bombing of Hiroshima. One hundred fifteen thousand people were evacuated from the area, with many deaths and birth defects resulting from the radiation. Even today, some areas remain too dangerous to visit. However, it could have been much worse. After the initial explosion, three plant workers volunteered to turn off the underwater valves to prevent a second explosion. It has long been believed that the trio died as a result, although there is now some evidence this may not have been the case. Regardless, diving into a dark basement flooded with radioactive water was a heroic act. Had they failed to turn off the valve, half of Europe would have been devastated and rendered uninhabitable for half a million years. Much of Russia and Ukraine, including Kiev, would also have become unfit for human habitation. Whether they lived or not, the three men—Alexei Ananenko, Valeri Bezpalov and Boris Baranov—stilled the wings of a deadly butterfly. Indeed, the entire Chernobyl disaster was the result of poor design and the ineptitude of staff. The long-term result (in addition to the impact on residents of the area) was a widespread anxiety about nuclear plants and a bias against nuclear power, leading to a preference for fossil fuels. Some people have speculated that Chernobyl is responsible for the acceleration of global warming, as countries became unduly slow to adopt nuclear power.
  • The Cuban Missile Crisis. We all may owe our lives to a single Russian Navy officer named Vasili Arkhipov, who has been called “the man who saved the world.” During the Cuban Missile Crisis, Arkhipov was stationed on a nuclear-armed submarine near Cuba. American aircraft and ships began using depth charges to signal the submarine that it should surface so it could be identified. With the submarine submerged too deep to monitor radio signals, the crew had no idea what was going on in the world above. The captain, Savitsky, decided the signal meant that war had broken out and he prepared to launch a nuclear torpedo. Everyone agreed with him—except Arkhipov. Had the torpedo launched, nuclear clouds would have hit Moscow, London, East Anglia and Germany, before wiping out half of the British population. The result could have been a worldwide nuclear holocaust, as countries retaliated and the conflict spread. Yet within an overheated underwater room, Arkhipov exercised his veto power and prevented a launch. Without the courage of one man, our world could be unimaginably different.

From this handful of examples, it is clear how fragile the world is, and how dramatic the effects of tiny events acting on starting conditions can be.

We like to think we can predict the future and exercise a degree of control over powerful systems such as the weather and the economy. Yet the butterfly effect shows that we cannot. The systems around us are chaotic and entropic, prone to sudden change. For some kinds of systems, we can try to create favorable starting conditions and be mindful of the kinds of catalysts that might act on those conditions – but that’s as far as our power extends. If we think that we can identify every catalyst and control or predict outcomes, we are only setting ourselves up for a fall.

Ed Latimore on The Secret to a Happy Life

Ed Latimore (@EdLatimore) might be the most interesting person you'll ever meet.

Ed is a professional heavyweight boxer, physics major, and philosopher. He's the author of the cult-hit Not Caring What Other People Think Is A Superpower.

This interview looks at the physics of boxing, the value of a coach, and a lot of philosophy. After listening to Ed, you won't see life the same way again.

This interview was recorded live in Pittsburgh, Pennsylvania.

Enjoy this amazing conversation.


Transcript
A lot of people like to take notes while listening. A transcription of this conversation is available to members of our learning community or you can purchase one separately.

The Difference Between Amateurs and Professionals

Why is it that some people seem to be hugely successful and do so much, while the vast majority of us struggle to tread water?

The answer is complicated and likely multifaceted.

One aspect is mindset—specifically, the difference between amateurs and professionals.

Most of us are just amateurs.

What’s the difference? Actually, there are many differences:

  • Amateurs stop when they achieve something. Professionals understand that the initial achievement is just the beginning.
  • Amateurs have a goal. Professionals have a process.
  • Amateurs think they are good at everything. Professionals understand their circles of competence.
  • Amateurs see feedback and coaching as someone criticizing them as a person. Professionals know they have weak spots and seek out thoughtful criticism.
  • Amateurs value isolated performance. Think about the receiver who catches the ball once on a difficult throw. Professionals value consistency. Can I catch the ball in the same situation 9 times out of 10?
  • Amateurs give up at the first sign of trouble and assume they’re failures. Professionals see failure as part of the path to growth and mastery.
  • Amateurs don’t have any idea what improves the odds of achieving good outcomes. Professionals do.
  • Amateurs show up to practice to have fun. Professionals realize that what happens in practice happens in games.
  • Amateurs focus on identifying their weaknesses and improving them. Professionals focus on their strengths and on finding people who are strong where they are weak.
  • Amateurs think knowledge is power. Professionals pass on wisdom and advice.
  • Amateurs focus on being right. Professionals focus on getting the best outcome.
  • Amateurs focus on first-level thinking. Professionals focus on second-level thinking.
  • Amateurs think good outcomes are the result of their brilliance. Professionals understand when good outcomes are the result of luck.
  • Amateurs focus on the short term. Professionals focus on the long term.
  • Amateurs focus on tearing other people down. Professionals focus on making everyone better.
  • Amateurs make decisions in committees so there is no one person responsible if things go wrong. Professionals make decisions as individuals and accept responsibility.
  • Amateurs blame others. Professionals accept responsibility.
  • Amateurs show up inconsistently. Professionals show up every day.
  • Amateurs go faster. Professionals go further.
  • Amateurs go with the first idea that comes into their head. Professionals realize the first idea is rarely the best idea.
  • Amateurs think in ways that can't be invalidated. Professionals don't.
  • Amateurs think in absolutes. Professionals think in probabilities.
  • Amateurs think the probability of them having the best idea is high. Professionals know the probability of that is low.
  • Amateurs think reality is what they want to see. Professionals know reality is what's true.
  • Amateurs think disagreements are threats. Professionals see them as an opportunity to learn.

There are a host of other differences, but they can effectively be boiled down to two things: fear and reality.

Amateurs believe that the world should work the way they want it to. Professionals realize that they have to work with the world as they find it. Amateurs are scared — scared to be vulnerable and honest with themselves. Professionals feel like they are capable of handling almost anything.

Luck aside, which approach do you think is going to yield better results?

Food for Thought:

  • In what circumstances do you find yourself behaving like an amateur instead of as a professional?
  • What’s holding you back? Are you hanging around people who are amateurs when you should be hanging around professionals?
Footnotes
  1. Ideas in this article came from Ryan Holiday, Ramit Sethi, Seth Godin, and others.

How Filter Bubbles Distort Reality: Everything You Need to Know

The Basics

Read the headline, tap, scroll, tap, tap, scroll.

It is a typical day and you are browsing your usual news site. The New Yorker, BuzzFeed, The New York Times, BBC, The Globe and Mail, take your pick. As you skim through articles, you share the best ones with like-minded friends and followers. Perhaps you add a comment.

Few of us sit down and decide to inform ourselves on a particular topic. For the most part, we pick up our smartphones or open a new tab, scroll through a favored site and click on whatever looks interesting. Or we look at Facebook or Twitter feeds to see what people are sharing. Chances are high that we are not doing this intending to become educated on a certain topic. No, we are probably waiting in line, reading on the bus or at the gym, procrastinating, or grappling with insomnia, looking for some form of entertainment.

We all do this skimming and sharing and clicking, and it seems so innocent. But many of us are uninformed about or uninterested in the forces affecting what we see online and how content affects us in return — and that ignorance has consequences.

The term “filter bubble” refers to the results of the algorithms that dictate what we encounter online. According to Eli Pariser, those algorithms create “a unique universe of information for each of us … which fundamentally alters the way we encounter ideas and information.”

Many sites offer personalized content selections, based on our browsing history, age, gender, location, and other data. The result is a flood of articles and posts that support our current opinions and perspectives to ensure that we enjoy what we see. Even when a site is not offering specifically targeted content, we all tend to follow people whose views align with ours. When those people share a piece of content, we can be sure it will be something we are also interested in.

That might not sound so bad, but filter bubbles create echo chambers. We assume that everyone thinks like us, and we forget that other perspectives exist.

Filter bubbles transcend web surfing. In important ways, your social circle is a filter bubble; so is your neighborhood. If you're living in a gated community, for example, you might think that reality is only BMWs, Teslas, and Mercedes. Your work circle acts as a filter bubble, too, depending on whom you know and at what level you operate.

One of the great problems with filters is our human tendency to think that what we see is all there is, without realizing that what we see is being filtered.

Eli Pariser on Filter Bubbles

The concept of filter bubbles was first identified by Eli Pariser, co-founder of Upworthy, activist, and author. In his revolutionary book The Filter Bubble, Pariser explained how Google searches bring up vastly differing results depending on the history of the user. He cites an example in which two people searched for “BP” (British Petroleum). One user saw news related to investing in the company. The other user received information about a recent oil spill.

Pariser describes how the internet tends to give us what we want:

Your computer monitor is a kind of one-way mirror, reflecting your own interests while algorithmic observers watch what you click.

Pariser terms this reflection a filter bubble, a “personal ecosystem of information.” It insulates us from any sort of cognitive dissonance by limiting what we see. At the same time, virtually everything we do online is being monitored — for someone else's benefit.

Each time we click, watch, share, or comment, search engines and social platforms harvest information. In particular, this information serves to generate targeted advertisements. Most of us have experienced the odd sensation of déjà vu when a product we glanced at online suddenly appears everywhere we go on the web, as well as in our email inboxes. Often this advertising continues until we succumb and purchase the product.

Targeted advertisements can help us to find what we need with ease, but costs exist:

Personalization is based on a bargain. In exchange for the service of filtering, you hand large companies an enormous amount of data about your daily life — much of which you might not trust your friends with.

The internet has changed a great deal from the early days, when people worried about strangers finding out who they were. Anonymity was once king. Now, our privacy has been sacrificed for the sake of advertising revenue:

What was once an anonymous medium where anyone could be anyone—where, in the words of the famous New Yorker cartoon, nobody knows you’re a dog—is now a tool for soliciting and analyzing our personal data. According to one Wall Street Journal study, the top fifty Internet sites, from CNN to Yahoo to MSN, install an average of 64 data-laden cookies and personal tracking beacons each. Search for a word like “depression” on Dictionary.com, and the site installs up to 223 tracking cookies and beacons on your computer so that other Web sites can target you with antidepressants. Share an article about cooking on ABC News, and you may be chased around the Web by ads for Teflon-coated pots. Open—even for an instant—a page listing signs that your spouse may be cheating and prepare to be haunted with DNA paternity-test ads. The new Internet doesn’t just know you’re a dog; it knows your breed and wants to sell you a bowl of premium kibble.

The sources of this information can be unexpected. Companies gather it from places we might not even consider:

When you read books on your Kindle, the data about which phrases you highlight, which pages you turn, and whether you read straight through or skip around are all fed back into Amazon’s servers and can be used to indicate what books you might like next. When you log in after a day reading Kindle e-books at the beach, Amazon can subtly customize its site to appeal to what you’ve read: If you’ve spent a lot of time with the latest James Patterson, but only glanced at that new diet guide, you might see more commercial thrillers and fewer health books.

One fact is certain: the personalization process is not crude or random. It operates along defined guidelines that are refined every day, both for users in aggregate and for each individual:

Most personalized filters are based on a three-step model. First, you figure out who people are and what they like. Then, you provide them with content and services that best fit them. Finally, you tune to get the fit just right. Your identity shapes your media. There’s just one flaw in this logic: Media also shape identity. And as a result, these services may end up creating a good fit between you and your media by changing … you.
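Pariser's three steps (figure out who people are, serve content that fits, then tune) amount to a feedback loop, and a feedback loop is easy to sketch. The toy example below is purely illustrative, with made-up stories and a made-up scoring rule; it is not how any real platform ranks content. It simply shows how clicks feed the profile, the profile shapes the feed, and the feed narrows.

# A deliberately simplified personalization loop: rank stories by similarity
# to past clicks, show the top of the list, record the new click, repeat.

stories = {
    "team wins championship": {"sports"},
    "star striker injured": {"sports"},
    "markets rally on jobs report": {"finance"},
    "new tax policy debated": {"politics", "finance"},
    "election results analysed": {"politics"},
}

profile = {"sports": 1.0}  # step 1: what this user has clicked on so far

def score(topics):
    # Higher score means a closer fit to what the user clicked before.
    return sum(profile.get(t, 0.0) for t in topics)

for round_number in range(3):
    ranked = sorted(stories, key=lambda s: score(stories[s]), reverse=True)
    feed = ranked[:2]                    # step 2: serve the best-fitting content
    print(f"round {round_number}: feed = {feed}")
    clicked = feed[0]                    # assume the user clicks the top story
    for topic in stories[clicked]:       # step 3: "tune to get the fit just right"
        profile[topic] = profile.get(topic, 0.0) + 1.0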

In The Shallows, Nicholas Carr also covers online information collection. Carr notes that the more time we spend online, the richer the information we provide:

The faster we surf across the surface of the Web—the more links we click and pages we view—the more opportunities Google gains to collect information about us and to feed us advertisements. Its advertising system, moreover, is explicitly designed to figure out which messages are most likely to grab our attention and then to place those messages in our field of view. Every click we make on the Web marks a break in our concentration, a bottom-up disruption of our attention—and it’s in Google’s economic interest to make sure we click as often as possible.

Every single person who has ever spent time on the web knows how addictive the flow of stimulating information can be. No matter how disciplined we otherwise are, we cannot resist clicking related articles or scrolling through newsfeeds. There is a reason for this, as Pariser writes:

Personalized filters play to the most compulsive parts of you, creating “compulsive media” to get you to click things more.

In an attention economy, filter bubbles assist search engines, websites, and platforms in their goal to command the maximum possible share of our online time.

The Impact of Filter Bubbles

Each new technology brings with it a whole host of costs and benefits. Many are realized only as time passes. The invention of books led people to worry that memory and oral tradition would erode. Paper caused panic as young people switched from slates to this newfangled medium. Typewriters led to discussions of morality as female typists entered the workforce and “distracted” men. The internet has been no exception. If anything, the issues it presents are unique only in their intensity and complexity.

In particular, the existence of filter bubbles has led to widespread concern. Pariser writes:

Democracy requires citizens to see things from one another's point of view, but instead we're more and more enclosed in our own bubbles. Democracy requires a reliance on shared facts; instead we’re being offered parallel but separate universes.

… Personalization filters serve a kind of invisible autopropaganda, indoctrinating us with our own ideas, amplifying our desire for things that are familiar and leaving us oblivious to the dangers lurking in the dark territory of the unknown.

Pariser quotes Jon Chait as saying:

Partisans are more likely to consume news sources that confirm their ideological beliefs. People with more education are more likely to follow political news. Therefore, people with more education can actually become mis-educated.

Many people have debated the impact of filter bubbles on the 2016 US election and the Brexit vote. In both cases, large numbers of people were shocked by the outcome. Even those within the political and journalistic worlds expected the opposite result.

“We become, neurologically, what we think.”

— Nicholas Carr

In the case of the Brexit vote, a large percentage of those who voted to leave the European Union were older people who are less active online, meaning that their views are less visible. Those who voted to remain tended to be younger and more active online, meaning that they were in an echo chamber of similar attitudes.

Democracy requires everyone to be equally informed. Yet filter bubbles are distorting our ideas of the world. In a paper published in the Proceedings of the National Academy of Sciences, researchers Robert Epstein and Ronald E. Robertson revealed the extent of this influence on our voting:

The results of these experiments demonstrate that (i) biased search rankings can shift the voting preferences of undecided voters by 20% or more, (ii) the shift can be much higher in some demographic groups, and (iii) search ranking bias can be masked so that people show no awareness of the manipulation. We call this type of influence, which might be applicable to a variety of attitudes and beliefs, the search engine manipulation effect. Given that many elections are won by small margins, our results suggest that a search engine company has the power to influence the results of a substantial number of elections with impunity. The impact of such manipulations would be especially large in countries dominated by a single search engine company.

Filter bubbles do not just occur on the internet. Epstein and Robertson point to an earlier example of television shifting the results of elections:

It is already well established that biased media sources such as newspapers, political polls, and television sway voters. A 2007 study by DellaVigna and Kaplan found, for example, that whenever the conservative-leaning Fox television network moved into a new market in the United States, conservative votes increased, a phenomenon they labeled the Fox News Effect. These researchers estimated that biased coverage by Fox News was sufficient to shift 10,757 votes in Florida during the 2000 US Presidential election: more than enough to flip the deciding state in the election, which was carried by the Republican presidential candidate by only 537 votes. The Fox News Effect was also found to be smaller in television markets that were more competitive.

However, the researchers argue that the internet has a more dramatic effect than other forms of media:

Search rankings are controlled in most countries today by a single company. If, with or without intervention by company employees, the algorithm that ranked election-related information favored one candidate over another, competing candidates would have no way of compensating for the bias. It would be as if Fox News were the only television network in the country. Biased search rankings would, in effect, be an entirely new type of social influence, and it would be occurring on an unprecedented scale. Massive experiments conducted recently by social media giant Facebook have already introduced other unprecedented types of influence made possible by the Internet. Notably, an experiment reported recently suggested that flashing “VOTE” advertisements to 61 million Facebook users caused more than 340,000 people to vote that day who otherwise would not have done so.

In the US election and the Brexit vote, filter bubbles caused people to become insulated from alternative views. Some critics have theorized that the widespread derision of Trump and Leave voters led them to be less vocal, keeping their opinions within smaller communities to avoid confrontation. Those who voted for Clinton or to Remain loudly expressed themselves within filtered communities. Everyone, it seemed, agreed with each other. Except, they didn’t, and no one noticed until it was too late.

A further issue with filter bubbles is that they are something we can only opt out of, not something we consent to. As of March 2017, an estimated 1.94 billion people have a Facebook account, of which 1.28 billion log on every day. It is safe to assume that only a small percentage are informed about the algorithms. Considering that 40% of people regard Facebook as their main news source, this is worrying. As with cognitive biases, a lack of awareness amplifies the impact of filter bubbles.

We have minimal concrete evidence of exactly what information search engines and social platforms collect. Even SEO (search engine optimization) experts do not know for certain how search rankings are organized. We also don’t know if sites collect information from users who do not have accounts.

Scandals are becoming increasingly common, as sites and services are found to be harvesting details without consent. For example, Evernote came under fire when changes to its privacy policy revealed that staff members could read users’ notes, and Unroll.me drew criticism for selling details of its users’ email habits. Even when this information is listed in user agreements or disclaimers, it can be difficult for users to ascertain from the confusing jargon how their data are being used, by whom, and why.

In his farewell speech, President Obama aired his personal concerns:

[We] retreat into our own bubbles, … especially our social media feeds, surrounded by people who look like us and share the same political outlook and never challenge our assumptions. … And increasingly, we become so secure in our bubbles that we start accepting only information, whether it’s true or not, that fits our opinions, instead of basing our opinions on the evidence that is out there.

Filter bubbles can cause cognitive biases and shortcuts to manifest, amplifying their negative impact on our ability to think in a logical and critical manner. A combination of social proof, availability bias, confirmation bias, and bias from disliking/liking is prevalent. As Pariser writes:

The filter bubble tends to dramatically amplify confirmation bias—in a way, it’s designed to. Consuming information that conforms to our ideas of the world is easy and pleasurable; consuming information that challenges us to think in new ways or question our assumptions is frustrating and difficult. This is why partisans of one political stripe tend not to consume the media of another. As a result, an information environment built on click signals will favor content that supports our existing notions about the world over content that challenges them.

Pariser sums up the result of extensive filtration: “A world constructed from the familiar is the world in which there's nothing to learn.”

Filter Bubbles and Group Psychology

We have an inherent desire to be around those who are like us and reinforce our worldview. Our online behavior is no different. People form tribes based on interests, location, employment, affiliation, and other details. These groups — subreddits, Tumblr fandoms, Facebook groups, Google+ circles, etc. — have their own rules, conventions, in-jokes, and even vocabulary. Within groups (even if members never meet each other), beliefs intensify. Anyone who disagrees may be ousted from the community. Sociologists call this behavior “communal reinforcement” and stress that the ideas it perpetuates may bear no relation to reality or empirical evidence.

“When you’re asked to fight a war that’s over nothing, it’s best to join the side that’s going to win.”

— Conor Oberst

Communal reinforcement can be positive. Groups geared towards people with mental health problems, chronic illnesses, addictions, and other issues are often supportive and assist many people who might not have another outlet.

However, when a group is encased within a filter bubble, it can lead to groupthink. This is a psychological phenomenon wherein groups of people experience a temporary loss of the ability to think in a rational, moral and realistic manner. When the members of a group are all exposed to the same confirmatory information, the results can be extreme. Symptoms include being excessively optimistic, taking risks, ignoring legal and social conventions, regarding those outside the group as enemies, censoring opposing ideas, and pressuring members to conform. As occurred with the US election and the Brexit vote, those experiencing groupthink within a filter bubble see themselves as in the right and struggle to consider alternative perspectives.

For example, imagine a Facebook group for Trump supporters in the months prior to the election. Members share pro-Trump news items, discuss policies and circulate cohesive information among themselves. Groupthink sets in, as the members selectively process information, fail to evaluate alternative viewpoints, fail to consider risks, haze any members who disagree, and even ignore the possibility of a negative outcome. From the outside, we can see the issues with a combination of filter bubbles and groupthink, but they can be hard to identify from the inside.

How Can We Avoid Filter Bubbles?

Thankfully, it is not difficult to pop the filter bubble if we make an effort to do so. Methods for doing this include:

  • Using ad-blocking browser extensions. These remove the majority of advertisements from websites we visit. The downside is that most sites rely on advertising revenue to support their work, and some (such as Forbes and Business Insider) insist on users' disabling ad blockers before viewing a page.
  • Reading news sites and blogs which aim to provide a wide range of perspectives. Pariser’s own site, Upworthy, aims to do this. Others, including The Wall Street Journal, the New Yorker, the BBC, and AP News, claim to offer a balanced view of the world. Regardless of the sources we frequent, a brief analysis of the front page will provide a good idea of any biases. In the wake of the US election, a number of newsletters, sites, apps, and podcasts are working to pop the filter bubble. An excellent example is Colin Wright's podcast, Let’s Know Things (http://letsknowthings.com/), which examines a news story in context each week.
  • Switching our focus from entertainment to education. As Nicholas Carr writes in The Shallows: “The Net’s interactivity gives us powerful new tools for finding information, expressing ourselves, and conversing with others. It also turns us into lab rats constantly pressing levers to get tiny pellets of social or intellectual nourishment.”
  • Using Incognito browsing, deleting our search histories, and doing what we need to do online without logging into our accounts.
  • Deleting or blocking browser cookies. For the uninitiated, many websites plant “cookies” (small text files) each time we visit them; those cookies are then used to determine what content to show us. Cookies can be manually deleted, and browser extensions are available which remove them. In some instances, cookies are useful, so removal should be done with discretion. A minimal sketch of how cookies tag repeat visitors appears after this list.
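For readers who like to see the machinery, here is a rough sketch of how a cookie lets a site recognize a repeat visitor, and how clearing the cookie jar resets that. It uses Python with the third-party requests library and the public httpbin.org echo service, both chosen purely for illustration; real tracking setups are far more elaborate.

```python
# A minimal sketch, not a privacy tool: it shows how cookies follow us between requests.
import requests

session = requests.Session()

# The first visit hands us a cookie...
session.get("https://httpbin.org/cookies/set?seen_before=yes")

# ...and every later request in the same session sends it back automatically,
# which is exactly the signal a personalization engine keys on.
print(session.get("https://httpbin.org/cookies").json())  # {'cookies': {'seen_before': 'yes'}}

# Clearing the cookie jar (or browsing privately) makes the next visit look anonymous again.
session.cookies.clear()
print(session.get("https://httpbin.org/cookies").json())  # {'cookies': {}}
```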

Fish don’t know they are in the water, and we don’t know we are in a filter bubble unless we make the effort to (as David Bowie put it) leave the capsule — if you dare.

In shaping what we see, filter bubbles show us a distorted map, not the terrain. In so doing, they trick our brains into thinking that the map is reality. As technology improves and an outlet like the NYT gains the ability to show the same story to 100 different people in 100 different ways, the filter bubble deepens. We lose track of what's filtered and what's not as the news is tailored to cement our existing opinions. After all, everyone wants to read a newspaper that agrees with them.

Systems — be they people, cultures, or web browsing, to name a few examples — naturally have to filter information and thus they reduce options. Sometimes people make decisions, sometimes cultures make them, and increasingly algorithms make them. As the speed of information flowing through these systems increases, filters will play an even more important role.

Understanding that what we see is not all there is will help us realize that we're living in a distorted world and remind us to take off the glasses.

For more information on filter bubbles, consider reading The Filter Bubble by Eli Pariser, So You’ve Been Publicly Shamed by Jon Ronson, The Shallows by Nicholas Carr, or The Net Delusion by Evgeny Morozov.

A Primer on Critical Mass: Identifying Inflection Points

The Basics

Sometimes it can seem as if drastic changes happen at random.

One moment a country is stable; the next, a revolution begins and the government is overthrown. One day a new piece of technology is a novelty; the next, everyone has it and we cannot imagine life without it. Or an idea lingers at the fringes of society before it suddenly becomes mainstream.

As erratic and unpredictable as these occurrences are, there is a logic to them, which can be explained by the concept of critical mass. Thomas Schelling (a game theorist) and Mark Granovetter (a sociologist) are credited with identifying the concept in 1971.

Also known as the boiling point, the percolation threshold, the tipping point, and a host of other names, critical mass is the point at which something (an idea, belief, trend, virus, behavior, etc.) is prevalent enough to grow, or sustain, a process, reaction, or technology.

As a mental model, critical mass can help us to understand the world around us by letting us spot changes before they occur, make sense of tumultuous times, and even gain insight into our own behaviors. A firm understanding can also give us an edge in launching products, changing habits, and choosing investments.

In The Decision Book, Mikael Krogerus wrote of technological critical masses:

Why is it that some ideas – including stupid ones – take hold and become trends, while others bloom briefly before withering and disappearing from the public eye?

… Translated into a graph, this development takes the form of a curve typical of the progress of an epidemic. It rises, gradually at first, then reaches the critical point of any newly launched product, when many products fail. The critical point for any innovation is the transition from the early adopters to the sceptics, for at this point there is a ‘chasm’. …

With technological innovations like the iPod or the iPhone, the cycle described above is very short. Interestingly, the early adopters turn away from the product as soon as the critical masses have accepted it, in search of the next new thing.

In Developmental Evaluation, Michael Quinn Patton wrote:

Complexity theory shows that great changes can emerge from small actions. Change involves a belief in the possible, even the “impossible.” Moreover, social innovators don't follow a linear pathway of change; there are ups and downs, roller-coaster rides along cascades of dynamic interactions, unexpected and unanticipated divergences, tipping points and critical mass momentum shifts. Indeed, things often get worse before they get better as systems change creates resistance to and pushback against the new.

In If Nobody Speaks of Remarkable Things, Jon McGregor writes a beautiful explanation of how the concept of critical mass applies to weather:

He wonders how so much water can resist the pull of so much gravity for the time it takes such pregnant clouds to form, he wonders about the moment the rain begins, the turn from forming to falling, that slight silent pause in the physics of the sky as the critical mass is reached, the hesitation before the first swollen drop hurtles fatly and effortlessly to the ground.

Critical Mass in Physics

In nuclear physics, critical mass is defined as the minimum amount of a fissile material required to create a self-sustaining fission reaction. In simpler terms, it's the amount of reactant necessary for something to happen and to keep happening.

This concept is similar to the mental model of activation energy. The exact critical mass depends on the nuclear properties of a material, its density, its shape, and other factors.

In some nuclear reactions, a neutron reflector (often made of beryllium) is used to reduce the amount of fissile material needed to reach criticality. If the amount of fissile material is too small to sustain a reaction, it is referred to as a subcritical mass; if there is enough for the reaction rate to keep growing, it is referred to as a supercritical mass. This concept has been taken from physics and applied in many other disciplines.
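To make the density point concrete, here is a toy calculation. It leans on a commonly cited rule of thumb that the critical mass of a bare sphere of fissile material scales roughly with the inverse square of its density; the baseline figures below are ballpark numbers for plutonium-239 and are illustrative rather than authoritative.

```python
# A toy calculation, not reactor physics: it assumes the commonly cited rule of thumb
# that a bare sphere's critical mass scales roughly as 1 / density^2.
def critical_mass(baseline_mass_kg, baseline_density, new_density):
    """Scale a known critical mass to a new material density."""
    return baseline_mass_kg * (baseline_density / new_density) ** 2

baseline_mass_kg = 10.0   # ballpark bare-sphere critical mass of plutonium-239, in kg
baseline_density = 19.8   # g/cm^3, roughly its normal density

for compression in (1.0, 1.5, 2.0):
    mass = critical_mass(baseline_mass_kg, baseline_density, baseline_density * compression)
    print(f"compressed {compression:.1f}x -> critical mass of roughly {mass:.1f} kg")
```

Under this rule of thumb, squeezing the same material to twice its density means a quarter of the mass suffices, which is why density sits alongside shape and nuclear properties in the list of factors above.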

Critical Mass in Sociology

In sociology, a critical mass is a term for a group of people who make a drastic change, altering their behavior, opinions or actions.

“When enough people (a critical mass) think about and truly consider the plausibility of a concept, it becomes reality.”

—Joseph Duda

In some societies (e.g., a small Amazonian tribe), just a handful of people can change prevailing views. In larger societies (in particular, those which have a great deal of control over people, such as North Korea), the figure must usually be higher for a change to occur.

The concept of a sociological critical mass was first used in the 1960s by Morton Grodzins, a political science professor at the University of Chicago. Grodzins studied racial segregation — in particular, examining why people seemed to separate themselves by race even when that separation was not enforced by law. His hypothesis was that white families had different levels of tolerance for the number of people of racial minorities in their neighborhoods. Some white families were completely racist; others were not concerned with the race of their neighbors. As increasing numbers of racial minorities moved into neighborhoods, the most racist people would soon leave. Then a tipping point would occur — a critical mass of white people would leave until the area was populated by racial minorities. This phenomenon became known as “white flight.”

Critical Mass in Business

In business, at a macro level, critical mass can be defined as the time when a company becomes self-sustaining and is economically viable. (Please note that there is a difference between being economically viable and being profitable.) Just as a nuclear reaction reaches critical mass when it can sustain itself, so must a business. It is important, too, that a business chooses its methods for growth with care: sometimes adding more staff, locations, equipment, stock, or other assets can be the right choice; at other times, these additions can lead to negative cash flow.

The exact threshold and time to reach critical mass varies widely, depending on the industry, competition, startup costs, products, and other economic factors.

Bob Brinker, host of Money Talk, defines critical mass in business as:

A state of freedom from worry and anxiety about money due to the accumulation of assets which make it possible to live your life as you choose without working if you prefer not to work or just working because you enjoy your work but don't need the income. Plainly stated, the Land of Critical Mass is a place in which individuals enjoy their own personal financial nirvana. Differentiation between earned income and assets is a fundamental lesson to learn when thinking in terms of critical mass. Earned income does not produce critical mass … critical mass is strictly a function of assets.

Independence or “F*** You” Money

Most people work jobs and get paychecks. If you depend on a paycheck, like most of us, this means you are not independent — you are not self-sustaining. Once you have enough money, you can be self-sustaining.

If you were wealthy enough to be free, would you really keep the job you have now? How many of us check our opinions or thoughts before voicing them because we know they won't be acceptable? How many times have you agreed to work on a project that you know is doomed, because you need the paycheck?

“Whose bread I eat: his song I sing.”

—Proverb

In his book The Black Swan, Nassim Taleb describes “f*** you” money, which, “in spite of its coarseness, means that it allows you to act like a Victorian gentleman, free from slavery”:

It is a psychological buffer: the capital is not so large as to make you spoiled-rich, but large enough to give you the freedom to choose a new occupation without excessive consideration of the financial rewards. It shields you from prostituting your mind and frees you from outside authority — any outside authority. … Note that the designation f*** you corresponds to the exhilarating ability to pronounce that compact phrase before hanging up the phone.

Critical Mass in Psychology

Psychologists have known for a long time that groups of people behave differently than individuals.

Sometimes when we are in a group, we tend to be less inhibited, more rebellious, and more confident. This effect is known as mob behavior. (An interesting detail is that mob psychology is one of the few branches of psychology which does not concern individuals.) As a general rule, the larger the crowd, the less responsibility people have for their behavior. (This is also why individuals and not groups should make decisions.)

“[Groups of people] can be considered to possess agential capabilities: to think, judge, decide, act, reform; to conceptualize self and others as well as self's actions and interactions; and to reflect.”

—Burns and Engdahl

Gustave Le Bon is one psychologist who looked at the formation of critical masses of people necessary to spark change. According to Le Bon, this formation creates a collective unconsciousness, making people “a grain of sand amid other grains of sand which the wind stirs up at will.”

He identified three key processes which create a critical mass of people: anonymity, contagion, and suggestibility. When all three are present, a group loses its sense of self-restraint and behaves in a manner he considered to be more primitive than usual. The strongest members (often those who first convinced others to adopt their ideas) have power over others.

Examples of Critical Mass

Virality

Viral media include forms of content (such as text, images, and videos) which are passed amongst people and often modified along the way. We are all familiar with how memes, videos and jokes spread on social media. The term “virality” comes from the similarity to how viruses propagate.

“We are all susceptible to the pull of viral ideas. Like mass hysteria. Or a tune that gets into your head that you keep on humming all day until you spread it to someone else. Jokes. Urban legends. Crackpot religions. No matter how smart we get, there is always this deep irrational part that makes us potential hosts for self-replicating information.”

—Neal Stephenson, Snow Crash

In The Selfish Gene, Richard Dawkins compared memes to human genes. While the term “meme” is now, for the most part, used to describe content that is shared on social media, Dawkins described religion and other cultural objects as memes.

The difference between viral and mainstream media is that the former is more interactive and is shaped by the people who consume it. Gatekeeping and censorship are also less prevalent. Viral content often reflects dominant values and interests, such as kindness (for example, the dancing-man video) and humor. The importance of this form of media is apparent when it is used to negatively impact corporations or powerful individuals (such as the recent United Airlines and Pepsi fiascoes).

Once a critical mass of people share and comment on a piece of content online, it reaches viral status. Its popularity then grows exponentially before it fades away a short time later.
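That rise-then-fade shape is easy to reproduce with a toy epidemic-style model. The sketch below is a rough, SIR-flavored simulation; the population size, share rate, and boredom rate are invented, so it illustrates the dynamic rather than describing any real platform.

```python
# A toy, SIR-flavored model of viral content: people who are actively sharing an item
# "infect" those who haven't seen it, then lose interest. All parameters are invented.
population = 1_000_000
sharing, bored = 10.0, 0.0            # people currently sharing / already bored of it
never_seen = population - sharing
share_rate, boredom_rate = 0.5, 0.2   # per day

for day in range(1, 61):
    new_shares = share_rate * sharing * never_seen / population
    newly_bored = boredom_rate * sharing
    never_seen -= new_shares
    sharing += new_shares - newly_bored
    bored += newly_bored
    if day % 10 == 0:
        print(f"day {day:2d}: actively sharing ~{int(sharing):,}")
```

The number of active sharers climbs exponentially at first, peaks once most susceptible viewers have already seen the item, and then decays, which is the familiar life cycle of a meme.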

Technology

The concept of critical mass is crucial when it comes to the adoption of new technology. Every piece of technology which is now (or once was) a ubiquitous part of our lives was once new and novel.

Most forms of technology become more useful as more people adopt them. There is no point in having a telephone if it cannot be used to call other people. There is no point in having an email account if it cannot be used to email other people.

The value of networked technology increases as the size of the network itself does. Eventually, the number of users reaches critical mass, and not owning that particular technology becomes a hindrance. Useful technology also tends to prompt its first adopters to persuade those around them to try it too. As a general rule, the more a new technology depends upon a network of users, the faster it will reach critical mass. This situation creates a positive feedback loop.
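One common way to formalize this is Metcalfe's law, which values a network by its number of possible pairwise connections, n(n-1)/2, while the cost of bringing people on board grows only linearly. The sketch below uses an invented per-user cost purely to show where the crossover, one way of reading critical mass, appears.

```python
# A minimal sketch of the network-effect argument: Metcalfe-style value grows
# quadratically with users, while total onboarding cost grows only linearly.
# The per-user cost is an invented figure; the point is the crossover, not the numbers.
def network_value(users: int) -> int:
    """Number of possible pairwise connections in the network."""
    return users * (users - 1) // 2

cost_per_user = 100  # hypothetical onboarding cost, in arbitrary units

for users in (10, 100, 200, 500, 1_000):
    value, cost = network_value(users), users * cost_per_user
    status = "above" if value > cost else "below"
    print(f"{users:5,d} users: value {value:9,d} vs. cost {cost:7,d} ({status} the crossover)")
```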

In Zero to One, Peter Thiel describes how PayPal achieved the critical mass of users needed for it to be useful:

For PayPal to work, we needed to attract a critical mass of at least a million users. Advertising was too ineffective to justify the cost. Prospective deals with big banks kept falling through. So we decided to pay people to sign up.

We gave new customers $10 for joining, and we gave them $10 more every time they referred a friend. This got us hundreds of thousands of new customers and an exponential growth rate.
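Taken at face value, the numbers in that passage also let us sketch the economics of buying critical mass. Only the $10 bonuses come from Thiel's account; the referral rate and the number of growth cycles below are invented for illustration.

```python
# Back-of-the-envelope sketch of paid referral growth. The $10 sign-up and $10
# referral bonuses come from the passage above; everything else is hypothetical.
signup_bonus, referral_bonus = 10, 10
referral_rate = 1.2      # hypothetical: new users each member of a cohort brings in

cohort = 1_000           # seed users
users = cohort
incentives_paid = cohort * signup_bonus

for cycle in range(1, 11):
    cohort = int(cohort * referral_rate)                    # the next wave of sign-ups
    users += cohort
    incentives_paid += cohort * (signup_bonus + referral_bonus)
    print(f"cycle {cycle:2d}: {users:7,d} users, ~${incentives_paid:,} paid out in incentives")
```

As long as each wave recruits more than one new user on average, growth compounds, and so does the incentive bill; the bet is that the network becomes self-sustaining before the subsidies become unaffordable.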

Another illustration of the importance of critical mass for technology (and the unique benefits of crowdfunding) comes from Chris LoPresti:

A friend of mine raised a lot of money to launch a mobile app; however, his app was trounced by one from another company that had raised a tenth of what he had, but had done so through 1,000 angels on Kickstarter. Those thousand angels became the customers and evangelists that provided the all-important critical mass early on. Any future project I do, I’ll do through Kickstarter, even if I don’t need the money.

Urban Legends

Urban legends are an omnipresent part of society, a modern evolution of traditional folklore. They tend to involve references to deep human fears and popular culture. Whereas traditional folklore was often full of fantastical elements, modern urban legends are usually a twist on reality. They are intended to be believed and passed on. Sociologists refer to them as “contemporary legends.” Some can survive for decades, being modified as time goes by and spreading to different areas and groups. Researchers who study urban legends have noted that many do have vague roots in actual events, and are just more sensationalized than the reality.

One classic urban legend is “The Hook.” This story has two key elements: a young couple parked in a secluded area and a killer with a hook for a hand. The radio in their car announces that a serial killer with a hook for a hand, often escaped from a nearby institution, is on the loose. In most versions, the couple panics and drives off, only to later find a hook hanging from the car door handle. In others, the man leaves the car while the woman listens to the radio bulletin. She keeps hearing a thumping sound on the roof of the car. When she exits to investigate, the killer is sitting on the roof, holding the man’s severed head. The origins of this story are unknown, although it first emerged in the 1950s in America. By 1960, it had begun to appear in print.

Urban legends are an example of how a critical mass of people must be reached before an idea can spread. While the exact origins are rarely clear, it is assumed that each legend begins with a single person who misunderstands or invents a story and passes it on to others, perhaps at a party.

Many urban legends have a cautionary element, so they may first be told in an attempt to protect someone. “The Hook” has been interpreted as a warning to teenagers engaging in promiscuous behaviour. When this story is looked at by Freudian folklorists, the implications seem obvious. It could even have been told by parents to their children.

This cautionary element is clear in one of the first printed versions of “The Hook” in 1960:

If you are interested in teenagers, you will print this story. I do not know whether it's true or not, but it does not matter because it served its purpose for me… I do not think I will ever park to make out as long as I live. I hope this does the same for other kids.

Once a critical mass of people know an urban legend, the rate at which it spreads grows exponentially. The internet now enables urban legends (and everything else) to pass between people faster. Although a legend might also be disproved faster, that's a complicated mess. For now, as Lefty says in Donnie Brasco, “Forget about it.”

The more people who believe a story, the more believable it seems. This effect is exacerbated when media outlets or local police fall for the legends and issue warnings. Urban legends often then appear in popular culture (for example, “The Hook” inspired a Supernatural episode) and become part of our modern culture. The majority of people stop believing them, yet the stories linger in different forms.

Changes in Governments and Revolutions

“There are moments when masses establish contact with their nation's spirit. These are the moments of providence. Masses then see their nation in its entire history, and feel its moments of glory, as well as those of defeat. Then they can clearly feel turbulent events in the future. That contact with the immortal and collective nation's spirit is feverish and trembling. When that happens, people cry. It is probably some kind of national mystery, which some criticize, because they do not know what it represents, and others struggle to define it, because they have never felt it.”

— Corneliu Zelea Codreanu, For My Legionaries

***

From a distance, it can seem shocking when the people of a country revolt and overthrow dominant powers in a short time.

What is it that makes this sudden change happen? The answer is the formation of a critical mass of people necessary to move marginal ideas to a majority consensus. Pyotr Kropotkin wrote:

Finally, our studies of the preparatory stages of all revolutions bring us to the conclusion that not a single revolution has originated in parliaments or in any other representative assembly. All began with the people. And no revolution has appeared in full armor — born, like Minerva out of the head of Jupiter, in a day. They all had their periods of incubation during which the masses were very slowly becoming imbued with the revolutionary spirit, grew bolder, commenced to hope, and step by step emerged from their former indifference and resignation. And the awakening of the revolutionary spirit always took place in such a manner that at first, single individuals, deeply moved by the existing state of things, protested against it, one by one. Many perished, “uselessly,” the armchair critic would say. But the indifference of society was shaken by these progenitors. The dullest and most narrow-minded people were compelled to reflect, “Why should men, young, sincere, and full of strength, sacrifice their lives in this way?” It was impossible to remain indifferent; it was necessary to take a stand, for, or against: thought was awakening. Then, little by little, small groups came to be imbued with the same spirit of revolt; they also rebelled — sometimes in the hope of local success — in strikes or in small revolts against some official whom they disliked, or in order to get food for their hungry children, but frequently also without any hope of success: simply because the conditions grew unbearable. Not one, or two, or tens, but hundreds of similar revolts have preceded and must precede every revolution.

When an oppressive regime is in power, a change is inevitable. However, it is almost impossible to predict when that change will occur. Often, a large number of people want change and yet fear the consequences or lack the information necessary to join forces. When single individuals act upon their feelings, they are likely to be punished without having any real impact. Only when a critical mass of people’s desire for change overwhelms their fear can a revolution occur. Other people are encouraged by the first group, and the idea spreads rapidly.

One example occurred in China in 1989. While the desire for change was almost universal, the consequences felt too dire. When a handful of students protested for reform in Beijing, the authorities did not initially punish them. We have all seen the classic image of a lone student, shopping bags in hand, standing in front of a procession of tanks and halting them. Those few students who protested were the critical mass. Demonstrations erupted in more than 300 towns all over the country as people found the confidence to act.

Malcolm Gladwell on Tipping Points

An influential text on the topic of critical mass is Malcolm Gladwell’s The Tipping Point. Published in 2000, the book describes a tipping point as “the moment of critical mass, the threshold, the boiling point.” He notes that “Ideas and products and messages and behaviors spread just like viruses do” and cites such examples as the sudden popularity of Hush Puppies and the steep drop in crime in New York after 1990. Gladwell writes that although the world “may seem like an immovable, implacable place,” it isn't. “With the slightest push — in just the right place — it can be tipped.”

Referring to the 80/20 rule (also known as Pareto’s principle), Gladwell explains how it takes a tiny number of people to kickstart the tipping point in any sort of epidemic:

Economists often talk about the 80/20 Principle, which is the idea that in any situation roughly 80 percent of the “work” will be done by 20 percent of the participants. In most societies, 20 percent of criminals commit 80 percent of crimes. Twenty percent of motorists cause 80 percent of all accidents. Twenty percent of beer drinkers drink 80 percent of all beer. When it comes to epidemics, though, this disproportionality becomes even more extreme: a tiny percentage of people do the majority of the work.
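The 80/20 claim is easy to play with numerically. The sketch below samples “contributions” from a heavy-tailed Pareto distribution; the shape parameter of roughly 1.16 is chosen precisely because it produces an 80/20-style split, so this is an illustration of the pattern rather than evidence for it.

```python
# A quick numerical illustration of the 80/20 pattern: draw "contributions" from a
# heavy-tailed Pareto distribution and measure the share produced by the top 20%.
# The shape parameter ~1.16 is picked because it yields roughly an 80/20 split.
import numpy as np

rng = np.random.default_rng(0)
contributions = rng.pareto(1.16, size=100_000) + 1  # heavy-tailed "work" per person

contributions.sort()
top_20_percent = contributions[int(0.8 * len(contributions)):]
share = top_20_percent.sum() / contributions.sum()
print(f"Top 20% of participants account for roughly {share:.0%} of the total")
```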

Rising crime rates are also the result of a critical mass of people who see unlawful behavior as justified, acceptable, or necessary. It takes only a small number of people who commit crimes for a place to seem dangerous and chaotic. Gladwell explains how minor transgressions lead to more serious problems:

[T]he Broken Windows theory … was the brainchild of the criminologists James Q. Wilson and George Kelling. Wilson and Kelling argued that crime is the inevitable result of disorder. If a window is broken and left unrepaired, people walking by will conclude that no one cares and no one is in charge. Soon, more windows will be broken, and the sense of anarchy will spread from the building to the street it faces, sending a signal that anything goes. In a city, relatively minor problems like graffiti, public disorder, and aggressive panhandling, they write, are all the equivalent of broken windows, invitations to more serious crimes…

According to Gladwell’s research, there are three main factors in the creation of a critical mass of people necessary to induce a sudden change.

The first of these is the Law of the Few. Gladwell states that certain categories of people are instrumental in the creation of tipping points. These categories are:

  • Connectors: We all know connectors. These are highly gregarious, sociable people with large groups of friends. Connectors are those who introduce us to other people, instigate gatherings, and are the fulcrums of social groups. Gladwell defines connectors as those with networks of over one hundred people. An example of a cinematic connector is Kevin Bacon. There is a trivia game known as “Six Degrees of Kevin Bacon,” in which players aim to connect any actor/actress to him within a chain of six films. Gladwell writes that connectors have “some combination of curiosity, self-confidence, sociability, and energy.”
  • Mavens: Again, we all know a maven. This is the person we call to ask what brand of speakers we should buy, or which Chinese restaurant in New York is the best, or how to cope after a rough breakup. Gladwell defines mavens as “people we rely upon to connect us with new information.” These people help create a critical mass due to their habit of sharing information, passing knowledge on through word of mouth.
  • Salesmen: Whom would you call for advice about negotiating a raise, a house price, or an insurance payout? That person who just came to mind is probably what Gladwell calls a salesman. These are charismatic, slightly manipulative people who can persuade others to accept what they say.

The second factor cited by Gladwell is the “stickiness factor.” This is what makes a change significant and memorable. Heroin is sticky because it is physiologically addictive. Twitter is sticky because we want to keep returning to see what is being said about and to us. Game of Thrones is sticky because viewers are drawn in by the narrative and want to know what happens next. Once something reaches a critical mass, stickiness governs the rate of decline: the stickier something is, the slower its decline. Cat videos aren't very sticky, so even the viral ones thankfully fade into the night quickly.

Finally, the third factor is the specific context; the circumstances, time, and place must be right for an epidemic to occur. Understanding how a tipping point works can help to clarify the concept of critical mass.

The 10% Rule

One big question is: what percentage of a population is necessary to create a critical mass?

According to researchers at Rensselaer Polytechnic Institute, the answer is a mere 10%. Computational analysis was used to establish where the shift from minority to majority lies. According to director of research Boleslaw Szymanski:

When the number of committed opinion holders is below 10 percent, there is no visible progress in the spread of ideas. It would literally take the amount of time comparable to the age of the universe for this size group to reach the majority. Once that number grows above 10 percent, the idea spreads like flame.

The research has shown that the 10% can comprise literally anyone in a given society. What matters is that those people are set in their beliefs and do not respond to pressure to change them. Instead, they pass their ideas on to others. (I'd argue that the percentage is lower. Much lower. See Dictatorship of the Minority.)

As an example, Szymanski cites the sudden revolutions in countries such as Egypt and Tunisia: “In those countries, dictators who were in power for decades were suddenly overthrown in just a few weeks.”

According to another researcher:

In general, people do not like to have an unpopular opinion and are always seeking to try locally to come to a consensus … As agents of change start to convince more and more people, the situation begins to change. People begin to question their own views at first and then completely adopt the new view to spread it even further. If the true believers just influenced their neighbors, that wouldn’t change anything within the larger system, as we saw with percentages less than 10.
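The mechanism the researcher describes, stubborn committed agents who never change their minds while everyone else drifts toward local consensus, can be reproduced with a toy version of the binary-agreement (naming-game) model. The sketch below is an illustrative reimplementation with invented sizes and run lengths, not the RPI group's code, but it displays the same qualitative jump once the committed fraction clears roughly ten percent.

```python
# A toy version of the binary-agreement ("naming game") model with committed agents.
# Sizes, run lengths, and fractions are invented for illustration.
import random

def run(n_agents=1000, committed_fraction=0.14, steps=1_000_000, seed=1):
    random.seed(seed)
    n_committed = int(n_agents * committed_fraction)
    # Committed agents hold only opinion 'A' and never change; everyone else starts on 'B'.
    opinions = [{'A'} for _ in range(n_committed)] + [{'B'} for _ in range(n_agents - n_committed)]

    for _ in range(steps):
        speaker, listener = random.sample(range(n_agents), 2)
        word = random.choice(sorted(opinions[speaker]))
        if word in opinions[listener]:
            # Agreement: both collapse to the shared opinion (committed agents stay on 'A').
            if speaker >= n_committed:
                opinions[speaker] = {word}
            if listener >= n_committed:
                opinions[listener] = {word}
        elif listener >= n_committed:
            # Disagreement: the (non-committed) listener now entertains both opinions.
            opinions[listener].add(word)

    return sum(o == {'A'} for o in opinions) / n_agents

for fraction in (0.06, 0.14):
    print(f"committed minority of {fraction:.0%} -> "
          f"{run(committed_fraction=fraction):.0%} of agents end up holding only 'A'")
```

Below the threshold the stubborn minority stays a minority; above it, the whole population tips.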

The potential use of this knowledge is tremendous. Now that we know how many people are necessary to form a critical mass, this information can be manipulated — for good or evil. The choice is yours.

Attrition Warfare: When Even Winners Lose

When warring opponents use similar approaches and possess similar weapons, they tend to grind each other down in a war of attrition. The winning side usually has a slight advantage in production capability or resources.

It's hard to see when you're in it, but most people and businesses are in some form of attrition warfare. The best way out is to use a different approach — through tactics, strategy, or weaponry.

“When you do as everyone else does, don't be surprised when you get what everyone else gets.”

— Peter Kaufman

The International Encyclopedia of the First World War defines attrition warfare as “the sustained process of wearing down an opponent so as to force their physical collapse through continuous losses in personnel, equipment and supplies or [wearing] them down to such an extent that their will to fight collapses.”

Attrition warfare is considered a somewhat dirty tactic, although necessary in some situations. Indeed, theorists are divided as to whether attrition is even a separate tactic, rather than a ubiquitous feature of all conflict.

Traditional military theorists such as Sun Tzu (“Supreme excellence consists of breaking the enemy's resistance without fighting”) and Machiavelli (“Never attempt to win by force what can be won by deception”) advocated clever tactics. These methods tend to result in far fewer casualties, waste fewer resources, and display superior intellect rather than mere strength.

Attrition warfare is usually a last resort. And most of the time when you win, it's only temporary. By not scoring a decisive blow, the winners leave room for the losers to believe they can win the next time.

To understand attrition warfare, we can look at examples of how it works. Let’s examine two wars where attrition played a substantial role.

Attrition Warfare in World War I

One of the clearest examples of attrition warfare is World War I, so much so that many historians refer to it as “the War of Attrition.”

Military technology evolved at an unprecedented rate at the start of the 20th century, rendering the usual maneuvers and tactics irrelevant. The large-scale recruitment of horses for cavalry units that were then pitted against shells and machine guns provides a classic image of this sad discrepancy between what was understood and what was done. Raised on Tennyson’s images of heroic hand-to-hand combat, soldiers instead found themselves confined to trenches in a vicious fight to gain territory, inch by inch.

The goal for much of the war was for each side to amass artillery and troops faster than the other, in order to grind down defenses and sap resources. Both sides were reduced by pure attrition.

Trenches provided a somewhat effective means of protection, as long as soldiers remained within them. Perhaps the most poignant comment on this comes from Harry Patch, the longest surviving World War I soldier, who lived to 111: “If any man tells you he went over the top and he wasn't scared, he's a damn liar.”

Even when territory was gained, moving the heavy weapons forward was slow, difficult, and predictable. Germany gradually lost strength, leading to the eventual collapse of its army. In the process, millions of people died and vast sums of money were spent on ammunition and other resources.

Strategists and commanders were somewhat out of their depth, as no one had any real experience with this type of warfare. Leaving the trenches meant the loss of the most valuable resource — people — and the usual techniques were irrelevant.

This situation led to a stationary war, wherein each side engaged in an incessant hurling of ammunition in the hope of eroding the other’s morale and supplies.

“The war will be ended by the exhaustion of nations rather than the victories of armies.”

— Winston Churchill

In a paper entitled The Theories of Attrition versus Manoeuvre and the Levels of War, Abel J. Esterhuyse writes:

Although campaigns were conducted in different parts of the world, the First World War was largely restricted to a relatively small geographical area in Western Europe. It was characterised by high personnel and equipment losses; an inability to bring the war to a decisive end; and bloody attritional fighting where, in a number of cases, the aim was to conquer terrain with no tactical and strategic value. This created an aversion to or negative view of an attritional war. It also contributed to a view that attrition required that terrain be occupied at all cost in order to ensure success.

We can also see the impact of attrition in the extensive efforts to boost the morale of those at home, far from the fighting — what was referred to as “the home front.”

War is rarely what we imagine. Attrition warfare was not what people had in mind when they sent off their sons, husbands, and friends, or mailed white feathers to the “cowards” who did not go. The stark difference between early propaganda posters and later photographs of the actual fighting illustrates this disconnect.

In a war of attrition, morale is one of the key resources. Initiatives such as the donation of kettles to make airplanes, the growing of cabbages in rose beds, and letters sent to those in the trenches did little to further the pursuit of peace but did much to keep people hopeful.

Historians and military theorists have asserted that the use of attrition was not altogether a strategic one. Much of World War I was characterized by indecision and poor communication, as commanders grappled with slow technology and the difficulty of seeing the bigger picture.

It's not uncommon in war to develop a chessboard-style mentality, wherein commanders are not averse to sacrificing human life for the sake of a strategic advantage.

One particular battle from World War I which stands out as a notable example of attrition warfare is the Battle of Verdun. Occurring in France over 303 days, it involved a standoff between the German and French forces.

The German army intended to capture hills in the area, thereby gaining an elevated point for strategic maneuvers. Early success boosted German morale before their progress slowed to a glacial crawl.

The French army was ordered not to withdraw under any circumstances and to sustain consistent counter-attacks. Regularly switching tactics, the German army continued trying to gain more useful territory.

By July, the battle of the Somme was in full force and the Germans subsequently had fewer resources — a bad position to be in during a war of attrition. Territory was gained and lost numerous times, with a single village switching hands 16 times between June and August.

As German resources were further reduced, commanders resorted to deception (a departure from pure attrition). By the time the French army reclaimed the lost territory, the Battle of Verdun had morphed into one of the deadliest and most expensive battles ever. Estimates of the number of deaths range from 700,000 to nearly 1 million.

From the start, the Battle of Verdun was fought upon a foundation of attrition. Both sides were essentially trapped, with no option but to keep trying to force the other into submission. Both sides were using similar approaches with similar capabilities.

Much has been written about the tactics used, with some critics stating that the use of attrition was a necessity due to poor judgment, rather than being a considered choice. It also illustrates a problematic element of attrition warfare: Once a substantial number of lives have been lost, commanders are motivated to continue a battle for longer than is wise in an attempt to justify those losses — a deadly instance of the sunk costs fallacy.

In The Price of Glory, Alistair Horne quotes the diary of a French lieutenant who fought at Verdun: “Humanity is mad. It must be mad to do what it is doing. What a massacre! What scenes of horror and carnage! I cannot find words to translate my impressions. Hell cannot be so terrible. Men are mad!” He was killed a short time later as a result of a shell explosion. Horne also quotes a German soldier: “[The war would not be over] until the last German and the last French hobbled out of the trenches to exterminate each other with pocket knives.”

Attrition Warfare in the Vietnam War

The Vietnam War is another key example of attrition warfare. North Vietnam (aided by China, the Soviet Union, and other communist nations) fought South Vietnam (aided by the US, South Korea, Thailand, Australia and others). Spread over Laos, Cambodia, and Vietnam, the war lasted almost 20 years.

Much like World War II, the Vietnam War left an indelible mark on the affected nations and people. It was characterized by extreme brutality and human suffering — a ubiquitous feature of attrition warfare.

In an attempt to either prevent or foster the growth of communism, dominant world powers faced each other in appalling conditions. Somewhere in the region of 1.4–3.4 million lives were lost in the fight to prevent an economic model from spreading. Roughly ten percent of South Vietnam was sprayed with Agent Orange in a bid to destroy crops and cut off food supplies, leading to hundreds of thousands of birth defects.

In the early 1960s, the US began to send troops to Vietnam (formerly part of French Indochina). Borders and old alliances were forgotten as the fighting spread over the following two decades.

Guerrilla warfare clashed with attrition. Struggling to compete with an unpredictable guerrilla insurgency (which included raids, sabotage and ambushes), the US army turned to attrition and sought to kill as many Vietnamese people as possible. Rather than aiming to gain territory as in typical conflicts, the goal was to reach the highest feasible body count in order to batter Vietnam into submission.

In Kill Anything That Moves, Nick Turse paints a stark portrait of the use of attrition warfare in the Vietnam War:

In Vietnam, the statistically minded war managers focused, above all, on the notion of achieving a “crossover point”: the moment when American soldiers would be killing more enemies than their Vietnamese opponents could replace… Producing a high body count was crucial for promotion in the officer corps. Many high-level officers established “production quotas” for their units, and systems of “debit” and “credit” to calculate exactly how efficiently subordinate units and middle-management personnel performed. Different formulas were used, but the commitment to war as a rational production process was common to all.

In many cases, attrition warfare can become a process of attempting to outright annihilate an opponent.

In Vietnam, uncertainty as to who the enemy was turned every non-American into a target.

The Americans never really grasped who the enemy was… they assumed that most villagers either were in league with the enemy or were guerrillas themselves once the sun went down… Farmers simply wanted nothing to do with the conflict or abstract notions like nationalism and communism… But bombs and napalm don’t discriminate. As gunships and howitzers ravaged the landscape… Vietnamese villages of every type … perished in vast numbers.

Guerrilla warfare is the inverse of attrition: unstructured, fragmented and based on tactics, not force. It can take a large force engaging in attrition to defeat a small one engaged in guerrilla warfare.

Vietnamese revolutionary forces, decisively outgunned by their adversaries, relied heavily on mines and other booby traps, as well as sniper fire and ambushes… Unable to deal with an enemy that overwhelmingly dictated the time, place, and duration of combat, US forces took to destroying whatever they could manage.

The Issues With Attrition Warfare

Among the many problems with attrition warfare are these:

  • High death tolls — This is the primary issue. Although all wars involve casualties, attrition warfare increases the number of combatants and civilians who are killed.
  • High costs — Attrition warfare requires a lot of resources. Adjusting for inflation, the Vietnam War cost $770 billion, plus $1 trillion in subsequent veterans’ benefits. Those figures do not even take into account the impact on industries and economies.
  • The potential for abuse — When maximum damage is the goal, there is more potential for war crimes to occur and for people to abuse their power.
  • Long durations — A war of attrition may be lengthy and slow moving. As Sun Tzu wrote, “there is no instance of a country having benefited from prolonged warfare.”
  • The potential for an unstable outcome — The outcome of attrition warfare is not always clear. A nation which is battered into submission may rebel in the future, and tensions are exacerbated. The unstable outcome of World War I is in part responsible for the outbreak of World War II.
  • Long-term impact on a nation — Attrition warfare can cause serious long-term problems for both sides. The deaths of large numbers of young, mostly male combatants can lead to a dramatic drop in birth rates for years to come. It is estimated that World War I led to 3.2 million fewer births than would have been expected in Germany alone. Fewer people to work during and after a war means a decrease in productivity and a change in the structure of economies. Money spent on fighting a war of attrition cannot be used for other areas, such as healthcare. Educational institutions often suffer, as young people are less able to attend. Large areas of land are destroyed, damaging farmland, homes, and infrastructure. It can take decades for a nation to recover from the impact of attrition warfare.

Attrition in Business — and Two Ways Out

The concept of attrition applies outside of war. Bureaucracies grind people out. Excessive competition among businesses might be good for consumers, but it's a grind for the companies involved.

There are two ways out that seem important.

The first option is to opt out: once you recognize that you're mired in attrition, you can simply stop competing. Maybe if you put aside your ego, you'll decide that the time and resources required to compete are too much.

The second option might appeal more. History shows that the way out of trench warfare is the use of asymmetric weaponry. If you're using basically the same strategy with basically the same resources as your competitor, you're in a war of attrition. If, however, you choose a radically different strategy with the same resources or fewer, you're likely not in a war of attrition. Of course, if you're right and your radically different strategy succeeds, you get the spoils. If you're wrong and your strategy fails, you get the humiliation of being wrong. And since the world forgives conventional failure far more readily than unconventional failure, you've got a difficult choice to make.