Tag: Mental Model

Bias from Association: Why We Shoot the Messenger

Bias from Association

We automatically connect a stimulus (a thing or person) with pain (fear) or pleasure (hope). As pleasure-seeking animals, we seek out positive associations and attempt to remove negative ones. This happens easily when we experience the positive or negative consequences of a stimulus. The more vivid the event, the easier it is to remember. Brands (including people) attempt to influence our behavior by associating themselves with positive things.

***

Bias from Association

Our lives and memories revolve around associations. The smell of a good lunch makes our stomachs growl, the songs we hear remind us of special times we have had, and horror movies leave us with goosebumps.

These natural, uncontrolled responses to a specific signal are examples of classical conditioning. Classical conditioning, or in simple terms learning by association, was discovered by the Russian scientist Ivan Petrovich Pavlov. Pavlov was a physiologist whose work on digestion in dogs won him a Nobel Prize in 1904.

In the course of his work in physiology, Pavlov made an accidental observation that dogs started salivating even before their food was presented to them.

With repeated testing, he noticed that the dogs began to salivate in anticipation of a specific signal, such as the footsteps of their feeder or, if conditioned that way, even after the sound of a tone.

Pavlov’s genius lay in his ability to understand the implications of his discovery. He knew that dogs have a natural reflex of salivating to food but not to footsteps or tones. He was on to something. Pavlov realized that, if coupling the two signals together induced the same reactive response in dogs, then other physical reactions may be inducible via similar associations.

In effect, with Pavlovian association, we respond to a stimulus because we anticipate what comes next: the reality that would make our response correct.

Now things get interesting.

Rules of Conditioning

Suppose we want to condition a dog to salivate to a tone. If we sound the tone without having taught the dog to specifically respond, the ears of the dog might move, but the dog will not salivate. The tone is just a neutral stimulus, at this point. On the other hand, food for the dog is an unconditioned stimulus, because it always makes the dog salivate.

If we now pair the arrival of food with the sound of the tone, we create a learning trial for the dog. After several such trials, the association becomes strong enough to make the dog salivate even when there is no food. The tone, at this point, has become a conditioned stimulus. This is learned hope. Learned fear is acquired even more easily.

The speed and degree to which the dog learns to display the response will depend on several factors.

The best results come when the conditioned stimulus is paired with the unconditioned one several times. This develops a strong association. It takes time for our brains to detect specific patterns.
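
To make the idea of gradually strengthening associations concrete, here is a minimal sketch using the Rescorla-Wagner learning rule, a standard textbook model that the article itself does not mention; the salience and learning-rate numbers are purely illustrative assumptions.

```python
# A minimal sketch (not from the article) of how repeated pairings can
# strengthen an association, using the Rescorla-Wagner update: the
# associative strength V moves toward its maximum (lambda) a little
# more on each conditioning trial, in proportion to how surprising
# the outcome still is.

def condition(trials, v=0.0, salience=0.3, lam=1.0):
    """Return associative strength after each pairing of tone and food."""
    history = []
    for _ in range(trials):
        v += salience * (lam - v)   # learning is proportional to surprise
        history.append(round(v, 3))
    return history

# Ten ordinary pairings build the association gradually...
print(condition(10, salience=0.3))
# ...while a single vivid, high-salience event creates a strong one at once.
print(condition(1, salience=0.9))
```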

Classical conditioning involves automatic or reflexive responses, not voluntary behavior.

There are also cases to which this rule does not apply. With high-impact events, such as a car crash, a robbery, or being fired from a job, a single experience is enough to create a strong association.

Why We Shoot The Messenger

One of our goals should be to understand how the world works. A necessary condition for this is understanding our problems. However, sometimes people are afraid to tell us about them.

This is also known as the Pavlovian Messenger Syndrome.

The original messenger wasn't shot, he was beheaded. In Plutarch's Lives we find:

The first messenger, that gave notice of Lucullus' coming was so far from pleasing Tigranes that, he had his head cut off for his pains; and no man dared to bring further information. Without any intelligence at all, Tigranes sat while war was already blazing around him, giving ear only to those who flattered him.

This happens in organizations countless times. A related sentiment appears in Antigone by Sophocles: “No one loves the messenger who brings bad news.”

In a lesson on elementary worldly wisdom, Charlie Munger said:

If people tell you what you really don't want to hear — what's unpleasant — there's an almost automatic reaction of antipathy. You have to train yourself out of it. It isn't foredestined that you have to be this way. But you will tend to be this way if you don't think about it.

In Antony and Cleopatra, when told Antony has married another, Cleopatra threatens to treat the messenger poorly, eliciting the response “Gracious madam, I that do bring the news made not the match.”

And the advice “Don't shoot the messenger” appears in Henry IV, Part 2.

If you yourself happen to be the messenger, it might be best to deliver the news indirectly first and appear in person later, to minimize the negative feelings directed towards you.

If, on the other hand, you're the receiver of bad news, it's best to follow the advice of Warren Buffett, who comments on being informed of bad news:

We only give a couple of instructions to people when they go to work for us: One is to think like an owner. And the second is to tell us bad news immediately — because good news takes care of itself. We can take bad news, but we don't like it late.

Pavlov showed that sequence matters: the association is clearest when the conditioned stimulus appears first and is still present when the unconditioned stimulus is introduced.

Unsurprisingly, our learning responses become weaker if the two stimuli are introduced at the same time, and weaker still if they are presented in the reverse order (unconditioned stimulus first, then conditioned stimulus).

Attraction and Repulsion

There’s no doubt that classical conditioning influences what attracts us and even arouses us. Most of us will recognize that images and videos of kittens will make our hearts softer and perfume or a look from our partner can make our hearts beat faster.

Charlie Munger explains the case of building Coca-Cola, whose marketing and product strategy is built on strong foundations of conditioning.

Munger walks us through the creation of the brand by using conditioned reflexes:

The neural system of Pavlov's dog causes it to salivate at the bell it can't eat. And the brain of man yearns for the type of beverage held by the pretty woman he can't have. And so, Glotz, we must use every sort of decent, honorable Pavlovian conditioning we can think of. For as long as we are in business, our beverage and its promotion must be associated in consumer minds with all other things consumers like or admire.

By repeatedly pairing a product or brand with a favorable impression, we can turn it into a conditioned stimulus that makes us buy.

This goes even beyond advertising — conditioned reflexes are also encompassed in Coca Cola’s name. Munger continues:

Considering Pavlovian effects, we will have wisely chosen the exotic and expensive-sounding name “Coca-Cola,” instead of a pedestrian name like “Glotz's Sugared, Caffeinated Water.”

And even texture and taste:

And we will carbonate our water, making our product seem like champagne, or some other expensive beverage, while also making its flavor better and imitation harder to arrange for competing products.

Combining these with other clever, non-Pavlovian techniques leads to what Charlie Munger calls a lollapalooza effect, driving consumers to buy in such numbers that Coca-Cola has remained a great business for over a century.

While Coca-Cola has some of its advantages rooted in positive Pavlovian association, there are cases where associations do us no good. In childhood, many of us were afraid of doctors or dentists because we quickly learned to associate those visits with pain. While we may have lost our fear of dentists by now, many of us experience similarly unpleasant feelings when opening a letter from the police or anticipating a negative performance review.

Constructive criticism can be one of life’s great gifts and an engine for improvement; however, before we can benefit from it, we must be prepared for some of it to hurt. If we are not at least implicitly aware of this conditioning and people keep telling us what we don’t want to hear, we may develop a dislike of those delivering the news.

The number of people in leadership positions unable to detach the information from the messenger can be truly surprising. In The Psychology of Human Misjudgement, Munger tells of the former CEO of CBS, William Paley, who had a blind spot for ideas that did not align with his views.

Television was dominated by one network – CBS – in its early days. And Paley was a god. But he didn't like to hear what he didn't like to hear, and people soon learned that. So they told Paley only what he liked to hear. Therefore, he was soon living in a little cocoon of unreality and everything else was corrupt although it was a great business.

In the case of Paley, his inability to take criticism and recognize incentives was soon noticed by those around him and it resulted in sub-optimal outcomes.

… If you take all the acquisitions that CBS made under Paley after the acquisition of the network itself, with all his dumb advisors – his investment bankers, management consultants, and so forth, who were getting paid very handsomely – it was absolutely terrible.

Paley is by no means the only example of such dysfunction in the high ranks of business. In fact, the higher up you are in an organization, the more people fear telling you the truth. Providing sycophants with positive reinforcement will only encourage this behaviour and ensure you're insulated from reality.

To make matters worse, as we move up in seniority, we also tend to become more confident about our own judgements being correct. This is a dangerous tendency, but we need not be bound by it.

We can train ourselves out of it with reflection and effort.

Escaping Associations

There is no doubt that learning via association is crucial for our survival — it alerts us to the arrival of an important event and gives us time to prepare the appropriate response.

Sometimes, however, learnt associations do not serve us and our relationships well. We find that we have become subject to negative responses in others or recognize unreasonable responses in ourselves.

Awareness and understanding may serve as good first steps. Yet even taken together, they may not be enough to unlearn some of the more stubborn associations. In such cases, we may want to try one of several known techniques to desensitize those associations or reverse their negative effects.

One way to proceed is via habituation.

When we habituate someone, we blunt their conditioned response by exposing them to the specific stimulus continuously. After a while, they simply stop responding. This loss of interest is a natural learning response that lets us conserve energy for stimuli that are unfamiliar and therefore worth the mind’s attention.

Continuous exposure can be powerful enough to leave us fully indifferent even to stimuli as strong as violence and death.

In Man’s Search for Meaning, Viktor Frankl, a Holocaust survivor, describes experiencing complete desensitization to the most horrific events imaginable:

Disgust, horror and pity are emotions that our spectator [Frankl] could not really feel any more. The sufferers, the dying and the dead, became such commonplace sights to him after a few weeks of camp life that they could not move him any more.

Of course, habituation can also serve good motives, such as getting over fear, overcoming trauma, or harmonizing relationships by making each side less sensitive to the other’s vices. However, as powerful as habituation is, we must recognize its limitations.

If we want someone to respond differently rather than become indifferent, flooding them with stimuli will not help us achieve our aims.

Consider the case of teaching children – the last thing we want is to make them indifferent to what we say. Therefore, instead of habituation, we should employ another strategy.

A technique frequently used in coaching, exposure therapy, involves cutting back our criticism for a while and then reintroducing it gradually, slowly raising the person’s threshold for becoming defensive.

The key difference between exposure therapy and habituation lies in being subtle rather than blunt.

If we want to avoid forming negative associations while still achieving behavioral change, we will want the ratio of positive to negative feedback to lean towards the positive. This is why we so often provide feedback in a “sandwich,” where a positive remark is followed by what must be improved and then finished with another positive remark.

Aversion therapy is the exact opposite of exposure therapy.

Aversion therapy aims to replace a positive association with a negative one within a few high-impact events. For example, some parents try to break a sweet tooth by having their children consume an overwhelming amount of sweets in one sitting under their supervision.

While ethically questionable, this idea is not completely unfounded.

If the experience is traumatic enough, the positive associations of, for example, a sugar high, will be replaced by the negative association of nausea and sickness.

This controversial technique was used in experiments with alcoholics. While effective in theory, it yielded only mixed results in practice, with patients often reverting to old habits over time.

This is also why there are gross and terrifying pictures on cigarette packages in many countries.

Overall, creating habits that last or permanently breaking them can be a tough mission to embark upon.

In the case of feedback, we may try to associate our presence with positive stimuli, which is why building great first impressions and appearing friendly matters.

Keep in Mind

When thinking about this bias it's important to keep in mind that: (1) people are neither good nor bad simply because we associate something positive or negative with them; (2) bad news should be sought out immediately, and your reaction to it will dictate how much of it you hear; (3) to end a certain behavior or habit, you can create an association with a negative emotion.

***

Still Curious? Check out the Farnam Street Latticework of Mental Models

Mental Model: Misconceptions of Chance

Misconceptions of Chance

We expect the immediate outcome of events to represent the broader outcomes expected from a large number of trials. We believe that chance events will immediately self-correct and that small sample sizes are representative of the populations from which they are drawn. All of these beliefs lead us astray.

***

 

Our understanding of the world around us is imperfect and when dealing with chance our brains tend to come up with ways to cope with the unpredictable nature of our world.

“We tend,” writes Peter Bevelin in Seeking Wisdom, “to believe that the probability of an independent event is lowered when it has happened recently or that the probability is increased when it hasn't happened recently.”

In short, we believe an outcome is due and that chance will self-correct.

The problem with this view is that nature has neither a sense of fairness nor a memory. We only fool ourselves when we believe that past independent events exert influence over, or offer meaningful predictive power about, future ones.

Furthermore, we also mistakenly believe that we can control chance events. This applies to risky and uncertain events alike.

Chance events coupled with positive reinforcement or negative reinforcement can be a dangerous thing. Sometimes we become optimistic and think our luck will change and sometimes we become overly pessimistic or risk-averse.

How do you know if you're dealing with chance? A good heuristic is to ask yourself whether you can lose on purpose. If you can't, you're likely far toward the chance side of the skill vs. luck continuum. No matter how hard you practice, the probability of chance events won't change.

“We tend,” writes Nassim Taleb in The Black Swan, “to underestimate the role of luck in life in general (and) overestimate it in games of chance.”

We are only discussing independent events. If events are dependent, where the outcome depends on the outcome of some other event, all bets are off.

 

***

Misconceptions of Chance

Daniel Kahneman coined the term misconceptions of chance to describe the phenomenon of people extrapolating large-scale patterns to samples of a much smaller size. Our trouble navigating the sometimes counterintuitive laws of probability, randomness, and statistics leads to misconceptions of chance.

Kahneman found that “people expect that a sequence of events generated by a random process will represent the essential characteristics of that process even when the sequence is short.”

In the paper Belief in the Law of Small Numbers, Kahneman and Tversky reflect on the results of an experiment, where subjects were instructed to generate a random sequence of hypothetical tosses of a fair coin.

They [the subjects] produce sequences where the proportion of heads in any short segment stays far closer to .50 than the laws of chance would predict. Thus, each segment of the response sequence is highly representative of the “fairness” of the coin.

Unsurprisingly, the same kinds of errors occurred when the subjects, instead of generating sequences themselves, were simply asked to distinguish between random and human-generated sequences. It turns out that when considering tosses of a coin for heads or tails, people regard the sequence H-T-H-T-T-H as more likely than the sequence H-H-H-T-H-T, which does not appear random, and also more likely than the sequence H-H-H-H-T-H. In reality, each of those sequences has exactly the same probability of occurring. This is a misconception of chance.

What most of us find so hard to grasp is that any specific pattern of the same length is just as likely to occur in a random sequence. For example, the probability of getting 5 tails in a row is 0.03125 – simply 0.5 (the probability of a specific outcome at each trial) raised to the power of 5 (the number of trials).

The same rule applies to the specific sequences HHTHT or THTHT – each is obtained by once again taking 0.5 to the power of 5, which equals 0.03125.
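
A few lines of code can verify this: enumerating all 32 equally likely five-flip outcomes shows that an "orderly" run of tails and a "random-looking" sequence have exactly the same probability.

```python
from itertools import product

# Every specific sequence of five fair-coin flips is exactly one of the
# 2**5 = 32 equally likely outcomes, so each has probability 1/32 = 0.03125,
# whether or not it "looks" random.
outcomes = [''.join(seq) for seq in product('HT', repeat=5)]

for target in ('TTTTT', 'HHTHT', 'THTHT'):
    probability = outcomes.count(target) / len(outcomes)
    print(target, probability)   # each prints 0.03125
```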

This probability holds for any specific sequence – but it says nothing about whether a short sequence will reflect the true long-run proportion of heads and tails.

Yet this still surprises us, because we expect the single-event probability to be reflected not only in the overall proportion of outcomes but also in the specific short sequences we encounter. That is not the case. A perfectly alternating sequence is just as extraordinary as a sequence of all tails or all heads.

In comparison, “a locally representative sequence,” Kahneman writes, in Thinking, Fast and Slow, “deviates systematically from chance expectation: it contains too many alternations and too few runs. Another consequence of the belief in local representativeness is the well-known gambler’s fallacy.”

***

Gambler’s Fallacy

There is a specific variation of the misconceptions of chance that Kahneman calls the Gambler’s fallacy (elsewhere also called the Monte Carlo fallacy).

The gambler's fallacy means that when we come across a local imbalance, we expect future events to smooth it out. We act as if every segment of a random sequence must reflect the true proportion and, when the sequence deviates from the population proportion, we expect the imbalance to be corrected soon.

Kahneman explains that this is unreasonable – coins, unlike people, have no sense of equality and proportion:

The heart of the gambler's fallacy is a misconception of the fairness of the laws of chance. The gambler feels that the fairness of the coin entitles him to expect that any deviation in one direction will soon be cancelled by a corresponding deviation in the other. Even the fairest of coins, however, given the limitations of its memory and moral sense, cannot be as fair as the gambler expects it to be.

He illustrates this with an example of the roulette wheel and our expectations when a reasonably long sequence of repetition occurs.

After observing a long run of red on the roulette wheel, most people erroneously believe that black is now due, presumably because the occurrence of black will result in a more representative sequence than the occurrence of an additional red.

In reality, of course, roulette is a random, non-evolving process in which the chance of getting a red or a black never depends on the past sequence. The probabilities reset after each spin, yet we still seem to take past spins into account.

Contrary to our expectations, the universe keeps no account of a random process, so streaks are not nudged back towards the true proportion. Your chance of getting a red after a series of blacks will always be equal to your chance of getting yet another black, as long as the wheel is fair.
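
A short simulation makes the point concrete; it is a sketch that ignores the green zero for simplicity and assumes a 50/50 red-black wheel. The chance of red after a run of three blacks comes out no different from the chance of red overall.

```python
import random

# Simulate a fair red/black wheel (the green zero is ignored, an assumption
# made purely for simplicity) and compare the frequency of red overall with
# the frequency of red immediately after a run of three blacks.
random.seed(1)
spins = [random.choice('RB') for _ in range(200_000)]

after_black_run = [spins[i] for i in range(3, len(spins))
                   if spins[i - 3:i] == list('BBB')]

print(sum(s == 'R' for s in spins) / len(spins))                      # ~0.5
print(sum(s == 'R' for s in after_black_run) / len(after_black_run))  # ~0.5
```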

The gambler’s fallacy is not committed only inside the casino. Many of us commit it frequently by assuming that a small, random sample will tend to correct itself.

For example, assume that the average IQ in a certain country is known to be 100, and that, to assess intelligence in a particular district, we draw a random sample of 50 people. The first person in our sample happens to have an IQ of 150. What would you expect the mean IQ of the whole sample to be?

The correct answer is (100*49 + 150*1)/50 = 101. Yet without knowing the correct answer, it is tempting to say it is still 100 – the same as in the country as a whole.
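
A quick simulation confirms the arithmetic. It assumes the remaining 49 people are drawn from a normal distribution with mean 100 and standard deviation 15; the standard deviation is an illustrative assumption, not a figure from the text.

```python
import random

# Average the sample mean over many hypothetical districts in which the
# first person scored 150 and the other 49 come from a population with
# mean 100 (standard deviation 15 is assumed for illustration).
random.seed(7)
trials = 20_000
total = 0.0
for _ in range(trials):
    rest = [random.gauss(100, 15) for _ in range(49)]  # the other 49 people
    total += (150 + sum(rest)) / 50                    # sample mean incl. the 150
print(total / trials)  # ~101, not 100: the outlier is diluted, not corrected
```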

According to Kahneman and Tversky, such an expectation could only be justified by the belief that a random process is self-correcting and that sample variation is always proportional. They explain:

Idioms such as “errors cancel each other out” reflect the image of an active self-correcting process. Some familiar processes in nature obey such laws: a deviation from a stable equilibrium produces a force that restores the equilibrium.

Indeed, this may be true in thermodynamics, chemistry, and arguably also economics. These, however, are false analogies. Processes governed by chance are not guided by principles of equilibrium, and the random outcomes in a sequence are not kept in any common balance.

“Chance,” Kahneman writes in Thinking, Fast and Slow, “is commonly viewed as a self-correcting process in which a deviation in one direction induces a deviation in the opposite direction to restore the equilibrium. In fact, deviations are not “corrected” as a chance process unfolds, they are merely diluted.”

 

***

The Law of Small Numbers

Misconceptions of chance are not limited to gambling. In fact, most of us fall for them all the time, because we intuitively believe (and there is a whole best-seller section at the bookstore to prove it) that inferences drawn from small sample sizes are highly representative of the populations from which they are drawn.

By illustrating people's expectations of random heads-and-tails sequences, we have already established that we have preconceived notions of what randomness looks like. This, coupled with our unfortunate tendency to believe in a self-correcting process within a random sample, generates expectations about sample characteristics and representativeness that are not necessarily true. The expectation that the patterns and characteristics of a small sample will be representative of the population as a whole is called the law of small numbers.

Consider the sequence:

1, 2, 3, _, _, _

What do you think are the next three digits?

The task almost seems laughable, because the pattern is so familiar and obvious – 4, 5, 6. However, there are endless algorithms that would still fit the first three numbers, such as a Fibonacci-like sequence (5, 8, 13), a repeating sequence (1, 2, 3), a random sequence (5, 8, 2), and many others. The truth is that there simply is not enough information to say, with any reliability, what rule governs this particular sequence.
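
To see how little the prefix pins down, here is a small sketch with three toy generators, each consistent with 1, 2, 3 and each continuing differently; the generator names are my own.

```python
# The prefix 1, 2, 3 underdetermines the rule: three generators (among
# endless others) that all start the same way and then diverge.
def counting(n):  return list(range(1, n + 1))            # 1,2,3,4,5,6
def repeating(n): return [(i % 3) + 1 for i in range(n)]  # 1,2,3,1,2,3

def fibonacci_like(n):                                    # 1,2,3,5,8,13
    seq = [1, 2]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq

for rule in (counting, repeating, fibonacci_like):
    print(rule.__name__, rule(6))
```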

The same rule applies to sampling problems – sometimes we feel we have gathered enough data to tell a real pattern from an illusion. Let me illustrate this fallacy with yet another example.

Imagine that you face a tough decision between investing in the development of two different product opportunities. Let’s call them Product A and Product B. You are interested in which product would appeal to the majority of the market, so you decide to conduct customer interviews. Out of the first five pilot interviews, four customers show a preference for Product A. While the sample size is quite small, given the time pressure involved, many of us would already have some confidence in concluding that the majority of customers prefer Product A.

However, a quick statistical test will tell you that the probability of a result at least this extreme is in fact 3/8, assuming customers have no preference at all. In simple terms, if customers were indifferent between Products A and B, you would still expect about 3 samples out of 8 to come out this lopsided, with four or more of the five customers favoring the same product.
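
The 3/8 figure is easy to check with a short binomial calculation, assuming an indifferent population in which each customer picks Product A with probability 0.5.

```python
from math import comb

# If customers truly had no preference (p = 0.5), how often would five
# interviews come out at least as lopsided as 4-to-1?
p_four_or_more_for_A = sum(comb(5, k) for k in (4, 5)) / 2**5   # 6/32
p_at_least_as_extreme = 2 * p_four_or_more_for_A                # either product
print(p_four_or_more_for_A)    # 0.1875
print(p_at_least_as_extreme)   # 0.375 = 3/8
```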

Basically, a study of this size has little to no predictive validity – these results could easily be obtained from a population with no preference for either product. This, of course, does not mean that talking to customers is of no value. Quite the contrary – the more random cases we examine, the more reliable and accurate our estimate of the true proportion will be. If we want near certainty, we must be prepared for a lot of work.

There will always be cases where a guesstimate based on a small sample will be enough because we have other critical information guiding the decision-making process or we simply do not need a high degree of confidence. Yet rather than assuming that the samples we come across are always perfectly representative, we must treat random selection with the suspicion it deserves. Accepting the role imperfect information and randomness play in our lives and being actively aware of what we don’t know already makes us better decision makers.

Bias from Overconfidence: A Mental Model


“What a Man wishes, he will also believe” – Demosthenes

Bias from overconfidence is a natural human state. All of us believe good things about ourselves and our skills. As Peter Bevelin writes in Seeking Wisdom:

Most of us believe we are better performers, more honest and intelligent, have a better future, have a happier marriage, are less vulnerable than the average person, etc. But we can’t all be better than average.

This inherent base rate of overconfidence is especially strong when projecting our beliefs about our future. Over-optimism is a form of overconfidence. Bevelin again:

We tend to overestimate our ability to predict the future. People tend to put a higher probability on desired events than undesired events.

The bias from overconfidence is insidious because of how many factors can create and inflate it. Emotional, cognitive, and social factors all play a part. Emotional, because believing bad things about ourselves or about our lives is painful.

The emotional and cognitive distortion that creates overconfidence is a dangerous and almost unavoidable companion to any form of success.

Roger Lowenstein writes in When Genius Failed, “there is nothing like success to blind one to the possibility of failure.”

In Seeking Wisdom Bevelin writes:

What tends to inflate the price that CEOs pay for acquisitions? Studies found evidence of infection through three sources of hubris: 1) overconfidence after recent success, 2) a sense of self-importance; the belief that a high salary compared to other senior ranking executives implies skill, and 3) the CEOs belief in their own press coverage. The media tend to glorify the CEO and over-attribute business success to the role of the CEO rather than to other factors and people. This makes CEOs more likely to become both more overconfident about their abilities and more committed to the actions that made them media celebrities.

This isn’t an effect confined to CEOs and large transactions. This feedback loop happens every day between employees and their managers. Or between students and professors, even peers and spouses.

Perhaps the most surprising, pervasive, and dangerous reinforcers of overconfidence are social incentives. Take this example of social pressure on doctors, from Kahneman in Thinking, Fast and Slow:

Generally, it is considered a weakness and a sign of vulnerability for clinicians to appear unsure. Confidence is valued over uncertainty and there is a prevailing censure against disclosing uncertainty to patients.

An unbiased appreciation of uncertainty is a cornerstone of rationality—but that is not what people and organizations want. Extreme uncertainty is paralyzing under dangerous circumstances, and the admission that one is merely guessing is especially unacceptable when the stakes are high. Acting on pretended knowledge is often the preferred solution.

And what about those who don’t succumb to this social pressure to let the overconfidence bias run wild?

Kahneman writes, “Experts who acknowledge the full extent of their ignorance may expect to be replaced by more confident competitors, who are better able to gain the trust of the clients.”

It’s important to structure environments that allow for uncertainty, or the system will reward the most overconfident, not the most rational, of the decision-makers.

Making perfect forecasts isn’t the goal; self-awareness is, in the form of wide confidence intervals. Kahneman writes again in Thinking, Fast and Slow:

For a number of years, professors at Duke University conducted a survey in which the chief financial officers of large corporations estimated the results of the S&P index over the following year. The Duke scholars collected 11,600 such forecasts and examined their accuracy. The conclusion was straightforward: financial officers of large corporations had no clue about the short-term future of the stock market; the correlation between their estimates and the true value was slightly less than zero! When they said the market would go down, it was slightly more likely than not that it would go up. These findings are not surprising. The truly bad news is that the CFOs did not appear to know that their forecasts were worthless.

You don’t have to be right. You just have to know that you’re not very likely to be right.

As always with the lollapalooza effect of overlapping, combining, and compounding psychological effects, this one has powerful partners in some of our other mental models. Overconfidence bias is often caused or exacerbated by: doubt-avoidance, inconsistency-avoidance, incentives, denial, believing-first-and-doubting-later, and the endowment effect.

So what are the ways of restraining the overconfidence bias?

One is the discipline to apply basic math, as prescribed by Munger: “One standard antidote to foolish optimism is trained, habitual use of the simple probability math of Fermat and Pascal, taught in my youth to high school sophomores.” (Pair with Fooled by Randomness).

And in Seeking Wisdom, Bevelin reminds us that “Overconfidence can cause unreal expectations and make us more vulnerable to disappointment.” A few sentences later he advises us to “focus on what can go wrong and the consequences.”

Build in some margin of safety in decisions. Know how you will handle things if they go wrong. Surprises occur in many unlikely ways. Ask: How can I be wrong? Who can tell me if I'm wrong?

Bias from Overconfidence is a Farnam Street mental model.

An Introduction to Complex Adaptive Systems

Let’s explore the concept of Complex Adaptive Systems and see how this model might apply in various walks of life.

To illustrate what a complex adaptive system is, and just as importantly, what it is not, let’s take the example of a “driving system” – or as we usually refer to it, a car. (I have cribbed some parts of this example from the excellent book by John Miller and Scott Page.)

The interior of a car, at first glance, is complicated. There are seats, belts, buttons, levers, knobs, a wheel, etc. Removing the passenger seats would make this system less complicated; however, the system would remain essentially functional. Thus, we would not call the car interior complex.

The mechanical workings of a car, however, are complex. The system has interdependent components that must all simultaneously serve their function in order for the system to work. The higher order function, driving, derives from the interaction of the parts in a very specific way.

Let’s say instead of the passenger seats, we remove the timing belt. Unlike the seats, the timing belt is a necessary node for the system to function properly. Our “driving system” is now useless. The system has complexities, but they are not what we would call adaptive.

To understand complex adaptive systems, let’s put hundreds of “driving systems” on the same road, each with the goal of reaching their destination within an expected amount of time. We call this traffic. Traffic is a complex system in which its inhabitants adapt to each other’s actions. Let’s see it in action.

***

On a popular route into a major city, we observe a car in flames on the side of the road, with firefighters working to put out the fire. Naturally, cars will slow to observe the wreck. As the first cars slow, the cars behind them slow in turn. The cars behind them must slow as well. With everyone becoming increasingly agitated, we’ve got a traffic jam. The jam emerges from the interaction of the parts of the system.

With the traffic jam formed, potential entrants to the jam—let’s call them Group #2—get on their smartphones and learn that there is an accident ahead which may take hours to clear. Upon learning of the accident, they predictably begin to adapt by finding another route. Suppose there is only one alternate route into the city. What happens now? The alternate route forms a second jam! (I’m stressed out just writing about this.)

Now let’s introduce a third group of participants, which must choose between jams. Predicting the actions of this third group is very hard to do. Perhaps so many people in group #2 have altered their route that the second jam is worse than the first, causing the majority of the third group to choose jam #1. Perhaps, anticipating that others will follow that same line of reasoning, they instead choose jam #2. Perhaps they stay home!

What we see here are emergent properties of the complex adaptive system called traffic. By the time we hit this third layer of participants, predicting the behavior of the system has become extremely difficult, if not impossible.
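
A toy model, entirely my own illustration rather than anything from Miller and Page, shows how this kind of adaptation defeats prediction: if every driver simply switches toward yesterday's emptier route, the jam sloshes from one route to the other. The 80% switching probability and the number of drivers are arbitrary assumptions.

```python
import random

# Each day, every driver tends to pick whichever route was less crowded
# yesterday, with a little noise. Individually sensible rules produce
# congestion that overshoots and oscillates between the two routes.
random.seed(3)
drivers = 1000
choice = [random.randint(0, 1) for _ in range(drivers)]  # 0 = route A, 1 = route B

for day in range(10):
    load = [choice.count(0), choice.count(1)]
    print(f"day {day}: route A={load[0]}, route B={load[1]}")
    less_crowded = 0 if load[0] < load[1] else 1
    # Everyone adapts to yesterday's traffic report -- and so does everyone else.
    choice = [less_crowded if random.random() < 0.8 else 1 - less_crowded
              for _ in range(drivers)]
```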

The key element to complex adaptive systems is the social element. The belts and pulleys inside a car do not communicate with one another and adapt their behavior to the behavior of the other parts in an infinite loop. Drivers, on the other hand, do exactly that.

***

Where else do we see this phenomenon? The stock market is a great example. Instead of describing it myself, let’s use the words of John Maynard Keynes, who brilliantly related the nature of the market’s complex adaptive parts to that of a beauty contest in chapter 12 of The General Theory.

Or, to change the metaphor slightly, professional investment may be likened to those newspaper competitions in which the competitors have to pick out the six prettiest faces from a hundred photographs, the prize being awarded to the competitor whose choice most nearly corresponds to the average preferences of the competitors as a whole; so that each competitor has to pick, not those faces which he himself finds prettiest, but those which he thinks likeliest to catch the fancy of the other competitors, all of whom are looking at the problem from the same point of view. It is not a case of choosing those which, to the best of one’s judgment, are really the prettiest, nor even those which average opinion genuinely thinks the prettiest. We have reached the third degree where we devote our intelligences to anticipating what average opinion expects the average opinion to be. And there are some, I believe, who practice the fourth, fifth and higher degrees.

Like traffic, the complex, adaptive nature of the market is very clear. The participants in the market are interacting with one another constantly and adapting their behavior to what they know about others’ behavior. Stock prices jiggle all day long in this fashion. Forecasting outcomes in this system is extremely challenging.

To illustrate, suppose that a very skilled, influential, and perhaps lucky, market forecaster successfully calls a market crash. (There were a few in 2008, for example.) Five years later, he publicly calls for a second crash. Given his prescience in the prior crash, market participants might decide to sell their stocks rapidly, causing a crash for no other reason than the fact that it was predicted! Like traffic reports on the radio, the very act of observing and predicting has a crucial impact on the behavior of the system.

Thus, although we know that over the long term, stock prices roughly track the value of their underlying businesses, in the short run almost anything can occur due to the highly adaptive nature of market participants.

***

This also helps us recognize some things that are not complex adaptive systems. Take the local weather. If the Doppler 3000 forecast on the local news predicts rain on Thursday, is the rain any less likely to occur? No. The act of predicting has not influenced the outcome. Although near-term weather is extremely complex, with many interacting parts leading to higher-order outcomes, it does have an element of predictability.

On the other hand, we might call the Earth’s climate partially adaptive, due to the influence of human beings. (Have the cries of global warming and predictions of its worsening not begun affecting the very behavior causing the warming?)

Thus, behavioral dynamics indicate a key difference between weather and climate, and between systems that are simply complex and those that are also adaptive. Failure to use higher-order thinking when considering outcomes in complex adaptive systems is a common cause of overconfidence in prediction making.

***

Complex Adaptive Systems are part of the Farnam Street latticework of Mental Models.

The Minimum Effective Dose: Why Less is More

“Perfection is achieved, not when there is nothing more to add,
but when there is nothing left to take away.”
— Antoine de Saint-Exupéry

***

In pharmacology, the effective dose is the amount of a drug that produces the desired response in most patients. Determining the range for a drug, that is, the difference between the minimum effective dose and the maximum tolerated dose, is incredibly important.

The Minimum Effective Dose (MED) is a concept I first came across in The 4-Hour Body: An Uncommon Guide to Rapid Fat-Loss, Incredible Sex, and Becoming Superhuman. The definition is pretty simple: the smallest dose that will produce the desired outcome (this is also known as the “minimum effective load”).

The argument goes that anything beyond the minimum effective dose is a waste. From the book:

To boil water, the MED is 212°F (100°C) at standard air pressure. Boiled is boiled. Higher temperatures will not make it “more boiled.” Higher temperatures just consume more resources that could be used for something else more productive.

[…]

In biological systems, exceeding your MED can freeze progress for weeks, even months.

[…]

More is not better. Indeed, your greatest challenge will be resisting the temptation to do more. The MED not only delivers the most dramatic results, but it does so in the least time possible.

While that's true in some cases, it's not true in all cases. The world is complicated. Perhaps an example or two will help illustrate.

Consider a bridge used to take vehicles from one side of a river to the other. The maximum anticipated load is 100 tons. By MED logic, it would be over-engineering to make sure it can withstand 101 tons.

Another example: think about a person who wants to make a sports team. Do they want to do just barely enough work to be 0.01 percent better than the next person? Of course not.

Do you want a doctor performing surgery on you who did the bare minimum to pass their exams in medical school?

No, of course not. You don't want to leave things to chance. You want to build a bridge that your kids can cross without you worrying whether there are more cars on it than some engineer guessed 15 years ago. You want a surgeon who is in the top 1%, not one who just scraped through medical school. You want to be so good that you're not on the roster bubble.

There are a lot of areas where applying the minimum required to get an outcome and calling it a day doesn't make any sense at all. In fact, it can be downright dangerous. You want to think about the dynamic and holistic world you're operating in. And, to borrow a concept from engineering, you want to make sure you have a margin of safety.

 

Margin of Safety: An Introduction to the Mental Model

Previously on Farnam Street, we covered the idea of Redundancy — a central concept in both the world of engineering and in practical life. Today we’re going to explore a related concept: Margin of Safety.

The margin of safety is another concept rooted in engineering and quality control. Let’s start there, then see where else our model might apply in practical life, and lastly, where it might have limitations.

* * *

Consider a highly-engineered jet engine part. If the part were to fail, the engine would also fail, perhaps at the worst possible moment—while in flight with passengers on board. Like most jet engine parts, let us assume the part is replaceable over time—though we don’t want to replace it too often (creating prohibitively high costs), we don’t expect it to last the lifetime of the engine. We design the part for 10,000 hours of average flying time.

That brings us to a central question: After how many hours of service do we replace this critical part? The easily available answer might be 9,999 hours. Why replace it any sooner than we have to? Wouldn’t that be a waste of money?

The first problem is, we know nothing of the composition of the 10,000 hours any individual part has gone through. Were they 10,000 particularly tough hours, filled with turbulent skies? Was it all relatively smooth sailing? Somewhere in the middle?

Just as importantly, how confident are we that the part will really last the full 10,000 hours? What if it had a slight flaw during manufacturing? What if we made an assumption about its reliability that was not conservative enough? What if the material degraded in bad weather to a degree we didn’t foresee?

The challenge is clear, and the implication obvious: we do not wait until the part has been in service for 9,999 hours. Perhaps at 7,000 hours, we seriously consider replacing the part, and we put a hard stop at 7,500 hours.

The difference between waiting until the last minute and replacing it comfortably early gives us a margin of safety. The sooner we replace the part, the more safety we have—by not pushing the boundaries, we leave ourselves a cushion. (Ever notice how your gas tank indicator goes on long before you’re really on empty? It’s the same idea.)

The principle is essential in bridge building. Let’s say we calculate that, on an average day, a proposed bridge will be required to support 5,000 tons at any one time. Do we build the structure to withstand 5,001 tons? I'm not interested in driving on that bridge. What if we get a day with much heavier traffic than usual? What if our calculations and estimates are a little off? What if the material weakens over time at a rate faster than we imagined? To account for these, we build the bridge to support 20,000 tons. Only now do we have a margin of safety.
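
In numbers, the margin of safety is simply how far the design capacity sits beyond the load we expect. The small sketch below restates the figures already used above and in the earlier MED section; nothing in it goes beyond that arithmetic.

```python
# Margin of safety expressed as a simple ratio of capacity to expected load.
def safety_factor(design_capacity, expected_load):
    return design_capacity / expected_load

print(safety_factor(20_000, 5_000))  # 4.0  -- bridge built to 4x the average peak load
print(safety_factor(101, 100))       # 1.01 -- the "minimum effective" bridge from earlier
print(7_500 / 10_000)                # 0.75 -- replace the jet part at 75% of its rated life
```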

This fundamental engineering principle is useful in many practical areas of life, even for non-engineers. Let’s look at one we all face.

* * *

Take a couple earning $100,000 per year after taxes, or about $8,300 per month. In designing their life, they must necessarily decide what standard of living to enjoy. (The part which can be quantified, anyway.) What sort of monthly expenses should they allow themselves to accumulate?

One all-too-familiar approach is to build in monthly expenses approaching $8,000. A $4,000 mortgage, $1,000 worth of car payments, $1,000/month for private schools…and so on. The couple rationalizes that they have “earned” the right to live large.

However, what if some massive unexpected expenditures are thrown their way? (As life has a habit of doing.) What if one of them lost their job and their combined monthly income dropped to $4,000?

The couple must ask themselves whether the ensuing misery is worth the lavish spending. If they kept up their $8,000/month spending habit after a loss of income, they would have to choose between two difficult paths: Rapidly eating into their savings or considerably downsizing their life. Either is likely to cause extreme misery from the loss of long-held luxuries.

Thinking in reverse, how can we avoid the potential misery?

A common refrain is to tell the couple to make sure they’ve stashed away some money in case of emergency, to provide a buffer. Often there is a specific multiple of current spending we’re told to have in reserve—perhaps 6-12 months. In this case, savings of $48,000-$96,000 should suffice.

However, is there a way we can build them a much larger margin for error?

Let’s say the couple decides instead to permanently limit their monthly spending to $4,000 by owning a smaller house, driving less expensive cars, and trusting their public schools. What happens?

Our margin of safety now compounds. Obviously, a savings rate exceeding 50% will rapidly accumulate in their favor — $4,300 put away by the first month, $8,600 by the second month, and so on. The mere act of systematically underspending their income rapidly gives them a cushion without much trying. If an unexpected expenditure comes up, they’ll almost certainly be ready.

The unseen benefit, and the extra margin of safety in this choice, comes if either spouse loses their income – whether by choice (perhaps to care for a child) or by bad luck (health issues). In this case, not only has a high savings rate accumulated in their favor, but because their spending is systematically low, they are able to avoid tapping those savings altogether! Their savings simply stop growing temporarily while they live on one income. This sort of “belt and suspenders” solution is the essence of margin-of-safety thinking.

(On a side note: Let’s take it even one step further. Say their former $8,000 monthly spending rate meant they probably could not retire until age 70, given their current savings rate, investment choices, and desired lifestyle post-retirement. Reducing their needs to $4,000 not only provides them much needed savings, quickly accelerating their retirement date, but they now need even less to retire on in the first place. Retiring at 70 can start to look like retiring at 45 in a hurry.)
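
Here is a rough sketch of the cushion in the example above, using only the numbers already given and ignoring investment returns.

```python
# Compare the buffer built under the two spending plans from the example:
# $8,300 monthly income, $8,000 lavish spending vs. $4,000 frugal spending.
income, lavish, frugal = 8_300, 8_000, 4_000

for months in (6, 12, 24):
    saved_lavish = (income - lavish) * months
    saved_frugal = (income - frugal) * months
    print(f"{months:>2} months: lavish saves ${saved_lavish:,}, "
          f"frugal saves ${saved_frugal:,} "
          f"(~{saved_frugal / frugal:.0f} months of expenses banked)")
```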

* * *

Clearly, the margin of safety model is very powerful and we’re wise to use it whenever possible to avoid failure. But it has limitations.

One obvious issue, most salient in the engineering world, comes in the tradeoff with time and money. Given an unlimited runway of time and the most expensive materials known to mankind, it’s likely that we could “fail-proof” many products to such a ridiculous degree as to be impractical in the modern world.

For example, it’s possible to imagine Boeing designing a plane that would have a fail rate indistinguishable from zero, with parts being replaced 10% into their useful lives, built with rare but super-strong materials, etc.—so long as the world was willing to pay $25,000 for a coach seat from Boston to Chicago. Given the impracticability of that scenario, our tradeoff has been to accept planes that are not “fail-proof,” but merely extremely unlikely to fail, in order to give the world safe enough air travel at an affordable cost. This tradeoff has been enormously wise and helpful to the world. Simply put, the margin-of-safety idea can be pushed into farce without careful judgment.

* * *

This brings us to another limitation of the model, which is the failure to engage in “total systems” thinking. I'm reminded of a quote I've used before at Farnam Street:

“The reliability that matters is not the simple reliability of one component of a system, but the final reliability of the total control system.”
— Garrett Hardin in Filters Against Folly

Let’s return to the Boeing analogy. Say we did design the safest and most reliable jet airplane imaginable, with parts that would not fail in one billion hours of flight time under the most difficult weather conditions imaginable on Earth—and then let it be piloted by a drug addict high on painkillers.

The problem is that the whole flight system includes much more than just the reliability of the plane itself. Just because we built in safety margins in one area does not mean the system will not fail. This illustrates not so much a failure of the model itself, but a common mistake in the way the model is applied.

* * *

Which brings us to a final issue with the margin of safety model—naïve extrapolation of past data. Let’s look at a common insurance scenario to illustrate this one.

Suppose we have a 100-year-old reinsurance company – PropCo – which reinsures major primary insurers against property damage in California caused by a catastrophe, the most worrying being an earthquake and its aftershocks. Throughout its entire (long) history, PropCo had never experienced a yearly loss on this sort of coverage worse than $1 billion. Most years saw no loss worse than $250 million, and in fact, many years had no losses at all – giving it comfortable profit margins.

Thinking like engineers, the directors of PropCo insisted that the company hold a financial position strong enough to safely cover a loss twice as bad as anything it had ever encountered. Given the company's historical losses, the directors believed this extra capital gave PropCo a comfortable “margin of safety” against the worst case. Right?

However, our directors missed a few crucial details. The $1 billion loss, the insurer’s worst, had been incurred in the year 1994 during the Northridge earthquake. Since then, the building density of Californian cities had increased significantly, and due to ongoing budget issues and spreading fraud, strict building codes had not been enforced. Considerable inflation in the period since 1994 also ensured that losses per damaged square foot would be far higher than ever faced previously.

With these conditions present, let’s propose that California is hit with an earthquake reading 7.0 on the Richter scale, with an epicenter 10 miles outside of downtown LA. PropCo faces a bill of $5 billion – not twice as bad, but five times as bad as it had ever faced. In this case, PropCo fails.

This illustration (which recurs every so often in the insurance field) shows the limitation of naïvely assuming a margin of safety is present based on misleading or incomplete past data.
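
As a back-of-the-envelope illustration, with hypothetical adjustment factors standing in for the inflation and exposure growth described above, the "twice the historical worst" capital never had a chance.

```python
# Why doubling the historical worst was no margin at all. The multipliers
# below are hypothetical stand-ins for the changes the directors ignored,
# not figures from the text (amounts in billions of dollars).
historical_worst = 1.0            # the 1994 Northridge year
capital_held = 2 * historical_worst

inflation_since_1994 = 1.7        # assumed
exposure_growth = 2.0             # assumed: denser cities, lax code enforcement
plausible_worst_today = historical_worst * inflation_since_1994 * exposure_growth

print(capital_held, plausible_worst_today)  # 2.0 vs 3.4 -- and the quake cost 5.0
```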

* * *

Margin of safety is an important component of many decisions and of life. You can think of it as a reservoir that absorbs errors or poor luck. Size matters. At least in this case, bigger is better. And if you need a calculator to figure out how much room you have, you're doing something wrong.

Margin of safety is part of the Farnam Street Latticework of Mental Models.