Farnam Street helps you make better decisions, innovate, and avoid stupidity.
With over 350,000 monthly readers and more than 88,000 subscribers to our popular weekly digest, we've become an online intellectual hub.
The mental models approach is very intellectually appealing, almost seductive to a certain type of person. (It certainly is for us.)
The whole idea is to take the world’s greatest, most useful ideas and make them work for you!
How hard can it be?
Nearly all of the models themselves are understandable by the average well-educated knowledge worker, including all of you reading this piece. Ideas like Bayes’ rule, multiplicative thinking, hindsight bias, or the bias from envy and jealousy are all obviously real and part of the world we live in.
There’s a bit of a problem we’re seeing though: People are reading the stuff, enjoying it, agreeing with it…but not taking action. It’s not becoming part of their standard repertoire.
Let’s say you followed up on Bayesian thinking after reading our post on it — you spent some time soaking in Thomas Bayes’ great wisdom on updating your understanding of the world incrementally and probabilistically rather than changing your mind in black-and-white. Great!
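To make the idea concrete, here is a toy sketch of what incremental, probabilistic updating looks like in Python. This is our own illustration, not something from Bayes or the original post, and the probabilities are made-up numbers:

```python
def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Update P(H) after seeing evidence E, via Bayes' rule:
    P(H|E) = P(E|H)*P(H) / (P(E|H)*P(H) + P(E|~H)*P(~H))."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1.0 - prior))

# Start 70% confident in a belief, then observe evidence that is
# twice as likely under the rival hypothesis (0.3 vs. 0.6).
belief = 0.7
belief = bayes_update(belief, p_e_given_h=0.3, p_e_given_not_h=0.6)
print(round(belief, 3))  # confidence drops to about 0.538, not to zero
```

The point of the sketch is the contrast with black-and-white thinking: the all-or-nothing reasoner would jump from 0.7 straight to 0, while the Bayesian moves partway, which is exactly the habit the rule is meant to instill.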
But a week later, what have you done with that knowledge? How has it actually impacted your life? If the honest answer is “It hasn’t,” then haven’t you really wasted your time?
Ironically, it’s this habit of “going halfway” instead of “going all the way,” like Sisyphus constantly getting halfway up the mountain, which is the biggest waste of time!
See, the common reason why people don’t truly “follow through” with all of this stuff is that they haven’t raised their knowledge to a “deep fluency” — they’re skimming the surface. They pick up bits and pieces — some heuristics or biases here, a little physics or biology there, and then call it a day and pull up Netflix. They get a little understanding, but not that much, and certainly no doing.
The better approach, if you actually care about making changes, is to imitate Charlie Munger, Charles Darwin, and Richard Feynman, and start raising your knowledge of the Big Ideas to a deep fluency, and then figuring out systems, processes, and mental tricks to implement them in your own life.
Let’s work through an example.
Say you’re just starting to explore all the wonderful literature on heuristics and biases and come across the idea of Confirmation Bias: The idea that once we’ve landed on an idea we really like, we tend to keep looking for further data to confirm our already-held notions rather than trying to disprove our idea.
This is common, widespread, and perfectly natural. We all do it. John Kenneth Galbraith put it best:
“In the choice between changing one’s mind and proving there’s no need to do so, most people get busy on the proof.”
Now, what most people do, the ones you’re trying to outperform, is say “Great idea! Thanks Galbraith.” and then stop thinking about it.
Don’t do that!
The next step would be to push a bit further, to get beyond the sound bite: What’s the process that leads to confirmation bias? Why do I seek confirmatory information and in which contexts am I particularly susceptible? What other models are related to the confirmation bias? How do I solve the problem?
The big question: How far do you go? A good question without a perfect answer. But the best test I can think of is to perform something like the Feynman technique, and to think about the chauffeur problem.
Can you explain it simply to an intelligent layperson, using vivid examples? Can you answer all the follow-ups? That’s fluency. And you must be careful not to fool yourself, because in the wise words of Feynman, “…you are the easiest person to fool.”
While that’s great work, you’re not done yet. You have to make the rubber hit the road now. Something has to happen in your life and mind.
The way to do that is to come up with rules, systems, parables, and processes of your own, or to copy someone else’s that are obviously sound.
In the case of Confirmation Bias, we have two wonderful models to copy, one from each of the Charlies — Darwin, and Munger.
Darwin had a rule, one we have written about before but will restate here: Make a note, immediately, if you come across a thought or idea that is contrary to something you currently believe.
As for Munger, he implemented a rule in his own life: “I never allow myself to have an opinion on anything that I don’t know the other side’s argument better than they do.”
Now we’re getting somewhere! With the implementation of those two habits and some well-earned deep fluency, you can immediately, tomorrow, start improving the quality of your decision-making.
Sometimes when we get outside the heuristic/biases stuff, it’s less obvious how to make the “rubber hit the road” — and that will be a constant challenge for you as you take this path.
But that’s also the fun part! With every new idea and model you pick up, you also pick up the opportunity to synthesize for yourself a useful little parable to make it stick or a new habit that will help you use it. Over time, you’ll come up with hundreds of them, and people might even look to you when they’re having problems doing it themselves!
Look at Buffett and Munger — both guys are absolute machines, chock full of pithy little rules and stories they use in order to implement and recall what they’ve learned.
For example, Buffett discovered early on the manipulative psychology behind open-outcry auctions. What did he do? He made a rule to never go to one! That’s how it’s done.
Even if you can’t come up with a great rule like that, you can figure out a way to use any new model or idea you learn. It just takes some creative thinking.
Sometimes it’s just a little mental rule or story that sticks particularly well. (Recall one of the prime lessons from our series on memory: Salient, often used, well-associated, and important information sticks best.)
We did this very thing recently with Lee Kuan Yew’s Rule. What a trite way to refer to the simple idea of asking if something actually works…attributing it to a Singaporean political leader!
But that’s exactly the point. Give the thing a name and a life and, like clockwork, you’ll start recalling it. The phrase “Lee Kuan Yew’s Rule” actually appears in my head when I’m approaching some new system or ideology, and as soon as it does, I find myself backing away from ideology and towards pragmatism. Exactly as I’d hoped.
Your goal should be to create about a thousand of those little tools in your head, each attached to a deep fluency in the material from which it came.
I can hear the objection coming. Who has time for this stuff?
You do. It’s about making time for the things that really matter. And what could possibly matter more than upgrading your whole mental operating system? I solemnly promise that you’re spending way more time right now making sub-optimal decisions and trying to deal with the fallout.
If you need help learning to manage your time right this second, check out our Productivity Seminar, one that’s changed some people’s lives entirely. The central idea is to become more thoughtful and deliberate with how you spend your hours. When you start doing that, you’ll notice you do have an hour a day to spend on this Big Ideas stuff. It’s worth the 59 bucks.
If you don’t have 59 bucks, at least imitate Cal Newport and start scheduling your days and put an hour in there for “Getting better at making all of my decisions.”
Once you find that solid hour (or more), start using it in the way outlined above, and let the world’s great knowledge actually start making an impact. Just do a little every day.
What you’ll notice, over the weeks and months and years of doing this, is that your mind will really change! It has to! And with that, your life will change too. The only way to fail at improving your brain is by imitating Sisyphus, pushing the boulder halfway up, over and over.
Unless and until you really understand this, you’ll continue spinning your wheels. So here’s your call to action. Go get to it!
(This is a follow-up to our post on the Bias from Liking/Loving, which you can find here.)
Think of a cat snarling and spitting, lashing its tail and standing with its back arched. Its pulse is elevated, its blood vessels constricted, its muscles tense. This reaction may sound familiar, because everyone has experienced the same tensed-up feeling of rage at least once in their lives.
When rage is directed towards an external object, it becomes hate. Just as we learn to love certain things or people, we learn to hate others.
There are several cognitive processes that awaken the hate within us and most of them stem from our need for self-protection.
We tend to dislike people who dislike us (and, true to Newton’s third law, with roughly equal strength). The more we perceive they hate us, the more we hate them.
A lot of hate comes from scarcity and competition. Whenever we compete for resources, our own mistakes can mean good fortune for others. In these cases, we affirm our own standing and preserve our self-esteem by blaming others.
Robert Cialdini explains that because of the competitive environment in American classrooms, school desegregation may increase the tension between children of different races instead of decreasing it. Imagine being a secondary school child:
If you knew the right answer and the teacher called on someone else, you probably hoped that he or she would make a mistake so that you would have a chance to display your knowledge. If you were called on and failed, or if you didn’t even raise your hand to compete, you probably envied and resented your classmates who knew the answer.
At first we are merely annoyed. But then, as the situation fails to improve and our frustration grows, we are slowly drawn into false attributions and hate. We keep blaming “the others” who are doing better, associating them with the loss and scarcity we are experiencing (or perceive we are experiencing). That is one way our emotional frustration boils into hate.
The ability to separate friends from enemies has been critical for our safety and survival. Because mistaking the two can be deadly, our mental processes have evolved to quickly spot potential threats and react accordingly. We are constantly feeding information about others into our “people information lexicon” that forms not only our view of individuals, whom we must decide how to act around, but entire classes of people, as we average out that information.
To shortcut our reactions, we classify narrowly and think in dichotomies: right or wrong, good or bad, heroes or villains. (The type of Grey Thinking we espouse is almost certainly unnatural, but, then again, so is a good golf swing.) Since most of us are merely average at everything we do, even superficial and small differences, such as race or religious affiliation, can become an important source of identification. We are, after all, creatures who seek to belong to groups above all else.
Seeing ourselves as part of a special, different and, in its own way, superior group, decreases our willingness to empathize with the other side. This works both ways – the hostility towards the others also increases the solidarity of the group. In extreme cases, we are so drawn towards the inside view that we create a strong picture of the enemy that has little to do with reality or our initial perceptions.
We think of ourselves as compassionate, empathetic and cooperative. So why do we learn to hate?
Part of the answer lies in the fact that we think of ourselves in a specific way. If we cannot reach a consensus, then the other side, which is in some way different from us, must necessarily be uncooperative for our assumptions about our own qualities to hold true.
Our inability to examine the situation from all sides and shake our beliefs, together with self-justifying behavior, can lead us to conclude that others are the problem. Such asymmetric views, amplified by strong perceived differences, often fuel hate.
What started off as odd or difficult to understand quickly turns into something unholy.
If the situation is characterized by competition, we may also see ourselves as a victim. The others, who abuse our rights, take away our privileges or restrict our freedom are seen as bullies who deserve to be punished. We convince ourselves that we are doing good by doing harm to those who threaten to cross the line.
This is understandable. In critical times our survival indeed may depend on our ability to quickly spot and neutralize dangers. The cost of a false positive – mistaking a friend for a foe – is much lower than the potentially fatal false negative of mistaking our adversaries for innocent allies. As a result, it is safest to assume that anything we are not familiar with is dangerous by default. Natural selection, by its nature, “keeps what works,” and this tendency towards distrust of the unfamiliar probably survived in that way.
Physical and psychological pain is very mobilizing. We despise foods that make us nauseous and people that have hurt us. Because we are scared to suffer, we end up either avoiding or destroying the “enemy”, which is why revenge can be pursued with such vengeance. In short, hate is a defense against enduring pain repeatedly.
There are several ways that the bias for disliking and hating displays itself to the outer world. The most obvious is war, which has been more or less prevalent throughout the history of mankind.
This would lead us to think that war may well be unavoidable. Charlie Munger offers the more moderate opinion that while hatred and dislike cannot be avoided, the instances of war can be minimized by channeling our hate and fear into less destructive behaviors. (A good political system allows for dissent and disagreement without explosions of bloody upheaval.)
Even with the spread of religion, and the advent of advanced civilization, modern war remains pretty savage. But we also get what we observe in present-day Switzerland and the United States, wherein the clever political arrangements of man “channel” the hatreds and dislikings of individuals and groups into nonlethal patterns including elections.
But these dislikings and hatreds, arguably inherent to our nature, never go away completely; they find their way into politics. Think of the dichotomies: the left versus the right, the nationalists versus the communists, the libertarians versus the authoritarians. This might be why there are maxims like “Politics is the art of marshaling hatreds.”
Finally, as we move away from politics, arguably the most sophisticated and civilized way of channeling hatred is litigation. Charlie Munger attributes the following words to Warren Buffett:
“A major difference between rich and poor people is that the rich people can spend their lives suing their relatives.”
While most of us reflect on our memories of growing up with our siblings with fondness, there are cases where the competition for shared attention or resources breeds hatred. If the siblings can afford it, they will sometimes litigate endlessly to lay claims over their parents’ property or attention.
There are several ways that the bias from hating can interfere with our normal judgement and lead to suboptimal decisions.
Ignoring Virtues of The Other Side
Michael Faraday was once asked after a lecture whether he meant to imply that a hated academic rival was always wrong. His reply was short and firm: “He’s not that consistent.” Faraday must have recognized the bias from hating and corrected for it with that witty comment.
What we should recognize here is that no situation is ever black or white. We all have our virtues and we all have our weaknesses. However, when possessed by the strong emotions of hate, our perceptions can be distorted to the extent that we fail to recognize any good in the opponent at all. This is driven by consistency bias, which motivates us to form a coherent (“she is all-round bad”) opinion of ourselves and others.
Association Fueled Hate
The principle of association holds that the nature of the news tends to infect the teller: the worse the experience, the worse the impression of anything related to it.
Association is why we blame the messenger who tells us something that we don’t want to hear even when they didn’t cause the bad news. (Of course, this creates an incentive not to speak truth and avoid giving bad news.)
A classic example is the unfortunate weatherman who receives hate mail whenever it rains. One went so far as to seek advice from Robert Cialdini, the Arizona State professor of psychology whose work we have discussed before.
Cialdini explained to him that, in light of the fates of other messengers, he was born lucky. Rain might ruin someone’s holiday plans, but it will rarely change the destiny of a nation, as it could for Persian war messengers: delivering good news meant a feast, whereas delivering bad news meant death.
The weatherman left Cialdini’s office with a sense of privilege and relief.
“Doc,” he said on his way out, “I feel a lot better about my job now. I mean, I’m in Phoenix where the sun shines 300 days a year, right? Thank God I don’t do the weather in Buffalo.”
Under the influence of the liking or disliking bias, we tend to fill gaps in our knowledge by building conclusions on assumptions based on very little evidence.
Imagine you meet a woman at a party and find her to be a self-centered, unpleasant conversation partner. Now her name comes up as someone who could be asked to contribute to a charity. How likely do you feel it is that she will give to the charity?
In reality, you have no useful knowledge, because there is little to nothing that should make you believe that people who are self-centered are not also generous contributors to charity. The two are unrelated, yet because of the well-known fundamental attribution error, we often assume one is correlated to the other.
By association, you are likely to believe that this woman is not likely to be generous towards charities despite lack of any evidence. And because now you also believe she is stingy and ungenerous, you probably dislike her even more.
This is just an innocent example, but the larger effects of such distortions can be so extreme that they lead to a major miscognition. Each side literally believes that every single bad attribute or crime is attributable to the opponent.
Charlie Munger explains this with a relatively recent example:
When the World Trade Center was destroyed, many Pakistanis immediately concluded that the Hindus did it, while many Muslims concluded that the Jews did it. Such factual distortions often make mediation between opponents locked in hatred either difficult or impossible. Mediations between Israelis and Palestinians are difficult because facts in one side’s history overlap very little with facts from the other side’s. These distortions and the overarching mistrust might be why some conflicts seem to never end.
To varying degrees we value acceptance and affirmation from others. Very few of us wake up wanting to be disliked or rejected. Social approval, at its heart the cause of social influence, shapes behavior and contributes to conformity. Francois VI, Duc de La Rochefoucauld wrote: “We only confess our little faults to persuade people that we have no big ones.”
Remember the old adage, “The nail that sticks out gets hammered down.” This is why we often don’t openly speak the truth or question people: we don’t want to be the nail.
It is only normal that we can find more common ground with some people than with others. But are we really destined to fall into the traps of hate or is there a way to take hold of these biases?
That’s a question worth over a hundred million lives. There are ways that psychologists think that we can minimize prejudice against others.
Firstly, we can engage with others in sustained close contact to breed familiarity. The contact must be not only prolonged, but also positive and cooperative in nature – either working towards a common cause or against a common enemy.
Secondly, we can reduce prejudice by attaining equal status in all respects, including education, income and legal rights. This effect is further reinforced when equality is supported not only “on paper”, but also ingrained within broader social norms.
And finally, the obvious: we should practice awareness of our own emotions and hold back the temptation to dismiss others. Whenever confronted with strong feelings, it may be best to sit back, breathe, and do our best to untangle the distorted thinking.
The decisions that we make are rarely impartial. Most of us already know that we prefer to take advice from people that we like. We also tend to more easily agree with opinions formed by people we like. This tendency to judge in favor of people and symbols we like is called the bias from liking or loving.
We are more likely to ignore faults and comply with wishes of our friends or lovers rather than random strangers. We favor people, products, and actions associated with our favorite celebrities. Sometimes we even distort facts to facilitate love. The influence that our friends, parents, lovers and idols exert on us can be enormous.
In general, this is a good thing, a bias that adds on balance rather than subtracts. It helps us form successful relationships, it helps us fall in love (and stay in love), it helps us form attachments with others that give us great happiness.
But we do want to be aware of where this tendency leads us awry.
For example, some people and companies have learnt to use this influence to their advantage.
In his bestseller on social psychology Influence, Robert Cialdini tells a story about the successful strategy of Tupperware, which at the time reported sales of over $2.5 million a day. As many of us know, the company for a long time sold its kitchenware at parties thrown by friends of the potential customers. At each party there was a Tupperware representative taking orders, but the hostess, the friend of the invitees, received a commission.
These potential customers are not blind to the incentives and social pressures involved. Some of them don’t mind it, others do, but all admit a certain degree of helplessness in their situation. Cialdini recalls a conversation with one of the frustrated guests:
It’s gotten to the point now where I hate to be invited to Tupperware parties. I’ve got all the containers I need; and if I wanted any more, I could buy another brand cheaper in the store. But when a friend calls up, I feel like I have to go. And when I get there, I feel like I have to buy something. What can I do? It’s for one of my friends.
We are more likely to buy in a familiar, friendly setting and under the obligation of friendship rather than from an unfamiliar store or a catalogue. We simply find it much harder to say “no” or disagree when it’s a friend. The possibility of ruining the friendship, or seeing our image altered in the eyes of someone we like, is a powerful motivator to comply.
The Tupperware example is a true “lollapalooza” in favor of manipulating people into buying things. Besides the liking tendency, there are several other factors at play: commitment/consistency bias, a bias from stress, an influence from authority, a reciprocation effect, and some direct incentives and disincentives, at least! (Lollapaloozas, something we’ll talk more about in the future, are when several powerful forces combine to create a non-linear outcome. A good way to think of this conceptually for now is that 1+1=3.)
The liking tendency is so strong that it stretches beyond close friendships. It turns out we are also more likely to act in favor of certain types of strangers. Can you recall meeting someone with whom you hit it off instantly, where it almost seemed like you’d known them for years after a 20-minute conversation? Developing such an instant bond with a stranger may seem like a mythical process, but it rarely is. There are several tactics that can be used to make us like something, or someone, more than we otherwise would.
We all like engaging in activities with beautiful people. This is part of an automatic bias that falls into a category called The Halo Effect.
The Halo Effect occurs when one specific, positive characteristic determines the way a person is viewed by others on unrelated traits. In the case of beauty, it’s been shown that we automatically associate favorable yet unrelated traits, such as talent, kindness, honesty, and intelligence, with those we find physically attractive.
For the most part, this attribution happens unnoticed. For example, attractive candidates received more than twice as many votes as unattractive candidates in the 1974 Canadian federal elections. Despite the ample evidence of predisposition towards handsome politicians, follow-up research demonstrated that nearly three-quarters of Canadians surveyed strongly denied the influence of physical appearance in their voting decisions.
The power of the Halo Effect is that it’s mostly happening beneath the level of consciousness.
Similar forces are at play when it comes to hiring decisions and pay. While employers deny that they are strongly influenced by looks, studies show otherwise.
In one study evaluating hiring decisions based on simulated interviews, the applicants’ grooming played a greater role in the outcome than job qualifications. Partly, this has a rational basis. We might assume that someone who shows up without the proper “look” for the job may be deficient in other areas. If they couldn’t shave and put a tie on, how are we to expect them to perform with customers? Partly, though, it’s happening subconsciously. Even if we never consciously say to ourselves that “Better grooming = better employee”, we tend to act that way in our hiring.
These effects go even beyond the hiring phase — attractive individuals in the US and Canada have been estimated to earn an average of 12–14 percent more than their unattractive coworkers. Whether this is due to liking bias or to the increased self-confidence that comes with above-average looks is hard to say.
Appearance is not the only quality that may skew our perceptions in favor of someone. The next one on the list is similarity.
We like people who resemble us. Whether it’s appearance, opinions, lifestyle or background, we tend to favor people who on some dimension are most similar to ourselves.
A great example of similarity bias is the case of dress. Have you ever been at an event where you felt out of place because you were either overdressed or underdressed? Those uneasy feelings are not just your imagination. Numerous studies suggest that we are more likely to do favors, such as giving a dime or signing a petition, for someone who is dressed like us.
Similarity bias can extend even to such ambiguous traits as interests and background. Many salesmen are trained to look for similarities to produce a favorable and trustworthy image in the eyes of their potential customers. In Influence: The Psychology of Persuasion, Robert Cialdini explains:
If there is camping gear in the trunk, the salespeople might mention, later on, how they love to get away from the city whenever they can; if there are golf balls on the back seat, they might remark that they hope the rain will hold off until they can play the eighteen holes they scheduled for later in the day; if they notice that the car was purchased out of state, they might ask where a customer is from and report—with surprise—that they (or their spouse) were born there, too.
These are just a few of many examples which can be surprisingly effective in producing a sweet feeling of familiarity. Multiple studies illustrate the same pattern. We decide to fill out surveys from people with similar names, buy insurance from agents of similar age and smoking habits, and even decide that those who share our political views deserve their medical treatment sooner than the rest.
There is just one takeaway: even if the similarities are terribly superficial, we still may end up liking the other person more than we should.
“And what will a man naturally come to like and love,
apart from his parent, spouse and child?
Well, he will like and love being liked and loved.”
— Charlie Munger
We are all “phenomenal suckers for flattery.” These are not my words but Robert Cialdini’s, and they ring true. Perhaps more than anything else in this world we love to be loved and, consequently, we love those who love us.
Consider the technique of Joe Girard, who has repeatedly been called the world’s “greatest car salesman” and has made it into the Guinness Book of World Records.
Each month Joe prints and sends over 13,000 holiday cards to his former customers. While the theme of the card varies with the season and celebration, the printed message always remains the same. On each of those cards Girard prints three simple words, “I like you,” along with his name. He explains:
“There’s nothing else on the card, nothin’ but my name. I’m just telling ’em that I like ’em.” “I like you.” It came in the mail every year, 12 times a year, like clockwork.
Joe understood a simple fact about humans – we love to be loved.
As numerous experiments show, regardless of whether the praise is deserved, we cannot help but develop warm feelings toward those who provide it. Our reaction can be so automatic that we develop liking even when the attempt to win our favor is an obvious one, as in Joe’s case.
In addition to liking those that like us and look like us, we also tend to like those who we know. That’s why repeated exposure can be a powerful tool in establishing liking.
There is a fun experiment you can do to understand the power of familiarity.
Take a picture of yourself and create a mirror image of it in a photo-editing tool. Now, with the two pictures at hand, decide which one – the real image or the mirror image – you like better. Then show the two pictures to a friend and ask her to choose as well.
If you and your friend are like the group on whom this trick was tried, you should notice something odd: your friend will prefer the true print, whereas you will think you look better in the mirror image. This is because you both prefer the faces you are used to. Your friend always sees you from her perspective, whereas you have learned to recognize and love your mirror image.
The effect of course extends beyond faces into places, names and even ideas.
For example, in elections we may prefer candidates whose names sound more familiar. The Ohio Attorney General post was once claimed by a man who, shortly before his candidacy, changed his last name to Brown – a family name with a long tradition in Ohio politics. Apart from his surname, there was little to nothing that separated him from other equally, if not more, capable candidates.
How could such a thing happen? The answer lies partly in the unconscious way that familiarity affects our liking. Often we don’t realize that our attitude toward something has been influenced by the number of times we have been exposed to it in the past.
Charisma or attraction are not prerequisites for liking — a mere association with someone you like or trust can be enough.
The bias from association shows itself in many other domains and is especially strong when we associate with the person we like the most — ourselves. For example, the relationship between a sports fan and his local team can be highly personal even though the association is often based only on shared location. For the fan, however, the team is an important part of his self-identity. If the team or athlete wins, he wins as well, which is why sports can be so emotional. The most dedicated fans are ready to get into fights, burn cars or even kill to defend the honor of their team.
Such an associated sense of pride and achievement is as true for celebrities as it is for sports. When Kevin Costner delivered his acceptance speech after Dances With Wolves won the Academy Award for Best Picture, he said:
“While it may not be as important as the rest of the world situation, it will always be important to us. My family will never forget what happened here; my Native American brothers and sisters, especially the Lakota Sioux, will never forget, and the people I went to high school with will never forget.”
The interesting part of his words is the notion that his high school peers will remember, which is probably true. His former classmates are likely to tell people that they went to school with Costner, even though they themselves had no connection with the success of the movie.
Costner’s words illustrate that even a trivial association with success may reap benefits and breed confidence.
Who else do we like besides ourselves, celebrities and our sports teams?
People we’ve met through those who are close to us – our neighbors, friends and family. It is common sense that a referral from someone we trust is enough to trigger mild liking and favorable initial opinions.
A number of companies use friend referrals as a sales tactic. Network providers, insurers and other subscription services offer benefits to those of us who hand over our friends' contact details.
The success of this method rests on the implicit idea that turning down the sales rep who says “your friend Jenny/Allan suggested I call you” feels nearly as bad as turning down Jenny or Allan themselves. This tactic, when well executed, leads to a never-ending chain of new customers.
Perhaps the right question to ask here is not "how can we avoid the bias from liking?" but rather "when should we?"
Someone who is conditioned to like the right people and pick their idols carefully can greatly benefit from these biases. Charlie Munger recalls that both he and Warren Buffett benefitted from liking admirable persons:
One common, beneficial example for us both was Warren’s uncle, Fred Buffett, who cheerfully did the endless grocery-store work that Warren and I ended up admiring from a safe distance. Even now, after I have known so many other people, I doubt if it is possible to be a nicer man than Fred Buffett was, and he changed me for the better.
The keywords here are “from a safe distance”.
When dealing with salesmen and others who clearly benefit from your liking, it is a good idea to check whether you have been influenced. In these unclear cases Cialdini advises us to focus on our feelings rather than on the other person's actions that may have produced the liking. Ask yourself how much of what you feel is due to liking and how much to the actual facts of the situation.
The time to call out the defense is when we feel ourselves liking the practitioner more than we should under the circumstances, when we feel manipulated.
Once we have recognized that we like the requester more than we would expect under the given circumstances, we should take a step back and question ourselves. Are you doing the deal because you like someone, or because it is indeed the best option out there?
Still Interested? Check out some other mental models and biases.
Daniel Kahneman and Amos Tversky spent decades in psychology research disentangling the patterns behind errors in human reasoning. Over the course of their work they discovered a variety of logical fallacies we tend to commit when facing information that appears vaguely familiar. These fallacies lead to bias – irrational behavior based on beliefs that are not always grounded in reality.
In his book Thinking, Fast and Slow, which summarizes his and Tversky's life work, Kahneman introduces biases that stem from the conjunction fallacy – the false belief that a conjunction of two events is more probable than one of those events on its own.
Probability can be a difficult concept. Most of us have an intuitive understanding of what probability is, but there is little consensus on what it actually means. It is just as vague and subjective a concept as democracy, beauty or freedom. However, this is not always troublesome – we can still easily discuss the notion with others. Kahneman reflects:
In all the years I spent asking questions about the probability of events, no one ever raised a hand to ask me, “Sir, what do you mean by probability?” as they would have done if I had asked them to assess a strange concept such as globability.
Everyone acted as if they knew how to answer my questions, although we all understood that it would be unfair to ask them for an explanation of what the word means.
While logicians and statisticians might disagree, probability to most of us is simply a tool that describes our degree of belief. For instance, we know that the sun will rise tomorrow and we consider it near impossible that there will be two suns up in the sky instead of one. In addition to the extremes, there are also events which lie somewhere in the middle on the probability spectrum, such as the degree of belief that it will rain tomorrow.
Despite its vagueness, probability has its virtues. Assigning probabilities helps us make the degree of belief actionable and also communicable to others. If we believe that the probability it will rain tomorrow is 90%, we are likely to carry an umbrella and suggest our family do so as well.
Most of us are already familiar with representativeness and base rates. Consider the classic example of x number of black and y number of white colored marbles in a jar. It is a simple exercise to tell what the probabilities of drawing each color are if you know their base rates (proportion). Using base rates is the obvious approach for estimations when no other information is provided.
However, Kahneman showed that we have a tendency to ignore base rates in light of specific descriptions. He calls this phenomenon the representativeness bias. To illustrate it, consider the example of seeing a person reading The New York Times on the New York subway. Which do you think would be a better bet about the reading stranger?
1) She has a PhD.
2) She does not have a college degree.
Representativeness would tell you to bet on the PhD, but this is not necessarily a good idea. You should seriously consider the second alternative, because many more non-graduates than PhDs ride the New York subway. Even if a larger proportion of PhDs read The New York Times, the total number of Times readers without a college degree is likely to be much larger, simply because there are so many more of them.
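To see why the second bet wins, it helps to put numbers on it. A minimal sketch – the rider counts and readership rates below are invented for illustration, not data from the article:

```python
# Assumed, illustrative numbers: suppose on a given day 300,000 riders
# without a college degree and 5,000 riders with a PhD take the subway,
# and that 2% of non-graduates vs. 30% of PhDs read The New York Times.
non_grads, phds = 300_000, 5_000
nyt_non_grads = non_grads * 0.02   # Times readers without a degree
nyt_phds = phds * 0.30             # Times readers with a PhD

# PhDs read the Times at 15x the rate, yet most Times readers on the
# subway still come from the far larger non-graduate pool.
print(nyt_non_grads > nyt_phds)  # True
```

The base rate (how many of each group ride at all) dominates the proportion (how representative the behavior is of each group).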
In a series of similar experiments, Kahneman’s subjects failed to recognize the base rates in light of individual information. This is unsurprising. Kahneman explains:
On most occasions, people who act friendly are in fact friendly. A professional athlete who is very tall and thin is much more likely to play basketball than football. People with a PhD are more likely to subscribe to The New York Times than people who ended their education after high school. Young men are more likely than elderly women to drive aggressively.
While following representativeness bias might improve your overall accuracy, it will not always be the statistically optimal approach.
Michael Lewis, in his bestseller Moneyball, tells the story of Billy Beane, general manager of the Oakland A's baseball team, who recognized this fallacy and used it to his advantage. When recruiting new players, instead of relying on scouts he relied heavily on statistics of past performance. This approach allowed him to build a team of great players who were passed up by other teams because they did not look the part. Needless to say, the team achieved excellent results at a low cost.
While representativeness bias occurs when we fail to account for low base rates, conjunction fallacy occurs when we assign a higher probability to an event of higher specificity. This violates the laws of probability.
Consider the following study:
Participants were asked to rank four possible outcomes of the next Wimbledon tournament from most to least probable. Björn Borg was the dominant tennis player of the day when the study was conducted. These were the outcomes:
A. Borg will win the match.
B. Borg will lose the first set.
C. Borg will lose the first set but win the match.
D. Borg will win the first set but lose the match.
How would you order them?
Kahneman was surprised to see that most subjects ordered the chances by directly contradicting the laws of logic and probability. He explains:
The critical items are B and C. B is the more inclusive event and its probability must be higher than that of an event it includes. Contrary to logic, but not to representativeness or plausibility, 72% assigned B a lower probability than C.
If you thought about the problem carefully, the answer is forced: losing the first set (B) will always, by definition, be at least as probable as losing the first set and winning the match (C), because C cannot happen without B.
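The inclusion argument can also be checked numerically. Here is a minimal simulation with made-up probabilities for a Borg-like favorite (the specific values are assumptions, only the ordering matters):

```python
import random

random.seed(0)

# Illustrative, assumed probabilities for a dominant player:
p_lose_first_set = 0.2        # event B
p_win_after_losing_set = 0.6  # chance he still wins the match after a bad start

trials = 100_000
b = c = 0  # B: loses the first set; C: loses the first set AND wins the match
for _ in range(trials):
    if random.random() < p_lose_first_set:
        b += 1
        if random.random() < p_win_after_losing_set:
            c += 1

# C is counted only when B has already happened, so P(C) can never exceed P(B).
print(b >= c)  # True
```

No matter what probabilities you plug in, the conjunction can never come out more frequent than the event that contains it.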
As discussed in our piece on the Narrative Fallacy, the best-known and most controversial of Kahneman and Tversky’s experiments involved a fictitious lady called Linda. The fictional character was created to illustrate the role heuristics play in our judgement and how it can be incompatible with logic. This is how they described Linda.
Linda is thirty-one years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.
Kahneman conducted a series of experiments in which he showed that representativeness tends to cloud our judgments and that we ignore base rates in light of stories. The Linda problem started off with a task: subjects were asked to rank the scenarios below in order of likelihood.
Linda is a teacher in elementary school.
Linda works in a bookstore and takes yoga classes.
Linda is active in the feminist movement.
Linda is a psychiatric social worker.
Linda is a member of the League of Women Voters.
Linda is a bank teller.
Linda is an insurance salesperson.
Linda is a bank teller and is active in the feminist movement.
Kahneman was startled to see that his subjects judged Linda being a bank teller and a feminist as more likely than her being just a bank teller. As explained earlier, this makes little sense. He went on to explore the phenomenon further:
In what we later described as “increasingly desperate” attempts to eliminate the error, we introduced large groups of people to Linda and asked them this simple question:
Which alternative is more probable?
Linda is a bank teller.
Linda is a bank teller and is active in the feminist movement.
This stark version of the problem made Linda famous in some circles, and it earned us years of controversy. About 85% to 90% of undergraduates at several major universities chose the second option, contrary to logic.
What is especially interesting about these results is that, even when aware of the biases in place, we do not discard them.
When I asked my large undergraduate class in some indignation, “Do you realize that you have violated an elementary logical rule?” someone in the back row shouted, “So what?” and a graduate student who made the same error explained herself by saying, “I thought you just asked for my opinion.”
The issue is not confined to students; it also affects professionals.
The naturalist Stephen Jay Gould described his own struggle with the Linda problem. He knew the correct answer, of course, and yet, he wrote, "a little homunculus in my head continues to jump up and down, shouting at me – 'but she can't just be a bank teller; read the description.'"
Our brains simply seem to prefer consistency over logic.
Representativeness and the conjunction fallacy occur because we take a mental shortcut from the perceived plausibility of a scenario to its probability.
The most coherent stories are not necessarily the most probable, but they are plausible, and the notions of coherence, plausibility, and probability are easily confused by the unwary. Representativeness belongs to a cluster of closely related basic assessments that are likely to be generated together. The most representative outcomes combine with the personality description to produce the most coherent stories.
Kahneman warns us about the effects of these biases on our perception of expert opinion and forecasting. He explains that we are more likely to believe scenarios that are illustrative rather than probable.
The uncritical substitution of plausibility for probability has pernicious effects on judgments when scenarios are used as tools of forecasting. Consider these two scenarios, which were presented to different groups, with a request to evaluate their probability:
A massive flood somewhere in North America next year, in which more than 1,000 people drown
An earthquake in California sometime next year, causing a flood in which more than 1,000 people drown
The California earthquake scenario is more plausible than the North America scenario, although its probability is certainly smaller. As expected, probability judgments were higher for the richer and more detailed scenario, contrary to logic. This is a trap for forecasters and their clients: adding detail to scenarios makes them more persuasive, but less likely to come true.
In order to appreciate the role of plausibility, he suggests we look at an example without an accompanying description.
Which alternative is more probable?
Jane is a teacher.
Jane is a teacher and walks to work.
In this case there is no rich story to generate a quick intuitive answer, and we can easily conclude that the first option is more probable. The rule goes: in the absence of a competing intuition, logic prevails.
The first lesson in thinking clearly is to question how you think. We should not simply believe whatever comes to mind – our beliefs must be constrained by logic. You don't have to become an expert in probability to tame your intuition, but having a grasp of a few simple concepts will help. There are two main rules worth repeating in light of representativeness bias:
1) All probabilities add up to 100%.
This means that if you believe there's a 90% chance it will rain tomorrow, there's a 10% chance that it will not rain tomorrow.
However, since you believe there is only a 90% chance that it will rain tomorrow, you cannot be 95% certain that it will rain tomorrow morning – the more specific event can never be more probable than the event that contains it.
We typically make this type of error when we mean to say that, if it rains, there's a 95% probability it will happen in the morning. That is a different claim: under those premises, the probability of rain tomorrow morning is 0.9 × 0.95 = 85.5%.
This also means the probability that it rains tomorrow but not in the morning is 90.0% - 85.5% = 4.5%.
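The arithmetic in the rain example is easy to verify in a few lines:

```python
p_rain = 0.90                 # belief that it rains at some point tomorrow
p_morning_given_rain = 0.95   # if it rains, it rains in the morning

# Probability of rain tomorrow morning (both conditions together):
p_rain_morning = p_rain * p_morning_given_rain      # 0.855

# Probability it rains tomorrow, but not in the morning:
p_rain_not_morning = p_rain - p_rain_morning        # 0.045

print(round(p_rain_morning, 3), round(p_rain_not_morning, 3))
```

Note that the conjunction (rain, and in the morning) always comes out below the plain 90% belief in rain, exactly as the first rule requires.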
2) The second principle is Bayes' rule.
It allows us to correctly adjust our beliefs for the diagnosticity of the evidence. In odds form, Bayes' rule reads: posterior odds = prior odds × likelihood ratio.
In essence, the formula states that the posterior odds are proportional to the prior odds times the likelihood of the evidence. Kahneman crystallizes two keys to disciplined Bayesian reasoning:
• Anchor your judgment of the probability of an outcome on a plausible base rate.
• Question the diagnosticity of your evidence.
Kahneman explains it with an example:
If you believe that 3% of graduate students are enrolled in computer science (the base rate), and you also believe that the description of Tom is 4 times more likely for a graduate student in computer science than in other fields, then Bayes’s rule says you must believe that the probability that Tom is a computer science student is now 11%.
"Four times more likely" corresponds to a likelihood split of 0.8 versus 0.2 – the same 4:1 ratio expressed as proportions. We use these proportions to obtain the adjusted odds. (The calculation goes as follows: 0.03 × 0.8 / (0.03 × 0.8 + (1 - 0.03) × (1 - 0.8)) ≈ 11%.)
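The same update can be written as a small helper in the odds form of Bayes' rule, using Kahneman's numbers for Tom (the function name and structure are our own illustration, not from the book):

```python
def posterior(base_rate, likelihood_ratio):
    """Update a base rate given how much more likely the evidence is
    under the hypothesis than under its alternative (odds form of Bayes)."""
    prior_odds = base_rate / (1 - base_rate)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Kahneman's Tom example: 3% base rate, description 4x more likely
# for a computer science student than for students in other fields.
p = posterior(0.03, 4)
print(round(p, 2))  # 0.11
```

Even strongly diagnostic evidence (a 4:1 likelihood ratio) moves a 3% base rate only to about 11% – far from a sure thing, which is the point of anchoring on base rates.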
The easiest way to become better at making decisions is by making sure you question your assumptions and follow strong evidence. When evidence is anecdotal, adjust minimally and trust the base rates. Odds are, you will be pleasantly surprised.
Want More? Check out our ever-growing collection of mental models and biases and get to work.
“The difficulty lies not in the new ideas,
but in escaping the old ones, which ramify,
for those brought up as most of us have been,
into every corner of our minds.”
— John Maynard Keynes
Ben Franklin tells an interesting little story in his autobiography. Facing opposition to being reelected Clerk of the General Assembly, he sought to gain favor with the member so vocally opposing him:
Having heard that he had in his library a certain very scarce and curious book, I wrote a note to him, expressing my desire of perusing that book, and requesting he would do me the favour of lending it to me for a few days. He sent it immediately, and I return’d it in about a week with another note, expressing strongly my sense of the favour.
When we next met in the House, he spoke to me (which he had never done before), and with great civility; and he ever after manifested a readiness to serve me on all occasions, so that we became great friends, and our friendship continued to his death.
This is another instance of the truth of an old maxim I had learned, which says, “He that has once done you a kindness will be more ready to do you another, than he whom you yourself have obliged.”
The man, having lent Franklin a rare and valuable book, sought to stay consistent with his past actions. He wouldn’t, of course, lend a book to an unworthy man, would he?
Scottish philosopher and economist Adam Smith said in The Theory of Moral Sentiments:
The opinion which we entertain of our own character depends entirely on our judgments concerning our past conduct. It is so disagreeable to think ill of ourselves, that we often purposely turn away our view from those circumstances which might render that judgment unfavorable.
Even when it acts against our best interest our tendency is to be consistent with our prior commitments, ideas, thoughts, words, and actions. As a byproduct of confirmation bias, we rarely seek disconfirming evidence of what we believe. This, after all, makes it easier to maintain our positive self-image.
Part of the reason this happens is our desire to appear and feel like we’re right. We also want to show people our conviction. This shouldn’t come as a surprise. Society values consistency and conviction even when it is wrong.
We associate consistency with intellectual and personal strength, rationality, honesty and stability. On the other hand, the person who is perceived as inconsistent is also seen as confused, two-faced, even mentally ill in certain extreme circumstances.
A politician who wavers, for example, gets labelled a flip-flopper and can lose an election over it (John Kerry). A CEO who risks everything on a bet no one else believes in is hailed as a hero (Elon Musk).
But it's not just our own words and actions that nudge our subconscious; it's also how other people see us. There is a profound truth behind Eminem's lyrics: "I am whatever you say I am. If I wasn't, then why would I say I am?"
If you think I’m talented, I become more talented in your eyes — in part because you labelling me as talented filters the way you see me. You start seeing more of my genius and less of my normal-ness, simply by way of staying consistent with your own words.
In his book Outliers, Malcolm Gladwell talks about how teachers simply labeling students as smart not only affected how the teachers saw their work but, more importantly, affected the opportunities the teachers gave those students. Smarter students received better opportunities, which, we can reason, gave them better experiences. This in turn made them better. It's almost a self-fulfilling prophecy.
And the more we invest in our beliefs about ourselves or others – think money, effort, or pain – the more sunk costs we have and the harder it becomes to change our minds. It doesn't matter if we're right. It doesn't matter if the Ikea bookshelf sucks; we're going to love it.
In Too Much Invested to Quit, psychologist Allan Teger says something similar of the Vietnam War:
The longer the war continued, the more difficult it was to justify the additional investments in terms of the value of possible victory. On the other hand, the longer the war continued, the more difficult it became to write off the tremendous losses without having anything to show for them.
As a consequence, there are few rules we abide by more than "Don't make any promises that you can't keep." Generally speaking, this is a great rule that keeps society together by ensuring that our commitments are, for the most part, real and reliable.
Aside from the benefits of preserving our public image, being consistent is simply easier and leads to a more predictable and consistent life. By being consistent in our habits and with previous decisions, we significantly reduce the need to think and can go on “auto-pilot” for most of our lives.
However beneficial these biases are, they too deserve deeper understanding and caution. Sometimes our drive to appear consistent can lure us into choices we otherwise would consider against our best interests. This is the essence of a harmful bias as opposed to a benign one: We are hurting ourselves and others by committing it.
Part of why commitment can be so dangerous is that it works like a slippery slope – a single slip can send you sliding all the way down. Compliance with even tiny requests, which initially appear insignificant, has a good probability of leading to full commitment later.
People whose job it is to persuade us know this.
Among the more blunt techniques on the spectrum are those reported by a used-car sales manager in Robert Cialdini’s book Influence. The dealer knows the power of commitment and that if we comply a little now, we are likely to comply fully later on. His advice to other sellers goes as follows:
“Put ’em on paper. Get the customer’s OK on paper. Get the money up front. Control ’em. Control the deal. Ask ’em if they would buy the car right now if the price is right. Pin ’em down.”
This technique will be obvious to most of us. However, there are also more subtle ways to make us comply without us noticing.
A great example of a subtle compliance practitioner is Jo-Ellan Dimitrius, the woman currently reputed to be the best consultant in the business of jury selection.
When screening potential jurors before a trial, she asks an artful question:
“If you were the only person who believed in my client’s innocence, could you withstand the pressure of the rest of the jury to change your mind?”
It's unlikely that any self-respecting prospective juror would answer no. And now that the juror has made the implicit promise, it is unlikely that, once selected, he will give in to the pressure exerted by the rest of the jury.
Innocent questions and requests like this can be a great springboard for initiating a cycle of compliance.
A great case study in compliance is the set of tactics that Chinese soldiers employed on American captives during the Korean War. The Chinese were particularly effective in getting Americans to inform on one another. In fact, nearly all American prisoners in the Chinese camps are said to have collaborated with the enemy in one way or another.
This was striking, since such behavior was rarely observed among American war prisoners during WWII. What tactics accounted for the Chinese success?
Unlike the North Koreans, the Chinese did not treat the victims harshly. Instead they engaged in what they called “lenient policy” towards the captives, which was, in reality, a clever series of psychological assaults.
In their exploits the Chinese relied heavily on commitment and consistency tactics to receive the compliance they desired. At first, the Americans were not too collaborative, as they had been trained to provide only name, rank, and serial number, but the Chinese were patient.
They started with seemingly small but frequent requests to repeat statements like “The United States is not perfect” and “In a Communist country, unemployment is not a problem.” Once these requests had been complied with, the heaviness of the requests grew. Someone who had just agreed that United States was not perfect would be encouraged to expand on his thoughts about specific imperfections. Later he might be asked to write up and read out a list of these imperfections in a discussion group with other prisoners. “After all, it’s what you really believe, isn’t it?” The Chinese would then broadcast the essay readings not only to the whole camp, but to other camps and even the American forces in South Korea. Suddenly the soldier would find himself a “collaborator” of the enemy.
The awareness that the essays did not contradict his beliefs could even change his self-image to be consistent with the new “collaborator” label, often resulting in more cooperation with the enemy.
It is not surprising that very few American soldiers were able to avoid such “collaboration” altogether.
The pattern of small requests growing into bigger ones, as applied by the Chinese to American soldiers, is also called the Foot-in-the-Door Technique. It was first documented by two psychologists, Jonathan Freedman and Scott Fraser, in an experiment in which a fake volunteer worker asked homeowners to allow a public-service billboard to be installed on their front lawns.
To get a better idea of how it would look, the home owners were even shown a photograph depicting an attractive house that was almost completely obscured by an ugly sign reading DRIVE CAREFULLY. While the request was quite understandably denied by 83 percent of residents, one particular group reacted favorably.
Two weeks earlier a different “volunteer worker” had come and asked the respondents of this group a similar request to display a much smaller sign that read BE A SAFE DRIVER. The request was so negligible that nearly all of them complied. However, the future effects of that request turned out to be so enormous that 76 percent of this group complied with the bigger, much less reasonable request (the big ugly sign).
At first, even the researchers themselves were baffled by the results and repeated the experiment on similar setups. The effect persisted. Finally, they proposed that the subjects must have distorted their own views about themselves as a result of their initial actions:
What may occur is a change in the person’s feelings about getting involved or taking action. Once he has agreed to a request, his attitude may change, he may become, in his own eyes, the kind of person who does this sort of thing, who agrees to requests made by strangers, who takes action on things he believes in, who cooperates with good causes.
The rule goes that once someone has maneuvered our self-image to where they want it, we will comply naturally with requests that fit the new self-view. Therefore we must be very careful about agreeing to even the smallest requests. Not only can doing so make us comply with larger requests later on; it can make us more willing to do favors that are only remotely connected to the earlier ones.
Even Cialdini, someone who knows this bias inside-out, admits to his fear that his behavior will be affected by consistency bias:
It scares me enough that I am rarely willing to sign a petition anymore, even for a position I support. Such an action has the potential to influence not only my future behavior but also my self-image in ways I may not want.
Further, once a person’s self-image is altered, all sorts of subtle advantages become available to someone who wants to exploit that new image.
Have you ever encountered a deal that seemed a little too good to be true, only to be disappointed later? You had already made up your mind, had gotten excited and were ready to pay or sign – until a calculation error was discovered. With the adjusted price, the offer no longer looked all that great.
It is likely that the error was not an accident – this technique, also called low-balling, is often used by compliance professionals in sales. Cialdini, having observed the phenomenon among car dealers, tested its effects on his own students.
In an experiment with colleagues, he asked two groups of students to show up at 7:00 AM for a study on "thinking processes". One group was told the 7:00 AM start time immediately. Unsurprisingly, only 24 percent wanted to participate.
The other group, however, was thrown a low-ball. These students were first asked only whether they wanted to take part in a study of thinking processes; 56 percent replied positively. Only then was the 7:00 AM meeting time revealed to those who had agreed.
These students were given the opportunity to opt out, but none of them did. In fact, driven by their commitment, 95 percent of the low-balled students showed up to the Psychology Building at 7:00 AM as they had promised.
Do you recognize the similarities between the experiment and the sales situation?
The script of low-balling tends to be the same:
First, an advantage is offered that induces a favorable decision in the manipulator’s direction. Then, after the decision has been made, but before the bargain is sealed, the original advantage is deftly removed (i.e., the price is raised, the time is changed, etc.).
It would seem surprising that anyone would buy under these circumstances, yet many do. Often the self-created justifications provide so many new reasons for the decision that even when the dealer pulls away the original favorable rationale, like a low price, the decision is not changed. We stick with our old decision even in the face of new information!
Of course not everyone complies, but that’s not the point. The effect is strong enough to hold for a good number of buyers, students or anyone else whose rate of compliance we may want to raise.
The first real defense to consistency bias is awareness about the phenomenon and the harm a certain rigidity in our decisions can cause us.
Robert Cialdini suggests two approaches to recognizing when consistency biases are unduly creeping into our decision making. The first is to listen to our stomach. Stomach signs show up when we realize that the request being pushed is something we don't want to do.
He recalls a time when a beautiful young woman tried to sell him a membership he most certainly did not need by using the tactics displayed above. He writes:
I remember quite well feeling my stomach tighten as I stammered my agreement. It was a clear call to my brain, “Hey, you’re being taken here!” But I couldn’t see a way out. I had been cornered by my own words. To decline her offer at that point would have meant facing a pair of distasteful alternatives: If I tried to back out by protesting that I was not actually the man-about-town I had claimed to be during the interview, I would come off a liar; trying to refuse without that protest would make me come off a fool for not wanting to save $1,200. I bought the entertainment package, even though I knew I had been set up. The need to be consistent with what I had already said snared me.
But then eventually he came up with the perfect counter-attack for later episodes, which allowed him to get out of the situation gracefully.
Whenever my stomach tells me I would be a sucker to comply with a request merely because doing so would be consistent with some prior commitment I was tricked into, I relay that message to the requester. I don’t try to deny the importance of consistency; I just point out the absurdity of foolish consistency. Whether, in response, the requester shrinks away guiltily or retreats in bewilderment, I am content. I have won; an exploiter has lost.
The second approach concerns the signs that are felt within our heart and is best used when it is not really clear whether the initial commitment was wrongheaded.
Imagine you have recognized that your initial assumptions about a particular deal were not correct. The car is not extraordinarily cheap and the experiment is not as fun if you have to wake up at 6 AM to make it. Here it helps to ask one simple question:
“Knowing what I know, if I could go back in time, would I make the same commitment?”
Ask it frequently enough and the answer might surprise you.
Want More? Check out our ever-growing library of mental models and biases.
Let’s run through a little elementary arithmetic. Try to do it in your head: What’s 1,506,789 x 9,809 x 5.56 x 0?
Hopefully you didn’t have to whip out the old TI-84 to solve that one. It’s a zero.
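If you'd rather let the machine check it, a couple of lines of Python make the point: a single zero factor collapses the entire product, no matter how large the other factors are.

```python
# The factors from the example above; the final zero is the one that matters.
factors = [1_506_789, 9_809, 5.56, 0]

product = 1
for f in factors:
    product *= f  # one zero anywhere in the chain zeroes out everything

print(product)  # prints 0.0
```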
This leads us to a mental model called Multiplicative Systems, and understanding it can get to the heart of a lot of issues.
Suppose you were trying to become the best basketball player in the world. You’ve got the following things going for you:
1. God-given talent. You’re 6’9″, quick, skillful, can leap out of the building, and have been the best player in a competitive city since you can remember.
2. Support. You live in a city that reveres basketball and you’re raised by parents who care about your goals.
3. A proven track record. You were the player of the year in a very competitive Division 1 college conference.
4. A clear path forward. You’re selected as the second overall pick in the NBA Draft by the Boston Celtics.
Sounds like you have a shot, right? As good as anyone could have, right? What would you put the odds at of this person becoming one of the better players in the world? Pretty high?
Let’s add one more piece of information:
5. You’ve developed a cocaine habit.
What are your odds now?
This little exercise isn’t an academic one, it’s the sad case of Leonard “Len” Bias, a young basketball prodigy who died of a cocaine overdose after being selected to play in the NBA for the Boston Celtics in 1986. Many call Bias the best basketball player who never played professionally.
What the story of Len Bias illustrates so well is the truth that anything times zero must still be zero, no matter how large the string of numbers preceding it. In some facets of life, all of your hard work, dedication to improvement, and good fortune may still be worth nothing if there is a weak link in the chain.
Something all engineers learn very early on is that a system is no stronger than its weakest component. Take, for example, the case of a nuclear power plant. We have a very good understanding of how to make a nuclear power plant quite safe, nearly indestructible, which it must be considering the magnitude of a potential failure.
But in reality, what is the weakest link in the chain for most nuclear power plants? The human beings running them. We’re part of the system! And since we’ve yet to perfect the human being, we have yet to perfect the nuclear power plant. How could it be otherwise?
An additive system does not work this way. In an additive system, each component adds to the final outcome. Going back to our arithmetic, let’s say the equation was additive rather than multiplicative: 1,506,789 plus 9,809 plus 5.56 plus 0. The answer is 1,516,603.56 — still a pretty big number!
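The contrast between the two kinds of system can be sketched with the same four numbers: in the additive version the zero barely registers, while in the multiplicative version it wipes out everything.

```python
# Hypothetical component "scores"; the zero represents the weak link.
components = [1_506_789, 9_809, 5.56, 0]

# Additive system: each component simply adds to the total,
# so a zero component costs nothing.
additive = sum(components)

# Multiplicative system: each component scales all the others,
# so a single zero makes the whole system worth zero.
multiplicative = 1
for c in components:
    multiplicative *= c

print(additive)        # prints 1516603.56
print(multiplicative)  # prints 0.0
```

The zero that was invisible in the sum is fatal in the product, which is the whole point of knowing which kind of system you're in.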
Think of an additive system as something like a great Thanksgiving dinner. You’ve got a great turkey, some whipped potatoes, a mass of stuffing, and a lump of homemade cranberry sauce, and you’re hanging with your family. Awesome!
Let’s say the potatoes get burnt in the oven, and they’re inedible. Problem? Sure, but dinner still works out just fine. Someone shows up with a pie for dessert? Great! But it won’t change the dinner all that much.
The interaction of the parts makes the dinner range from good to great. Take some parts away or add new ones in, and you get a different outcome, but not a binary, win/lose one. The meal still happens. Additive systems and multiplicative systems react differently when components are added or taken away.
Most businesses, for example, operate in multiplicative systems, but too often they act as if they were in additive ones: Ever notice how a business will pile one product feature on top of another yet fail at basic customer service, so you leave, never to return? That’s a business that thinks it’s in an additive system, when it really needs to be fixing the big fat zero in the middle of its equation instead of adding more stuff.
Financial systems are, of course, multiplicative. General Motors, founded in 1908 by William Durant and C.S. Mott, came to dominate the American car market to the tune of 50% market share through a series of brilliant innovations and management practices, and was for many years the dominant and most admired corporation in America. Even today, after more than a century of competition, no American carmaker produces more automobiles than General Motors.
And yet, the original shareholders of GM ended up with a zero in 2008 as the company went into bankruptcy due to years of financial mismanagement. It didn’t matter that the company had several generations of capable leadership: All of it came to naught in a multiplicative system.
On a smaller scale, take the case of a young corporate climber who feels they just can’t get ahead. They seem to have all their ducks in a row: great resume, great background, great experience…the problem is that they suck at dealing with other people and treat others like stepping stones. That’s a zero that can negate all of the big numbers preceding it. The rest doesn’t matter.
And so we arrive at the “must be true” conclusion that understanding when you’re in an additive system versus a multiplicative system, and which components need absolute reliability for the system to work, is a critical model to have in your head. Multiplicative thinking is a model related to the greater idea of systems thinking, another mental model well worth acquiring.
Multiplicative Systems is another Farnam Street Mental Model.