Over 400,000 people visited Farnam Street last month to learn how to make better decisions, create new ideas, and avoid stupid errors. With more than 100,000 subscribers to our popular weekly digest, we've become an online intellectual hub. To learn more about what we do, start here.

Tag Archives: Mental Models

The Green Lumber Fallacy: The Difference between Talking and Doing

“Clearly, it is unrigorous to equate skills at doing with skills at talking.”
— Nassim Taleb

***

Before we get to the meat, let’s review an elementary idea in biology that will be relevant to our discussion.

If you’re familiar with evolutionary theory, you know that populations of organisms are constantly subjected to “selection pressures” — the rigors of their environment which lead to certain traits being favored and passed down to their offspring and others being thrown into the evolutionary dustbin.

Biologists dub these advantages in reproduction “fitness” — as in, the famous lengthening of giraffe necks gave them greater “fitness” in their environment because it helped them reach high, untouched leaves.

Fitness is generally a relative concept: Since organisms must compete for scarce resources, fitness is measured by the reproductive advantage one organism gains over another.

Likewise, a trait that provides great fitness in one environment may be useless or even disadvantageous in another. (Imagine draining a pond: Any fitness advantages held by a really incredible fish become instantly worthless without water.) Traits also relate to circumstance: An advantage at one time could be a disadvantage at another, and vice versa.

This makes fitness an all-important concept in biology: Traits are selected for if they provide fitness to the organism within a given environment.

Got it? OK, let’s get back to the practical world.

***

The Black Swan thinker Nassim Taleb has an interesting take on fitness and selection in the real world:  People who are good “doers” and people who are good “talkers” are often selected for different traits. Be careful not to mix them up.

In his book Antifragile, Taleb uses this idea to invoke a heuristic he’d once used when hiring traders on Wall Street:

The more interesting their conversation, the more cultured they are, the more they will be trapped into thinking that they are effective at what they are doing in real business (something psychologists call the halo effect, the mistake of thinking that skills in, say, skiing translate unfailingly into skills in managing a pottery workshop or a bank department, or that a good chess player would be a good strategist in real life).

Clearly, it is unrigorous to equate skills at doing with skills at talking. My experience of good practitioners is that they can be totally incomprehensible–they do not have to put much energy into turning their insights and internal coherence into elegant style and narratives. Entrepreneurs are selected to be doers, not thinkers, and doers do, they don’t talk, and it would be unfair, wrong, and downright insulting to measure them in the talk department.

In other words, the selection pressures on an entrepreneur are very different from those on a corporate manager or bureaucrat: Entrepreneurs and risk takers succeed or fail not so much on their ability to talk, explain, and rationalize as on their ability to get things done.

While the two can often go together, Nassim figured out that they frequently don’t. We judge people as ignorant when it’s really us who are ignorant.

When you think about it, there’s no a priori reason great intellectualizing and great doing must go together: Being able to hack together an incredible piece of code gives you great fitness in the world of software development, while doing great theoretical computer science probably gives you better fitness in academia. The two skills don’t have to be connected. Great economists don’t usually make great investors.

But we often confuse the two realms. We're tempted to think that a great investor must be fluent in behavioral economics or a great CEO fluent in McKinsey-esque management narratives, but in the real world, we see this intuition violated constantly.

The investor Walter Schloss worked 9 to 5, barely left his office, and wasn't considered an especially high-IQ man, but he compiled one of the great investment records of all time. A young Mark Zuckerberg could hardly be described as a prototypical manager or businessperson, yet he somehow built one of the most profitable companies in the world by finding others who complemented his weaknesses.

There are a thousand examples: Our narratives about the type of knowledge or experience we must have or the type of people we must be in order to become successful are often quite wrong; in fact, they border on naive. We think people who talk well can do well, and vice versa. This is simply not always so.

We won't claim that great doers cannot be great talkers, rationalizers, or intellectuals. Sometimes they are. But if you're seeking to understand the world properly, it's good to understand that the two traits are not always co-located. Success, especially in some “narrow” area like plumbing, programming, trading, or marketing, is often achieved by rather non-intellectual folks. Their evolutionary fitness doesn't come from the ability to talk, but to do. This is part of reality.

***

Taleb calls this idea the Green Lumber Fallacy, after a story in the book What I Learned Losing a Million Dollars. Taleb describes it in Antifragile:

In one of the rare noncharlatanic books in finance, descriptively called What I Learned Losing a Million Dollars, the protagonist makes a big discovery. He remarks that a fellow named Joe Siegel, one of the most successful traders in a commodity called “green lumber,” actually thought it was lumber painted green (rather than freshly cut lumber, called green because it had not been dried). And he made it his profession to trade the stuff! Meanwhile the narrator was into grand intellectual theories and narratives of what caused the price of commodities to move and went bust.

It is not just that the successful expert on lumber was ignorant of central matters like the designation “green.” He also knew things about lumber that nonexperts think are unimportant. People we call ignorant might not be ignorant.

The fact that predicting the order flow in lumber and the usual narrative had little to do with the details one would assume from the outside are important. People who do things in the field are not subjected to a set exam; they are selected in the most non-narrative manner — nice arguments don’t make much difference. Evolution does not rely on narratives, humans do. Evolution does not need a word for the color blue.

So let us call the green lumber fallacy the situation in which one mistakes a source of visible knowledge — the greenness of lumber — for another, less visible from the outside, less tractable, less narratable.

The main takeaway is that the real causative factors of success are often hidden from us. We think that knowing the intricacies of green lumber is more important than keeping a close eye on the order flow. We seduce ourselves into overestimating the impact of our intellectualism and then wonder why “idiots” are getting ahead. (Probably hustle and competence.)

But for “skin in the game” operations, selection and evolution don’t care about great talk and ideas unless they translate into results. They care what you do with the thing more than that you know the thing. They care about actually avoiding risk rather than your extensive knowledge of risk management theories. (Of course, in many areas of modernity there is no skin in the game, so talking and rationalizing can be and frequently are selected for.)

As Taleb did with his hiring heuristic, this should teach us to be a little skeptical of taking good talkers at face value, and to be a little skeptical when we see “unexplainable” success in someone we consider “not as smart.” There might be a disconnect we’re not seeing because we’re seduced by narrative. (A problem someone like Lee Kuan Yew avoided by focusing exclusively on what worked.)

And we don’t have to give up our intellectual pursuits in order to appreciate this nugget of wisdom; Taleb is right, but it’s also true that combining the rigorous, skeptical knowledge of “what actually works” with an ever-improving theory structure of the world might be the best combination of all — selected for in many more environments than simple git-er-done ability, which can be extremely domain and environment dependent. (The green lumber guy might not have been much good outside the trading room.)

After all, Taleb himself was both a successful trader and an intellectual of the highest order. Even he can't resist a little theorizing.

Using Multidisciplinary Thinking to Approach Problems in a Complex World

Complex outcomes in human systems are a tough nut to crack when it comes to deciding what's really true. Any phenomenon we might try to explain will have a host of competing theories, many of them seemingly plausible.

So how do we know what to go with?

One idea is to take a cue from the best. One of the most successful “explainers” of human behavior has been the cognitive psychologist Steven Pinker. His books have been massively influential, in part because they combine scientific rigor, explanatory power, and plainly excellent writing.

What’s unique about Pinker is the range of sources he draws on. His book The Better Angels of Our Nature, a cogitation on the decline in relative violence in recent human history, draws on ideas from evolutionary psychology, forensic anthropology, statistics, social history, criminology, and a host of other fields. Pinker, like Vaclav Smil and Jared Diamond, is the opposite of the man with a hammer, ranging over much material to come to his conclusions.

In fact, when asked about the progress of social science as an explanatory arena over time, Pinker credited this cross-disciplinary focus:

Because of the unification with the sciences, there are more genuinely explanatory theories, and there’s a sense of progress, with more non-obvious things being discovered that have profound implications.

But, even better, Pinker gives us an outline for how a multidisciplinary thinker should approach problems in a complex world.

***

Here's the issue at stake: When we're viewing a complex phenomenon—say, the decline in certain forms of violence in human history—it can be hard to come up with a rigorous explanation. We can't just set up repeated lab experiments and vary the conditions of human history to see what pops out, as we can with physics or chemistry.

So out of necessity, we must approach the problem in a different way.

In the above-referenced interview, Pinker gives a wonderful example of how to do it. Note how he carefully “cross-checks” against a variety of sources of data, developing a 3D view of the landscape he's trying to assess:

Pinker: Absolutely, I think most philosophers of science would say that all scientific generalizations are probabilistic rather than logically certain, more so for the social sciences because the systems you are studying are more complex than, say, molecules, and because there are fewer opportunities to intervene experimentally and to control every variable. But the existence of the social sciences, including psychology, to the extent that they have discovered anything, shows that, despite the uncontrollability of human behavior, you can make some progress: you can do your best to control the nuisance variables that are not literally in your control; you can have analogues in a laboratory that simulate what you’re interested in and impose an experimental manipulation.

You can be clever about squeezing the last drop of causal information out of a correlational data set, and you can use converging evidence, the qualitative narratives of traditional history in combination with quantitative data sets and regression analyses that try to find patterns in them. But I also go to traditional historical narratives, partly as a sanity check. If you’re just manipulating numbers, you never know whether you’ve wandered into some preposterous conclusion by taking numbers too seriously that couldn’t possibly reflect reality. Also, it’s the narrative history that provides hypotheses that can then be tested. Very often a historian comes up with some plausible causal story, and that gives the social scientists something to do in squeezing a story out of the numbers.

Warburton: I wonder if you’ve got an example of just that, where you’ve combined the history and the social science?

Pinker: One example is the hypothesis that the Humanitarian Revolution during the Enlightenment, that is, the abolition of slavery, torture, cruel punishments, religious persecution, and so on, was a product of an expansion of empathy, which in turn was fueled by literacy and the consumption of novels and journalistic accounts. People read what life was like in other times and places, and then applied their sense of empathy more broadly, which gave them second thoughts about whether it’s a good idea to disembowel someone as a form of criminal punishment. So that’s a historical hypothesis. Lynn Hunt, a historian at the University of California–Berkeley, proposed it, and there are some psychological studies that show that, indeed, if people read a first-person account by someone unlike them, they will become more sympathetic to that individual, and also to the category of people that that individual represents.

So now we have a bit of experimental psychology supporting the historical qualitative narrative. And, in addition, one can go to economic historians and see that, indeed, there was first a massive increase in the economic efficiency of manufacturing a book, then there was a massive increase in the number of books published, and finally there was a massive increase in the rate of literacy. So you’ve got a story that has at least three vertices: the historian’s hypothesis; the economic historians identifying exogenous variables that changed prior to the phenomenon we’re trying to explain, so the putative cause occurs before the putative effect; and then you have the experimental manipulation in a laboratory, showing that the intervening link is indeed plausible.

Pinker is saying: Look, we can't just rely on “plausible narratives” generated by folks like the historians. There are too many possibilities that could be correct.

Nor can we rely purely on correlations (i.e., the rise in literacy statistically tracking the decline in violence) — they don’t necessarily offer us a causative explanation. (Does the rise in literacy cause less violence, or is it vice versa? Or, does a third factor cause both?)

However, if we layer in some other known facts from areas we can experiment on — say, psychology or cognitive neuroscience — we can sometimes establish the causal link we need or, at worst, a better hypothesis of reality.

In this case, it would be the finding from psychology that certain forms of literacy do indeed increase empathy (for logical reasons).

Does this method give us absolute proof? No. However, it does allow us to propose and then test, re-test, alter, and strengthen or ultimately reject a hypothesis. (In other words, rigorous thinking.)

We can't stop here, though. We have to take time to examine competing hypotheses — there may be a better fit. The interviewer continues, asking Pinker about this methodology:

Warburton: And so you conclude that the de-centering that occurs through novel-reading and first-person accounts probably did have a causal impact on the willingness of people to be violent to their peers?

Pinker: That’s right. And, of course, one has to rule out alternative hypotheses. One of them could be the growth of affluence: perhaps it’s simply a question of how pleasant your life is. If you live a longer and healthier and more enjoyable life, maybe you place a higher value on life in general, and, by extension, the lives of others. That would be an alternative hypothesis to the idea that there was an expansion of empathy fueled by greater literacy. But that can be ruled out by data from economic historians that show there was little increase in affluence during the time of the Humanitarian Revolution. The increase in affluence really came later, in the 19th century, with the advent of the Industrial Revolution.

***

Let’s review the process that Pinker has laid out, one that we might think about emulating as we examine the causes of complex phenomena in human systems:

  1. We observe an interesting phenomenon in need of explanation, one we feel capable of exploring.
  2. We propose and examine competing hypotheses that would explain the phenomenon (set up in a falsifiable way, in harmony with the divide between science and pseudoscience laid out for us by the great Karl Popper).
  3. We examine a cross-section of: empirical data relating to the phenomenon; sensible qualitative inference (from multiple fields and disciplines, the more fundamental the better); and, finally, “demonstrable” aspects of nature we are nearly certain about, arising from controlled experiment or other rigorous sources of knowledge ranging from engineering to biology to cognitive neuroscience.

What we end up with is not necessarily a bulletproof explanation, but probably the best we can do if we think carefully. A good cross-disciplinary examination with quantitative and qualitative sources coming into equal play, and a good dose of judgment, can be far more rigorous than the gut-instinct or plausible-nonsense stories that many of us lazily spout.

A Word of Caution

Although Pinker’s “multiple vertices” approach to problem solving in complex domains can be powerful, we always have to be on guard for phenomena that we simply cannot explain at our current level of competence: We must have a “too hard” pile when competing explanations come out “too close to call” or we otherwise feel we’re outside of our circle of competence. Always tread carefully and be sure to follow Darwin’s Golden Rule: Contrary facts are more important than confirming ones. Be ready to change your mind, like Darwin, when the facts don’t go your way.

***

Still Interested? For some more Pinker goodness check out our prior posts on his work, or check out a few of his books like How the Mind Works or The Blank Slate: The Modern Denial of Human Nature.

The Map is Not the Territory

What Are You Doing About It? Reaching Deep Fluency with Mental Models

The mental models approach is very intellectually appealing, almost seductive to a certain type of person. (It certainly is for us.)

The whole idea is to take the world’s greatest, most useful ideas and make them work for you!

How hard can it be?

Nearly all of the models themselves are perfectly understandable by the average well-educated knowledge worker, including all of you reading this piece. Ideas like Bayes’ rule, multiplicative thinking, hindsight bias, or the bias from envy and jealousy are all obviously true and part of the reality we live in.

There’s a bit of a problem we’re seeing though: People are reading the stuff, enjoying it, agreeing with it…but not taking action. It’s not becoming part of their standard repertoire.

Let’s say you followed up on Bayesian thinking after reading our post on it — you spent some time soaking in Thomas Bayes’ great wisdom on updating your understanding of the world incrementally and probabilistically rather than changing your mind in black-and-white. Great!
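If you want a concrete picture of what "incremental and probabilistic" means in practice, here is a minimal sketch of a single Bayesian update in Python. The scenario and the numbers (a 10% prior, an 80% chance of positive early feedback if a project will succeed, a 30% chance if it won't) are made-up assumptions, chosen only to show the mechanics:

```python
# A minimal sketch of one Bayesian update (all numbers are hypothetical).
# Belief H: "this project will succeed." Evidence E: positive early feedback.

prior = 0.10                 # P(H): initial degree of belief
p_e_given_h = 0.80           # P(E | H): feedback is positive if it will succeed
p_e_given_not_h = 0.30       # P(E | not H): positive feedback even if it won't

# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
posterior = p_e_given_h * prior / p_e

print(round(posterior, 3))   # ~0.229
```

The point is the shape of the move: the evidence nudges a 10% belief up to roughly 23%, rather than flipping it straight to "true." That is the opposite of black-and-white thinking.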

But a week later, what have you done with that knowledge? How has it actually impacted your life? If the honest answer is “It hasn’t,” then haven’t you really wasted your time?

Ironically, it’s this habit of “going halfway” instead of “going all the way,” like Sisyphus constantly getting halfway up the mountain, which is the biggest waste of time!

See, the common reason why people don’t truly “follow through” with all of this stuff is that they haven’t raised their knowledge to a “deep fluency” — they’re skimming the surface. They pick up bits and pieces — some heuristics or biases here, a little physics or biology there, and then call it a day and pull up Netflix. They get a little understanding, but not that much, and certainly no doing.

The better approach, if you actually care about making changes, is to imitate Charlie Munger, Charles Darwin, and Richard Feynman, and start raising your knowledge of the Big Ideas to a deep fluency, and then figuring out systems, processes, and mental tricks to implement them in your own life.

Let’s work through an example.

***

Say you’re just starting to explore all the wonderful literature on heuristics and biases and come across the idea of Confirmation Bias: The idea that once we’ve landed on an idea we really like, we tend to keep looking for further data to confirm our already-held notions rather than trying to disprove our idea.

This is common, widespread, and perfectly natural. We all do it. John Kenneth Galbraith put it best:

“In the choice between changing one’s mind and proving there’s no need to do so, most people get busy on the proof.”

Now, what most people do, the ones you’re trying to outperform, is say “Great idea! Thanks Galbraith.” and then stop thinking about it.

Don’t do that!

The next step would be to push a bit further, to get beyond the sound bite: What’s the process that leads to confirmation bias? Why do I seek confirmatory information and in which contexts am I particularly susceptible? What other models are related to the confirmation bias? How do I solve the problem?

The answers are out there: They’re in Daniel Kahneman and in Charlie Munger and in Elster. They’re available by searching through Farnam Street.

The big question: How far do you go? A good question without a perfect answer. But the best test I can think of is to perform something like the Feynman technique, and to think about the chauffeur problem.

Can you explain it simply to an intelligent layperson, using vivid examples? Can you answer all the follow-ups? That’s fluency. And you must be careful not to fool yourself, because in the wise words of Feynman, “…you are the easiest person to fool.”

While that’s great work, you’re not done yet. You have to make the rubber hit the road now. Something has to happen in your life and mind.

The way to do that is to come up with rules, systems, parables, and processes of your own, or to copy someone else’s that are obviously sound.

In the case of Confirmation Bias, we have two wonderful models to copy, one from each of the Charlies — Darwin, and Munger.

Darwin had a rule, one we have written about before but will restate here: Make a note, immediately, if you come across a thought or idea that is contrary to something you currently believe.

As for Munger, he implemented a rule in his own life: “I never allow myself to have an opinion on anything that I don’t know the other side’s argument better than they do.”

Now we’re getting somewhere! With the implementation of those two habits and some well-earned deep fluency, you can immediately, tomorrow, start improving the quality of your decision-making.

Sometimes when we get outside the heuristic/biases stuff, it’s less obvious how to make the “rubber hit the road” — and that will be a constant challenge for you as you take this path.

But that’s also the fun part! With every new idea and model you pick up, you also pick up the opportunity to synthesize for yourself a useful little parable to make it stick or a new habit that will help you use it. Over time, you’ll come up with hundreds of them, and people might even look to you when they’re having problems doing it themselves!

Look at Buffett and Munger — both guys are absolute machines, chock full of pithy little rules and stories they use in order to implement and recall what they’ve learned.

For example, Buffett discovered early on the manipulative psychology behind open-outcry auctions. What did he do? He made a rule to never go to one! That’s how it’s done.

Even if you can’t come up with a great rule like that, you can figure out a way to use any new model or idea you learn. It just takes some creative thinking.

Sometimes it’s just a little mental rule or story that sticks particularly well. (Recall one of the prime lessons from our series on memory: Salient, often used, well-associated, and important information sticks best.)

We did this very thing recently with Lee Kuan Yew’s Rule. What a trite way to refer to the simple idea of asking if something actually works…attributing it to a Singaporean political leader!

But that’s exactly the point. Give the thing a name and a life and, like clockwork, you’ll start recalling it. The phrase “Lee Kuan Yew’s Rule” actually appears in my head when I’m approaching some new system or ideology, and as soon as it does, I find myself backing away from ideology and towards pragmatism. Exactly as I’d hoped.

Your goal should be to create about a thousand of those little tools in your head, attached to a deep fluency in the material from which they came.

***

I can hear the objection coming. Who has time for this stuff?

You do. It’s about making time for the things that really matter. And what could possibly matter more than upgrading your whole mental operating system? I solemnly promise that you’re spending way more time right now making sub-optimal decisions and trying to deal with the fallout.

If you need help learning to manage your time right this second, check out our Productivity Seminar, one that’s changed some people’s lives entirely. The central idea is to become more thoughtful and deliberate with how you spend your hours. When you start doing that, you’ll notice you do have an hour a day to spend on this Big Ideas stuff. It’s worth the 59 bucks.

If you don’t have 59 bucks, at least imitate Cal Newport and start scheduling your days and put an hour in there for “Getting better at making all of my decisions.”

Once you find that solid hour (or more), start using it in the way outlined above, and let the world’s great knowledge actually start making an impact. Just do a little every day.

What you’ll notice, over the weeks and months and years of doing this, is that your mind will really change! It has to! And with that, your life will change too. The only way to fail at improving your brain is by imitating Sisyphus, pushing the boulder halfway up, over and over.

Unless and until you really understand this, you’ll continue spinning your wheels. So here’s your call to action. Go get to it!

Bias from Disliking/Hating

(This is a follow-up to our post on the Bias from Liking/Loving, which you can find here.)

Think of a cat snarling and spitting, lashing its tail and standing with its back arched. Its pulse is elevated, its blood vessels constricted, and its muscles tense. This reaction may sound familiar, because everyone has experienced the same tensed-up feeling of rage at least once in their lives.

When rage is directed towards an external object, it becomes hate. Just as we learn to love certain things or people, we learn to hate others.

There are several cognitive processes that awaken the hate within us and most of them stem from our need for self-protection.

Reciprocation

We tend to dislike people who dislike us (and, true to Newton, with equal strength). The more we perceive they hate us, the more we hate them.

Competition

A lot of hate comes from scarcity and competition. Whenever we compete for resources, our own mistakes can mean good fortune for others. In these cases, we affirm our own standing and preserve our self-esteem by blaming others.

Robert Cialdini explains that because of the competitive environment in American classrooms, school desegregation may increase the tension between children of different races instead of decreasing it. Imagine being a secondary school child:

If you knew the right answer and the teacher called on someone else, you probably hoped that he or she would make a mistake so that you would have a chance to display your knowledge. If you were called on and failed, or if you didn’t even raise your hand to compete, you probably envied and resented your classmates who knew the answer.

At first we are merely annoyed. But then, as the situation fails to improve and our frustration grows, we are slowly drawn into false attributions and hate. We keep blaming “the others” who are doing better, associating them with the loss and scarcity we are experiencing (or perceive we are experiencing). That is one way our emotional frustration boils into hate.

Us vs. Them

The ability to separate friends from enemies has been critical for our safety and survival. Because mistaking the two can be deadly, our mental processes have evolved to quickly spot potential threats and react accordingly. We are constantly feeding information about others into our “people information lexicon” that forms not only our view of individuals, whom we must decide how to act around, but entire classes of people, as we average out that information.

To shortcut our reactions, we classify narrowly and think in dichotomies: right or wrong, good or bad, heroes or villains. (The type of Grey Thinking we espouse is almost certainly unnatural, but, then again, so is a good golf swing.) Since most of us are merely average at everything we do, even superficial and small differences, such as race or religious affiliation, can become an important source of identification. We are, after all, creatures who seek to belong to groups above all else.

Seeing ourselves as part of a special, different and, in its own way, superior group, decreases our willingness to empathize with the other side. This works both ways – the hostility towards the others also increases the solidarity of the group. In extreme cases, we are so drawn towards the inside view that we create a strong picture of the enemy that has little to do with reality or our initial perceptions.

From Compassion to Hate

We think of ourselves as compassionate, empathetic and cooperative. So why do we learn to hate?

Part of the answer lies in the fact that we think of ourselves in a specific way. If we cannot reach a consensus, then the other side, which is in some way different from us, must necessarily be uncooperative for our assumptions about our own qualities to hold true.

Our inability to examine the situation from all sides and shake our beliefs, together with self-justifying behavior, can lead us to conclude that others are the problem. Such asymmetric views, amplified by strong perceived differences, often fuel hate.

What started off as odd or difficult to understand quickly turns into something unholy.

If the situation is characterized by competition, we may also see ourselves as victims. The others, who abuse our rights, take away our privileges, or restrict our freedom, are seen as bullies who deserve to be punished. We convince ourselves that we are doing good by doing harm to those who threaten to cross the line.

This is understandable. In critical times our survival indeed may depend on our ability to quickly spot and neutralize dangers. The cost of a false positive – mistaking a friend for a foe – is much lower than the potentially fatal false negative of mistaking our adversaries for innocent allies. As a result, it is safest to assume that anything we are not familiar with is dangerous by default. Natural selection, by its nature, “keeps what works,” and this tendency towards distrust of the unfamiliar probably survived in that way.

The Displays of Hate

Physical and psychological pain is very mobilizing. We despise foods that make us nauseous and people that have hurt us. Because we are scared to suffer, we end up either avoiding or destroying the “enemy”, which is why revenge can be pursued with such vengeance. In short, hate is a defense against enduring pain repeatedly.

There are several ways that the bias from disliking and hating displays itself to the outer world. The most obvious of them is war, which has been more or less prevalent throughout the history of mankind.

This would lead us to think that war may well be unavoidable. Charlie Munger offers the more moderate opinion that while hatred and dislike cannot be avoided, the instances of war can be minimized by channeling our hate and fear into less destructive behaviors. (A good political system allows for dissent and disagreement without explosions of bloody upheaval.)

Even with the spread of religion, and the advent of advanced civilization, modern war remains pretty savage. But we also get what we observe in present-day Switzerland and the United States, wherein the clever political arrangements of man “channel” the hatreds and dislikings of individuals and groups into nonlethal patterns including elections.

But these dislikings and hatreds, arguably inherent to our nature, never go away completely; instead they find their way into politics. Think of the dichotomies: the left versus the right wing, the nationalists versus the communists, the libertarians versus the authoritarians. This might be the reason for maxims like “Politics is the art of marshaling hatreds.”

Finally, as we move away from politics, arguably the most sophisticated and civilized way of channeling hatred is litigation. Charlie Munger attributes the following words to Warren Buffett:

A major difference between rich and poor people is that the rich people can spend their lives suing their relatives.

While most of us reflect on our memories of growing up with our siblings with fondness, there are cases where the competition for shared attention or resources breeds hatred. If the siblings can afford it, they will sometimes litigate endlessly to lay claims over their parents’ property or attention.

Under the Influence of Bias

There are several ways that bias from hating can interfere with our normal judgement and lead to suboptimal decisions.

Ignoring Virtues of The Other Side

Michael Faraday was once asked after a lecture whether he implied that a hated academic rival was always wrong. His reply was short and firm: “He’s not that consistent.” Faraday must have recognized the bias from hating and corrected for it with that witty comment.

What we should recognize here is that no situation is ever black or white. We all have our virtues and we all have our weaknesses. However, when possessed by the strong emotions of hate, our perceptions can be distorted to the extent that we fail to recognize any good in the opponent at all. This is driven by consistency bias, which motivates us to form a coherent (“she is all-round bad”) opinion of ourselves and others.

Association Fueled Hate

The principle of association holds that the nature of the news tends to infect the teller. This means that the worse the experience, the worse the impression of anything related to it.

Association is why we blame the messenger who tells us something that we don’t want to hear, even when they didn’t cause the bad news. (Of course, this creates an incentive not to speak the truth and to avoid giving bad news.)

A classic example is the unfortunate and confused weatherman who receives hate mail whenever it rains. One went so far as to seek advice from the Arizona State professor of psychology Robert Cialdini, whose work we have discussed before.

Cialdini explained to him that, in light of the destinies of other messengers, he was born lucky. Rain might ruin someone’s holiday plans, but it will rarely change the destiny of a nation, as it did for the Persian war messengers: delivering good news meant a feast, whereas delivering bad news resulted in their death.

The weatherman left Cialdini’s office with a sense of privilege and relief.

“Doc,” he said on his way out, “I feel a lot better about my job now. I mean, I’m in Phoenix where the sun shines 300 days a year, right? Thank God I don’t do the weather in Buffalo.”

Fact Distortion

Under the influence of the liking or disliking bias, we tend to fill gaps in our knowledge by building our conclusions on assumptions based on very little evidence.

Imagine you meet a woman at a party and find her to be a self-centered, unpleasant conversation partner. Now her name comes up as someone who could be asked to contribute to a charity. How likely do you feel it is that she will give to the charity?

In reality, you have no useful knowledge, because there is little to nothing that should make you believe that people who are self-centered are not also generous contributors to charity. The two are unrelated, yet because of the well-known fundamental attribution error, we often assume one is correlated to the other.

By association, you are likely to believe that this woman is not likely to be generous towards charities despite lack of any evidence. And because now you also believe she is stingy and ungenerous, you probably dislike her even more.

This is just an innocent example, but the larger effects of such distortions can be so extreme that they lead to a major miscognition. Each side literally believes that every single bad attribute or crime is attributable to the opponent.

Charlie Munger explains this with a relatively recent example:

When the World Trade Center was destroyed, many Pakistanis immediately concluded that the Hindus did it, while many Muslims concluded that the Jews did it. Such factual distortions often make mediation between opponents locked in hatred either difficult or impossible. Mediations between Israelis and Palestinians are difficult because facts in one side’s history overlap very little with facts from the other side’s. These distortions and the overarching mistrust might be why some conflicts seem to never end.

Avoiding Being Hated

To varying degrees we value acceptance and affirmation from others. Very few of us wake up wanting to be disliked or rejected. Social approval, at its heart the cause of social influence, shapes behavior and contributes to conformity. Francois VI, Duc de La Rochefoucauld wrote: “We only confess our little faults to persuade people that we have no big ones.”

Remember the old adage, “The nail that sticks out gets hammered down.” This is why we don’t openly speak the truth or question people: we don’t want to be the nail.

How do we resolve hate?

It is only normal that we can find more common ground with some people than with others. But are we really destined to fall into the traps of hate or is there a way to take hold of these biases?

That’s a question worth over a hundred million lives. There are ways that psychologists think that we can minimize prejudice against others.

Firstly, we can engage with others in sustained close contact to breed familiarity. The contact must not only be prolonged, but also positive and cooperative in nature – either working towards a common cause or against a common enemy.

Secondly, we can also reduce prejudice by attaining equal status in all aspects, including education, income, and legal rights. This effect is further reinforced when equality is supported not only “on paper” but also ingrained within broader social norms.

And finally, the obvious – we should practice awareness of our own emotions and the ability to hold back the temptation to dismiss others. Whenever we are confronted with strong feelings, it might simply be best to sit back, breathe, and do our best to eliminate the distorted thinking.

 

***

Want more? Check out the opposite bias of liking/loving, or check out a whole bunch of mental models.

Mental Model: Bias from Liking/Loving

The decisions that we make are rarely impartial. Most of us already know that we prefer to take advice from people that we like. We also tend to more easily agree with opinions formed by people we like. This tendency to judge in favor of people and symbols we like is called the bias from liking or loving.

We are more likely to ignore faults and comply with wishes of our friends or lovers rather than random strangers. We favor people, products, and actions associated with our favorite celebrities. Sometimes we even distort facts to facilitate love. The influence that our friends, parents, lovers and idols exert on us can be enormous.

In general, this is a good thing, a bias that adds on balance rather than subtracts. It helps us form successful relationships, it helps us fall in love (and stay in love), it helps us form attachments with others that give us great happiness.

But we do want to be aware of where this tendency leads us astray.

For example, some people and companies have learnt to use this influence to their advantage.

In his bestseller on social psychology Influence, Robert Cialdini tells a story about the successful strategy of Tupperware, which at the time reported sales of over $2.5 million a day.

As many of us know, the company for a long time sold its kitchenware at parties thrown by friends of the potential customers. At each party there was a Tupperware representative taking orders, but the hostess, the friend of the invitees, received a commission.

These potential customers are not blind to the incentives and social pressures involved. Some of them don’t mind it, others do, but all admit a certain degree of helplessness in their situation. Cialdini recalls a conversation with one of the frustrated guests:

It’s gotten to the point now where I hate to be invited to Tupperware parties. I’ve got all the containers I need; and if I wanted any more, I could buy another brand cheaper in the store. But when a friend calls up, I feel like I have to go. And when I get there, I feel like I have to buy something. What can I do? It’s for one of my friends.

We are more likely to buy in a familiar, friendly setting and under the obligation of friendship rather than from an unfamiliar store or a catalogue. We simply find it much harder to say “no” or disagree when it’s a friend. The possibility of ruining the friendship, or seeing our image altered in the eyes of someone we like, is a powerful motivator to comply.

The Tupperware example is a true “lollapalooza” in favor of manipulating people into buying things. Besides the liking tendency, there are several other factors at play: commitment/consistency bias, a bias from stress, an influence from authority, a reciprocation effect, and some direct incentives and disincentives, at least! (Lollapaloozas, something we’ll talk more about in the future, are when several powerful forces combine to create a non-linear outcome. A good way to think of this conceptually for now is that 1+1=3.)

***

The liking tendency is so strong that it stretches beyond close friendships. It turns out we are also more likely to act in favor of certain types of strangers.

Can you recall meeting someone with whom you hit it off instantly, where it almost seemed like you’d known them for years after a 20-minute conversation? Developing such an instant bond with a stranger may seem like a mythical process, but it rarely is. There are several tactics that can be used to make us like something, or someone, more than we otherwise would.

Appearance and the Halo Effect

We all like engaging in activities with beautiful people. This is part of an automatic bias that falls into a category called The Halo Effect.

The Halo Effect occurs when a specific, positive characteristic determines the way a person is viewed by others on other, unrelated traits. In the case of beauty, it’s been shown that we automatically associate favorable yet unrelated traits, such as talent, kindness, honesty, and intelligence, with those we find physically attractive.

For the most part, this attribution happens unnoticed. For example, attractive candidates received more than twice as many votes as unattractive candidates in the 1974 Canadian federal elections. Despite the ample evidence of predisposition towards handsome politicians, follow-up research demonstrated that nearly three-quarters of Canadians surveyed strongly denied the influence of physical appearance in their voting decisions.

The power of the Halo Effect is that it’s mostly happening beneath the level of consciousness.

Similar forces are at play when it comes to hiring decisions and pay. While employers deny that they are strongly influenced by looks, studies show otherwise.

In one study evaluating hiring decisions based on simulated interviews, the applicants’ grooming played a greater role in the outcome than job qualifications. Partly, this has a rational basis. We might assume that someone who shows up without the proper “look” for the job may be deficient in other areas. If they couldn’t shave and put a tie on, how are we to expect them to perform with customers? Partly, though, it’s happening subconsciously. Even if we never consciously say to ourselves that “Better grooming = better employee”, we tend to act that way in our hiring.

These effects go even beyond the hiring phase — attractive individuals in the US and Canada have been estimated to earn an average of 12-14 percent more than their unattractive coworkers. Whether this is due to liking bias or perhaps the increased self-confidence that comes from above-average looks is hard to say.

Appearance is not the only quality that may skew our perceptions in favor of someone. The next one on the list is similarity.

Similarity

We like people who resemble us. Whether it’s appearance, opinions, lifestyle or background, we tend to favor people who on some dimension are most similar to ourselves.

A great example of similarity bias is the case of dress. Have you ever been at an event where you felt out of place because you were either overdressed or underdressed? The uneasy feelings are not caused only by your imagination. Numerous studies suggest that we are more likely to do favors, such as giving a dime or signing a petition, for someone who looks like us.

Similarity bias can extend to even such ambiguous traits as interests and background. Many salesmen are trained to look for similarities to produce a favorable and trustworthy image in the eyes of their potential customers. In Influence: The Psychology of Persuasion, Robert Cialdini explains:

If there is camping gear in the trunk, the salespeople might mention, later on, how they love to get away from the city whenever they can; if there are golf balls on the back seat, they might remark that they hope the rain will hold off until they can play the eighteen holes they scheduled for later in the day; if they notice that the car was purchased out of state, they might ask where a customer is from and report—with surprise—that they (or their spouse) were born there, too.

These are just a few of many examples which can be surprisingly effective in producing a sweet feeling of familiarity. Multiple studies illustrate the same pattern. We decide to fill out surveys from people with similar names, buy insurance from agents of similar age and smoking habits, and even decide that those who share our political views deserve their medical treatment sooner than the rest.

There is just one takeaway: even if the similarities are terribly superficial, we still may end up liking the other person more than we should.

Praise and Compliments

“And what will a man naturally come to like and love,
apart from his parent, spouse and child?
Well, he will like and love being liked and loved.”

— Charlie Munger

We are all phenomenal suckers for flattery. These are not my words but the words of Robert Cialdini, and they ring true. Perhaps more than anything else in this world, we love to be loved and, consequently, we love those who love us.

Consider the technique of Joe Girard, who has repeatedly been called the world’s “greatest car salesman” and has made it into the Guinness World Records book.

Each month Joe prints and sends over 13,000 holiday cards to his former customers. While the theme of the card varies depending on the season and celebration, the printed message always remains the same. On each of those cards Girard prints three simple words, “I like you,” along with his name. He explains:

“There’s nothing else on the card, nothin’ but my name. I’m just telling ’em that I like ’em.” “I like you.” It came in the mail every year, 12 times a year, like clockwork.

Joe understood a simple fact about humans – we love to be loved.

As numerous experiments show, regardless of whether the praise is deserved or not, we cannot help but develop warm feelings toward those who provide it. Our reaction can be so automatic that we develop liking even when the attempt to win our favor is an obvious one, as in Joe’s case.

Familiarity

In addition to liking those who like us and look like us, we also tend to like those whom we know. That’s why repeated exposure can be a powerful tool in establishing liking.

There is a fun experiment you can do to understand the power of familiarity.

Take a picture of yourself and create a mirror image of it in an editing tool. Now, with the two pictures at hand, decide which one you like better – the real one or the mirror image. Then show the two pictures to a friend and ask her to choose the better one as well.

If you and your friend are like the group on whom this trick was tried, you should notice something odd. Your friend will prefer the true print, whereas you will think you look better in the mirror image. This is because you both prefer the faces you are used to. Your friend always sees you from her perspective, whereas you have learnt to recognize and love your mirror image.

The effect of course extends beyond faces into places, names and even ideas.

For example, in elections we might prefer candidates whose names sound more familiar. The Ohio Attorney-General post was claimed by a man who, shortly before his candidacy, changed his last name to Brown – a family name of Ohio political tradition. Apart from his surname, there was little to nothing that separated him from other equally if not more capable candidates.

How could such a thing happen? The answer lies partly in the unconscious way that familiarity affects our liking. Often we don’t realize that our attitude toward something has been influenced by the number of times we have been exposed to it in the past.

Loving by Association and Referral

Charisma or attraction are not prerequisites for liking — a mere association with someone you like or trust can be enough.

The bias from association shows itself in many other domains and is especially strong when we associate with the person we like the most — ourselves. For example, the relationship between a sports fan and his local team can be highly personal even though the association is often based only on shared location. For the fan, however, the team is an important part of his self-identity. If the team or athlete wins, he wins as well, which is why sports can be so emotional. The most dedicated fans are ready to get into fights, burn cars, or even kill to defend the honor of their team.

Such associated sense of pride and achievement is as true for celebrities as it is for sports. When Kevin Costner delivered his acceptance speech after winning the best picture award for Dances With Wolves, he said:

“While it may not be as important as the rest of the world situation, it will always be important to us. My family will never forget what happened here; my Native American brothers and sisters, especially the Lakota Sioux, will never forget, and the people I went to high school with will never forget.”

The interesting part of his words is the notion that his high school peers will remember, which is probably true. His former classmates are likely to tell people that they went to school with Costner, even though they themselves had no connection with the success of the movie.

Costner’s words illustrate that even a trivial association with success may reap benefits and breed confidence.

Who else do we like besides ourselves, celebrities and our sports teams?

People we’ve met through those who are close to us – our neighbors, friends and family. It is common sense that a referral from someone we trust is enough to trigger mild liking and favorable initial opinions.

There are a number of companies that use friend referral as a sales tactic. Network providers, insurers and other subscription services offer a number of benefits for those of us who give away our friends’ contact details.

The success of this method rests on the implicit idea that turning down the sales rep who says “your friend Jenny/Allan suggested I call you” feels nearly as bad as turning down Jenny or Allan themselves. This tactic, when well executed, leads to a never-ending chain of new customers.

Can We Avoid Liking?

Perhaps the right question to ask here is not “how can we avoid the bias from liking”, but when should we?

Someone who is conditioned to like the right people and pick their idols carefully can greatly benefit from these biases. Charlie Munger recalls that both he and Warren Buffett benefitted from liking admirable persons:

One common, beneficial example for us both was Warren’s uncle, Fred Buffett, who cheerfully did the endless grocery-store work that Warren and I ended up admiring from a safe distance. Even now, after I have known so many other people, I doubt if it is possible to be a nicer man than Fred Buffett was, and he changed me for the better.

The keywords here are “from a safe distance”.

When dealing with salesmen and others who clearly benefit from your liking them, it might be a good idea to check whether you have been influenced. In these unclear cases, Cialdini advises us to focus on our feelings rather than on the other person’s actions that may produce liking. Ask yourself how much of what you feel is due to liking versus the actual facts of the situation.

The time to call out the defense is when we feel ourselves liking the practitioner more than we should under the circumstances, when we feel manipulated.

Once we have recognized that we like the requester more than we would expect under the given circumstances, we should take a step back and question ourselves. Are you doing the deal because you like someone or is it because it is indeed the best option out there?

***

Still Interested? Check out some other mental models and biases.

Mental Model: Bias from Conjunction Fallacy

Daniel Kahneman and Amos Tversky spent decades in psychology research to disentangle patterns in errors of human reasoning. Over the course of their work they discovered a variety of logical fallacies that we tend to make, when facing information that appears vaguely familiar. These fallacies lead to bias – irrational behavior based on beliefs that are not always grounded in reality.

In his book Thinking, Fast and Slow, which summarizes his and Tversky’s life’s work, Kahneman introduces biases that stem from the conjunction fallacy – the false belief that a conjunction of two events is more probable than one of the events on its own.

What is Probability?

Probability can be a difficult concept. Most of us have an intuitive understanding of what probability is, but there is little consensus on what it actually means. It is just as vague and subjective a concept as democracy, beauty or freedom. However, this is not always troublesome – we can still easily discuss the notion with others. Kahneman reflects:

In all the years I spent asking questions about the probability of events, no one ever raised a hand to ask me, “Sir, what do you mean by probability?” as they would have done if I had asked them to assess a strange concept such as globability.

Everyone acted as if they knew how to answer my questions, although we all understood that it would be unfair to ask them for an explanation of what the word means.

While logicians and statisticians might disagree, probability to most of us is simply a tool that describes our degree of belief. For instance, we know that the sun will rise tomorrow and we consider it near impossible that there will be two suns up in the sky instead of one. In addition to the extremes, there are also events which lie somewhere in the middle on the probability spectrum, such as the degree of belief that it will rain tomorrow.

Despite its vagueness, probability has its virtues. Assigning probabilities helps us make the degree of belief actionable and also communicable to others. If we believe that the probability it will rain tomorrow is 90%, we are likely to carry an umbrella and suggest our family do so as well.

Probability, Base Rates and Representativeness

Most of us are already familiar with representativeness and base rates. Consider the classic example of a jar containing x black and y white marbles. It is a simple exercise to tell what the probability of drawing each color is if you know their base rates (proportions). Using base rates is the obvious approach for estimation when no other information is provided.

However, Kahneman showed that we have a tendency to ignore base rates when we are given a specific description. He calls this phenomenon the Representativeness Bias. To illustrate it, consider the example of seeing a person reading The New York Times on the New York subway. Which do you think would be the better bet about the reading stranger?

1) She has a PhD.
2) She does not have a college degree.

Representativeness would tell you to bet on the PhD, but this is not necessarily a good idea. You should seriously consider the second alternative, because many more non-graduates than PhDs ride the New York subway. Even if a far larger proportion of PhDs read The New York Times, the total number of Times readers without a college degree is likely to be much greater, simply because that group is so much bigger.
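To make the base-rate logic concrete, here is a minimal sketch in Python. Every number in it – ridership figures and reading rates alike – is an assumption invented purely for illustration, not data from Kahneman's work:

# Hypothetical illustration of why base rates can outweigh representativeness.
# All figures below are made up for the sake of the example.
riders_with_phd = 50_000            # assumed PhD holders riding the subway
riders_without_degree = 2_000_000   # assumed riders without a college degree

phd_reading_rate = 0.30             # assume 30% of PhD riders read the Times
nongrad_reading_rate = 0.02         # assume only 2% of non-graduates do

phd_readers = riders_with_phd * phd_reading_rate                # 15,000
nongrad_readers = riders_without_degree * nongrad_reading_rate  # 40,000

# The proportion of readers is far higher among PhDs, yet the sheer size of
# the non-graduate group means the stranger with the paper is more likely
# to come from it.
print(phd_readers, nongrad_readers)

Change the assumed numbers and the conclusion can flip, which is exactly why the base rate has to be part of the judgement.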

In a series of similar experiments, Kahneman’s subjects failed to recognize the base rates in light of individual information. This is unsurprising. Kahneman explains:

On most occasions, people who act friendly are in fact friendly. A professional athlete who is very tall and thin is much more likely to play basketball than football. People with a PhD are more likely to subscribe to The New York Times than people who ended their education after high school. Young men are more likely than elderly women to drive aggressively.

While following representativeness will often improve your overall accuracy, it is not always the statistically optimal approach.

Michael Lewis, in his bestseller Moneyball, tells the story of Billy Beane, general manager of the Oakland A's, who recognized this fallacy and used it to his advantage. When recruiting new players, instead of relying on scouts he relied heavily on statistics of past performance. This approach allowed him to build a team of great players who had been passed up by other teams because they did not look the part. Needless to say, the team achieved excellent results at a low cost.

Conjunction Fallacy

While representativeness bias occurs when we fail to account for base rates, the conjunction fallacy occurs when we assign a higher probability to a more specific event than to a more general event that contains it. This violates the laws of probability.

Consider the following study:

Björn Borg was the dominant tennis player of the day when the study was conducted. Participants were asked to rank four possible outcomes of the next Wimbledon tournament from most to least probable. These were the outcomes:

A. Borg will win the match.
B. Borg will lose the first set.
C. Borg will lose the first set but win the match.
D. Borg will win the first set but lose the match.

How would you order them?

Kahneman was surprised to see that most subjects ordered the chances by directly contradicting the laws of logic and probability. He explains:

The critical items are B and C. B is the more inclusive event and its probability must be higher than that of an event it includes. Contrary to logic, but not to representativeness or plausibility, 72% assigned B a lower probability than C.

If you think about the problem carefully, the answer is clear: the event “Borg will lose the first set” contains the event “Borg will lose the first set but win the match,” so losing the first set must always, by definition, be at least as probable as losing the first set and winning the match.
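The same point can be checked with simple arithmetic. The probabilities below are invented purely to illustrate the rule that a conjunction can never be more probable than either of the events it combines:

# Made-up probabilities, for illustration only.
p_lose_first_set = 0.20        # B: assumed chance Borg loses the first set
p_win_given_lost_set = 0.60    # assumed chance he still wins after losing the set

# C is a conjunction: "loses the first set AND wins the match".
p_lose_set_and_win = p_lose_first_set * p_win_given_lost_set  # 0.12

# A conjunction can never exceed the probability of an event it is part of.
assert p_lose_set_and_win <= p_lose_first_set
print(p_lose_first_set, p_lose_set_and_win)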

The Linda Problem

As discussed in our piece on the Narrative Fallacy, the best-known and most controversial of Kahneman and Tversky’s experiments involved a fictitious lady called Linda. The fictional character was created to illustrate the role heuristics play in our judgement and how it can be incompatible with logic. This is how they described Linda.

Linda is thirty-one years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.

Kahneman conducted a series of experiments in which he showed that representativeness tends to cloud our judgement and that we ignore base rates in light of a compelling story. The Linda problem asked subjects to rank the following scenarios in order of likelihood:

Linda is a teacher in elementary school.
Linda works in a bookstore and takes yoga classes.
Linda is active in the feminist movement.
Linda is a psychiatric social worker.
Linda is a member of the League of Women Voters.
Linda is a bank teller.
Linda is an insurance salesperson.
Linda is a bank teller and is active in the feminist movement.

Kahneman was startled to see that his subjects judged it more likely that Linda was a bank teller and active in the feminist movement than that she was a bank teller at all. As explained earlier, doing so makes little sense. He went on to explore the phenomenon further:

In what we later described as “increasingly desperate” attempts to eliminate the error, we introduced large groups of people to Linda and asked them this simple question:

Which alternative is more probable?

Linda is a bank teller.
Linda is a bank teller and is active in the feminist movement.

This stark version of the problem made Linda famous in some circles, and it earned us years of controversy. About 85% to 90% of undergraduates at several major universities chose the second option, contrary to logic.

What is especially interesting about these results is that, even when aware of the biases in place, we do not discard them.

When I asked my large undergraduate class in some indignation, “Do you realize that you have violated an elementary logical rule?” someone in the back row shouted, “So what?” and a graduate student who made the same error explained herself by saying, “I thought you just asked for my opinion.”

The issue is not confined to students; it affects professionals as well.

The naturalist Stephen Jay Gould described his own struggle with the Linda problem. He knew the correct answer, of course, and yet, he wrote, “a little homunculus in my head continues to jump up and down, shouting at me—‘but she can’t just be a bank teller; read the description.’”

Our brains simply seem to prefer consistency over logic.

The Role of Plausibility

Representativeness and the conjunction fallacy occur because we take a mental shortcut from the perceived plausibility of a scenario to its probability.

The most coherent stories are not necessarily the most probable, but they are plausible, and the notions of coherence, plausibility, and probability are easily confused by the unwary. Representativeness belongs to a cluster of closely related basic assessments that are likely to be generated together. The most representative outcomes combine with the personality description to produce the most coherent stories.

Kahneman warns us about the effects of these biases on our perception of expert opinion and forecasting. He explains that we are more likely to believe scenarios that are vivid and plausible than scenarios that are merely probable.

The uncritical substitution of plausibility for probability has pernicious effects on judgments when scenarios are used as tools of forecasting. Consider these two scenarios, which were presented to different groups, with a request to evaluate their probability:

A massive flood somewhere in North America next year, in which more than 1,000 people drown

An earthquake in California sometime next year, causing a flood in which more than 1,000 people drown

The California earthquake scenario is more plausible than the North America scenario, although its probability is certainly smaller. As expected, probability judgments were higher for the richer and more detailed scenario, contrary to logic. This is a trap for forecasters and their clients: adding detail to scenarios makes them more persuasive, but less likely to come true.

To appreciate the role of plausibility, he suggests we look at an example in which no intuitive story competes with the logic.

Which alternative is more probable?

Jane is a teacher.
Jane is a teacher and walks to work.

In this case plausibility and coherence offer no quick answer to the probability question, and we can easily conclude that the first alternative is more probable. The rule is that in the absence of a competing intuition, logic prevails.

Taming our intuition

The first lesson in thinking clearly is to question how you think. We should not simply believe whatever comes to mind – our beliefs must be constrained by logic. You don’t have to become an expert in probability to tame your intuition, but a grasp of a few simple concepts helps. Two rules are worth repeating in light of representativeness bias:

1) The probabilities of an event and its complement add up to 100%.

This means that if you believe there is a 90% chance it will rain tomorrow, there is a 10% chance that it will not rain tomorrow.

It also means that, since you believe there is only a 90% chance of rain tomorrow, you cannot be 95% certain that it will rain tomorrow morning.

We typically make this type of error when we mean to say that, if it rains, there is a 95% probability the rain will come in the morning. That is a different claim: under those premises the probability of rain tomorrow morning is 0.9 × 0.95 = 85.5%.

It follows that the probability that it rains tomorrow but not in the morning is 90.0% − 85.5% = 4.5%.
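Writing the rain example out explicitly, with the same 90% and 95% figures used above:

p_rain_tomorrow = 0.90        # belief that it rains at some point tomorrow
p_morning_given_rain = 0.95   # belief that, if it rains, the rain comes in the morning

# Rain tomorrow morning is a conjunction of the two claims above.
p_rain_in_morning = p_rain_tomorrow * p_morning_given_rain   # 0.855
p_rain_not_morning = p_rain_tomorrow - p_rain_in_morning     # roughly 0.045

print(p_rain_in_morning, p_rain_not_morning)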

2) The second principle is Bayes’ rule.

It allows us to adjust our beliefs in line with the diagnosticity of the evidence. In odds form, Bayes’ rule follows the formula:

posterior odds = prior odds × likelihood ratio

In essence, the formula states that the posterior odds are the prior odds multiplied by the likelihood ratio of the evidence. Kahneman crystallizes two keys to disciplined Bayesian reasoning:

• Anchor your judgment of the probability of an outcome on a plausible base rate.
• Question the diagnosticity of your evidence.

Kahneman explains it with an example:

If you believe that 3% of graduate students are enrolled in computer science (the base rate), and you also believe that the description of Tom is 4 times more likely for a graduate student in computer science than in other fields, then Bayes’s rule says you must believe that the probability that Tom is a computer science student is now 11%.

“Four times as likely” can be read as the description fitting roughly 80% of computer science students and only 20% of students in other fields – a likelihood ratio of 4. We use these proportions to obtain the adjusted probability. (The calculation goes as follows: 0.03 × 0.8 / (0.03 × 0.8 + (1 − 0.03) × (1 − 0.8)) ≈ 11%.)
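The same calculation, written out step by step in Python with the 3% base rate and the 80%/20% split from the example above:

base_rate_cs = 0.03            # prior: 3% of graduate students study computer science
p_desc_given_cs = 0.80         # the description fits roughly 80% of CS students
p_desc_given_other = 0.20      # ...and roughly 20% of students in other fields (4x less likely)

numerator = base_rate_cs * p_desc_given_cs
denominator = numerator + (1 - base_rate_cs) * p_desc_given_other

posterior_cs = numerator / denominator
print(round(posterior_cs, 2))  # ~0.11, i.e. roughly an 11% chance Tom studies CS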

The easiest way to become better at making decisions is by making sure you question your assumptions and follow strong evidence. When evidence is anecdotal, adjust minimally and trust the base rates. Odds are, you will be pleasantly surprised.

***

Want More? Check out our ever-growing collection of mental models and biases and get to work.