Farnam Street helps you make better decisions, innovate, and avoid stupidity.
With over 400,000 monthly readers and more than 93,000 subscribers to our popular weekly digest, we've become an online intellectual hub.
One of the most impactful books we’ve ever come across is the wonderful Seeking Wisdom: From Darwin to Munger, written by the Swedish investor Peter Bevelin. In the spirit of multidisciplinary learning, Seeking Wisdom is a compendium of ideas from biology, psychology, statistics, physics, economics, and human behavior.
Mr. Bevelin is out with a new book full of wisdom from Warren Buffett & Charlie Munger: All I Want to Know is Where I’m Going to Die So I Never Go There. We were fortunate enough to have a chance to interview Peter recently, and the result is the wonderful discussion below.
The short answer: To improve my thinking. And when I started writing on what later became Seeking Wisdom I can express it even simpler: “I was dumb and wanted to be less dumb.” As Munger says: “It’s ignorance removal…It’s dishonorable to stay stupider than you have to be.” And I had done some stupid things and I had seen a lot of stupidity being done by people in life and in business.
A seed was first planted when I read Charlie Munger’s worldly wisdom speech and another one where he referred to Darwin as a great thinker. So I said to myself: I am 42 now. Why not take some time off business and spend a year learning, reflecting, and writing about the subject Munger introduced to me – human behavior and judgments.
None of my writings started out as a book project. I wrote my first book – Seeking Wisdom – as a memorandum for myself, with the expectation that I could transfer some of its essentials to my children. I learn and write because I want to be a little wiser day by day. I don’t want to be a great problem-solver. I want to avoid problems – prevent them from happening and do things right from the beginning. And I focus on consequential decisions. To paraphrase Buffett and Munger – decision-making is not about making brilliant decisions, but avoiding terrible ones. Mistakes and dumb decisions are a fact of life and I’m going to make more, but as long as I can avoid the big or “fatal” ones I’m fine.
So I started to read and write to learn what works, what doesn’t, and why. And I liked Munger’s “All I want to know is where I’m going to die so I’ll never go there” approach. And as he said, “You understand it better if you go at it the way we do, which is to identify the main stupidities that do bright people in and then organize your patterns for thinking and developments, so you don’t stumble into those stupidities.” Then I “only” had to a) understand the central “concept” and its derivatives and describe it in as simple a way as possible for me and b) organize what I learnt in a way that was logical and useful for me.
And what better way was there to learn this from those who already knew this?
After I learnt some things about our brain, I understood that thinking doesn’t come naturally to us humans – most of it is just unconscious, automatic reactions. Therefore I needed to set up the environment and design a system that made it easier to know what to do and to prevent and avoid harm. Things like simple rules of thumb, tricks and filters. Of course, I could only do that if I first had the foundation. And as the years have passed, I’ve found that filters are a great way to save time and misery. As Buffett says, “I process information very quickly since I have filters in my mind.” And they have to be simple – as the proverb says, “Beware of the door that has too many keys.” The more complicated a process is, the less effective it is.
Why do I write? Because it helps me understand and learn better. And if I can’t write something down clearly, then I have not really understood it. As Buffett says, “I learn while I think when I write it out. Some of the things, I think I think, I find don’t make any sense when I start trying to write them down and explain them to people … And if it can’t stand applying pencil to paper, you’d better think it through some more.”
My own test is one that a physicist friend of mine told me many years ago: “You haven’t really understood an idea if you can’t in a simple way describe it to almost anyone.” Luckily, I don’t have to understand a zillion things to function well.
And even if some of my and others’ thoughts ended up as books, they are all living documents and new starting points for further learning, un-learning and simplifying/clarifying. To quote Feynman, “A great deal of formulation work is done in writing the paper, organizational work, organization. I think of a better way, a better way, a better way of getting there, of proving it. I never do much — I mean, it’s just cleaner, cleaner and cleaner. It’s like polishing a rough-cut vase. The shape, you know what you want and you know what it is. It’s just polishing it. Get it shined, get it clean, and everything else.”
Seeking Wisdom, because I had to do a lot of research – reading, talking to people, etc. – especially in the fields of biology and brain science, since I wanted to first understand what influences our behavior. I also spent some time at a Neurosciences Institute to get a better understanding of how our anatomy, physiology and biochemistry constrain our behavior.
And I had to work it out my own way and write it down in my own words so I really could understand it. It took a lot of time but it was a lot of fun to figure it out and I learnt much more and it stuck better than if I just had tried to memorize what somebody else had already written. I may not have gotten everything letter perfect but good enough to be useful for me.
As I said, the expectation wasn’t to create a book. In fact, that would have removed a lot of my motivation. I did it because I had an interest in becoming better. It goes back to the importance of intrinsic motivation. As I wrote in Seeking Wisdom: “If we reward people for doing what they like to do anyway, we sometimes turn what they enjoy doing into work. The reward changes their perception. Instead of doing something because they enjoy doing it, they now do it because they are being paid. The key is what a reward implies. A reward for our achievements makes us feel that we are good at something thereby increasing our motivation. But a reward that feels controlling and makes us feel that we are only doing it because we’re paid to do it, decreases the appeal.”
It may sound like a cliché, but the joy was in the journey – reading, learning and writing – not the destination – the finished book. Has the book made a difference for some people? Yes, I hope so, but often people revert to their old behavior. Some of them are the same people who – to paraphrase something attributed to Churchill – occasionally should check their intentions and strategies against their results. But the reality is what Munger once said: “Everyone’s experience is that you teach only what a reader almost knows, and that seldom.” But I am happy that my books had an impact and made a difference to a few people. That’s enough.
It was more fun to write about what works and what doesn’t in a dialogue format. But also because vivid and hopefully entertaining “lessons” are easier to remember and recall. And you will find a lot of quotes in there that most people haven’t read before.
I wanted to write a book like this to reinforce a couple of concepts in my head. So even if some of the text sometimes comes out like advice to the reader, I always think about what the mathematician Gian-Carlo Rota once said, “The advice we give others is the advice that we ourselves need.”
Some kind of representation that describes how reality is (as it is known today) – a principle, an idea, basic concepts, something that works or not – that I have in my head that helps me know what to do or not. Something that has stood the test of time.
For example some timeless truths are:
I favor underlying principles and notions that I can apply broadly to different and relevant situations. Since some models don’t resemble reality, the word “model” for me is more of an illustration/story of an underlying concept, trick, method, what works, etc. that agrees with reality (as Munger once said, “Models which underlie reality”) and helps me remember and more easily make associations.
But I don’t judge or care how others label it or do it – models, concepts, default positions… The important thing is that whatever we use, it reflects and agrees with reality, and that it works to help us understand or explain a situation or know what to do or not do. “Useful” and “good enough” guide me. I am pretty pragmatic – whatever works is fine. I follow Deng Xiaoping: “I don’t care whether the cat is black or white as long as it catches mice.” As Feynman said, “What is the best method to obtain the solution to a problem? The answer is, any way that works.”
I’ll tell you about a thing Feynman said on education which I remind myself of from time to time in order not to complicate things (from Richard P. Feynman, Michael A. Gottlieb, Ralph Leighton, Feynman’s Tips on Physics: A Problem-Solving Supplement to the Feynman Lectures on Physics):
“There’s a round table on three legs. Where should you lean on it, so the table will be the most unstable?”
The student’s solution was, “Probably on top of one of the legs, but let me see: I’ll calculate how much force will produce what lift, and so on, at different places.”
Then I said, “Never mind calculating. Can you imagine a real table?”
“But that’s not the way you’re supposed to do it!”
“Never mind how you’re supposed to do it; you’ve got a real table here with the various legs, you see? Now, where do you think you’d lean? What would happen if you pushed down directly over a leg?”
I say, “That’s right; and what happens if you push down near the edge, halfway between two of the legs?”
“It flips over!”
I say, “OK! That’s better!”
The point is that the student had not realized that these were not just mathematical problems; they described a real table with legs. Actually, it wasn’t a real table, because it was perfectly circular, the legs were straight up and down, and so on. But it nearly described, roughly speaking, a real table, and from knowing what a real table does, you can get a very good idea of what this table does without having to calculate anything – you know darn well where you have to lean to make the table flip over. So, how to explain that, I don’t know! But once you get the idea that the problems are not mathematical problems but physical problems, it helps a lot.
Anyway, that’s just two ways of solving this problem. There’s no unique way of doing any specific problem. By greater and greater ingenuity, you can find ways that require less and less work, but that takes experience.
Ideas from biology and psychology, since many stupidities are caused by not understanding human nature (and you get illustrations of this nearly every day). And most of our tendencies were already known by the classic writers (Publilius Syrus, Seneca, Aesop, Cicero, etc.).
Others that I find very useful, both in business and in private life, are the ideas of Quantification (without the fancy math), Margin of safety, Backups, Trust, Constraints/Weakest link, Good or Bad Economics/Competitive advantage, Opportunity cost, and Scale effects. I also think Keynes’s idea of changing your mind when you get new facts or information is very useful.
But since reality isn’t divided into different categories but involves a lot of factors interacting, I need to synthesize many ideas and concepts.
I don’t know about that but what I often see among many smart people agrees with Munger’s comment: “All this stuff is really quite obvious and yet most people don’t really know it in a way where they can use it.”
Anyway, I believe if you really understand an idea and what it means – not only memorize it – you should be able to work out its different applications and functional equivalents. Take a simple big idea – think on it – and after a while you see its wider applications. To use Feynman’s advice: “It is therefore of first-rate importance that you know how to ‘triangulate’ – that is, to know how to figure something out from what you already know.” As a good friend says, “Learn the basic ideas, and the rest will fill itself in. Either you get it or you don’t.”
Most of us learn and memorize a specific concept or method etc. and learn about its application in one situation. But when the circumstances change we don’t know what to do and we don’t see that the concept may have a wider application and can be used in many situations.
Take for example one big and useful idea – Scale effects. That the scale of size, time and outcomes changes things – characteristics, proportions, effects, behavior – and that what is good or not must be tied to scale. This is a very fundamental idea from math. Munger described some of this idea’s usefulness in his worldly wisdom speech. One effect from this idea I often see people miss, and I believe is important, is group size and behavior. Trust, feelings of affection and altruistic actions break down as group size increases, which of course is important to know in business settings. I wrote about this in Seeking Wisdom (you can read more by searching for the Dunbar Number). I know of some businesses that understand the importance of this and split up companies into smaller ones when they get too big (one example is Semco).
Another general idea is “Gresham’s Law,” which can be generalized to any process or system where the bad drives out the good. Like natural selection or “We get what we select for” (and as Garrett Hardin writes, “The more general principle is: We get whatever we reward for”).
While we are on the subject of mental models etc., let me bring up another thing that distinguishes the great thinkers from us ordinary mortals. Their ability to quickly assess and see the essence of a situation – the critical things that really matter and what can be ignored. They have a clear notion of what they want to achieve or avoid and then they have this ability to zoom in on the key factor(s) involved.
One reason why they can do that is that they have a large repertoire of stored personal and vicarious experiences and concepts in their heads. They are masters at pattern recognition and connection. Some call it intuition, but as Herbert Simon once said, “The situation has provided a cue; this cue has given the expert access to information stored in memory, and the information provides the answer. Intuition is nothing more and nothing less than recognition.”
It is about making associations. For example, roughly like this:
Situation X → Association (what does this remind me of?) → experience, concept, metaphor, analogy, trick, filter… (assuming, of course, we are able to see the essence of the situation) → What counts and what doesn’t? What works or not? What to do or what to explain?
Let’s take employing someone as an example (or looking at a business proposal). This reminds me of one key factor – trustworthiness and Buffett’s story, “If you’re looking for a manager, find someone who is intelligent, energetic and has integrity. If he doesn’t have the last, make sure he lacks the first two.”
I believe Buffett and Munger excel at this – they have seen and experienced so much about what works and not in business and behavior.
Buffett referred to the issue of trust, chain letters and pattern recognition at the latest annual meeting:
You can get into a lot of trouble with management that lacks integrity… If you’ve got an intelligent, energetic guy or woman who is pursuing a course of action which gets put on the front page, it could make you very unhappy. You can get into a lot of trouble… We’ve seen patterns… Pattern recognition is very important in evaluating humans and businesses. Pattern recognition isn’t one hundred percent, and none of the patterns exactly repeat themselves, but there are certain things in business and securities markets that we’ve seen over and over that frequently come to a bad end but frequently look extremely good in the short run. One which I talked about last year was the chain letter scheme. You’re going to see chain letters for the rest of your life. Nobody calls them chain letters because that’s a connotation that will scare you off, but they’re disguised as chain letters, and many of the schemes on Wall Street which are designed to fool people have that particular aspect to them… There were patterns at Valeant, certainly… if you go and watch the Senate hearings, you will see there are patterns that should have been picked up on.
This is what he wrote on chain letters in the 2014 annual report:
In the late 1960s, I attended a meeting at which an acquisitive CEO bragged of his “bold, imaginative accounting.” Most of the analysts listening responded with approving nods, seeing themselves as having found a manager whose forecasts were certain to be met, whatever the business results might be. Eventually, however, the clock struck twelve, and everything turned to pumpkins and mice. Once again, it became evident that business models based on the serial issuances of overpriced shares – just like chain-letter models – most assuredly redistribute wealth, but in no way create it. Both phenomena, nevertheless, periodically blossom in our country – they are every promoter’s dream – though often they appear in a carefully-crafted disguise. The ending is always the same: Money flows from the gullible to the fraudster. And with stocks, unlike chain letters, the sums hijacked can be staggering.
And of course, the more prepared we are or the more relevant concepts and “experiences” we have in our heads, the better we all will be at this. How do we get there? Reading, learning and practice so we know it “fluently.” There are no shortcuts. We have to work at it and apply it to the real world.
As a reminder to myself so I understand my limitation and “circle”, I keep a paragraph from Munger’s USC Gould School of Law Commencement Address handy so when I deal with certain issues, I don’t fool myself into believing I am Max Planck when I’m really the Chauffeur:
In this world I think we have two kinds of knowledge: One is Planck knowledge, that of the people who really know. They’ve paid the dues, they have the aptitude. Then we’ve got chauffeur knowledge. They have learned to prattle the talk. They may have a big head of hair. They often have fine timbre in their voices. They make a big impression. But in the end what they’ve got is chauffeur knowledge masquerading as real knowledge.
One trick or notion I see many of us struggling with, because it goes against our intuition, is the concept of inversion – learning to think “in negatives,” which goes against our normal tendency to concentrate on, for example, what we want to achieve or on confirmations, instead of on what we want to avoid and on disconfirmations. Another example of this is the importance of missing confirming evidence (I call it the “Sherlock trick”) – that negative evidence, and events that don’t happen, matter when something implies they should be present or happen.
Another counterintuitive example is Newton’s third law – that forces work in pairs. One object exerts a force on a second object, but the second object also exerts a force on the first, equal in magnitude and opposite in direction. As Newton wrote, “If you press a stone with your finger, the finger is also pressed by the stone.” The same goes for revenge (reciprocation).
One that immediately comes to mind, and one I have mentioned in the introductions to two of my books, is someone I am fortunate to have as a friend – Peter Kaufman. An outstanding thinker and a great businessman and human being. On a scale of 1 to 10, he is a 15.
Their ethics and their ethos of clarity, simplicity and common sense. These two gentlemen are outstanding in their instant ability to exclude bad ideas, what doesn’t work, bad people, scenarios that don’t matter, etc., so they can focus on what matters. I am also amazed that their ethics and ideas haven’t been more widely replicated. But I assume the answer lies in what Munger once said: “The reason our ideas haven’t spread faster is they’re too simple.”
This reminds me of something my father-in-law, a man I learnt a lot from, once told me about – the curse of knowledge and the curse of the academic title. My now-deceased father-in-law was an inventor and manager. He did not have any formal education; he was largely self-taught. Once a big corporation asked for his services to solve a problem their 60 highly educated engineers could not solve. He solved the problem. The engineers said, “It can’t be that simple.” It was as if they were saying, “Here we have 6 years of school, an academic title, and lots of follow-up education. Therefore an engineering problem must be complicated.” As Buffett once said of Ben Graham’s ideas: “I think that it comes down to those ideas – although they sound so simple and commonplace that it kind of seems like a waste to go to school and get a PhD in Economics and have it all come back to that. It’s a little like spending eight years in divinity school and having somebody tell you that the 10 commandments were all that counted. There is a certain natural tendency to overlook anything that simple and important.”
(I must admit that in the past I had a tendency to be drawn to elegant concepts, which distracted me from the simple truths.)
The best thing I have done is marrying my wife. As Buffett says, and it is so true: “Choosing a spouse is the most important decision in your life… You need everything to be stable, and if that decision isn’t good, it may affect every other decision in life, including your business decisions… If you are lucky on health and… on your spouse, you are a long way home.”
A good “investment” is taking the time to continuously improve. It just takes curiosity and a desire to know and understand – real interest. And for me this is fun.
Every day is a little different but I read every day.
There is not one single book or one single idea that has done it. I have picked up things from different books (still do). And there are different books and articles that made a difference during different periods of my life. Meeting and learning from certain people, together with my own practical experiences, has been more important in my development. As an example – when I was in my 30s, a good friend told me something that has been very useful in looking at products and businesses. He said I should always ask who the real customer is: “Who ultimately decides what to buy, what are their decision criteria, how are they measured and rewarded, and who pays?”
But looking back, if I had had a book like Poor Charlie’s Almanack when I was younger, I would have saved myself some misery. And of course, when it comes to business, managing and investing, nothing beats learning from Warren Buffett’s Letters to Berkshire Hathaway Shareholders.
Another thing I have found is that it is far better to read and reread fewer books – but good, timeless ones – and then think. Unfortunately, many people absorb too many new books and too much information without thinking.
Let me finish this with some quotes from my new book that I believe we all can learn from:
Finally, I wish you and your readers an excellent day – every day!
The mental models approach is very intellectually appealing, almost seductive to a certain type of person. (It certainly is for us.)
The whole idea is to take the world’s greatest, most useful ideas and make them work for you!
How hard can it be?
Nearly all of the models themselves are perfectly well understandable by the average well-educated knowledge worker, including all of you reading this piece. Ideas like Bayes’ rule, multiplicative thinking, hindsight bias, or the bias from envy and jealousy, are all obviously true and part of the reality we live in.
There’s a bit of a problem we’re seeing though: People are reading the stuff, enjoying it, agreeing with it…but not taking action. It’s not becoming part of their standard repertoire.
Let’s say you followed up on Bayesian thinking after reading our post on it — you spent some time soaking in Thomas Bayes’ great wisdom on updating your understanding of the world incrementally and probabilistically rather than changing your mind in black-and-white. Great!
But a week later, what have you done with that knowledge? How has it actually impacted your life? If the honest answer is “It hasn’t,” then haven’t you really wasted your time?
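The incremental, probabilistic updating described above is simple enough to sketch in a few lines of code. Below is a minimal, illustrative sketch (the prior and likelihood numbers are invented for the example, not drawn from any real data) of how a 50/50 belief shifts as the same moderately supportive evidence arrives three times:

```python
# A minimal sketch of Bayesian updating: revising a belief incrementally
# as evidence arrives, rather than flipping it in black-and-white.
# All probabilities here are made up purely for illustration.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Start with a 50/50 belief and update it three times on evidence
# that is twice as likely if the hypothesis is true (0.8 vs. 0.4).
belief = 0.5
for _ in range(3):
    belief = bayes_update(belief, p_evidence_if_true=0.8, p_evidence_if_false=0.4)
    print(round(belief, 3))  # prints 0.667, then 0.8, then 0.889
```

Notice how no single observation settles the question; each one just nudges the belief, which is exactly the habit the post is arguing you should internalize.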
Ironically, it’s this habit of “going halfway” instead of “going all the way,” like Sisyphus constantly getting halfway up the mountain, which is the biggest waste of time!
See, the common reason why people don’t truly “follow through” with all of this stuff is that they haven’t raised their knowledge to a “deep fluency” — they’re skimming the surface. They pick up bits and pieces — some heuristics or biases here, a little physics or biology there, and then call it a day and pull up Netflix. They get a little understanding, but not that much, and certainly no doing.
The better approach, if you actually care about making changes, is to imitate Charlie Munger, Charles Darwin, and Richard Feynman, and start raising your knowledge of the Big Ideas to a deep fluency, and then figuring out systems, processes, and mental tricks to implement them in your own life.
Let’s work through an example.
Say you’re just starting to explore all the wonderful literature on heuristics and biases and come across the idea of Confirmation Bias: The idea that once we’ve landed on an idea we really like, we tend to keep looking for further data to confirm our already-held notions rather than trying to disprove our idea.
This is common, widespread, and perfectly natural. We all do it. John Kenneth Galbraith put it best:
“In the choice between changing one’s mind and proving there’s no need to do so, most people get busy on the proof.”
Now, what most people do, the ones you’re trying to outperform, is say “Great idea! Thanks Galbraith.” and then stop thinking about it.
Don’t do that!
The next step would be to push a bit further, to get beyond the sound bite: What’s the process that leads to confirmation bias? Why do I seek confirmatory information and in which contexts am I particularly susceptible? What other models are related to the confirmation bias? How do I solve the problem?
The big question: How far do you go? A good question without a perfect answer. But the best test I can think of is to perform something like the Feynman technique, and to think about the chauffeur problem.
Can you explain it simply to an intelligent layperson, using vivid examples? Can you answer all the follow-ups? That’s fluency. And you must be careful not to fool yourself, because in the wise words of Feynman, “…you are the easiest person to fool.“
While that’s great work, you’re not done yet. You have to make the rubber hit the road now. Something has to happen in your life and mind.
The way to do that is to come up with rules, systems, parables, and processes of your own, or to copy someone else’s that are obviously sound.
In the case of Confirmation Bias, we have two wonderful models to copy, one from each of the Charlies — Darwin, and Munger.
Darwin had a rule, one we have written about before but will restate here: make a note, immediately, if you come across a thought or idea that is contrary to something you currently believe.
As for Munger, he implemented a rule in his own life: “I never allow myself to have an opinion on anything that I don’t know the other side’s argument better than they do.”
Now we’re getting somewhere! With the implementation of those two habits and some well-earned deep fluency, you can immediately, tomorrow, start improving the quality of your decision-making.
Sometimes when we get outside the heuristic/biases stuff, it’s less obvious how to make the “rubber hit the road” — and that will be a constant challenge for you as you take this path.
But that’s also the fun part! With every new idea and model you pick up, you also pick up the opportunity to synthesize for yourself a useful little parable to make it stick or a new habit that will help you use it. Over time, you’ll come up with hundreds of them, and people might even look to you when they’re having problems doing it themselves!
Look at Buffett and Munger — both guys are absolute machines, chock full of pithy little rules and stories they use in order to implement and recall what they’ve learned.
For example, Buffett discovered early on the manipulative psychology behind open-outcry auctions. What did he do? He made a rule to never go to one! That’s how it’s done.
Even if you can’t come up with a great rule like that, you can figure out a way to use any new model or idea you learn. It just takes some creative thinking.
Sometimes it’s just a little mental rule or story that sticks particularly well. (Recall one of the prime lessons from our series on memory: Salient, often used, well-associated, and important information sticks best.)
We did this very thing recently with Lee Kuan Yew’s Rule. What a trite way to refer to the simple idea of asking if something actually works…attributing it to a Singaporean political leader!
But that’s exactly the point. Give the thing a name and a life and, like clockwork, you’ll start recalling it. The phrase “Lee Kuan Yew’s Rule” actually appears in my head when I’m approaching some new system or ideology, and as soon as it does, I find myself backing away from ideology and towards pragmatism. Exactly as I’d hoped.
Your goal should be to create about a thousand of those little tools in your head, attached to a deep fluency in the material from which they came.
I can hear the objection coming. Who has time for this stuff?
You do. It’s about making time for the things that really matter. And what could possibly matter more than upgrading your whole mental operating system? I solemnly promise that you’re spending way more time right now making sub-optimal decisions and trying to deal with the fallout.
If you need help learning to manage your time right this second, check out our Productivity Seminar, one that’s changed some people’s lives entirely. The central idea is to become more thoughtful and deliberate with how you spend your hours. When you start doing that, you’ll notice you do have an hour a day to spend on this Big Ideas stuff. It’s worth the 59 bucks.
If you don’t have 59 bucks, at least imitate Cal Newport and start scheduling your days and put an hour in there for “Getting better at making all of my decisions.”
Once you find that solid hour (or more), start using it in the way outlined above, and let the world’s great knowledge actually start making an impact. Just do a little every day.
What you’ll notice, over the weeks and months and years of doing this, is that your mind will really change! It has to! And with that, your life will change too. The only way to fail at improving your brain is by imitating Sisyphus, pushing the boulder halfway up, over and over.
Unless and until you really understand this, you’ll continue spinning your wheels. So here’s your call to action. Go get to it!
(This is a follow-up to our post on the Bias from Liking/Loving.)
Think of a cat snarling and spitting, lashing its tail and standing with its back arched. Its pulse is elevated, its blood vessels are constricted, and its muscles are tense. This reaction may sound familiar, because everyone has experienced the same tensed-up feeling of rage at least once in their lives.
When rage is directed towards an external object, it becomes hate. Just as we learn to love certain things or people, we learn to hate others.
There are several cognitive processes that awaken the hate within us and most of them stem from our need for self-protection.
We tend to dislike people who dislike us (and, true to Newton's third law, with equal strength). The more we perceive they hate us, the more we hate them.
A lot of hate comes from scarcity and competition. Whenever we compete for resources, our own mistakes can mean good fortune for others. In these cases, we affirm our own standing and preserve our self-esteem by blaming others.
Robert Cialdini explains that because of the competitive environment in American classrooms, school desegregation may increase the tension between children of different races instead of decreasing it. Imagine being a secondary school child:
If you knew the right answer and the teacher called on someone else, you probably hoped that he or she would make a mistake so that you would have a chance to display your knowledge. If you were called on and failed, or if you didn’t even raise your hand to compete, you probably envied and resented your classmates who knew the answer.
At first we are merely annoyed. But as the situation fails to improve and our frustration grows, we are slowly drawn into false attributions and hate. We keep blaming "the others" who are doing better, associating them with the loss and scarcity we are experiencing (or perceive ourselves to be experiencing). That is one way our emotional frustration boils over into hate.
The ability to separate friends from enemies has been critical for our safety and survival. Because mistaking the two can be deadly, our mental processes have evolved to quickly spot potential threats and react accordingly. We are constantly feeding information about others into our “people information lexicon” that forms not only our view of individuals, whom we must decide how to act around, but entire classes of people, as we average out that information.
To shortcut our reactions, we classify narrowly and think in dichotomies: right or wrong, good or bad, heroes or villains. (The type of Grey Thinking we espouse is almost certainly unnatural, but, then again, so is a good golf swing.) Since most of us are merely average at everything we do, even superficial and small differences, such as race or religious affiliation, can become an important source of identification. We are, after all, creatures who seek to belong to groups above all else.
Seeing ourselves as part of a special, different and, in its own way, superior group, decreases our willingness to empathize with the other side. This works both ways – the hostility towards the others also increases the solidarity of the group. In extreme cases, we are so drawn towards the inside view that we create a strong picture of the enemy that has little to do with reality or our initial perceptions.
We think of ourselves as compassionate, empathetic and cooperative. So why do we learn to hate?
Part of the answer lies in the fact that we think of ourselves in a specific way. If we cannot reach a consensus, then the other side, which is in some way different from us, must necessarily be uncooperative for our assumptions about our own qualities to hold true.
Our inability to examine the situation from all sides and shake our beliefs, together with self-justifying behavior, can lead us to conclude that others are the problem. Such asymmetric views, amplified by strong perceived differences, often fuel hate.
What started off as odd or difficult to understand quickly turns into the unholy.
If the situation is characterized by competition, we may also see ourselves as a victim. The others, who abuse our rights, take away our privileges or restrict our freedom are seen as bullies who deserve to be punished. We convince ourselves that we are doing good by doing harm to those who threaten to cross the line.
This is understandable. In critical times our survival indeed may depend on our ability to quickly spot and neutralize dangers. The cost of a false positive – mistaking a friend for a foe – is much lower than the potentially fatal false negative of mistaking our adversaries for innocent allies. As a result, it is safest to assume that anything we are not familiar with is dangerous by default. Natural selection, by its nature, “keeps what works,” and this tendency towards distrust of the unfamiliar probably survived in that way.
Physical and psychological pain is very mobilizing. We despise foods that make us nauseous and people who have hurt us. Because we are scared to suffer, we end up either avoiding or destroying the "enemy", which is why revenge can be pursued with such ferocity. In short, hate is a defense against enduring pain repeatedly.
There are several ways that the biases of disliking and hating display themselves to the outer world. The most obvious of them is war, which has been more or less prevalent throughout the history of mankind.
This would lead us to think that war may well be unavoidable. Charlie Munger offers the more moderate opinion that while hatred and dislike cannot be avoided, the instances of war can be minimized by channeling our hate and fear into less destructive behaviors. (A good political system allows for dissent and disagreement without explosions of bloody upheaval.)
Even with the spread of religion, and the advent of advanced civilization, modern war remains pretty savage. But we also get what we observe in present-day Switzerland and the United States, wherein the clever political arrangements of man “channel” the hatreds and dislikings of individuals and groups into nonlethal patterns including elections.
But these dislikings and hatreds, arguably inherent to our nature, never go away completely; they spill over into politics. Think of the dichotomies: the left versus the right, the nationalists versus the communists, the libertarians versus the authoritarians. This might be why there are maxims like: "Politics is the art of marshaling hatreds."
Finally, as we move away from politics, arguably the most sophisticated and civilized way of channeling hatred is litigation. Charlie Munger attributes the following words to Warren Buffett:
“A major difference between rich and poor people is that the rich people can spend their lives suing their relatives.”
While most of us reflect on our memories of growing up with our siblings with fondness, there are cases where the competition for shared attention or resources breeds hatred. If the siblings can afford it, they will sometimes litigate endlessly to lay claims over their parents’ property or attention.
There are several ways that bias from hating can interfere with our normal judgement and lead to suboptimal decisions.
Ignoring Virtues of The Other Side
Michael Faraday was once asked after a lecture whether he implied that a hated academic rival was always wrong. His reply was short and firm: "He's not that consistent." Faraday must have recognized the bias from hating and corrected for it with the witty comment.
What we should recognize here is that no situation is ever black or white. We all have our virtues and we all have our weaknesses. However, when possessed by the strong emotions of hate, our perceptions can be distorted to the extent that we fail to recognize any good in the opponent at all. This is driven by consistency bias, which motivates us to form a coherent (“she is all-round bad”) opinion of ourselves and others.
Association Fueled Hate
The principle of association holds that the nature of the news tends to infect the teller: the worse the experience, the worse the impression of anything related to it.
Association is why we blame the messenger who tells us something we don't want to hear, even when they didn't cause the bad news. (Of course, this creates an incentive not to speak the truth and to avoid delivering bad news.)
A classic example is the unfortunate and confused weatherman who receives hate mail whenever it rains. One went so far as to seek advice from the Arizona State professor of psychology Robert Cialdini, whose work we have discussed before.
Cialdini explained to him that, in light of the destinies of other messengers, he was born lucky. Rain might ruin someone's holiday plans, but it will rarely change the destiny of a nation, as it could for the messengers of ancient Persia: delivering good news meant a feast, whereas delivering bad news meant death.
The weatherman left Cialdini’s office with a sense of privilege and relief.
“Doc,” he said on his way out, “I feel a lot better about my job now. I mean, I’m in Phoenix where the sun shines 300 days a year, right? Thank God I don’t do the weather in Buffalo.”
Under the influence of the liking or disliking bias, we tend to fill gaps in our knowledge by building our conclusions on assumptions based on very little evidence.
Imagine you meet a woman at a party and find her to be a self-centered, unpleasant conversation partner. Now her name comes up as someone who could be asked to contribute to a charity. How likely do you feel it is that she will give to the charity?
In reality, you have no useful knowledge, because there is little to nothing that should make you believe that people who are self-centered are not also generous contributors to charity. The two are unrelated, yet because of the well-known fundamental attribution error, we often assume one is correlated to the other.
By association, you are likely to believe that this woman is not likely to be generous towards charities despite lack of any evidence. And because now you also believe she is stingy and ungenerous, you probably dislike her even more.
This is just an innocent example, but the larger effects of such distortions can be so extreme that they lead to a major miscognition. Each side literally believes that every single bad attribute or crime is attributable to the opponent.
Charlie Munger explains this with a relatively recent example:
When the World Trade Center was destroyed, many Pakistanis immediately concluded that the Hindus did it, while many Muslims concluded that the Jews did it. Such factual distortions often make mediation between opponents locked in hatred either difficult or impossible. Mediations between Israelis and Palestinians are difficult because facts in one side’s history overlap very little with facts from the other side’s. These distortions and the overarching mistrust might be why some conflicts seem to never end.
To varying degrees we value acceptance and affirmation from others. Very few of us wake up wanting to be disliked or rejected. Social approval, at its heart the cause of social influence, shapes behavior and contributes to conformity. Francois VI, Duc de La Rochefoucauld wrote: “We only confess our little faults to persuade people that we have no big ones.”
Remember the old adage: "The nail that sticks out gets hammered down." This is why we don't openly speak the truth or question people; we don't want to be the nail.
It is only normal that we can find more common ground with some people than with others. But are we really destined to fall into the traps of hate or is there a way to take hold of these biases?
That's a question worth over a hundred million lives. Psychologists believe there are ways we can minimize prejudice against others.
Firstly, we can engage with others in sustained close contact to breed familiarity. The contact must not only be prolonged, but also positive and cooperative in nature – either working towards a common cause or against a common enemy.
Secondly, we can reduce prejudice by attaining equal status in all aspects, including education, income and legal rights. This effect is further reinforced when equality is supported not only "on paper" but also ingrained within broader social norms.
And finally, the obvious: we should practice awareness of our own emotions and our ability to hold back the temptation to dismiss others. Whenever confronted with strong feelings, it might simply be best to sit back, breathe, and do our best to eliminate the distorted thinking.
The decisions that we make are rarely impartial. Most of us already know that we prefer to take advice from people that we like. We also tend to more easily agree with opinions formed by people we like. This tendency to judge in favor of people and symbols we like is called the bias from liking or loving.
We are more likely to ignore faults and comply with wishes of our friends or lovers rather than random strangers. We favor people, products, and actions associated with our favorite celebrities. Sometimes we even distort facts to facilitate love. The influence that our friends, parents, lovers and idols exert on us can be enormous.
In general, this is a good thing, a bias that adds on balance rather than subtracts. It helps us form successful relationships, it helps us fall in love (and stay in love), it helps us form attachments with others that give us great happiness.
But we do want to be aware of where this tendency leads us awry.
For example, some people and companies have learnt to use this influence to their advantage.
In his bestseller on social psychology Influence, Robert Cialdini tells a story about the successful strategy of Tupperware, which at the time reported sales of over $2.5 million a day. As many of us know, the company for a long time sold its kitchenware at parties thrown by friends of the potential customers. At each party there was a Tupperware representative taking orders, but the hostess, the friend of the invitees, received a commission.
These potential customers are not blind to the incentives and social pressures involved. Some of them don’t mind it, others do, but all admit a certain degree of helplessness in their situation. Cialdini recalls a conversation with one of the frustrated guests:
It’s gotten to the point now where I hate to be invited to Tupperware parties. I’ve got all the containers I need; and if I wanted any more, I could buy another brand cheaper in the store. But when a friend calls up, I feel like I have to go. And when I get there, I feel like I have to buy something. What can I do? It’s for one of my friends.
We are more likely to buy in a familiar, friendly setting and under the obligation of friendship rather than from an unfamiliar store or a catalogue. We simply find it much harder to say “no” or disagree when it’s a friend. The possibility of ruining the friendship, or seeing our image altered in the eyes of someone we like, is a powerful motivator to comply.
The Tupperware example is a true “lollapalooza” in favor of manipulating people into buying things. Besides the liking tendency, there are several other factors at play: commitment/consistency bias, a bias from stress, an influence from authority, a reciprocation effect, and some direct incentives and disincentives, at least! (Lollapaloozas, something we’ll talk more about in the future, are when several powerful forces combine to create a non-linear outcome. A good way to think of this conceptually for now is that 1+1=3.)
The liking tendency is so strong that it stretches beyond close friendships. It turns out we are also more likely to act in favor of certain types of strangers. Can you recall meeting someone with whom you hit it off instantly, where it almost seemed like you’d known them for years after a 20-minute conversation? Developing such an instant bond with a stranger may seem like a mythical process, but it rarely is. There are several tactics that can be used to make us like something, or someone, more than we otherwise would.
We all like engaging in activities with beautiful people. This is part of an automatic bias that falls into a category called The Halo Effect.
The Halo Effect occurs when a specific, positive characteristic determines the way a person is viewed by others on other, unrelated traits. In the case of beauty, it's been shown that we automatically assign favorable yet unrelated traits, such as talent, kindness, honesty, and intelligence, to those we find physically attractive.
For the most part, this attribution happens unnoticed. For example, attractive candidates received more than twice as many votes as unattractive candidates in the 1974 Canadian federal elections. Despite the ample evidence of predisposition towards handsome politicians, follow-up research demonstrated that nearly three-quarters of Canadians surveyed strongly denied the influence of physical appearance in their voting decisions.
The power of the Halo Effect is that it's mostly happening beneath the level of consciousness.
Similar forces are at play when it comes to hiring decisions and pay. While employers deny that they are strongly influenced by looks, studies show otherwise.
In one study evaluating hiring decisions based on simulated interviews, the applicants’ grooming played a greater role in the outcome than job qualifications. Partly, this has a rational basis. We might assume that someone who shows up without the proper “look” for the job may be deficient in other areas. If they couldn’t shave and put a tie on, how are we to expect them to perform with customers? Partly, though, it’s happening subconsciously. Even if we never consciously say to ourselves that “Better grooming = better employee”, we tend to act that way in our hiring.
These effects go even beyond the hiring phase — attractive individuals in the US and Canada have been estimated to earn an average of 12-14 percent more than their unattractive coworkers. Whether this is due to liking bias or perhaps the increased self-confidence that comes from above-average looks is hard to say.
Appearance is not the only quality that may skew our perceptions in favor of someone. The next one on the list is similarity.
We like people who resemble us. Whether it’s appearance, opinions, lifestyle or background, we tend to favor people who on some dimension are most similar to ourselves.
A great example of similarity bias is the case of dress. Have you ever been at an event where you felt out of place because you were either overdressed or underdressed? The uneasy feelings are not caused only by your imagination. Numerous studies suggest that we are more likely to do favors, such as giving a dime or signing a petition, for someone who looks like us.
Similarity bias can extend to even such ambiguous traits as interests and background. Many salesmen are trained to look for similarities to produce a favorable and trustworthy image in the eyes of their potential customers. In Influence: The Psychology of Persuasion, Robert Cialdini explains:
If there is camping gear in the trunk, the salespeople might mention, later on, how they love to get away from the city whenever they can; if there are golf balls on the back seat, they might remark that they hope the rain will hold off until they can play the eighteen holes they scheduled for later in the day; if they notice that the car was purchased out of state, they might ask where a customer is from and report—with surprise—that they (or their spouse) were born there, too.
These are just a few of many examples which can be surprisingly effective in producing a sweet feeling of familiarity. Multiple studies illustrate the same pattern: we fill out surveys received from people with names similar to ours, buy insurance from agents of similar age and smoking habits, and even decide that those who share our political views deserve their medical treatment sooner than the rest.
There is just one takeaway: even if the similarities are terribly superficial, we still may end up liking the other person more than we should.
“And what will a man naturally come to like and love,
apart from his parent, spouse and child?
Well, he will like and love being liked and loved.”
— Charlie Munger
We are all phenomenal suckers for flattery. These are not my words but those of Robert Cialdini, and they ring true. Perhaps more than anything else in this world we love to be loved and, consequently, we love those who love us.
Consider the technique of Joe Girard, who has repeatedly been called the world's "greatest car salesman" and has made it into the Guinness World Records book.
Each month Joe prints and sends over 13,000 holiday cards to his former customers. While the theme of the card varies depending on the season and celebration, the printed message always remains the same. On each of those cards Girard prints three simple words, "I like you," and his name. He explains:
“There’s nothing else on the card, nothin’ but my name. I’m just telling ’em that I like ’em.” “I like you.” It came in the mail every year, 12 times a year, like clockwork.
Joe understood a simple fact about humans – we love to be loved.
As numerous experiments show, regardless of whether the praise is deserved or not, we cannot help but develop warm feelings toward those who provide it. Our reaction can be so automatic that we develop liking even when the attempt to win our favor is an obvious one, as in the case of Joe.
In addition to liking those that like us and look like us, we also tend to like those who we know. That’s why repeated exposure can be a powerful tool in establishing liking.
There is a fun experiment you can do to understand the power of familiarity.
Take a picture of yourself and create a mirror image in one of the editing tools. Now with the two pictures at hand decide which one – the real or the mirror image you like better. Show the two pictures to a friend and ask her to choose the better one as well.
If you and your friend are like the group on whom this trick was tried, you should notice something odd. Your friend will prefer the true print, whereas you will think you look better in the mirror image. This is because you both prefer the faces you are used to. Your friend always sees you from her perspective, whereas you have learnt to recognize and love your mirror image.
The effect of course extends beyond faces into places, names and even ideas.
For example, in elections we might prefer candidates whose names sound more familiar. The Ohio Attorney-General post was claimed by a man who, shortly before his candidacy, changed his last name to Brown – a family name of Ohio political tradition. Apart from his surname, there was little to nothing that separated him from other equally if not more capable candidates.
How could such a thing happen? The answer lies partly in the unconscious way that familiarity affects our liking. Often we don’t realize that our attitude toward something has been influenced by the number of times we have been exposed to it in the past.
Charisma or attraction are not prerequisites for liking — a mere association with someone you like or trust can be enough.
The bias from association shows itself in many other domains and is especially strong when we associate with the person we like the most — ourselves. For example, the relationship between a sports fan and his local team can be highly personal even though the association is often based only on shared location. For the fan, however, the team is an important part of his self-identity. If the team or athlete wins, he wins as well, which is why sports can be so emotional. The most dedicated fans are ready to get into fights, burn cars or even kill to defend the honor of their team.
Such associated sense of pride and achievement is as true for celebrities as it is for sports. When Kevin Costner delivered his acceptance speech after winning the best picture award for Dances With Wolves, he said:
“While it may not be as important as the rest of the world situation, it will always be important to us. My family will never forget what happened here; my Native American brothers and sisters, especially the Lakota Sioux, will never forget, and the people I went to high school with will never forget.”
The interesting part of his words is the notion that his high school peers will remember, which is probably true. His former classmates are likely to tell people that they went to school with Costner, even though they themselves had no connection with the success of the movie.
Costner’s words illustrate that even a trivial association with success may reap benefits and breed confidence.
Who else do we like besides ourselves, celebrities and our sports teams?
People we’ve met through those who are close to us – our neighbors, friends and family. It is common sense that a referral from someone we trust is enough to trigger mild liking and favorable initial opinions.
There are a number of companies that use friend referral as a sales tactic. Network providers, insurers and other subscription services offer a number of benefits for those of us who give away our friends’ contact details.
The success of this method rests on the implicit idea that turning down the sales rep who says “your friend Jenny/Allan suggested I call you” feels nearly as bad as turning down Jenny or Allan themselves. This tactic, when well executed, leads to a never-ending chain of new customers.
Perhaps the right question to ask here is not “how can we avoid the bias from liking”, but when should we?
Someone who is conditioned to like the right people and pick their idols carefully can greatly benefit from these biases. Charlie Munger recalls that both he and Warren Buffett benefitted from liking admirable persons:
One common, beneficial example for us both was Warren’s uncle, Fred Buffett, who cheerfully did the endless grocery-store work that Warren and I ended up admiring from a safe distance. Even now, after I have known so many other people, I doubt if it is possible to be a nicer man than Fred Buffett was, and he changed me for the better.
The keywords here are “from a safe distance”.
If dealing with salesmen and others who clearly benefit from your liking, it might be a good idea to check whether you have been influenced. In these unclear cases Cialdini advises us to focus on our feelings rather than the other person’s actions that may produce liking. Ask yourself how much of what you feel is due to liking versus the actual facts of the situation.
The time to call out the defense is when we feel ourselves liking the practitioner more than we should under the circumstances, when we feel manipulated.
Once we have recognized that we like the requester more than we would expect under the given circumstances, we should take a step back and question ourselves. Are you doing the deal because you like someone, or because it is indeed the best option out there?
Still Interested? Check out some other mental models and biases.
Daniel Kahneman and Amos Tversky spent decades of psychology research disentangling the patterns in errors of human reasoning. Over the course of their work they discovered a variety of logical fallacies that we tend to make when facing information that appears vaguely familiar. These fallacies lead to bias – irrational behavior based on beliefs that are not always grounded in reality.
In his book Thinking, Fast and Slow, which summarizes his and Tversky's life work, Kahneman introduces biases that stem from the conjunction fallacy – the false belief that a conjunction of two events is more probable than one of the events on its own.
Probability can be a difficult concept. Most of us have an intuitive understanding of what probability is, but there is little consensus on what it actually means. It is just as vague and subjective a concept as democracy, beauty or freedom. However, this is not always troublesome – we can still easily discuss the notion with others. Kahneman reflects:
In all the years I spent asking questions about the probability of events, no one ever raised a hand to ask me, “Sir, what do you mean by probability?” as they would have done if I had asked them to assess a strange concept such as globability.
Everyone acted as if they knew how to answer my questions, although we all understood that it would be unfair to ask them for an explanation of what the word means.
While logicians and statisticians might disagree, probability to most of us is simply a tool that describes our degree of belief. For instance, we know that the sun will rise tomorrow and we consider it near impossible that there will be two suns up in the sky instead of one. In addition to the extremes, there are also events which lie somewhere in the middle on the probability spectrum, such as the degree of belief that it will rain tomorrow.
Despite its vagueness, probability has its virtues. Assigning probabilities helps us make the degree of belief actionable and also communicable to others. If we believe that the probability it will rain tomorrow is 90%, we are likely to carry an umbrella and suggest our family do so as well.
Most of us are already familiar with representativeness and base rates. Consider the classic example of a jar containing x black and y white marbles. It is a simple exercise to tell what the probability of drawing each color is if you know their base rates (proportions). Using base rates is the obvious approach for estimation when no other information is provided.
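As a quick sketch of that "simple exercise" (the counts below are arbitrary, chosen only for illustration), the base-rate estimate is just the proportion:

```python
# Arbitrary example counts, for illustration only.
black, white = 7, 3

# With no other information, the base rates are the best estimate
# of the probability of drawing each color.
p_black = black / (black + white)   # 7/10
p_white = white / (black + white)   # 3/10

print(p_black, p_white)  # 0.7 0.3
```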
However, Kahneman managed to prove that we have a tendency to ignore base rates in light of specific descriptions. He calls this phenomenon the Representativeness Bias. To illustrate representativeness bias, consider the example of seeing a person reading The New York Times on the New York subway. Which do you think would be a better bet about the reading stranger?
1) She has a PhD.
2) She does not have a college degree.
Representativeness would tell you to bet on the PhD, but this is not necessarily a good idea. You should seriously consider the second alternative, because many more non-graduates than PhDs ride in New York subways. While a larger proportion of PhDs may read The New York Times, the total number of New York Times readers with only high school degrees is likely to be much larger, even if the proportion itself is very slim.
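A back-of-the-envelope calculation makes the argument concrete. The shares and reading rates below are invented for illustration (not survey data); only the structure of the reasoning matters:

```python
# Invented numbers, for illustration only.
riders = 10_000_000           # hypothetical pool of subway riders

phd_share = 0.01              # assume 1% hold a PhD
no_degree_share = 0.50        # assume 50% have no college degree

phd_read_rate = 0.30          # assume 30% of PhDs read the Times
no_degree_read_rate = 0.05    # assume 5% of non-graduates do

phd_readers = riders * phd_share * phd_read_rate
no_degree_readers = riders * no_degree_share * no_degree_read_rate

# Despite a 6x higher reading rate, PhD readers (~30,000) are dwarfed
# by non-graduate readers (~250,000) because the PhD base rate is 50x smaller.
assert no_degree_readers > phd_readers
```

The specific description ("reads The New York Times") pulls our intuition toward the representative group, but the base rates dominate the arithmetic.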
In a series of similar experiments, Kahneman’s subjects failed to recognize the base rates in light of individual information. This is unsurprising. Kahneman explains:
On most occasions, people who act friendly are in fact friendly. A professional athlete who is very tall and thin is much more likely to play basketball than football. People with a PhD are more likely to subscribe to The New York Times than people who ended their education after high school. Young men are more likely than elderly women to drive aggressively.
While following representativeness bias might improve your overall accuracy, it will not always be the statistically optimal approach.
Michael Lewis in his bestseller Moneyball tells the story of the Oakland A's general manager, Billy Beane, who recognized this fallacy and used it to his advantage. When recruiting new players for the team, instead of relying on scouts he relied heavily on statistics of past performance. This approach allowed him to build a team of great players who were passed up by other teams because they did not look the part. Needless to say, the team achieved excellent results at a low cost.
While representativeness bias occurs when we fail to account for low base rates, conjunction fallacy occurs when we assign a higher probability to an event of higher specificity. This violates the laws of probability.
Consider the following study:
Participants were asked to rank four possible outcomes of the next Wimbledon tournament from most to least probable. Björn Borg was the dominant tennis player of the day when the study was conducted. These were the outcomes:
A. Borg will win the match.
B. Borg will lose the first set.
C. Borg will lose the first set but win the match.
D. Borg will win the first set but lose the match.
How would you order them?
Kahneman was surprised to see that most subjects ordered the chances by directly contradicting the laws of logic and probability. He explains:
The critical items are B and C. B is the more inclusive event and its probability must be higher than that of an event it includes. Contrary to logic, but not to representativeness or plausibility, 72% assigned B a lower probability than C.
If you think the problem through carefully, the reason becomes clear: losing the first set must always, by definition, be at least as probable as losing the first set and winning the match.
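The conjunction rule can be sketched in a few lines; the probabilities below are invented purely for illustration:

```python
# The conjunction rule: P(lose first set) can never be smaller than
# P(lose first set AND win the match), because the joint event is a
# subset of the simpler one. The inputs here are assumed, not real odds.
p_lose_first_set = 0.30       # assumed chance Borg drops the first set
p_win_given_lost_set = 0.60   # assumed chance he still wins the match

# Joint probability of the more specific, more "plausible" scenario
p_lose_set_and_win = p_lose_first_set * p_win_given_lost_set

# Multiplying by a factor <= 1 can only shrink the probability.
assert p_lose_set_and_win <= p_lose_first_set
print(p_lose_first_set, p_lose_set_and_win)
```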
As discussed in our piece on the Narrative Fallacy, the best-known and most controversial of Kahneman and Tversky’s experiments involved a fictitious lady called Linda. The fictional character was created to illustrate the role heuristics play in our judgement and how it can be incompatible with logic. This is how they described Linda.
Linda is thirty-one years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.
Kahneman conducted a series of experiments showing that representativeness tends to cloud our judgments and that we ignore base rates in favor of stories. The Linda problem began by asking subjects to rank eight scenarios in order of likelihood:
Linda is a teacher in elementary school.
Linda works in a bookstore and takes yoga classes.
Linda is active in the feminist movement.
Linda is a psychiatric social worker.
Linda is a member of the League of Women Voters.
Linda is a bank teller.
Linda is an insurance salesperson.
Linda is a bank teller and is active in the feminist movement.
Kahneman was startled to see that his subjects judged the likelihood of Linda being a bank teller and a feminist more likely than her being just a bank teller. As explained earlier, doing so makes little sense. He went on to explore the phenomenon further:
In what we later described as “increasingly desperate” attempts to eliminate the error, we introduced large groups of people to Linda and asked them this simple question:
Which alternative is more probable?
Linda is a bank teller.
Linda is a bank teller and is active in the feminist movement.
This stark version of the problem made Linda famous in some circles, and it earned us years of controversy. About 85% to 90% of undergraduates at several major universities chose the second option, contrary to logic.
What is especially interesting about these results is that, even when aware of the biases in place, we do not discard them.
When I asked my large undergraduate class in some indignation, “Do you realize that you have violated an elementary logical rule?” someone in the back row shouted, “So what?” and a graduate student who made the same error explained herself by saying, “I thought you just asked for my opinion.”
The issue is not confined to students; it also affects professionals.
The naturalist Stephen Jay Gould described his own struggle with the Linda problem. He knew the correct answer, of course, and yet, he wrote, “a little homunculus in my head continues to jump up and down, shouting at me—‘but she can’t just be a bank teller; read the description.’”
Our brains simply seem to prefer consistency over logic.
The representativeness and conjunction fallacies occur because we take a mental shortcut from the perceived plausibility of a scenario to its probability.
The most coherent stories are not necessarily the most probable, but they are plausible, and the notions of coherence, plausibility, and probability are easily confused by the unwary. Representativeness belongs to a cluster of closely related basic assessments that are likely to be generated together. The most representative outcomes combine with the personality description to produce the most coherent stories.
Kahneman warns us about the effects of these biases on our perception of expert opinion and forecasting. He explains that we are more likely to believe scenarios that are illustrative rather than probable.
The uncritical substitution of plausibility for probability has pernicious effects on judgments when scenarios are used as tools of forecasting. Consider these two scenarios, which were presented to different groups, with a request to evaluate their probability:
A massive flood somewhere in North America next year, in which more than 1,000 people drown
An earthquake in California sometime next year, causing a flood in which more than 1,000 people drown
The California earthquake scenario is more plausible than the North America scenario, although its probability is certainly smaller. As expected, probability judgments were higher for the richer and more detailed scenario, contrary to logic. This is a trap for forecasters and their clients: adding detail to scenarios makes them more persuasive, but less likely to come true.
In order to appreciate the role of plausibility, he suggests we have a look at an example without an accompanying explanation.
Which alternative is more probable?
Jane is a teacher.
Jane is a teacher and walks to work.
In this case plausibility and coherence offer no quick answer to the probability question, and we readily conclude that the first option is more probable. The rule is that in the absence of a competing intuition, logic prevails.
The first step toward thinking clearly is to question how you think. We should not simply believe whatever comes to mind; our beliefs must be constrained by logic. You don’t have to become an expert in probability to tame your intuition, but a grasp of a few simple concepts helps. Two rules are worth repeating in light of representativeness bias:
1) The probabilities of all possible outcomes add up to 100%.
This means that if you believe there’s a 90% chance it will rain tomorrow, there’s a 10% chance it will not rain tomorrow.
However, since you believe that there is only 90% chance that it will rain tomorrow, you cannot be 95% certain that it will rain tomorrow morning.
We typically make this type of error when we mean to say that, if it rains, there’s a 95% chance it will happen in the morning. That is a different claim: under it, the probability of rain tomorrow morning is 0.9 × 0.95 = 85.5%.
It also means the probability that it rains tomorrow, but not in the morning, is 90.0% − 85.5% = 4.5%.
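The arithmetic above can be checked in a few lines:

```python
# Checking the rain example: a conditional probability multiplies the
# probability of the event it is conditioned on.
p_rain = 0.90                # belief: 90% chance of rain tomorrow
p_morning_given_rain = 0.95  # belief: if it rains, 95% chance it's in the morning

# Joint probability: rain tomorrow AND in the morning
p_rain_morning = p_rain * p_morning_given_rain     # 0.855

# Rain tomorrow but NOT in the morning
p_rain_not_morning = p_rain - p_rain_morning       # 0.045

print(p_rain_morning, p_rain_not_morning)
```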
2) The second rule is Bayes’ rule.
It tells us how to adjust our beliefs in light of the diagnosticity of the evidence. In its odds form, the rule states that the posterior odds equal the prior odds multiplied by the likelihood ratio. Kahneman crystallizes two keys to disciplined Bayesian reasoning:
• Anchor your judgment of the probability of an outcome on a plausible base rate.
• Question the diagnosticity of your evidence.
Kahneman explains it with an example:
If you believe that 3% of graduate students are enrolled in computer science (the base rate), and you also believe that the description of Tom is 4 times more likely for a graduate student in computer science than in other fields, then Bayes’s rule says you must believe that the probability that Tom is a computer science student is now 11%.
Four times as likely means that we expect roughly 80% of computer science students to resemble Tom, versus 20% of other students. We use these proportions to obtain the adjusted probability. (The calculation: 0.03 × 0.8 / (0.03 × 0.8 + 0.97 × 0.2) ≈ 11%.)
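The same update can be written as a small helper; `bayes_posterior` is a hypothetical name for illustration, not something from Kahneman:

```python
def bayes_posterior(prior, likelihood_ratio):
    """Posterior probability from a prior probability and a likelihood ratio,
    using the odds form of Bayes' rule: posterior odds = prior odds * LR."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    # Convert odds back to a probability
    return posterior_odds / (1 + posterior_odds)

# Tom: 3% base rate, and the description is 4x as likely
# for a computer science student as for anyone else.
p = bayes_posterior(0.03, 4)
print(round(p, 2))  # prints 0.11
```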
The easiest way to become better at making decisions is by making sure you question your assumptions and follow strong evidence. When evidence is anecdotal, adjust minimally and trust the base rates. Odds are, you will be pleasantly surprised.
Want More? Check out our ever-growing collection of mental models and biases and get to work.
“The difficulty lies not in the new ideas,
but in escaping the old ones, which ramify,
for those brought up as most of us have been,
into every corner of our minds.”
— John Maynard Keynes
Ben Franklin tells an interesting little story in his autobiography. Facing opposition to being reelected Clerk of the General Assembly, he sought to gain favor with the member so vocally opposing him:
Having heard that he had in his library a certain very scarce and curious book, I wrote a note to him, expressing my desire of perusing that book, and requesting he would do me the favour of lending it to me for a few days. He sent it immediately, and I return’d it in about a week with another note, expressing strongly my sense of the favour.
When we next met in the House, he spoke to me (which he had never done before), and with great civility; and he ever after manifested a readiness to serve me on all occasions, so that we became great friends, and our friendship continued to his death.
This is another instance of the truth of an old maxim I had learned, which says, “He that has once done you a kindness will be more ready to do you another, than he whom you yourself have obliged.”
The man, having lent Franklin a rare and valuable book, sought to stay consistent with his past actions. He wouldn’t, of course, lend a book to an unworthy man, would he?
Scottish philosopher and economist Adam Smith said in The Theory of Moral Sentiments:
The opinion which we entertain of our own character depends entirely on our judgments concerning our past conduct. It is so disagreeable to think ill of ourselves, that we often purposely turn away our view from those circumstances which might render that judgment unfavorable.
Even when it acts against our best interests, our tendency is to stay consistent with our prior commitments, ideas, thoughts, words, and actions. As a byproduct of confirmation bias, we rarely seek out disconfirming evidence for what we believe. This, after all, makes it easier to maintain our positive self-image.
Part of the reason this happens is our desire to appear and feel like we’re right. We also want to show people our conviction. This shouldn’t come as a surprise. Society values consistency and conviction even when it is wrong.
We associate consistency with intellectual and personal strength, rationality, honesty and stability. On the other hand, the person who is perceived as inconsistent is also seen as confused, two-faced, even mentally ill in certain extreme circumstances.
A politician, for example, who wavers, gets labelled a flip flopper and can lose an election over it (John Kerry). A CEO who risks everything on a successful bet and holds a conviction that no one else holds is held to be a hero (Elon Musk).
But it’s not just our words and actions that nudge our subconscious, but also how other people see us. There is a profound truth behind Eminem’s lyrics: I am, whatever you say I am. If I wasn’t, then why would I say I am?
If you think I’m talented, I become more talented in your eyes — in part because you labelling me as talented filters the way you see me. You start seeing more of my genius and less of my normal-ness, simply by way of staying consistent with your own words.
In his book Outliers, Malcolm Gladwell describes how teachers’ simply identifying students as smart not only affected how the teachers saw their work but, more importantly, affected the opportunities teachers gave those students. Smarter students received better opportunities, which, we can reason, gave them better experiences. This in turn made them better. It is almost a self-fulfilling prophecy.
And the more we invest in our beliefs about ourselves or others (think money, effort, or pain), the more sunk costs we have and the harder it becomes to change our minds. It doesn’t matter if we’re right. It doesn’t matter if the Ikea bookshelf sucks; we’re going to love it.
In Too Much Invested to Quit, psychologist Allan Teger says something similar of the Vietnam War:
The longer the war continued, the more difficult it was to justify the additional investments in terms of the value of possible victory. On the other hand, the longer the war continued, the more difficult it became to write off the tremendous losses without having anything to show for them.
As a consequence, there are few rules we abide by more faithfully than “Don’t make promises you can’t keep.” Generally speaking, this is a great rule that holds society together by ensuring that our commitments are, for the most part, real and reliable.
Aside from the benefits of preserving our public image, being consistent is simply easier and leads to a more predictable and consistent life. By being consistent in our habits and with previous decisions, we significantly reduce the need to think and can go on “auto-pilot” for most of our lives.
However beneficial these biases are, they too deserve deeper understanding and caution. Sometimes our drive to appear consistent can lure us into choices we otherwise would consider against our best interests. This is the essence of a harmful bias as opposed to a benign one: We are hurting ourselves and others by committing it.
Part of why commitment can be so dangerous is that it is a slippery slope: a single slip can send you sliding down completely. Compliance with even tiny requests, which initially appear insignificant, therefore has a good chance of leading to full commitment later.
People whose job it is to persuade us know this.
Among the more blunt techniques on the spectrum are those reported by a used-car sales manager in Robert Cialdini’s book Influence. The dealer knows the power of commitment and that if we comply a little now, we are likely to comply fully later on. His advice to other sellers goes as follows:
“Put ’em on paper. Get the customer’s OK on paper. Get the money up front. Control ’em. Control the deal. Ask ’em if they would buy the car right now if the price is right. Pin ’em down.”
This technique will be obvious to most of us. However, there are also more subtle ways to make us comply without us noticing.
A great example of a subtle compliance practitioner is Jo-Ellen Demitrius, the woman currently reputed to be the best consultant in the business of jury selection.
When screening potential jurors before a trial, she asks an artful question:
“If you were the only person who believed in my client’s innocence, could you withstand the pressure of the rest of the jury to change your mind?”
It’s unlikely that any self-respecting prospective juror would answer negatively. And, now that the juror has made the implicit promise, it is unlikely that once selected he will give in to the pressure exerted by the rest of the jury.
Innocent questions and requests like this can be a great springboard for initiating a cycle of compliance.
A great case study in compliance is the set of tactics Chinese captors used on American prisoners during the Korean War. The Chinese were particularly effective at getting Americans to inform on one another. In fact, nearly all American prisoners in the Chinese camps are said to have collaborated with the enemy in one way or another.
This was striking, since such behavior was rarely observed among American prisoners during WWII. What tactics explained the Chinese success?
Unlike the North Koreans, the Chinese did not treat the victims harshly. Instead they engaged in what they called “lenient policy” towards the captives, which was, in reality, a clever series of psychological assaults.
In their exploits the Chinese relied heavily on commitment and consistency tactics to receive the compliance they desired. At first, the Americans were not too collaborative, as they had been trained to provide only name, rank, and serial number, but the Chinese were patient.
They started with seemingly small but frequent requests to repeat statements like “The United States is not perfect” and “In a Communist country, unemployment is not a problem.” Once these requests had been complied with, they escalated. Someone who had just agreed that the United States was not perfect would be encouraged to expand on his thoughts about specific imperfections. Later he might be asked to write up and read out a list of these imperfections in a discussion group with other prisoners. “After all, it’s what you really believe, isn’t it?”

The Chinese would then broadcast the essay readings not only to the whole camp, but to other camps and even to the American forces in South Korea. Suddenly the soldier would find himself a “collaborator” of the enemy.
The awareness that the essays did not contradict his beliefs could even change his self-image to be consistent with the new “collaborator” label, often resulting in more cooperation with the enemy.
It is not surprising that very few American soldiers were able to avoid such “collaboration” altogether.
The pattern of small requests growing into bigger ones, as applied by the Chinese to American soldiers, is also called the foot-in-the-door technique. It was first documented by the psychologists Freedman and Fraser, in an experiment in which a fake “volunteer worker” asked homeowners to allow a public-service billboard to be installed on their front lawns.
To give them an idea of how it would look, the homeowners were shown a photograph of an attractive house almost completely obscured by an ugly sign reading DRIVE CAREFULLY. While the request was quite understandably denied by 83 percent of residents, one particular group reacted favorably.
Two weeks earlier, a different “volunteer worker” had asked the members of this group to display a much smaller sign reading BE A SAFE DRIVER. The request was so negligible that nearly all of them complied. Yet its downstream effects were so powerful that 76 percent of this group later agreed to the bigger, far less reasonable request (the big ugly sign).
At first, even the researchers themselves were baffled by the results and repeated the experiment on similar setups. The effect persisted. Finally, they proposed that the subjects must have distorted their own views about themselves as a result of their initial actions:
What may occur is a change in the person’s feelings about getting involved or taking action. Once he has agreed to a request, his attitude may change, he may become, in his own eyes, the kind of person who does this sort of thing, who agrees to requests made by strangers, who takes action on things he believes in, who cooperates with good causes.
The rule goes that once someone has instilled our self-image where they want it to be, we will comply naturally with the set of requests that adhere to the new self-view. Therefore we must be very careful about agreeing to even the smallest requests. Not only can it make us comply with larger requests later on, but it can make us even more willing to do favors that are only remotely connected to the earlier ones.
Even Cialdini, someone who knows this bias inside-out, admits to his fear that his behavior will be affected by consistency bias:
It scares me enough that I am rarely willing to sign a petition anymore, even for a position I support. Such an action has the potential to influence not only my future behavior but also my self-image in ways I may not want.
Further, once a person’s self-image is altered, all sorts of subtle advantages become available to someone who wants to exploit that new image.
Have you ever encountered a deal that seemed a little too good to be true, only to be disappointed later? You had already made up your mind, had gotten excited, and were ready to pay or sign, until a calculation error was discovered. With the adjusted price, the offer no longer looked all that great.
It is likely that the error was not an accident – this technique, also called low-balling, is often used by compliance professionals in sales. Cialdini, having observed the phenomenon among car dealers, tested its effects on his own students.
In an experiment with colleagues, he recruited two groups of students for a 7:00 AM study on “thinking processes.” When researchers called the first group, they disclosed the 7:00 AM start time immediately. Unsurprisingly, only 24 percent wanted to participate.
The other group of students, however, was thrown a low-ball. They were first asked only whether they wanted to take part in a study of thinking processes; 56 percent replied positively. Only then, to those who had agreed, was the 7:00 AM meeting time revealed.
These students were given the opportunity to opt out, but none of them did. In fact, driven by their commitment, 95 percent of the low-balled students showed up to the Psychology Building at 7:00 AM as they had promised.
Do you recognize the similarities between the experiment and the sales situation?
The script of low-balling tends to be the same:
First, an advantage is offered that induces a favorable decision in the manipulator’s direction. Then, after the decision has been made, but before the bargain is sealed, the original advantage is deftly removed (i.e., the price is raised, the time is changed, etc.).
It would seem surprising that anyone would buy under these circumstances, yet many do. Often the self-created justifications provide so many new reasons for the decision that even when the dealer pulls away the original favorable rationale, like a low price, the decision is not changed. We stick with our old decision even in the face of new information!
Of course not everyone complies, but that’s not the point. The effect is strong enough to hold for a good number of buyers, students or anyone else whose rate of compliance we may want to raise.
The first real defense to consistency bias is awareness about the phenomenon and the harm a certain rigidity in our decisions can cause us.
Robert Cialdini suggests two approaches to recognizing when consistency biases are unduly creeping into our decision making. The first is to listen to our stomachs. Stomach signs appear when we realize that the request being pushed is something we don’t want to do.
He recalls a time when a beautiful young woman tried to sell him a membership he most certainly did not need by using the tactics displayed above. He writes:
I remember quite well feeling my stomach tighten as I stammered my agreement. It was a clear call to my brain, “Hey, you’re being taken here!” But I couldn’t see a way out. I had been cornered by my own words. To decline her offer at that point would have meant facing a pair of distasteful alternatives: If I tried to back out by protesting that I was not actually the man-about-town I had claimed to be during the interview, I would come off a liar; trying to refuse without that protest would make me come off a fool for not wanting to save $1,200. I bought the entertainment package, even though I knew I had been set up. The need to be consistent with what I had already said snared me.
But then eventually he came up with the perfect counter-attack for later episodes, which allowed him to get out of the situation gracefully.
Whenever my stomach tells me I would be a sucker to comply with a request merely because doing so would be consistent with some prior commitment I was tricked into, I relay that message to the requester. I don’t try to deny the importance of consistency; I just point out the absurdity of foolish consistency. Whether, in response, the requester shrinks away guiltily or retreats in bewilderment, I am content. I have won; an exploiter has lost.
The second approach concerns the signs that are felt within our heart and is best used when it is not really clear whether the initial commitment was wrongheaded.
Imagine you have recognized that your initial assumptions about a particular deal were not correct. The car is not extraordinarily cheap and the experiment is not as fun if you have to wake up at 6 AM to make it. Here it helps to ask one simple question:
“Knowing what I know, if I could go back in time, would I make the same commitment?”
Ask it frequently enough and the answer might surprise you.
Want More? Check out our ever-growing library of mental models and biases.