
Blog Posts, Book Reviews, and Abstracts: On Shallowness

We’re quite glad that you read Farnam Street, and we hope we’re always offering you a massive amount of value. (If not, email us and tell us what we can do more effectively.)

But there’s a message all of our readers should appreciate: Blog posts are not enough to generate the deep fluency you need to truly understand or get better at something. We offer a starting point, not an end point.

The same goes for book reviews, abstracts, CliffsNotes, and a good deal of short-form journalism.

This is a hard message for those who want a shortcut. They want the “gist” and the “high-level takeaways” without doing the work or eating any of the broccoli. They think that’s all it takes: check out a five-minute read, and instantly their decision-making and their understanding of the world will improve. Most blogs, of course, encourage this kind of shallowness, because it makes you feel that the whole thing is pretty easy.

Here’s the problem: The world is more complex than that. It doesn’t actually work this way. The nuanced detail behind every “high-level takeaway” gives you the context needed to use it in the real world: the exceptions, the edge cases, and the contradictions.

Let me give you an example.

A high-level takeaway from reading Kahneman’s Thinking, Fast and Slow would be that we are subject to something he and Amos Tversky call the Representativeness Heuristic. We create models of things in our heads, and then fit our real-world experiences to the model, often over-fitting drastically. A very useful idea.

However, that’s not enough. There are so many follow-up questions. Where do we make the most mistakes? Why does our mind create these models? Where is this generally useful? What are the nuanced examples of where this tendency fails us? And so on. Just knowing that the heuristic exists won’t perform any work for you.

Or take the rise of the human species as laid out by Yuval Harari. It’s great to post on his theory: how myths laid the foundation for our success, how “natural” is probably a useless concept the way it’s typically used, and how biology is the great enabler.

But Harari’s book itself contains the relevant detail that fleshes all of this out. And further, his bibliography is full of resources that demand your attention if you want even more support. How did he develop that idea? You have to look to find out.

Why do all this? Because without the massive, relevant detail, your understanding is a house of cards.

What Farnam Street and a lot of other great resources give you is something like a brief map of the territory.

Welcome to Colonial Williamsburg! Check out the re-enactors, the museum, and the theatre. Over there is the Revolutionary City. Gettysburg is 4 hours north. Washington D.C. is closer to 2.5 hours.

Great – now you have the lay of the land. Time to dig in and actually learn about the American Revolution. (This book is awesome, if you actually want to do that.)

Going back to Kahneman, one of his and Tversky’s great findings was the concept of the Availability Heuristic. Basically, the mind operates on what it has close at hand.

As Kahneman puts it, “An essential design feature of the associative machine is that it represents only activated ideas. Information that is not retrieved (even unconsciously) from memory might as well not exist. System 1 excels at constructing the best possible story that incorporates ideas currently activated, but it does not (cannot) allow for information it does not have.”

That means that in the moment of decision making, when you’re thinking hard on some complex problem you face, it’s unlikely that your mind is working all that successfully without the details. It doesn't have anything to draw on. It’d be like a chess player who read a book about great chess players, but who hadn’t actually studied all of their moves. Not very effective.

The great difficulty, of course, is that we lack the time to dig deep into everything. Opportunity costs and trade-offs are quite real.

That’s why you must develop excellent filters. What’s worth learning this deeply? We think it’s the first-principle style mental models: the great ideas from physical systems, biological systems, and human systems. The new-new thing you’re studying is probably either (a) wrong or (b) built on one of those great ideas anyway. Farnam Street, in a way, is just a giant filtering mechanism to get you started down the hill.

But don't stop there. Don't stop at the starting line. Resolve to increase your depth and stop thinking you can have it all in 5 minutes or less. Use our stuff, and whoever else's stuff you like, as an entrée to the real thing.

(P.S. If you need to learn how to focus, check this out; if you need to learn how to read more effectively, go with this.)

Charlie Munger on Getting Rich, Wisdom, Focus, Fake Knowledge and More

“In the chronicles of American financial history,” writes David Clark in Tao of Charlie Munger: A Compilation of Quotes from Berkshire Hathaway's Vice Chairman on Life, Business, and the Pursuit of Wealth, “Charlie Munger will be seen as the proverbial enigma wrapped in a paradox— he is both a mystery and a contradiction at the same time.”

On one hand, Munger received an elite education and it shows: He went to Caltech to train as a meteorologist for the Second World War, then attended Harvard Law School and eventually opened his own law firm. That part of his success makes sense.

Yet here's a man who never took a single course in economics, business, marketing, finance, psychology or accounting, and managed to become one of the greatest, most admired, and most honorable businessmen of our age, noted by essentially all observers for the originality of his thoughts, especially about business and human behavior. You don't learn that in law school, at Harvard or anywhere else.

Bill Gates said of him: “He is truly the broadest thinker I have ever encountered.” His business partner Warren Buffett put it another way: “He comes equipped for rationality…I would say that to try and typecast Charlie in terms of any other human that I can think of, no one would fit. He's got his own mold.”

How does such an extreme result happen? How is such an original and uncommonly capable mind formed? In the case of Munger, it's clearly a combination of unusual genetics and an unusual approach to learning and life.

While we can't have his genetics, we can try to steal his approach to rationality. There's almost no limit to the amount one could learn from studying the Munger mind, so let's at least get started by running down some of his best ideas.

***

Wisdom and Circle of Competence

“Knowing what you don’t know is more useful than being brilliant.”
“Acknowledging what you don’t know is the dawning of wisdom.”

Identify your circle of competence and use your knowledge, when possible, to stay away from things you don't understand. There are no points for difficulty at work or in life. Avoiding stupidity is easier than seeking brilliance.

Of course this relates to another of Munger's sayings, “People are trying to be smart—all I am trying to do is not to be idiotic, but it’s harder than most people think.”

And this reminds me of perhaps my favorite Mungerism of all time, the very quote that sits right beside my desk:

“It is remarkable how much long-term advantage people like us have gotten by trying to be consistently not stupid, instead of trying to be very intelligent.”

***

Divergence

“Mimicking the herd invites regression to the mean.”

Here's a simple axiom to live by: If you do what everyone else does, you're going to get the same result that everyone else gets. This means, taking out luck (good or bad), if you act average, you're going to be average. If you want to move away from average, you must diverge. You must be different. And if you want to outperform, you must be different and correct. As Munger would say, “How could it be otherwise?”

***

Know When to Fold 'Em

“Life, in part, is like a poker game, wherein you have to learn to quit sometimes when holding a much-loved hand— you must learn to handle mistakes and new facts that change the odds.”

Mistakes are an opportunity to grow. How we handle adversity is up to us. This is how we become personally antifragile.

***

False Models

Echoing Einstein, who said that “Not everything that counts can be counted, and not everything that can be counted counts,” Munger said about his and Buffett's shift to acquiring high quality businesses for Berkshire Hathaway:

“Once we’d gotten over the hurdle of recognizing that a thing could be a bargain based on quantitative measures that would have horrified Graham, we started thinking about better businesses.”

***

Being Lazy

“Sit on your ass. You’re paying less to brokers, you’re listening to less nonsense, and if it works, the tax system gives you an extra one, two, or three percentage points per annum.”

Time is the friend of the good business and the enemy of the poor business. It's also the friend of knowledge and the enemy of the new and novel. As Seneca said, “Time discovers truth.”

***

Investing is a Pari-mutuel System

“You’re looking for a mispriced gamble,” says Munger. “That’s what investing is. And you have to know enough to know whether the gamble is mispriced. That’s value investing.” At another time he added: “You should remember that good ideas are rare — when the odds are greatly in your favor, bet heavily.”

May the odds forever be in your favor. Actually, learning properly is one way you can tilt the odds in your favor.
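To make “mispriced” concrete, here is a minimal sketch (our own illustration with invented numbers, not Munger's), framing a favorable gamble in expected-value terms and adding the Kelly criterion, one common formalization of betting heavily when the odds favor you:

```python
# Toy illustration of a "mispriced gamble". All numbers here are invented.

def expected_value(p_win: float, win_payout: float, loss: float) -> float:
    """Expected profit per unit staked."""
    return p_win * win_payout - (1 - p_win) * loss

def kelly_fraction(p_win: float, odds: float) -> float:
    """Kelly criterion: fraction of bankroll to stake on a bet paying
    odds-to-1; positive only when the gamble is mispriced in your favor."""
    return (p_win * (odds + 1) - 1) / odds

# The market prices a bet at even odds (1-to-1), implying a 50% chance,
# but your homework says it actually wins 60% of the time.
p = 0.60
print(round(expected_value(p, 1.0, 1.0), 2))  # 0.2: +20 cents per dollar staked
print(round(kelly_fraction(p, 1.0), 2))       # 0.2: stake 20% of your bankroll
```

At a true 50/50, the same bet has zero expected value and a Kelly stake of zero; it's the edge, not the action, that justifies the bet.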

***

Focus

When asked about his success, Munger says, “I succeeded because I have a long attention span.”

Long attention spans allow for a deep understanding of subjects. When combined with deliberate practice, focus allows you to increase your skills and get out of your rut. The Art of Focus is a divergent and correct strategy that can help you identify where the leverage points are and apply your efforts toward them.

***

Fake Knowledge

“Smart people aren’t exempt from professional disasters from overconfidence.”

We're so used to outsourcing our thinking to others that we've forgotten what it's like to really understand something from all perspectives. We've forgotten just how much work that takes. The path of least resistance, however, is just a click away. Fake knowledge, which comes from reading headlines and skimming the news, seems harmless, but it isn't, because it makes us overconfident. It's better to remember a simple trick: anything you're getting easily through Google or Twitter is likely to be widely known and should not be given undue weight.

However, Munger adds, “If people weren’t wrong so often, we wouldn’t be so rich.”

***

Sit Quietly

Echoing Pascal, who said some version of “All of humanity's problems stem from man's inability to sit quietly in a room alone,” Munger adds an investing twist: “It’s waiting that helps you as an investor, and a lot of people just can’t stand to wait.”

The ability to be alone with your thoughts, turning ideas over and over, is rare; the do-something syndrome affects so many of us. A perfectly reasonable deviation is to hold your ground and await more information.

***

Deal With Reality

“I think that one should recognize reality even when one doesn’t like it; indeed, especially when one doesn’t like it.”

Munger clearly learned from Joseph Tussman's wisdom. This means facing harsh truths that you have forced yourself to ignore. It means meeting the world on the world's terms, not how you wish it would be. If this causes temporary pain, so be it. “Your pain,” writes Kahlil Gibran in The Prophet, “is the breaking of the shell that encloses your understanding.”

***

There is No Free Lunch

We like quick solutions that don't require a lot of effort. We're drawn to the modern equivalent of an old hustler selling a cure-all tonic. Only the world does not work that way. Munger expands:

“There isn’t a single formula. You need to know a lot about business and human nature and the numbers…It is unreasonable to expect that there is a magic system that will do it for you.”

Acquiring knowledge is hard work. It's reading and adding to your knowledge so it compounds. It's going deep and developing fluency, something Darwin knew well.

***

Maximization/Minimization

“In business we often find that the winning system goes almost ridiculously far in maximizing or minimizing one or a few variables — like the discount warehouses of Costco.”

When everything is a priority, nothing is a priority. Attempting to maximize competing variables is a recipe for disaster. Picking one variable and relentlessly focusing on it, which is an effective strategy, diverges from the norm. It's hard to compete with businesses that have correctly identified the right variables to maximize or minimize. When you focus on one variable, you'll increase the odds that you're quick and nimble and can respond to changes in the terrain.

***

Map and Terrain

“At Berkshire there has never been a master plan. Anyone who wanted to do it, we fired because it takes on a life of its own and doesn’t cover new reality. We want people taking into account new information.”

Plans are maps that we become attached to. Once we've told everyone there is a plan and what that plan is, especially a multi-year plan, we're psychologically more likely to stick to it, because coming out and changing it would be admitting we're wrong. This creates a scenario where we're stacking the odds against ourselves when things change. Detailed five-year plans (which will clearly be wrong) are as disastrous as overly general five-year plans (which can never be wrong). Scrap it, isolate the key variables that you need to maximize and minimize, and follow the agile path blazed by Henry Singleton and followed by Buffett and Munger.

***

The Keys to Good Government

There are three keys: honesty, effectiveness, and efficiency.

Munger says:

“In a democracy, everyone takes turns. But if you really want a lot of wisdom, it’s better to concentrate decisions and process in one person. It’s no accident that Singapore has a much better record, given where it started, than the United States. There, power was concentrated in an enormously talented person, Lee Kuan Yew, who was the Warren Buffett of Singapore.”

Lee Kuan Yew put it this way himself: “With few exceptions, democracy has not brought good government to new developing countries. . . . What Asians value may not necessarily be what Americans or Europeans value. Westerners value the freedoms and liberties of the individual. As an Asian of Chinese cultural background, my values are for a government which is honest, effective, and efficient.”

***

One Step At a Time

“Spend each day trying to be a little wiser than you were when you woke up. Discharge your duties faithfully and well. Slug it out one inch at a time, day by day. At the end of the day— if you live long enough— most people get what they deserve.”

An incremental approach to life reminds one of the nature of compounding. There will always be someone going faster than you, but we can learn from the Darwinian guide to overachieving your natural IQ. In order for this approach to be effective, you need a long axis of time as well as continuous incremental progress.
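As a back-of-the-envelope sketch (our arithmetic, purely illustrative), here is what getting a little better each day looks like once it compounds:

```python
# Toy compounding arithmetic. The 1% daily improvement is invented for
# illustration; the point is the shape of the curve, not the exact number.
daily_gain = 0.01

print(round((1 + daily_gain) ** 30, 2))   # 1.35: barely noticeable after a month
print(round((1 + daily_gain) ** 365, 1))  # 37.8: the long axis of time pays
```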

***

Getting Rich

“The desire to get rich fast is pretty dangerous.” 

Getting rich is a function of being happy with what you have, spending less than you make, and time.

***

Mental Models

“Know the big ideas in the big disciplines and use them routinely— all of them, not just a few.”

Mental models are the big ideas from multiple disciplines. While most people agree these are worth knowing, they often think they can identify which models will add the most value, and in so doing they miss something important. There is a reason that the “know nothing” index fund almost always beats the investors who think they “know.” Understanding this idea in greater detail will change a lot of things, including how you read. Acquiring the big ideas, without selectivity, is the way to mimic a know-nothing index fund.

***

Know-it-alls

“I try to get rid of people who always confidently answer questions about which they don’t have any real knowledge.”

Few things have made as much of a difference in my life as systematically eliminating (and when that isn't possible, reducing the influence of) people who think they know the answer to everything.

***

Stoic Resolve

“There’s no way that you can live an adequate life without many mistakes. In fact, one trick in life is to get so you can handle mistakes. Failure to handle psychological denial is a common way for people to go broke.”

While we all make mistakes, it's how we respond to failure that defines us.

***

Thinking

“We all are learning, modifying, or destroying ideas all the time. Rapid destruction of your ideas when the time is right is one of the most valuable qualities you can acquire. You must force yourself to consider arguments on the other side.”

“It’s bad to have an opinion you’re proud of if you can’t state the arguments for the other side better than your opponents. This is a great mental discipline.”

Thinking is a lot of work. “My first thought,” William Deresiewicz said in one of my favorite speeches, “is never my best thought. My first thought is always someone else’s; it’s always what I’ve already heard about the subject, always the conventional wisdom.”

***

Choose Your Associates Wisely

“Oh, it’s just so useful dealing with people you can trust and getting all the others the hell out of your life. It ought to be taught as a catechism. . . . But wise people want to avoid other people who are just total rat poison, and there are a lot of them.”

No comment needed there.

***

Complement Tao of Charlie Munger with this excellent Peter Bevelin Interview.

The Green Lumber Fallacy: The Difference between Talking and Doing

“Clearly, it is unrigorous to equate skills at doing with skills at talking.”
— Nassim Taleb

***

Before we get to the meat, let's review an elementary idea in biology that will be relevant to our discussion.

If you're familiar with evolutionary theory, you know that populations of organisms are constantly subjected to “selection pressures” — the rigors of their environment which lead to certain traits being favored and passed down to their offspring and others being thrown into the evolutionary dustbin.

Biologists dub these advantages in reproduction “fitness” — as in, the famous lengthening of giraffe necks gave them greater “fitness” in their environment because it helped them reach high-up, untouched leaves.

Fitness is generally a relative concept: Since organisms must compete for scarce resources, fitness is measured by the reproductive advantage a trait gives over one's competitors.

Likewise, a trait that provides great fitness in one environment may be useless or even disadvantageous in another. (Imagine draining a pond: Any fitness advantages held by a really incredible fish become instantly worthless without water.) Traits also relate to circumstance. An advantage at one time could be a disadvantage at another, and vice versa.

This makes fitness an all-important concept in biology: Traits are selected for if they provide fitness to the organism within a given environment.

Got it? OK, let's get back to the practical world.

***

The Black Swan thinker Nassim Taleb has an interesting take on fitness and selection in the real world:  People who are good “doers” and people who are good “talkers” are often selected for different traits. Be careful not to mix them up.

In his book Antifragile, Taleb uses this idea to invoke a heuristic he'd once used when hiring traders on Wall Street:

The more interesting their conversation, the more cultured they are, the more they will be trapped into thinking that they are effective at what they are doing in real business (something psychologists call the halo effect, the mistake of thinking that skills in, say, skiing translate unfailingly into skills in managing a pottery workshop or a bank department, or that a good chess player would be a good strategist in real life).

Clearly, it is unrigorous to equate skills at doing with skills at talking. My experience of good practitioners is that they can be totally incomprehensible–they do not have to put much energy into turning their insights and internal coherence into elegant style and narratives. Entrepreneurs are selected to be doers, not thinkers, and doers do, they don't talk, and it would be unfair, wrong, and downright insulting to measure them in the talk department.

In other words, the selection pressures on an entrepreneur are very different from those on a corporate manager or bureaucrat: Entrepreneurs and risk takers succeed or fail not so much on their ability to talk, explain, and rationalize as their ability to get things done.

While the two can often go together, Nassim figured out that they frequently don't. We judge people as ignorant when it's really us who are ignorant.

When you think about it, there's no a priori reason great intellectualizing and great doing must go together: Being able to hack together an incredible piece of code gives you great fitness in the world of software development, while doing great theoretical computer science probably gives you better fitness in academia. The two skills don't have to be connected. Great economists don't usually make great investors.

But we often confuse the two realms. We're tempted to think that a great investor must be fluent in behavioral economics or a great CEO fluent in McKinsey-esque management narratives, but in the real world, we see this intuition constantly violated.

The investor Walter Schloss worked 9 to 5, barely left his office, and wasn't considered an especially high-IQ man, but he compiled one of the great investment records of all time. A young Mark Zuckerberg could hardly be described as a prototypical manager or businessperson, yet he somehow built one of the most profitable companies in the world by finding others who complemented his weaknesses.

There are a thousand examples: Our narratives about the type of knowledge or experience we must have or the type of people we must be in order to become successful are often quite wrong; in fact, they border on naive. We think people who talk well can do well, and vice versa. This is simply not always so.

We won't claim that great doers cannot be great talkers, rationalizers, or intellectuals. Sometimes they are. But if you're seeking to understand the world properly, it's good to understand that the two traits are not always co-located. Success, especially in some “narrow” area like plumbing, programming, trading, or marketing, is often achieved by rather non-intellectual folks. Their evolutionary fitness doesn't come from the ability to talk, but do. This is part of reality.

***

Taleb calls this idea the Green Lumber Fallacy, after a story in the book What I Learned Losing a Million Dollars. Taleb describes it in Antifragile:

In one of the rare noncharlatanic books in finance, descriptively called What I Learned Losing a Million Dollars, the protagonist makes a big discovery. He remarks that a fellow named Joe Siegel, one of the most successful traders in a commodity called “green lumber,” actually thought it was lumber painted green (rather than freshly cut lumber, called green because it had not been dried). And he made it his profession to trade the stuff! Meanwhile the narrator was into grand intellectual theories and narratives of what caused the price of commodities to move and went bust.

It is not just that the successful expert on lumber was ignorant of central matters like the designation “green.” He also knew things about lumber that nonexperts think are unimportant. People we call ignorant might not be ignorant.

The fact that predicting the order flow in lumber and the usual narrative had little to do with the details one would assume from the outside are important. People who do things in the field are not subjected to a set exam; they are selected in the most non-narrative manner — nice arguments don't make much difference. Evolution does not rely on narratives, humans do. Evolution does not need a word for the color blue.

So let us call the green lumber fallacy the situation in which one mistakes a source of visible knowledge — the greenness of lumber — for another, less visible from the outside, less tractable, less narratable.

The main takeaway is that the real causative factors of success are often hidden from us. We think that knowing the intricacies of green lumber is more important than keeping a close eye on the order flow. We seduce ourselves into overestimating the impact of our intellectualism and then wonder why “idiots” are getting ahead. (Probably hustle and competence.)

But for “skin in the game” operations, selection and evolution don't care about great talk and ideas unless they translate into results. They care what you do with the thing more than that you know the thing. They care about actually avoiding risk rather than your extensive knowledge of risk management theories. (Of course, in many areas of modernity there is no skin in the game, so talking and rationalizing can be and frequently are selected for.)

As Taleb did with his hiring heuristic, this should teach us to be a little skeptical of taking good talkers at face value, and to be a little skeptical when we see “unexplainable” success in someone we consider “not as smart.” There might be a disconnect we're not seeing because we're seduced by narrative. (A problem someone like Lee Kuan Yew avoided by focusing exclusively on what worked.)

And we don't have to give up our intellectual pursuits in order to appreciate this nugget of wisdom; Taleb is right, but it's also true that combining the rigorous, skeptical knowledge of “what actually works” with an ever-improving theory structure of the world might be the best combination of all — selected for in many more environments than simple git-er-done ability, which can be extremely domain and environment dependent. (The green lumber guy might not have been much good outside the trading room.)

After all, Taleb himself was both a successful trader and the highest level of intellectual. Even he can't resist a little theorizing.

Using Multidisciplinary Thinking to Approach Problems in a Complex World

Complex outcomes in human systems are a tough nut to crack when it comes to deciding what's really true. Any phenomenon we might try to explain will have a host of competing theories, many of them seemingly plausible.

So how do we know what to go with?

One idea is to take a cue from the best. One of the most successful “explainers” of human behavior has been the cognitive psychologist Steven Pinker. His books have been massively influential, in part because they combine scientific rigor, explanatory power, and plainly excellent writing.

What's unique about Pinker is the range of sources he draws on. His book The Better Angels of Our Nature, a cogitation on the decline in relative violence in recent human history, draws on ideas from evolutionary psychology, forensic anthropology, statistics, social history, criminology, and a host of other fields. Pinker, like Vaclav Smil and Jared Diamond, is the opposite of the man with a hammer, ranging over much material to come to his conclusions.

In fact, when asked about the progress of social science as an explanatory arena over time, Pinker credited this cross-disciplinary focus:

Because of the unification with the sciences, there are more genuinely explanatory theories, and there’s a sense of progress, with more non-obvious things being discovered that have profound implications.

But, even better, Pinker gives an outline for how a multidisciplinary thinker should approach problems in a complex world.

***

Here's the issue at stake: When we're viewing a complex phenomenon—say, the decline in certain forms of violence in human history—it can be hard to come up with a rigorous explanation. We can't just set up repeated lab experiments and vary the conditions of human history to see what pops out, as with physics or chemistry.

So out of necessity, we must approach the problem in a different way.

In the above-referenced interview, Pinker gives a wonderful example of how to do it. Note how he carefully “cross-checks” from a variety of sources of data, developing a 3D view of the landscape he's trying to assess:

Pinker: Absolutely, I think most philosophers of science would say that all scientific generalizations are probabilistic rather than logically certain, more so for the social sciences because the systems you are studying are more complex than, say, molecules, and because there are fewer opportunities to intervene experimentally and to control every variable. But the existence of the social sciences, including psychology, to the extent that they have discovered anything, shows that, despite the uncontrollability of human behavior, you can make some progress: you can do your best to control the nuisance variables that are not literally in your control; you can have analogues in a laboratory that simulate what you’re interested in and impose an experimental manipulation.

You can be clever about squeezing the last drop of causal information out of a correlational data set, and you can use converging evidence, the qualitative narratives of traditional history in combination with quantitative data sets and regression analyses that try to find patterns in them. But I also go to traditional historical narratives, partly as a sanity check. If you’re just manipulating numbers, you never know whether you’ve wandered into some preposterous conclusion by taking numbers too seriously that couldn’t possibly reflect reality. Also, it’s the narrative history that provides hypotheses that can then be tested. Very often a historian comes up with some plausible causal story, and that gives the social scientists something to do in squeezing a story out of the numbers.

Warburton: I wonder if you’ve got an example of just that, where you’ve combined the history and the social science?

Pinker: One example is the hypothesis that the Humanitarian Revolution during the Enlightenment, that is, the abolition of slavery, torture, cruel punishments, religious persecution, and so on, was a product of an expansion of empathy, which in turn was fueled by literacy and the consumption of novels and journalistic accounts. People read what life was like in other times and places, and then applied their sense of empathy more broadly, which gave them second thoughts about whether it’s a good idea to disembowel someone as a form of criminal punishment. So that’s a historical hypothesis. Lynn Hunt, a historian at the University of California–Berkeley, proposed it, and there are some psychological studies that show that, indeed, if people read a first-person account by someone unlike them, they will become more sympathetic to that individual, and also to the category of people that that individual represents.

So now we have a bit of experimental psychology supporting the historical qualitative narrative. And, in addition, one can go to economic historians and see that, indeed, there was first a massive increase in the economic efficiency of manufacturing a book, then there was a massive increase in the number of books published, and finally there was a massive increase in the rate of literacy. So you’ve got a story that has at least three vertices: the historian’s hypothesis; the economic historians identifying exogenous variables that changed prior to the phenomenon we’re trying to explain, so the putative cause occurs before the putative effect; and then you have the experimental manipulation in a laboratory, showing that the intervening link is indeed plausible.

Pinker is saying: Look, we can't just rely on “plausible narratives” generated by folks like the historians. There are too many possibilities that could be correct.

Nor can we rely purely on correlations (i.e., the rise in literacy statistically tracking the decline in violence) — they don't necessarily offer us a causative explanation. (Does the rise in literacy cause less violence, or is it vice versa? Or, does a third factor cause both?)

However, if we layer in some other known facts from areas we can experiment on — say, psychology or cognitive neuroscience — we can sometimes establish the causal link we need or, at worst, a better hypothesis of reality.

In this case, it would be the finding from psychology that certain forms of literacy do indeed increase empathy (for logical reasons).

Does this method give us absolute proof? No. However, it does allow us to propose and then test, re-test, alter, and strengthen or ultimately reject a hypothesis. (In other words, rigorous thinking.)

We can't stop here, though. We have to take time to examine competing hypotheses — there may be a better fit. The interviewer continues, asking Pinker about this methodology:

Warburton: And so you conclude that the de-centering that occurs through novel-reading and first-person accounts probably did have a causal impact on the willingness of people to be violent to their peers?

Pinker: That’s right. And, of course, one has to rule out alternative hypotheses. One of them could be the growth of affluence: perhaps it’s simply a question of how pleasant your life is. If you live a longer and healthier and more enjoyable life, maybe you place a higher value on life in general, and, by extension, the lives of others. That would be an alternative hypothesis to the idea that there was an expansion of empathy fueled by greater literacy. But that can be ruled out by data from economic historians that show there was little increase in affluence during the time of the Humanitarian Revolution. The increase in affluence really came later, in the 19th century, with the advent of the Industrial Revolution.

***

Let's review the process that Pinker has laid out, one that we might think about emulating as we examine the causes of complex phenomena in human systems:

  1. We observe an interesting phenomenon in need of explanation, one we feel capable of exploring.
  2. We propose and examine competing hypotheses that would explain the phenomenon (set up in a falsifiable way, in harmony with the divide between science and pseudoscience laid out for us by the great Karl Popper).
  3. We examine a cross-section of: empirical data relating to the phenomenon; sensible qualitative inference (from multiple fields and disciplines, the more fundamental the better); and finally, “demonstrable” aspects of nature we are nearly certain about, arising from controlled experiment or other rigorous sources of knowledge, ranging from engineering to biology to cognitive neuroscience.

What we end up with is not necessarily a bulletproof explanation, but probably the best we can do if we think carefully. A good cross-disciplinary examination with quantitative and qualitative sources coming into equal play, and a good dose of judgment, can be far more rigorous than the gut instinct or plausible nonsense type stories that many of us lazily spout.

A Word of Caution

Although Pinker's “multiple vertices” approach to problem solving in complex domains can be powerful, we always have to be on guard for phenomena that we simply cannot explain at our current level of competence: We must have a “too hard” pile when competing explanations come out “too close to call” or we otherwise feel we're outside of our circle of competence. Always tread carefully and be sure to follow Darwin's Golden Rule: Contrary facts are more important than confirming ones. Be ready to change your mind, like Darwin, when the facts don't go your way.

***

Still Interested? For some more Pinker goodness check out our prior posts on his work, or check out a few of his books like How the Mind Works or The Blank Slate: The Modern Denial of Human Nature.

What Are You Doing About It? Reaching Deep Fluency with Mental Models

The mental models approach is very intellectually appealing, almost seductive to a certain type of person. (It certainly is for us.)

The whole idea is to take the world's greatest, most useful ideas and make them work for you!

How hard can it be?

Nearly all of the models themselves are perfectly understandable by the average well-educated knowledge worker, including all of you reading this piece. Ideas like Bayes' rule, multiplicative thinking, hindsight bias, or the bias from envy and jealousy are all obviously true and part of the reality we live in.

There's a bit of a problem we're seeing though: People are reading the stuff, enjoying it, agreeing with it…but not taking action. It's not becoming part of their standard repertoire.

Let's say you followed up on Bayesian thinking after reading our post on it — you spent some time soaking in Thomas Bayes' great wisdom on updating your understanding of the world incrementally and probabilistically rather than changing your mind in black-and-white. Great!
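(As a quick refresher, here is a minimal sketch of that kind of incremental update; the hypothesis and every number in it are invented for illustration.)

```python
# Minimal Bayes' rule sketch: nudge a belief as evidence arrives,
# rather than flipping it to 0% or 100%. All numbers are invented.

def bayes_update(prior: float, p_evidence_if_true: float,
                 p_evidence_if_false: float) -> float:
    """Posterior P(hypothesis | evidence) via Bayes' rule."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

# Start 30% confident in some hypothesis. Each new observation is twice
# as likely if the hypothesis is true (60%) as if it's false (30%).
belief = 0.30
for _ in range(3):
    belief = bayes_update(belief, 0.60, 0.30)
    print(round(belief, 3))  # 0.462, 0.632, 0.774: incremental, not all-or-nothing
```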

But a week later, what have you done with that knowledge? How has it actually impacted your life? If the honest answer is “It hasn't,” then haven't you really wasted your time?

Ironically, it's this habit of “going halfway” instead of “going all the way,” like Sisyphus constantly getting halfway up the mountain, which is the biggest waste of time!

See, the common reason why people don't truly “follow through” with all of this stuff is that they haven't raised their knowledge to a “deep fluency” — they're skimming the surface. They pick up bits and pieces — some heuristics or biases here, a little physics or biology there, and then call it a day and pull up Netflix. They get a little understanding, but not that much, and certainly no doing.

The better approach, if you actually care about making changes, is to imitate Charlie Munger, Charles Darwin, and Richard Feynman, and start raising your knowledge of the Big Ideas to a deep fluency, and then figuring out systems, processes, and mental tricks to implement them in your own life.

Let's work through an example.

***

Say you're just starting to explore all the wonderful literature on heuristics and biases and come across the idea of Confirmation Bias: The idea that once we've landed on an idea we really like, we tend to keep looking for further data to confirm our already-held notions rather than trying to disprove our idea.

This is common, widespread, and perfectly natural. We all do it. John Kenneth Galbraith put it best:

“In the choice between changing one's mind and proving there's no need to do so, most people get busy on the proof.”

Now, what most people do, the ones you're trying to outperform, is say, “Great idea! Thanks, Galbraith,” and then stop thinking about it.

Don't do that!

The next step would be to push a bit further, to get beyond the sound bite: What's the process that leads to confirmation bias? Why do I seek confirmatory information and in which contexts am I particularly susceptible? What other models are related to the confirmation bias? How do I solve the problem?

The answers are out there: They're in Daniel Kahneman and in Charlie Munger and in Elster. They're available by searching through Farnam Street.

The big question: How far do you go? A good question without a perfect answer. But the best test I can think of is to perform something like the Feynman technique, and to think about the chauffeur problem.

Can you explain it simply to an intelligent layperson, using vivid examples? Can you answer all the follow-ups? That's fluency. And you must be careful not to fool yourself, because in the wise words of Feynman, “…you are the easiest person to fool.”

While that's great work, you're not done yet. You have to make the rubber hit the road now. Something has to happen in your life and mind.

The way to do that is to come up with rules, systems, parables, and processes of your own, or to copy someone else's that are obviously sound.

In the case of Confirmation Bias, we have two wonderful models to copy, one from each of the Charlies: Darwin and Munger.

Darwin had a rule, one we have written about before but will restate here: Make a note, immediately, if you come across a thought or idea that is contrary to something you currently believe.

As for Munger, he implemented a rule in his own life: “I never allow myself to have an opinion on anything that I don’t know the other side’s argument better than they do.”

Now we're getting somewhere! With the implementation of those two habits and some well-earned deep fluency, you can immediately, tomorrow, start improving the quality of your decision-making.

Sometimes when we get outside the heuristic/biases stuff, it's less obvious how to make the “rubber hit the road” — and that will be a constant challenge for you as you take this path.

But that's also the fun part! With every new idea and model you pick up, you also pick up the opportunity to synthesize for yourself a useful little parable to make it stick or a new habit that will help you use it. Over time, you'll come up with hundreds of them, and people might even look to you when they're having problems doing it themselves!

Look at Buffett and Munger — both guys are absolute machines, chock full of pithy little rules and stories they use in order to implement and recall what they've learned.

For example, Buffett discovered early on the manipulative psychology behind open-outcry auctions. What did he do? He made a rule to never go to one! That's how it's done.

Even if you can't come up with a great rule like that, you can figure out a way to use any new model or idea you learn. It just takes some creative thinking.

Sometimes it's just a little mental rule or story that sticks particularly well. (Recall one of the prime lessons from our series on memory: Salient, often used, well-associated, and important information sticks best.)

We did this very thing recently with Lee Kuan Yew's Rule. What a trite way to refer to the simple idea of asking if something actually works…attributing it to a Singaporean political leader!

But that's exactly the point. Give the thing a name and a life and, like clockwork, you'll start recalling it. The phrase “Lee Kuan Yew's Rule” actually appears in my head when I'm approaching some new system or ideology, and as soon as it does, I find myself backing away from ideology and towards pragmatism. Exactly as I'd hoped.

Your goal should be to create about a thousand of those little tools in your head, attached to a deep fluency in the material from which it came. 

***

I can hear the objection coming. Who has time for this stuff?

You do. It's about making time for the things that really matter. And what could possibly matter more than upgrading your whole mental operating system? I solemnly promise that you're spending way more time right now making sub-optimal decisions and trying to deal with the fallout.

If you need help learning to manage your time right this second, check out our Productivity Seminar, one that's changed some people's lives entirely. The central idea is to become more thoughtful and deliberate with how you spend your hours. When you start doing that, you'll notice you do have an hour a day to spend on this Big Ideas stuff. It's worth the 59 bucks.

If you don't have 59 bucks, at least imitate Cal Newport and start scheduling your days and put an hour in there for “Getting better at making all of my decisions.”

Once you find that solid hour (or more), start using it in the way outlined above, and let the world's great knowledge actually start making an impact. Just do a little every day.

What you'll notice, over the weeks and months and years of doing this, is that your mind will really change! It has to! And with that, your life will change too. The only way to fail at improving your brain is by imitating Sisyphus, pushing the boulder halfway up, over and over.

Unless and until you really understand this, you'll continue spinning your wheels. So here's your call to action. Go get to it!

Bias from Disliking/Hating

(This is a follow-up to our post on the Bias from Liking/Loving, which you can find here.)

Think of a cat snarling and spitting, lashing its tail and standing with its back arched. Its pulse is elevated, its blood vessels are constricted, and its muscles are tense. This reaction may sound familiar, because everyone has experienced the same tensed-up feeling of rage at least once in their lives.

When rage is directed towards an external object, it becomes hate. Just as we learn to love certain things or people, we learn to hate others.

There are several cognitive processes that awaken the hate within us and most of them stem from our need for self-protection.

Reciprocation

We tend to dislike people who dislike us (and, true to Newton, with equal strength). The more we perceive they hate us, the more we hate them.

Competition

A lot of hate comes from scarcity and competition. Whenever we compete for resources, our own mistakes can mean good fortune for others. In these cases, we affirm our own standing and preserve our self-esteem by blaming others.

Robert Cialdini explains that because of the competitive environment in American classrooms, school desegregation may increase the tension between children of different races instead of decreasing it. Imagine being a secondary school child:

If you knew the right answer and the teacher called on someone else, you probably hoped that he or she would make a mistake so that you would have a chance to display your knowledge. If you were called on and failed, or if you didn't even raise your hand to compete, you probably envied and resented your classmates who knew the answer.

At first we are merely annoyed. But then as the situation fails to improve and our frustration grows, we are slowly drawn into false attributions and hate. We keep blaming and associating “the others” who are doing better with the loss and scarcity we are experiencing (or perceive we are experiencing). That is one way our emotional frustration boils into hate.

Us vs. Them

The ability to separate friends from enemies has been critical for our safety and survival. Because mistaking the two can be deadly, our mental processes have evolved to quickly spot potential threats and react accordingly. We are constantly feeding information about others into our “people information lexicon” that forms not only our view of individuals, whom we must decide how to act around, but entire classes of people, as we average out that information.

To shortcut our reactions, we classify narrowly and think in dichotomies: right or wrong, good or bad, heroes or villains. (The type of Grey Thinking we espouse is almost certainly unnatural, but, then again, so is a good golf swing.) Since most of us are merely average at everything we do, even superficial and small differences, such as race or religious affiliation, can become an important source of identification. We are, after all, creatures who seek to belong to groups above all else.

Seeing ourselves as part of a special, different and, in its own way, superior group, decreases our willingness to empathize with the other side. This works both ways – the hostility towards the others also increases the solidarity of the group. In extreme cases, we are so drawn towards the inside view that we create a strong picture of the enemy that has little to do with reality or our initial perceptions.

From Compassion to Hate

We think of ourselves as compassionate, empathetic and cooperative. So why do we learn to hate?

Part of the answer lies in the fact that we think of ourselves in a specific way. If we cannot reach a consensus, then the other side, which is in some way different from us, must necessarily be uncooperative for our assumptions about our own qualities to hold true.

Our inability to examine the situation from all sides and shake our beliefs, together with self-justifying behavior, can lead us to conclude that others are the problem. Such asymmetric views, amplified by strong perceived differences, often fuel hate.

What started off as odd or difficult to understand quickly turns into something unholy.

If the situation is characterized by competition, we may also see ourselves as a victim. The others, who abuse our rights, take away our privileges or restrict our freedom are seen as bullies who deserve to be punished. We convince ourselves that we are doing good by doing harm to those who threaten to cross the line.

This is understandable. In critical times our survival indeed may depend on our ability to quickly spot and neutralize dangers. The cost of a false positive – mistaking a friend for a foe – is much lower than the potentially fatal false negative of mistaking our adversaries for innocent allies. As a result, it is safest to assume that anything we are not familiar with is dangerous by default. Natural selection, by its nature, “keeps what works,” and this tendency towards distrust of the unfamiliar probably survived in that way.

The Displays of Hate

Physical and psychological pain is very mobilizing. We despise foods that make us nauseous and people that have hurt us. Because we are scared to suffer, we end up either avoiding or destroying the “enemy”, which is why revenge can be pursued with such vengeance. In short, hate is a defense against enduring pain repeatedly.

There are several ways that the bias from disliking and hating displays itself to the outer world. The most obvious of them is war, which seems to have been more or less prevalent throughout the history of mankind.

This would lead us to think that war may well be unavoidable. Charlie Munger offers the more moderate opinion that while hatred and dislike cannot be avoided, the instances of war can be minimized by channeling our hate and fear into less destructive behaviors. (A good political system allows for dissent and disagreement without explosions of bloody upheaval.)

Even with the spread of religion, and the advent of advanced civilization, modern war remains pretty savage. But we also get what we observe in present-day Switzerland and the United States, wherein the clever political arrangements of man “channel” the hatreds and dislikings of individuals and groups into nonlethal patterns including elections.

But these dislikings and hatreds that are arguably inherent to our nature never go away completely; they find their way into politics. Think of the dichotomies: the left versus the right wing, the nationalists versus the communists, the libertarians versus the authoritarians. This might be the reason there are maxims like “Politics is the art of marshaling hatreds.”

Finally, as we move away from politics, arguably the most sophisticated and civilized way of channeling hatred is litigation. Charlie Munger attributes the following words to Warren Buffett:

A major difference between rich and poor people is that the rich people can spend their lives suing their relatives.

While most of us reflect on our memories of growing up with our siblings with fondness, there are cases where the competition for shared attention or resources breeds hatred. If the siblings can afford it, they will sometimes litigate endlessly to lay claims over their parents' property or attention.

Under the Influence of Bias

There are several ways that bias from hating can interfere with our normal judgment and lead to suboptimal decisions.

Ignoring Virtues of The Other Side

Michael Faraday was once asked after a lecture whether he implied that a hated academic rival was always wrong. His reply was short and firm: “He’s not that consistent.” Faraday must have recognized the bias from hating and corrected for it with the witty comment.

What we should recognize here is that no situation is ever black or white. We all have our virtues and we all have our weaknesses. However, when possessed by the strong emotions of hate, our perceptions can be distorted to the extent that we fail to recognize any good in the opponent at all. This is driven by consistency bias, which motivates us to form a coherent (“she is all-round bad”) opinion of ourselves and others.

Association Fueled Hate

The principle of association holds that the nature of the news tends to infect the teller. This means that the worse the experience, the worse the impression of anything related to it.

Association is why we blame the messenger who tells us something that we don't want to hear, even when they didn't cause the bad news. (Of course, this creates an incentive not to speak the truth and to avoid delivering bad news.)

A classic example is the unfortunate and confused weatherman who receives hate mail whenever it rains. One went so far as to seek advice from the Arizona State University psychology professor Robert Cialdini, whose work we have discussed before.

Cialdini explained to him that, in light of the destinies of other messengers, he was born lucky. Rain might ruin someone’s holiday plans, but it will rarely change the destiny of a nation, as it could for Persian war messengers. Delivering good news meant a feast, whereas delivering bad news resulted in their death.

The weatherman left Cialdini’s office with a sense of privilege and relief.

“Doc,” he said on his way out, “I feel a lot better about my job now. I mean, I'm in Phoenix where the sun shines 300 days a year, right? Thank God I don't do the weather in Buffalo.”

Fact Distortion

Under the influence of liking or disliking bias, we tend to fill gaps in our knowledge by building our conclusions on assumptions that are based on very little evidence.

Imagine you meet a woman at a party and find her to be a self-centered, unpleasant conversation partner. Now her name comes up as someone who could be asked to contribute to a charity. How likely do you feel it is that she will give to the charity?

In reality, you have no useful knowledge, because there is little to nothing that should make you believe that people who are self-centered are not also generous contributors to charity. The two are unrelated, yet because of the well-known fundamental attribution error, we often assume one is correlated to the other.

By association, you are likely to believe that this woman will not be generous towards charities, despite a lack of any evidence. And because you now also believe she is stingy and ungenerous, you probably dislike her even more.

This is just an innocent example, but the larger effects of such distortions can be so extreme that they lead to a major miscognition. Each side literally believes that every single bad attribute or crime is attributable to the opponent.

Charlie Munger explains this with a relatively recent example:

When the World Trade Center was destroyed, many Pakistanis immediately concluded that the Hindus did it, while many Muslims concluded that the Jews did it. Such factual distortions often make mediation between opponents locked in hatred either difficult or impossible. Mediations between Israelis and Palestinians are difficult because facts in one side's history overlap very little with facts from the other side's. These distortions and the overarching mistrust might be why some conflicts seem to never end.

Avoiding Being Hated

To varying degrees we value acceptance and affirmation from others. Very few of us wake up wanting to be disliked or rejected. Social approval, at its heart the cause of social influence, shapes behavior and contributes to conformity. Francois VI, Duc de La Rochefoucauld wrote: “We only confess our little faults to persuade people that we have no big ones.”

Remember the old adage, “The nail that sticks out gets hammered down.” This is why we don't openly speak the truth or question people: we don't want to be the nail.

How do we resolve hate?

It is only normal that we can find more common ground with some people than with others. But are we really destined to fall into the traps of hate or is there a way to take hold of these biases?

That’s a question worth over a hundred million lives. There are ways that psychologists think we can minimize prejudice against others.

Firstly, we can engage with others in sustained close contact to breed familiarity. The contact must not only be prolonged, but also positive and cooperative in nature – either working towards a common cause or against a common enemy.

Secondly, we can reduce prejudice by attaining equal status in all aspects, including education, income, and legal rights. This effect is further reinforced when equality is supported not only “on paper” but is also ingrained within broader social norms.

And finally, the obvious: we should practice awareness of our own emotions and the ability to hold back on the temptation to dismiss others. Whenever confronted with strong feelings, it might simply be best to sit back, breathe, and do our best to eliminate the distorted thinking.

***

Want more? Check out the opposite bias of liking/loving, or check out a whole bunch of mental models.