Warren Buffett: The Inner Scorecard

“The big question about how people behave is whether they've got an Inner Scorecard or an Outer Scorecard. It helps if you can be satisfied with an Inner Scorecard.”
— Warren Buffett

***

Human beings are, in large part, driven by the admiration of their peers.

We seek to satisfy a deep biological need by acting in such a way that we feel praise and adulation: for our wealth, our success, our skills, our looks. It could be anything. The trait we are admired for matters less than the admiration itself. The admiration is the token we dance for. We feel envy when others are getting more tokens than us, and we pity ourselves when we're not getting any.

There's nothing inherently wrong with this. The pursuit of (deserved) admiration causes us to drive and accomplish. It's a part of the explanation for why the human world has moved along so far from where it started — we're willing to do extraordinary things that are extraordinarily difficult, like starting a company from scratch, inventing a new and better product, solving some ridiculously complicated theorem, or conquering unknown territory.

This is all well and good.

The problems come when we start compromising our own standards, those we have set for ourselves, in order to earn admiration. False, undeserved admiration.

Warren Buffett frequently relates an interesting way to frame this problem. From Alice Schroeder's Buffett biography The Snowball:

Lookit. Would you rather be the world’s greatest lover, but have everyone think you’re the world’s worst lover? Or would you rather be the world’s worst lover but have everyone think you’re the world’s greatest lover? Now, that’s an interesting question. Here’s another one. If the world couldn’t see your results, would you rather be thought of as the world’s greatest investor but in reality have the world’s worst record? Or be thought of as the world’s worst investor when you were actually the best?

Buffett's getting at a rather fundamental model he's used most of his life: The Inner Scorecard. It's a major reason Buffett has stayed so successful for so long, with so little failure or scandal intervening: While most are “checking the official time,” Buffett is setting his watch by an internal clock!

The investor Guy Spier once won a charity lunch with Buffett, and related his experience in a book called The Education of a Value Investor. He immediately recognized Buffett's lack of falseness:

One of Buffett’s defining characteristics is that he so clearly lives by his own inner scorecard. It isn’t just that he does what’s right, but that he does what’s right for him. As I saw during our lunch, there’s nothing fake or forced about him. He sees no reason to compromise his standards or violate his beliefs. Indeed, he has told Berkshire’s shareholders that there are things he could do that would make the company bigger and more profitable, but he’s not prepared to do them. For example, he resists laying people off or selling holdings that he could easily replace with more profitable businesses. Likewise, some investors have complained that Berkshire would be much more profitable if he’d moved its tax domicile to Bermuda as many other insurers have done. But Buffett doesn’t want to base his company in Bermuda even though it would be legal and would have saved tens of billions in taxes.

We don't, by the way, claim Buffett has an unblemished record. That would not be accurate. But his record does seem far cleaner than those of most others who have climbed as far as he has.

If Buffett were “setting his clock externally” — living by the standards of others — he would not have been able to maintain the independence of mind that led him to avoid a number of financial bubbles and a great deal of personal misery.

What Buffett and a lot of other people who have been successful in life — true success, not money — have in common is that they're able to remember what we all set out to do: live a fulfilling life! Not get rich. Not get famous. Not even get admiration, necessarily. But to live a satisfying existence and help others around them do the same.

It's not that getting rich or famous or admired can't be deeply satisfying. It can be! I'm positive Buffett deeply enjoys his wealth and status. He's got more “admiration tokens” than almost anyone in the world.

But all of that can be ruined very, very easily along the way by making too many compromises, by living according to an external scorecard rather than an internal one. How many stories have you heard of famous and/or wealthy folks becoming entrapped in constant lawsuits, bickering, loneliness, and pure unhappiness? A countless number, right?

Bernie Madoff achieved great admiration and wealth, but was he happy? He made it clear, after he'd been caught, that he wasn't. Here was a guy who had all the admiration tokens in the world, an Outer Scorecard showing an A+, and what happened when he lost it all? He felt relieved.

So, did fame or wealth actually work in giving him a satisfying and fulfilling life? No!

The little mental trick is to remember that success, money, fame, and beauty, all the things we pursue, are merely the numerator. If the denominator — shame, regret, unhappiness, loneliness — is too large, our “Life Satisfaction Score” ends up being tiny, worthless. Even if we have all that good stuff!

Nassim Taleb once related a very similar idea:

The optimal solution to being independent and upright while remaining a social animal is: to seek first your own self-respect and, secondarily and conditionally, that of others, provided your external image does not conflict with your own self-respect. Most people get it backwards and seek the admiration of the collective and something called “a good reputation” at the expense of self-worth for, alas, the two are in frequent conflict under modernity.

It's so simple. This is why you see people who “should be happy” but are not. Big denominators destroy self-worth.

***

Adam Smith addressed this issue similarly about 225 years ago in his lesser-known, though equally useful, book The Theory of Moral Sentiments. Here's how he put it:

Man naturally desires, not only to be loved, but to be lovely; or to be that thing which is the natural and proper object of love. He naturally dreads, not only to be hated, but to be hateful; or to be that thing which is the natural and proper object of hatred. He desires, not only praise, but praiseworthiness; or to be that thing which, though it should be praised by nobody, is, however, the natural and proper object of praise. He dreads, not only blame, but blame-worthiness; or to be that thing which, though it should be blamed by nobody, is, however, the natural and proper object of blame.

To Smith, happiness was a combination of being loved and lovely: In modern terms, his wording makes it sound like he means “loved by others and also beautiful.”

But as you read on, you see that's not what he meant. He adds “Hated, but hateful.” “Praise, but praiseworthiness.” “Blame, but blame-worthiness.”

He's saying we're only happy if we're successful by an Inner Scorecard! We can't just earn praise, we must be praiseworthy. We can't just be loved, we must be loveable. It makes all the difference in the world. Our dissatisfaction with ourselves will always trump the satisfaction we feel with false rewards. We must, as Charlie Munger puts it, earn and deserve the success we desire.

There's a simple word for this: Authenticity. We seek it, and we're only happy when we feel we've achieved it. It can't be faked. And the way to get there is to remember the Inner Scorecard and start grading yourself accordingly.

The Narrative Fallacy and What You Can Do About It

“These types of stories strike a deep chord: They give us deep, affecting reasons on which to hang our understanding of reality. They help us make sense of our own lives. And, most importantly, they frequently cause us to believe we can predict the future. The problem is, most of them are a sham.”

***

The Narrative Fallacy

A typical biography starts by describing the subject’s young life, trying to show how the ultimate painting began as just a sketch. In Walter Isaacson’s biography of Steve Jobs, for example, Isaacson argues that Jobs’s success was determined to a great degree by the childhood influence of his father. Paul Jobs, a careful, detail-oriented engineer and craftsman – he would carefully finish the backs of fences and cabinets even if no one would ever see them – was, Jobs later discovered, not his biological father. The combination of his adoption and his craftsman father planted the seeds of Steve’s adult personality: his penchant for design detail, his need to prove himself, his messianic zeal. The recent movie starring Michael Fassbender especially plays up the latter cause: Jobs’s feeling of abandonment drove his success. Fassbender’s emotional portrayal earned him an Oscar nomination.

Nassim Taleb describes a memorable experience of a similar type in his book The Black Swan. In Rome, Taleb is having an animated discussion with a professor who has read his first book Fooled by Randomness, parts of which promote the idea that our mind creates more cause-and-effect links than reality would support. The professor proceeds to congratulate Taleb on the great luck of having been born in Lebanon:

…had you grown up in a Protestant society where people are told that efforts are linked to rewards and individual responsibility is emphasized, you would never have seen the world in such a manner. You were able to see luck and separate cause-and-effect because of your Eastern Orthodox Mediterranean heritage.

These types of stories strike a deep chord: They give us deep, affecting reasons on which to hang our understanding of reality. They help us make sense of our own lives. And, most importantly, they frequently cause us to believe we can predict the future. The problem is, most of them are a sham.

As flattered as he was by the professor’s praise, Nassim Taleb knew instantly that attributing his success to his background was a fallacy:

How do I know that this attribution to the background is bogus? I did my own empirical test by checking how many traders with my background who experienced the same war become skeptical empiricists, and found none out of twenty-six.

The professor who had just praised the idea that we overestimate our ability to understand cause-and-effect couldn’t stop himself from committing the very same error in conversation with Taleb himself.

Steve Jobs felt the same about the idea that his adoption had anything but a coincidental effect on his success:

There’s some notion that because I was abandoned, I worked very hard so I could do well and make my parents wish they had me back, or some such nonsense, but that’s ridiculous […] Knowing I was adopted may have made me feel more independent, but I have never felt abandoned. I’ve always felt special. My parents made me feel special.

Such is the power of the Narrative Fallacy — the backward-looking mental tripwire that causes us to attribute a linear and discernible cause-and-effect chain to our knowledge of the past. As Nassim points out, there is a deep biological basis to the problem: we are inundated with so much sensory information that our brains have no other choice; we must put things in order so we can process the world around us. It’s implicit in how we understand the world. When the coffee cup falls, we need to know why it fell. (We knocked it over.) If someone gets the job instead of us, we need to know why they were deemed better. (They had more experience, they were more likeable.) Without a deep search for reasons, we would go around with blinders on, one thing simply happening after another. The world does not make sense without cause-and-effect.

This necessary mental function serves us well, in general. But we also must come to terms with the types of situations where our broadly useful “ordering” function causes us to make errors.

***

We fall for narrative regularly and in a wide variety of contexts. Sports are the most obvious example: Try to recall the last time you watched a profile of a famous athlete — the rise from obscurity, the rags-to-riches story, the dream-turned-reality. How did ESPN portray the success of the athlete?

If it was like most profiles, you'd most likely see some combination of the following: Parents or a coach that pushed him/her to strive for excellence; a natural gift for the sport, or at the very least, a strong inborn athleticism; an impactful life event and/or some form of adversity; and a hard work ethic.

The copy might read as follows:

It was clear from a young age that Steven was destined for greatness. He was taller than his whole class, had skills that none of his peers had, and a mother who never let him indulge laziness or sloth. Losing his father at a young age pushed him to work harder than ever, knowing he’d have to support his family. And once he met someone willing to tutor him, someone like Central High basketball coach Ed Johnson, the future was all but assured — Steven was going to be an NBA player come hell or high water.

If you were to read this story about how a tall, strong, fast, skilled young man with good coaching and a hard work ethic came to dominate the NBA, would you stop for even a second to question that those were the total causes of his success? If you’re like most people, the answer is no. We hear about similar stories over and over again.

The problem is, these stories are subject to a deep narrative fallacy. Here’s why: think again about the supposed causes of Steven’s success — work ethic, great parents, strong coaching, a formative life event. How many young men in the United States alone have the exact same background and yet failed to achieve their dreams of NBA stardom? The question answers itself: there are probably thousands of them.

That’s the problem with narrative: it lures us into believing that we can explain the past through cause-and-effect when we hear a story that supports our prior beliefs. To take the case of our fictitious basketball player, we have been conditioned to believe that hard work, pushy parents and coaches, and natural gifts lead to fame and success.

To be clear, many of these factors contribute in important ways. There aren’t many short, slow guys with no work ethic and bad hand-eye coordination playing in the NBA. And passion goes a long way towards deserved success. But if it’s true that there are a thousand times as many similarly qualified men who do not end up playing professional basketball, then our diagnosis must be massively incomplete. There is no other sensible explanation.

What might we be missing? The list is endless, but some combination of luck, opportunism, and timing must have played into Steven’s and any other NBA player's success. It is very difficult to understand cause-and-effect chains, and this is a simple example compared to a complex case like a war or an economic crisis, which would have had multiple causes working in a variety of directions and enough red herrings to make a proper analysis difficult.

When it comes to understanding success in basketball, the problem might seem benign. (Although if you’re an NBA executive, maybe not so benign.) But the rest of us deal with it in a practical way all the time. Taleb points out in his book that you could prove how narratives influence our decision-making by giving a friend a great detective novel and then, before they got to the final reveal, asking them to give the odds of each individual suspect being the culprit. It’s almost certain that unless you allowed them to write down the odds as they went along, they’d add up to more than 100% — the better the novel, the higher the number. This would be nonsense from a probability standpoint (the odds must sum to 100%), but each narrative is so strong that we lose our bearings.
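
You can make Taleb's detective-novel experiment concrete in a few lines of code. Here's a minimal sketch in Python, with invented odds standing in for what a reader might report chapter by chapter; the only point is that the elicited numbers sum to well over 100%, which is incoherent if exactly one suspect is guilty.

```python
# A minimal sketch of the detective-novel experiment. The suspects and
# the elicited odds are invented for illustration.

subjective_odds = {          # "How likely is it that X did it?" asked per chapter
    "butler": 0.60,
    "heiress": 0.55,
    "gardener": 0.45,
    "detective": 0.30,
}

total = sum(subjective_odds.values())
print(f"Sum of elicited probabilities: {total:.2f}")  # 1.90, not 1.00

# Coherent probabilities must sum to 1; normalizing shows how much each
# narrative was overweighted while the reader was inside it.
coherent = {k: v / total for k, v in subjective_odds.items()}
for suspect, p in coherent.items():
    print(f"{suspect:>9}: {p:.2%}")
```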

Of course, the more salient the given reasons, the more likely we are to attribute them as causes. By salient, we mean evidence which is mentally vivid and seemingly relevant to the situation — as in the case of the young basketball prodigy losing his father, a plot point straight out of a Disney script. We cling to these highly available images, and they cause us to make avoidable errors in judgment, as when a vivid mental image of a harrowing airplane crash keeps us from boarding a plane, even though such a crash is statistically unlikely to occur in many thousands of lifetimes.

***

Daniel Kahneman makes this point wonderfully in his book Thinking, Fast and Slow, in a chapter titled “Linda: Less is More”.

The best-known and most controversial of our experiments involved a fictitious lady called Linda. Amos and I made up the Linda problem to provide conclusive evidence of the role of heuristics in judgment and of their incompatibility with logic. This is how we described Linda:

Linda is thirty-one years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.

[…]

In what we later described as an “increasingly desperate” attempt to eliminate the error, we introduced large groups of people to Linda and asked them this simple question:

Which alternative is more probable?
Linda is a bank teller.
Linda is a bank teller and is active in the feminist movement.

This stark version of the problem made Linda famous in some circles, and it earned us years of controversy. About 85% to 90% of undergraduates at several major universities chose the second option, contrary to logic.

This again demonstrates the power of narrative. We are so willing to take a description and mentally categorize the person it describes — in this case, feminist bank teller is much more salient than simply bank teller — that we will violate probabilities and logic in order to uphold our first conclusion. The extra description makes our mental picture much more vivid, and we conclude that the vivid picture must be the correct one. This error very likely contributes to our tendency to stereotype based on limited sample sizes; a remnant of our primate days which probably served us well in a very different environment.
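
The logic being violated here is the conjunction rule: a conjunction can never be more probable than either of its parts. A minimal simulation, with invented proportions, makes this visible; every feminist bank teller is, by definition, also a bank teller.

```python
# A minimal sketch of the conjunction rule behind the Linda problem.
# The population size and proportions are invented for illustration.
import random

random.seed(42)

population = []
for _ in range(100_000):
    is_teller = random.random() < 0.02       # assume 2% are bank tellers
    is_feminist = random.random() < 0.30     # assume 30% are active feminists
    population.append((is_teller, is_feminist))

tellers = sum(1 for t, f in population if t)
feminist_tellers = sum(1 for t, f in population if t and f)

# However vivid the description, P(A and B) <= P(A), always.
print(f"P(teller)              ~ {tellers / len(population):.4f}")
print(f"P(teller and feminist) ~ {feminist_tellers / len(population):.4f}")
```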

The point is made again by Kahneman as he explains how the corporate performance of companies included in famous business books like In Search of Excellence and Good to Great regressed to the mean after the books were written, a now well-known phenomenon:

You are probably tempted to think of causal explanations for these observations: perhaps the successful firms became complacent, the less successful firms tried harder. But this is the wrong way to think about what happened. The average gap must shrink, because the original gap was due in good part to luck, which contributed both to the success of the top firms and to the lagging performance of the rest. We have already encountered this statistical fact of life: regression to the mean.

Stories of how businesses rise and fall strike a chord with readers by offering what the human mind needs: a simple message of triumph and failure that identifies clear causes and ignores the determinative power of luck and the inevitability of regression. These stories induce and maintain an illusion of understanding, imparting lessons of enduring value to readers who are all too eager to believe them.
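
Kahneman's regression point is easy to reproduce. In this minimal sketch (all numbers invented), each firm's measured performance is stable "skill" plus period-specific "luck"; the firms that top the league table in one period are, on average, both skilled and lucky, so the same firms score noticeably lower the next period.

```python
# A minimal sketch of regression to the mean among "excellent" firms.
import random

random.seed(1)
N = 1000
skill = [random.gauss(0, 1) for _ in range(N)]   # stable over time

def performance():
    return [s + random.gauss(0, 1) for s in skill]  # skill + fresh luck

period1, period2 = performance(), performance()

# Pick the "excellent" firms: top 10% in period 1.
cutoff = sorted(period1, reverse=True)[N // 10]
top = [i for i in range(N) if period1[i] >= cutoff]

avg1 = sum(period1[i] for i in top) / len(top)
avg2 = sum(period2[i] for i in top) / len(top)
print(f"Top firms, period 1:  {avg1:.2f}")   # high: skill AND good luck
print(f"Same firms, period 2: {avg2:.2f}")   # lower: the luck doesn't repeat
```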

What’s the harm, you say? Aren’t we just making our lives a little more interesting with these stories? Very true. Stories serve many wonderful functions: teaching, motivating, inspiring. The problem though is that we too often believe our stories are predictive. We make them more real than they are. The writers of the business case-study books certainly believed that the explanations of success they put forth would be predictive of future success (the title Built to Last certainly implies as much), yet a good many of the companies soon became shells of their former selves – Citigroup, Hewlett Packard, Motorola, and Sony among them.

Is a good corporate culture helpful in generating success? Certainly! As it is with height and NBA success. But it’s far more difficult to determine cause and effect than simply recognizing the no-brainers. Just as many tall, talented, hard-working basketball players have failed to make it, many corporations which met all of the Built to Last criteria have subsequently failed. The road to success was simply more complicated than the reductive narrative of the book would allow. Strategic choices, luck, circumstance, and the contributions of specific individual personalities may have all played a role. It’s hard to say. And unless we recognize the Narrative Fallacy for what it is, a simplified and often incorrect view of past causality, we carry an arrogance about our knowledge of the past and its usefulness in predicting the future.

***

A close cousin of the narrative fallacy is what Charlie Munger refers to as Reason-Respecting Tendency in Poor Charlie’s Almanack. Here’s how Charlie describes the tendency:

There is in man, particularly one in an advanced culture, a natural love of accurate cognition and a joy in its exercise. This accounts for the widespread popularity of crossword puzzles, other puzzles, and bridge and chess columns, as well as all games requiring mental skill.

This tendency has an obvious implication. It makes man especially prone to learn well when a would-be teacher gives correct reasons for what is being taught, instead of simply laying out the desired belief ex-cathedra with no reasons given. Few practices, therefore, are wiser than not only thinking through reasons before giving orders but also communicating these reasons to the recipient of the order.

[…]

Unfortunately, Reason-Respecting Tendency is so strong that even a person’s giving of meaningless or incorrect reasons will increase compliance with his orders and requests. This has been demonstrated in psychology experiments wherein “compliance practitioners” successfully jump to the head of lines in front of copying machines by explaining their reason: “I have to make some copies.” This sort of unfortunate byproduct of Reason-Respecting Tendency is a conditioned reflex, based on a widespread appreciation of the importance of reasons. And, naturally, the practice of laying out various claptrap reasons is much used by commercial and cult “compliance practitioners” to help them get what they don’t deserve.

The deep structure of the mind is such that stories, reasons, and causes (things that point an arrow in the direction of Why) are the ones that stick most deeply. Our need to look for cause-and-effect chains in anything we encounter is simply an extension of our inbuilt pattern-recognition software, which can deepen and broaden as we learn new things. It has been shown, for example, that a master chess player cannot remember the pieces on a randomly assembled chessboard any better than a complete novice. But a master chess player can memorize the pieces on a board which represents an actual game in progress. If you take the pieces away, the master can replicate their positions with very high fidelity, whereas a novice cannot. The difference is that the pattern-recognition software of the chess player has been developed to a high degree through deliberate practice — they have been in a thousand game situations just like it in the past. And while most of us may not be able to memorize chess games, we all have brains that perform the same function in other contexts.

Taleb hits on the same idea in The Black Swan in a powerful way:

Consider a collection of words glued together to constitute a 500-page book. If the words are purely random, picked up from the dictionary in a totally unpredictable way, you will not be able to summarize, transfer, or reduce the dimensions of that book without losing something significant from it. You need 100,000 words to carry the exact message of a random 100,000 words with you on your next trip to Siberia. Now consider the opposite: a book filled with the repetition of the following sentence: “The chairman of [insert here your company’s name] is a lucky fellow who happened to be in the right place at the right time and claims credit for the company’s success, without making a single allowance for luck,” running ten times per page for 500 pages. The entire book can be accurately compressed, as I have just done, into 34 words (out of 100,000); you could reproduce it with total fidelity out of such a kernel.

If we combine the ideas of Reason-Respecting Tendency and the mind’s deep craving for order, the interesting truth is that the best teaching, learning, and storytelling methods — those involving reasons and narrative, on which our brain can store information in a more useful and efficient way — are also the ones that cause us to make some of the worst mistakes. Our craving for order betrays us.
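
Taleb's compression analogy can be checked directly with Python's standard zlib module. A minimal sketch (the sentence is paraphrased from Taleb, and the exact ratios depend on the compression settings): the patterned "book" collapses to a tiny kernel, while random text barely compresses.

```python
# A minimal sketch of Taleb's compression analogy: patterned, narrative-like
# text compresses to almost nothing; random text does not.
import random
import string
import zlib

random.seed(0)

sentence = ("The chairman is a lucky fellow who happened to be in the "
            "right place at the right time and claims credit for the "
            "company's success. ")
narrative = (sentence * 400).encode()            # a highly patterned "book"

alphabet = string.ascii_lowercase + " "
noise = "".join(random.choices(alphabet, k=len(narrative))).encode()

for name, text in [("narrative", narrative), ("random", noise)]:
    ratio = len(zlib.compress(text, 9)) / len(text)
    print(f"{name:>9}: {len(text)} bytes -> {ratio:.1%} of original size")
```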

***

So, how do we help ourselves out of this quagmire?

The first step, clearly, is to become aware of the problem. Once we understand our brain’s craving for narrative, we begin to see narratives every day, all the time, especially as we consume news. The key question we must ask ourselves is “Of the population of X subject to the same initial conditions, how many turned out similarly to Y? What hard-to-measure causes might have played a role?” This is what we did when we unraveled Steven’s narrative above. How many kids just like him, with the same stated conditions — tall, skilled, good parents, good coaches, etc. — achieved the same result? We don’t have to run an empirical test to understand that our narrative sense is providing some misleading cause-and-effect. Common sense tells us there are likely to be many more failures than successes in that pool, leading us to understand that there must have been other unrealized factors at play; luck being a crucial one. Some identified factors were necessary but not sufficient — height, talent, and coaching among them — and some factors might have been negligible or even negative. (Would it have helped or hurt Steven’s NBA chances if he had not lost his father? Impossible to say.)
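
The discipline here is just base-rate arithmetic. A minimal sketch with invented counts shows why the profile cannot be the whole cause:

```python
# Base-rate check: "Of everyone subject to the same initial conditions,
# how many turned out similarly?" Both counts are invented for illustration.

profile_matches = 10_000   # hypothetical: tall, skilled, coached, driven teens
reached_nba = 12           # hypothetical: how many of them made the NBA

p = reached_nba / profile_matches
print(f"P(NBA | 'success profile') ~ {p:.2%}")   # ~0.12%

# If the stated causes fully explained the outcome, this number would be
# close to 100%. A tiny conditional probability means the named causes are
# at best necessary, not sufficient; something else (luck, timing,
# opportunity) does most of the selecting.
```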

Modern scientific thought is built on just this sort of edifice to solve the cause-and-effect problem. A thousand years ago, much of what we thought we knew was based on naïve backward-looking causality. (Steve put leeches on his skin and then survived the plague = leeches cure the plague.) Only when we learned to take the concept of [leeches = cure for plague] and call it a hypothesis did we begin to understand the physical world. Only by downgrading our naïve assumptions to the status of a hypothesis, which needs to be tested with rigorous experiment – give 100 plague victims leeches, let another 100 go leech-less, and tally the results – did we find a method to parse actual cause and effect.
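
A minimal simulation of that experiment, with an invented survival rate and a treatment that does nothing, shows what the tally looks like when the naive causal story is false:

```python
# A minimal sketch of the controlled experiment described above: treat one
# group, leave a control group alone, and tally survival. The survival
# rate is invented, and the "treatment" here has no real effect.
import random

random.seed(7)
BASE_SURVIVAL = 0.35                      # hypothetical plague survival rate

def trial(n):
    return sum(random.random() < BASE_SURVIVAL for _ in range(n))

treated = trial(100)                      # leeches applied
control = trial(100)                      # no leeches

print(f"Survived with leeches:    {treated}/100")
print(f"Survived without leeches: {control}/100")
# Similar tallies are evidence against the naive causal story; a large,
# repeatable gap between the groups would be evidence for it.
```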

And it is just as relevant to ask ourselves the inverse of the question posed above: “Of the population not subject to X, how many still ended up with the results of Y?” This is where we ask: which basketball players had intact families, easy childhoods, and, yet ended up in the NBA anyway? Which corporations lacked the traits described in Good to Great but achieved Greatness anyway? When we are willing to ask both types of questions and try our best to answer them, we can start to see which elements are simply part of the story rather than causal contributors.

A second way we can circumvent narrative is to simply avoid or reinterpret sources of information most subject to the bias. Turn the TV news off. Stop reading so many newspapers. Be skeptical of biographies, memoirs, and personal histories. Be careful of writers who are incredibly talented at painting a narrative, but claim to be writing facts. (I love Malcolm Gladwell’s books, but he would be an excellent example here.) We learned above that narrative is so powerful it can overcome basic logic, so we must be rigorous to some extent about what kinds of information we allow to pass through our filters. Strong narrative flow is exactly why we enjoy a fictional story, but when we enter the non-fiction world of understanding and decision making, the power of narrative is not always on our side. We want to use narrative to our advantage — to teach ourselves or others useful concepts — but be wary of where it can mislead.

One way to assess how narrative affects your decision-making is to start keeping a journal of your decisions or predictions in any arena that you consider important. It’s important to note the why behind your prediction or decision. If you’re going to invest in your cousin’s new social media startup — sure to succeed — explain to yourself exactly why you think it will work. Be detailed. Whether the venture succeeds or fails, you will now have a forward-looking document to refer to later, so that when you have the benefit of hindsight, you can evaluate your original assumptions instead of finding convenient reasons to justify the success or failure. The more you’re able to do this exercise, the more you’ll come to understand how complicated cause-and-effect factors are when we look ahead rather than behind.

Munger also gives us a few prescriptions in Poor Charlie’s Almanack after describing the Availability Bias, another kissing cousin of the Narrative Fallacy and the Reason-Respecting Tendency. We are wise to take heed of them as we approach our fight with narrative:

The main antidote to miscues from the Availability-Misweighing Tendency often involves procedures, including use of checklists, which are almost always helpful.

Another antidote is to behave somewhat like Darwin did when he emphasized disconfirming evidence. What should be done is to especially emphasize factors that don’t produce reams of easily available numbers, instead of drifting mostly or entirely into considering factors that do produce such numbers. Still another antidote is to find and hire some skeptical, articulate people with far-reaching minds to act as advocates for notions that are opposite to the incumbent notions. [Ed: Or simply ask a smart friend to do the same.]

One consequence of this tendency is that extra-vivid evidence, being so memorable and thus more available in cognition, should often consciously be underweighed, while less vivid evidence should be overweighed.

Munger's prescriptions are almost certainly as applicable to solving the narrative problem as the closely related Availability problem, especially the issue we discussed earlier of vivid evidence.

The final prescription comes from Taleb himself, the progenitor of the idea of our problem with narrative: when searching for real truth, favor experimentation over storytelling (data over anecdote), favor experience over history (which can be cherry-picked), and favor clinical knowledge over grand theories. Figure out what you know and what’s a guess, and become humble about your understanding of the past.

This recognition and respect of the power of our minds to invent and love stories can help us reduce our misunderstanding of the world.

The 2015 Farnam Street Members Book List

Today’s book list is based on recommendations by Farnam Street Members on Slack over the last few months. If you’re not familiar with it, our community on Slack is a discussion area for members, and one of our ongoing discussions is book recommendations.

We’ve compiled and organized eleven of their favorite choices, especially ones we haven’t seen recommended elsewhere. Enjoy!

Three Men in a Boat by Jerome K. Jerome

“The book was initially intended to be a serious travel guide, with accounts of local history along the route, but the humorous elements took over to the point where the serious and somewhat sentimental passages seem a distraction to the comic novel. One of the most praised things about Three Men in a Boat is how undated it appears to modern readers – the jokes seem fresh and witty even today.”

The Happiness Hypothesis by Jonathan Haidt

“Haidt sifts Eastern and Western religious and philosophical traditions for other nuggets of wisdom to substantiate—and sometimes critique—with the findings of neurology and cognitive psychology.”

Black Box Thinking: Why Most People Never Learn from Their Mistakes, But Some Do by Matthew Syed

“Syed draws on a wide range of sources—from anthropology and psychology to history and complexity theory—to explore the subtle but predictable patterns of human error and our defensive responses to error. He also shares fascinating stories of individuals and organizations that have successfully embraced a black box approach to improvement, such as David Beckham, the Mercedes F1 team, and Dropbox.” (Pair with Mistakes were Made (But not by Me) by Carol Tavris to see how we rationalize our own mistakes.)

Gut Feelings: The Intelligence of the Unconscious by Gerd Gigerenzer

“Gigerenzer's theories about the usefulness of mental shortcuts were a small but crucial element of Malcolm Gladwell's bestseller Blink, and that attention has provided the psychologist, who is the director of the Max Planck Institute for Human Development in Berlin, the opportunity to recast his academic research for a general audience. The key concept—rules of thumb serve us as effectively as complex analytic processes, if not more so—is simple to grasp.” (Pair with Thinking, Fast and Slow by Daniel Kahneman for a different approach.)

The Means of Ascent by Robert Caro

The second book in Robert Caro's Lyndon Johnson series. This one covers Johnson's service in WWII, the building of his fortune, and his 1948 election to the Senate, which Caro concludes Johnson stole. Charlie Munger once commented that LBJ was important to study, simply because he never told the truth when a lie would do better. (Pair with the other books in the series.)

The Effective Engineer: How to Leverage Your Efforts In Software Engineering to Make a Disproportionate and Meaningful Impact by Edmond Lau

“The most effective engineers — the ones who have risen to become distinguished engineers and leaders at their companies — can produce 10 times the impact of other engineers, but they're not working 10 times the hours.” Learn how a great engineer thinks, even if you’re not one yourself.

The Lunar Men: Five Friends Whose Curiosity Changed the World by Jenny Uglow

“In the 1760s a group of amateur experimenters met and made friends in the English Midlands. Most came from humble families, all lived far from the center of things, but they were young and their optimism was boundless: together they would change the world. Among them were the ambitious toymaker Matthew Boulton and his partner James Watt, of steam-engine fame; the potter Josiah Wedgwood; the larger-than-life Erasmus Darwin, physician, poet, inventor, and theorist of evolution (a forerunner of his grandson Charles). Later came Joseph Priestley, discoverer of oxygen and fighting radical.”

Fermat's Enigma: The Epic Quest to Solve the World's Greatest Mathematical Problem by Simon Singh

“x^n + y^n = z^n, where n represents 3, 4, 5, …: no solution. ‘I have discovered a truly marvelous demonstration of this proposition which this margin is too narrow to contain.’ With these words, the seventeenth-century French mathematician Pierre de Fermat threw down the gauntlet to future generations.” (Pair with Number: The Language of Science by Tobias Dantzig, about the development of mathematics over time by human culture.)

Things Hidden Since the Foundation of the World by René Girard

“An astonishing work of cultural criticism, this book is widely recognized as a brilliant and devastating challenge to conventional views of literature, anthropology, religion, and psychoanalysis.”

Deep Survival: Who Lives, Who Dies, and Why by Laurence Gonzales 

“Survivors, whether they're jet pilots landing on the deck of an aircraft carrier or boatbuilders adrift on a raft in the middle of the Atlantic Ocean, share certain traits: training, experience, stoicism and a capacity for their logical neocortex (the brain's thinking part) to override the primitive amygdala portion of their brains. Although there's no surefire way to become a survivor, Gonzales does share some rules for adventure gleaned from the survivors themselves: stay calm, be decisive and don't give up.”

The Presentation of Self in Everyday Life by Erving Goffman

Written in the 1950s, an interesting look at how we present ourselves to others in social settings, using analogies from dramatic theatre. Reminds us of Shakespeare: “All the world’s a stage.”

The Map Is Not the Territory

Map and Territory

“(History) offers a ridiculous spectacle of a fragment expounding the whole.”
— Will Durant in Our Oriental Heritage

“All models are wrong but some are useful.”
— George Box

***

The Relationship Between Map and Territory

“That’s another thing we’ve learned from your Nation,” said Mein Herr, “map-making. But we’ve carried it much further than you. What do you consider the largest map that would be really useful?”

“About six inches to the mile.”

“Only six inches!” exclaimed Mein Herr. “We very soon got to six yards to the mile. Then we tried a hundred yards to the mile. And then came the grandest idea of all! We actually made a map of the country, on the scale of a mile to the mile!”

“Have you used it much?” I enquired.

“It has never been spread out, yet,” said Mein Herr: “the farmers objected: they said it would cover the whole country, and shut out the sunlight! So we now use the country itself, as its own map, and I assure you it does nearly as well.”
— Lewis Carroll, Sylvie and Bruno Concluded

In 1931, in New Orleans, Louisiana, mathematician Alfred Korzybski presented a paper on mathematical semantics. To the non-technical reader, most of the paper reads like an abstruse argument on the relationship of mathematics to human language, and of both to physical reality. Important stuff certainly, but not necessarily immediately useful for the layperson.

However, in his string of arguments on the structure of language, Korzybski introduced and popularized the idea that the map is not the territory. In other words, the description of the thing is not the thing itself. The model is not reality. The abstraction is not the abstracted. This has enormous practical consequences.

In Korzybski’s words:

A.) A map may have a structure similar or dissimilar to the structure of the territory.

B.) Two similar structures have similar ‘logical’ characteristics. Thus, if in a correct map, Dresden is given as between Paris and Warsaw, a similar relation is found in the actual territory.

C.) A map is not the actual territory.

D.) An ideal map would contain the map of the map, the map of the map of the map, etc., endlessly…We may call this characteristic self-reflexiveness.

Maps are necessary, but flawed. (By maps, we mean any abstraction of reality, including descriptions, theories, models, etc.) The problem with a map is not simply that it is an abstraction; we need abstraction. Lewis Carroll made that clear by having Mein Herr describe a map with the scale of one mile to one mile. Such a map would not have the problems that maps have, nor would it be helpful in any way.

(See Borges for another take.)

To solve this problem, the mind creates maps of reality in order to understand it, because the only way we can process the complexity of reality is through abstraction. But frequently, we don’t understand our maps or their limits. In fact, we are so reliant on abstraction that we will frequently use an incorrect model simply because we feel any model is preferable to no model. (Reminding one of the drunk looking for his keys under the streetlight because “That’s where the light is!”)

Even the best and most useful maps suffer from limitations, and Korzybski gives us a few to explore: (A.) The map could be incorrect without us realizing it; (B.) The map is, by necessity, a reduction of the actual thing, a process in which you lose certain important information; and (C.) A map needs interpretation, a process that can cause major errors. (The only way to truly solve the last would be an endless chain of maps-of-maps, which he called self-reflexiveness.)

With the aid of modern psychology, we also see another issue: the human brain takes great leaps and shortcuts in order to make sense of its surroundings. As Charlie Munger has pointed out, a good idea and the human mind act something like the sperm and the egg — after the first good idea gets in, the door closes. This makes the map-territory problem a close cousin of man-with-a-hammer tendency.

This tendency is, obviously, problematic in our effort to simplify reality. When we see a powerful model work well, we tend to over-apply it, using it in non-analogous situations. We have trouble delimiting its usefulness, which causes errors.

Let’s check out an example.

***

By most accounts, Ron Johnson was one of the most successful and sought-after retail executives by the summer of 2011. Not only was he handpicked by Steve Jobs to build the Apple Stores, a venture which had itself come under major scrutiny – one retort printed in Bloomberg magazine: “I give them two years before they're turning out the lights on a very painful and expensive mistake” – but he had been credited with playing a major role in turning Target from a K-Mart look-alike into the trendy-but-cheap Tar-zhey during the late 1990s and early 2000s.

Johnson's success at Apple was not immediate, but it was undeniable. By 2011, Apple stores were by far the most productive in the world on a per-square-foot basis, and had become the envy of the retail world. Their sales figures left Tiffany’s in the dust. The gleaming glass cube on Fifth Avenue became a more popular tourist attraction than the Statue of Liberty. It was a lollapalooza, something beyond ordinary success. And Johnson had led the charge.

With that success, in 2011 Johnson was hired by Bill Ackman, Steven Roth, and other luminaries of the financial world to turn around the dowdy old department store chain JC Penney. The situation of the department store was dire: Between 1992 and 2011, the retail market share held by department stores had declined from 57% to 31%.

Their core position was a no-brainer, though. JC Penney had immensely valuable real estate, anchoring malls across the country. Johnson argued that their physical mall position was valuable if for no other reason than that people often parked next to them and walked through them to get to the center of the mall. Foot traffic was a given. Because of contracts signed in the ’50s, ’60s, and ’70s, the heyday of the mall-building era, rent was also cheap, another major competitive advantage. And unlike some struggling retailers, JC Penney was making (some) money. There was cash in the register to help fund a transformation.

The idea was to take the best ideas from his experience at Apple (great customer service, consistent pricing with no markdowns or markups, immaculate displays, world-class products) and apply them to the department store. Johnson planned to turn the stores into little malls-within-malls. He went as far as comparing the ever-rotating stores-within-a-store to Apple’s “apps.” Such a model would keep the store constantly fresh and avoid the creeping staleness of retail.

Johnson pitched his idea to shareholders in a series of trendy New York City meetings reminiscent of Steve Jobs’ annual “But wait, there’s more!” product launches at Apple. He was persuasive: JC Penney’s stock price went from $26 in the summer of 2011 to $42 in early 2012 on the strength of the pitch.

The idea failed almost immediately. His new pricing model (eliminating discounting) was a flop. The coupon-hunters rebelled. Much of his new product was deemed too trendy. His new store model was wildly expensive for a middling department store chain – including operating losses purposefully endured, he’d spent several billion dollars trying to effect the physical transformation of the stores. JC Penney customers had no idea what was going on, and by 2013, Johnson was sacked. The stock price sank into the single digits, where it remains two years later.

What went wrong in the quest to build America’s Favorite Store? It turned out that Johnson was using a map of Tulsa to navigate Tuscaloosa. Apple’s products, customers, and history had far too little in common with JC Penney’s. Apple had a rabid, young, affluent fan-base before it built stores; JC Penney was not associated with youth or affluence. Apple had shiny products, and needed a shiny store; JC Penney was known for its affordable sweaters. Apple had never relied on discounting in the first place; JC Penney was taking away discounts given prior, triggering a massive deprival super-reaction.

In other words, the old map was not very useful. Even his success at Target, which seems like a closer analogue, was misleading in the context of JC Penney. Target had made small, incremental changes over many years, to which Johnson had made a meaningful contribution. JC Penney was attempting to reinvent the concept of the department store in a year or two, leaving behind the core customer in an attempt to gain new ones. This was a much different proposition. (Another thing holding the company back was simply its base odds: Can you name a retailer of great significance that has lost its position in the world and come back?)

The main issue was not that Johnson was incompetent. He wasn’t. He wouldn’t have gotten the job if he was. He was extremely competent. But it was exactly his competence and past success that got him into trouble. He was like a great swimmer who tried to run a raging rapid: the model he had used successfully in the past, the map that had navigated a lot of difficult terrain, was not the map he needed anymore. He had an excellent theory about retailing that applied in some circumstances, but not in others. The terrain had changed, but the old idea stuck.

***

One person who well understands this problem of the map and the territory is Nassim Taleb, author of the Incerto series – Antifragile, The Black Swan, Fooled by Randomness, and The Bed of Procrustes.

Taleb has been vocal about the misuse of models for many years, but the earliest and most vivid criticism I can recall is his firm rejection of a financial model called Value-at-Risk, or VAR. The model, used in the banking community, is supposed to help manage risk by providing a maximum potential loss within a given confidence interval. In other words, it purports to allow risk managers to say that, with 95%, 99%, or 99.9% confidence, the firm will not lose more than $X million in a given day. The higher the interval, the less accurate the analysis becomes. It might be possible to say that the firm has $100 million at risk at any time at a 99% confidence interval, but given the statistical properties of markets, a move to 99.9% confidence might mean the risk manager has to state the firm has $1 billion at risk. 99.99% might mean $10 billion. As rarer and rarer events are included in the distribution, the analysis gets less useful. So, by necessity, the “tails” are cut off somewhere and the analysis is deemed acceptable.
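
To see how this map gets drawn, here is a minimal sketch of the historical flavor of the calculation, with simulated Gaussian returns standing in for market data (an assumption that, as discussed below, is itself part of the problem). Note where the high-confidence numbers come from: the handful of worst days that happen to be in the sample.

```python
# A minimal sketch of historical Value-at-Risk and how the estimate
# degrades in the tail. Returns here are simulated; a real desk would use
# market data, but the instability at high confidence is the same.
import random

random.seed(3)
returns = [random.gauss(0.0005, 0.01) for _ in range(2520)]  # ~10 years, daily

def historical_var(returns, confidence):
    """Loss threshold not exceeded on `confidence` of past days."""
    losses = sorted(-r for r in returns)             # losses as positives
    idx = int(confidence * len(losses))
    return losses[min(idx, len(losses) - 1)]

for c in (0.95, 0.99, 0.999):
    # At 99.9%, the estimate rests on the 2-3 worst days in the sample,
    # and on zero days from crises that aren't in the data yet.
    print(f"{c:.1%} one-day VaR: {historical_var(returns, c):.2%}")
```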

Elaborate statistical models are built to justify and use the VAR theory. On its face, it seems like a useful and powerful idea; if you know how much you can lose at any time, you can manage risk to the decimal. You can tell your board of directors and shareholders, with a straight face, that you’ve got your eye on the till.

The problem, in Nassim’s words, is that:

A model might show you some risks, but not the risks of using it. Moreover, models are built on a finite set of parameters, while reality affords us infinite sources of risks.

In order to come up with the VAR figure, the risk manager must take historical data and assume a statistical distribution in order to predict the future. For example, if we could take 100 million human beings and analyze their height and weight, we could then predict the distribution of heights and weights on a different 100 million, and there would be a microscopically small probability that we’d be wrong. That’s because we have a huge sample size and we are analyzing something with very small and predictable deviations from the average.

But finance does not follow this kind of distribution. There’s no such predictability. As Nassim has argued, the “tails” are fat in this domain, and the rarest, most unpredictable events have the largest consequences. Let’s say you deem a highly threatening event (for example, a 90% crash in the S&P 500) to have a 1 in 10,000 chance of occurring in a given year, and your historical data set only has 300 years of data. How can you accurately state the probability of that event? You would need far more data.
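
The arithmetic behind that claim is worth a moment. A short sketch:

```python
# The data-scarcity point in numbers. Suppose a crash has a true 1-in-10,000
# chance per year; with only 300 years of data, the most likely observation
# is zero occurrences, so the record cannot pin the rate down.
p = 1 / 10_000
years = 300

prob_never_seen = (1 - p) ** years
print(f"P(no occurrence in {years} years) = {prob_never_seen:.1%}")  # ~97%

# A far more dangerous 1-in-500 event produces the same clean record more
# than half the time, so 300 quiet years can't distinguish the two rates.
print(f"1-in-500 event, same clean record: {(1 - 1/500) ** years:.1%}")
```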

Thus, financial events deemed to be 5, or 6, or 7 standard deviations from the norm tend to happen with a certain regularity that nowhere near matches their supposed statistical probability.  Financial markets have no biological reality to tie them down: We can say with a useful amount of confidence that an elephant will not wake up as a monkey, but we can’t say anything with absolute confidence in an Extremistan arena.

We see several issues with VAR as a “map,” then. The first is that the model is itself a severe abstraction of reality, relying on historical data to predict the future. (As all financial models must, to a certain extent.) VAR does not say “The risk of losing X dollars is Y, within a confidence of Z.” (Although risk managers treat it that way.) What VAR actually says is “the risk of losing X dollars is Y, based on the given parameters.” The problem is obvious even to the non-technician: The future is a strange and foreign place that we do not understand. Deviations of the past may not be the deviations of the future. Just because municipal bonds have never traded at such-and-such a spread to U.S. Treasury bonds does not mean that they won’t in the future. They just haven’t yet. Frequently, the models are blind to this fact.

In fact, one of Nassim’s most trenchant points is that on the day before whatever “worst case” event happened in the past, you would not have been using the coming “worst case” as your worst case, because it wouldn’t have happened yet.

Here’s an easy illustration. On October 19, 1987, the stock market dropped by 22.61%, or 508 points on the Dow Jones Industrial Average. In percentage terms, it was then and remains the worst one-day market drop in U.S. history. It was dubbed “Black Monday.” (Financial writers sometimes lack creativity — there are several other “Black Mondays” in history.) But here we see Nassim’s point: On October 18, 1987, what would the models have used as the worst possible case? We don’t know exactly, but we do know the previous worst case was a 12.82% drop, which happened on October 28, 1929. A 22.61% drop would have been considered so many standard deviations from the average as to be near impossible.
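
Just how "near impossible"? A minimal sketch, assuming a typical daily standard deviation of roughly 1% (my assumption for illustration, not a figure from the text), shows that the Gaussian map simply cannot contain this territory:

```python
# How impossible was Black Monday under Gaussian assumptions? With a daily
# standard deviation of ~1% (an assumed, era-typical figure), a -22.61% day
# is a roughly 22-sigma event.
from math import erfc, sqrt

daily_sigma = 0.01        # assumed typical daily volatility
crash = -0.2261           # October 19, 1987

z = abs(crash) / daily_sigma
p = 0.5 * erfc(z / sqrt(2))     # one-sided Gaussian tail probability
print(f"z-score: {z:.1f} standard deviations")
print(f"Gaussian probability of such a day: {p:.2e}")  # ~1e-113

# Even at one trading day per second for billions of ages of the universe,
# you would not expect to see this once. The map, not the market, failed.
```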

But the tails are very fat in finance — improbable and consequential events seem to happen far more often than they should based on naive statistics. There is also a severe but often unrecognized recursiveness problem, which is that the models themselves influence the outcome they are trying to predict. (To understand this more fully, check out our post on Complex Adaptive Systems.)

A second problem with VAR is that even if we had a vastly more robust dataset, a statistical “confidence interval” does not do the job of financial risk management. Says Taleb:

There is an internal contradiction between measuring risk (i.e. standard deviation) and using a tool [VAR] with a higher standard error than that of the measure itself.

I find that those professional risk managers whom I heard recommend a “guarded” use of the VAR on grounds that it “generally works” or “it works on average” do not share my definition of risk management. The risk management objective function is survival, not profits and losses. A trader, according to the Chicago legend, “made 8 million in eight years and lost 80 million in eight minutes.” According to the same standards, he would be, “in general,” and “on average,” a good risk manager.

This is like a GPS system that shows you where you are at all times, but doesn’t include cliffs. You’d be perfectly happy with your GPS until you drove off a mountain.

It was this type of naive trust of models that got a lot of people in trouble in the recent mortgage crisis. Backward-looking, trend-fitting models, the most common maps of the financial territory, failed by describing a territory that was only a mirage: A world where home prices only went up. (Lewis Carroll would have approved.)

This was navigating Tulsa with a map of Tatooine.

***

The logical response to all this is, “So what?” If our maps fail us, how do we operate in an uncertain world? This is its own discussion for another time, and Taleb has gone to great pains to try and address the concern. Smart minds disagree on the solution. But one obvious key must be building systems that are robust to model error.

The practical problem with a model like VAR is that the banks use it to optimize. In other words, they take on as much exposure as the model deems OK. And when banks veer into managing to a highly detailed, highly confident model rather than to informed common sense, which happens frequently, they tend to build up hidden risks that will un-hide themselves in time.

If one were to instead assume that there are no precisely accurate maps of the financial territory, one would have to fall back on much simpler heuristics. (If you assume detailed statistical models of the future will fail you, you don’t use them.)

In short, you would do what Warren Buffett has done with Berkshire Hathaway. Mr. Buffett, to our knowledge, has never used a computer model in his life, yet manages an institution half a trillion dollars in size by assets, a large portion of which are financial assets. How?

The approach requires not only assuming a future worst case far more severe than the past, but also building an institution with a robust set of backup systems and margins of safety operating at multiple levels. Extra cash, rather than extra leverage. Taking great pains to make sure the tails can’t kill you. Instead of optimizing to a model, accepting the limits of your clairvoyance.

The trade-off, of course, is that short-run rewards are much smaller than those available under more optimized models. Speaking of this, Charlie Munger has noted:

Berkshire’s past record has been almost ridiculous. If Berkshire had used even half the leverage of, say, Rupert Murdoch, it would be five times its current size.

For Berkshire at least, the trade-off seems to have been worth it.

***

The salient point then is that in our march to simplify reality with useful models, of which Farnam Street is an advocate, we too often confuse the models with reality. For many people, the model creates its own reality. It is as if the spreadsheet comes to life. We forget that reality is a lot messier. The map isn’t the territory. The theory isn’t what it describes; it’s simply a way we choose to interpret a certain set of information. Maps can also be wrong, but even if they are essentially correct, they are an abstraction, and abstraction means that information is lost to save space. (Recall the mile-to-mile scale map.)

How do we do better? This is fodder for another post, but the first step is to realize that you do not understand a model, map, or reduction unless you understand and respect its limitations. We must always be vigilant by stepping back to understand the context in which a map is useful, and where the cliffs might lie. Until we do that, we are the turkey.