
Choosing your Choice Architect(ure)

“Nothing will ever be attempted
if all possible objections must first be overcome.”

— Samuel Johnson

***

In their book Nudge, Richard Thaler and Cass Sunstein coin the terms ‘Choice Architecture’ and ‘Choice Architect’. For them, if you have the ability to influence the choices other people make, you are a choice architect.

Considering the number of interactions we have every day, it would be quite easy to argue that we are all Choice Architects at some point. But the converse is also true: we are all constantly wandering around inside someone else's Choice Architecture.

Let’s take a look at a few of the principles of good choice architecture, so we can get a better idea of when someone is trying to nudge us.

We can then weigh this information when making decisions.

Defaults

Thaler and Sunstein start with a discussion on “defaults” that are commonly offered to us:

For reasons we have discussed, many people will take whatever option requires the least effort, or the path of least resistance. Recall the discussion of inertia, status quo bias, and the ‘yeah, whatever’ heuristic. All these forces imply that if, for a given choice, there is a default option — an option that will obtain if the chooser does nothing — then we can expect a large number of people to end up with that option, whether or not it is good for them. And as we have also stressed, these behavioral tendencies toward doing nothing will be reinforced if the default option comes with some implicit or explicit suggestion that it represents the normal or even the recommended course of action.

When making decisions, people will often take the option that requires the least effort, the path of least resistance. This makes sense: it's not just a matter of laziness; we also only have so many hours in a day. Unless you feel particularly strongly about it, if putting little to no effort towards something leads you forward (or at least doesn't noticeably kick you backwards), this is what you are likely to do. Loss aversion plays a role as well: if we feel the consequences of making a poor choice are high, we will simply decide to do nothing.

Inertia is another reason: If the ship is currently sailing forward, it can often take a lot of time and effort just to slightly change course.

You have likely seen many examples of inertia at play in your work environment, and this isn't necessarily a bad thing.

Sometimes we need that ship to just steadily move forward. The important bit is to realize when this is factoring into your decisions, or more specifically, when this knowledge is being used to nudge you into making specific choices.

Let’s think about some of your monthly recurring bills. While you might not be reading that magazine or going to the gym, you’re still paying for the ability to use that good or service. If you weren’t being auto-renewed monthly, what is the chance that you would put the effort into renewing that subscription or membership? Much lower, right? Publishers and gym owners know this, and they know you don't want to go through the hassle of cancelling either, so they make that difficult, too. (They understand well our tendency to want to travel the path of least resistance and avoid conflict.)

This is also where they will imply that the default option is the recommended course of action. It sounds like this:

“We’re sorry to hear you no longer want the magazine, Mr. Smith. You know, more than half of the Fortune 500 companies have a monthly subscription to magazine X, but we understand if it’s not something you’d like to do at the moment.”

or

“Mr. Smith, we are sorry to hear that you want to cancel your membership at GymX. We understand if you can’t make your health a priority at this point, but we’d love to see you back sometime soon. We see this all the time; these days everyone is so busy. But I’m happy to say we are noticing a shift where people are starting to make time for themselves, especially in your demographic…”

(Just cancel them. You’ll feel better. We promise.)

The Structure of Complex Choices

We live in a world of reviews. Product reviews, corporate reviews, movie reviews… When was the last time you bought a phone or a car before checking the reviews? When was the last time that you hired an employee without checking out their references? 

Thaler and Sunstein call this Collaborative Filtering and explain it as follows:

You use the judgements of other people who share your tastes to filter through the vast number of books or movies available in order to increase the likelihood of picking one you like. Collaborative filtering is an effort to solve a problem of choice architecture. If you know what people like you tend to like, you might well be comfortable in selecting products you don’t know, because people like you tend to like them. For many of us, collaborative filtering is making difficult choices easier.

While collaborative filtering does a great job of making difficult choices easier, we have to remember that companies know we rely on it and will try to manipulate it. We just have to look at the information critically, compare multiple sources, and take some time to review the reviewers.
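
To make the mechanism concrete, here is a minimal sketch of user-based collaborative filtering in Python. Everything in it (the users, the movies, the ratings) is hypothetical, and real recommender systems are far more sophisticated; the point is only the core idea Thaler and Sunstein describe: weight other people's judgments by how closely their tastes match yours.

```python
from math import sqrt

# Hypothetical ratings: user -> {item: score out of 5}.
ratings = {
    "you":   {"Movie A": 5, "Movie B": 1, "Movie C": 4},
    "alice": {"Movie A": 5, "Movie B": 2, "Movie C": 4, "Movie D": 5},
    "bob":   {"Movie A": 1, "Movie B": 5, "Movie C": 2, "Movie D": 1},
}

def similarity(a, b):
    """Cosine similarity over the items both users have rated."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[i] * b[i] for i in common)
    norm_a = sqrt(sum(a[i] ** 2 for i in common))
    norm_b = sqrt(sum(b[i] ** 2 for i in common))
    return dot / (norm_a * norm_b)

def predict(user, item):
    """Average the other users' scores for `item`, weighted by taste similarity."""
    weighted = total = 0.0
    for other, theirs in ratings.items():
        if other == user or item not in theirs:
            continue
        w = similarity(ratings[user], theirs)
        weighted += w * theirs[item]
        total += w
    return weighted / total if total else None

# "you" haven't seen Movie D; alice (similar tastes) loved it and bob
# (opposite tastes) hated it, so the prediction lands much closer to alice's 5.
print(round(predict("you", "Movie D"), 2))  # ~3.64
```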

These techniques are useful for decisions of a certain scale and complexity: when the alternatives are understood and few enough in number to compare directly. Once we pass that point, however, we need additional tools to make the right decision.

One strategy to use is what Amos Tversky (1972) called ‘elimination by aspects.’ Someone using this strategy first decides what aspect is most important (say, commuting distance), establishes a cutoff level (say, no more than a thirty-minute commute), then eliminates all the alternatives that do not come up to this standard. The process is repeated, attribute by attribute (no more than $1,500 per month; at least two bedrooms; dogs permitted), until either a choice is made or the set is narrowed down enough to switch over to a compensatory evaluation of the ‘finalists.’

This is a very useful tool if you have a good idea of which attributes are of most value to you.
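
As a sketch of how this plays out in code: the listings below are invented, but the cutoffs are the ones from the passage above (a thirty-minute commute, $1,500 per month, two bedrooms, dogs permitted), applied one aspect at a time in order of importance.

```python
# Hypothetical apartment listings; every value here is invented for illustration.
apartments = [
    {"name": "Elm St",  "commute_min": 25, "rent": 1400, "bedrooms": 2, "dogs_ok": True},
    {"name": "Oak Ave", "commute_min": 45, "rent": 1200, "bedrooms": 3, "dogs_ok": True},
    {"name": "Pine Rd", "commute_min": 20, "rent": 1600, "bedrooms": 2, "dogs_ok": False},
    {"name": "Main St", "commute_min": 28, "rent": 1450, "bedrooms": 2, "dogs_ok": True},
]

# Aspects in order of importance, each with a pass/fail cutoff.
aspects = [
    ("commute <= 30 min",   lambda a: a["commute_min"] <= 30),
    ("rent <= $1,500/mo",   lambda a: a["rent"] <= 1500),
    ("at least 2 bedrooms", lambda a: a["bedrooms"] >= 2),
    ("dogs permitted",      lambda a: a["dogs_ok"]),
]

candidates = apartments
for label, passes in aspects:
    candidates = [a for a in candidates if passes(a)]
    print(f"After '{label}': {[a['name'] for a in candidates]}")
    if len(candidates) <= 2:
        break  # few enough finalists for a compensatory comparison

print("Finalists:", [a["name"] for a in candidates])
```

Note what the SUV example below exploits: if every seller makes sure to clear your cutoffs on every aspect, the strategy stops narrowing anything down.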

When using these techniques, we have to be mindful that the companies trying to sell us goods have spent a lot of time and money figuring out which attributes are important to us as well.

For example, if you were to shop for an SUV, you would notice that they all seem to share a common set of attributes now (engine options, towing options, seating options, storage options). Manufacturers are trying to nudge you not to eliminate them from your list. This forces you to do further rounds of research, or better yet (for them), to walk into dealerships, where salespeople will try to inflate the importance of those attributes (which is what they do best).

They also give features new names as a means to differentiate themselves and get onto your list. What do you mean, our competitors don't have FLEXfuel?

Incentives

Incentives are so ubiquitous in our lives that they're very easy to overlook. Unfortunately, this can lead us to make poor decisions.

Thaler and Sunstein believe this is tied to how salient the incentive is.

The most important modification that must be made to a standard analysis of incentives is salience. Do the choosers actually notice the incentives they face? In free markets, the answer is usually yes, but in important cases the answer is no.

Consider the example of members of an urban family deciding whether to buy a car. Suppose their choices are to take taxis and public transportation or to spend ten thousand dollars to buy a used car, which they can park on the street in front of their home. The only salient costs of owning this car will be the weekly stops at the gas station, occasional repair bills, and a yearly insurance bill. The opportunity cost of the ten thousand dollars is likely to be neglected. (In other words, once they purchase the car, they tend to forget about the ten thousand dollars and stop treating it as money that could have been spent on something else.) In contrast, every time the family uses a taxi the cost will be in their face, with the meter clicking every few blocks. So behavioral analysis of the incentives of car ownership will predict that people will underweight the opportunity costs of car ownership, and possibly other less salient aspects such as depreciation, and may overweight the very salient costs of using a taxi.

The problems here are relatable and easily solved: if the family above had written down all the numbers for taxis, public transportation, and car ownership, it would have been much harder for them to neglect the less salient costs of any of their choices. (At least if cost is the attribute they value most.)
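
As a sketch, the tally might look like this. Only the $10,000 purchase price comes from the passage; every other figure is an assumption invented for illustration. The point is simply that writing everything down puts the invisible costs (depreciation, forgone returns) on the same footing as the visible ones (the taxi meter).

```python
# Hypothetical annual figures; only the $10,000 purchase price is from the text.
purchase_price = 10_000

salient_car_costs = {                 # costs the family "feels" regularly
    "gas (weekly stops, $40 assumed)": 52 * 40,
    "occasional repairs (assumed)":    600,
    "yearly insurance (assumed)":      1_200,
}
neglected_car_costs = {               # just as real, but out of sight
    "depreciation (10%/yr assumed)":        0.10 * purchase_price,
    "opportunity cost (5% return assumed)": 0.05 * purchase_price,
}
taxi_and_transit = {                  # the alternative, all of it highly salient
    "taxis (4 rides/week at $8, assumed)": 4 * 52 * 8,
    "transit passes ($80/month, assumed)": 12 * 80,
}

car_salient = sum(salient_car_costs.values())
car_true = car_salient + sum(neglected_car_costs.values())

print(f"Car, salient costs only: ${car_salient:,.0f}")   # what the family notices
print(f"Car, true annual cost:   ${car_true:,.0f}")      # what they actually pay
print(f"Taxis + transit:         ${sum(taxi_and_transit.values()):,.0f}")
```

Whatever the real numbers turn out to be, the exercise forces the comparison onto paper, where salience no longer distorts it.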

***

This isn’t an exhaustive list of all the daily nudges we face, but it’s a good start, and some important, translatable themes emerge.

  • Realize when you are wandering around someone’s choice architecture.
  • Do your homework.
  • Develop strategies to help you make decisions when you are being nudged.

 

Still Interested? Buy, and most importantly read, the whole book. Also, check out our other post on some of the Biases and Blunders covered in Nudge.

The Fundamental Attribution Error: Why Predicting Behavior is so Hard


“Psychologists refer to the inappropriate use of dispositional
explanation as the fundamental attribution error, that is,
explaining situation-induced behavior as caused by
enduring character traits of the agent.”
— Jon Elster

***

The problem with any concept of “character” driving behavior is that “character” is pretty hard to pin down. We call someone “moral” or “honest,” we call them “courageous” or “naive,” or any number of other names. The implicit connotation is that someone “honest” in one area will be “honest” in most others, or that someone “moral” in one situation is going to be “moral” elsewhere.

Old-time folk psychology supports the notion, of course. As Jon Elster points out in his wonderful book Explaining Social Behavior, folk wisdom would have us believe that much of this “predicting and understanding behavior” thing is pretty darn easy! Simply ascertain character, and use that as a basis to predict or explain action.

People are often assumed to have personality traits (introvert, timid, etc.) as well as virtues (honesty, courage, etc.) or vices (the seven deadly sins, etc.). In folk psychology, these features are assumed to be stable over time and across situations. Proverbs in all languages testify to this assumption. “Who tells one lie will tell a hundred.” “Who lies also steals.” “Who steals an egg will steal an ox.” “Who keeps faith in small matters, does so in large ones.” “Who is caught red-handed once will always be distrusted.” If folk psychology is right, predicting and explaining behavior should be easy.

A single action will reveal the underlying trait or disposition and allow us to predict behavior on an indefinite number of other occasions when the disposition could manifest itself. The procedure is not tautological, as it would be if we took cheating on an exam as evidence of dishonesty and then used the trait of dishonesty to explain the cheating. Instead, it amounts to using cheating on an exam as evidence for a trait (dishonesty) that will also cause the person to be unfaithful to a spouse. If one accepts the more extreme folk theory that all virtues go together, the cheating might also be used to predict cowardice in battle or excessive drinking. 

This is a very natural and tempting way to approach the understanding of people. We like to think of actions that “speak volumes” about others' character, thus using that as a basis to predict or understand their behavior in other realms.

For example, let's say you were interviewing a financial advisor. He shows up on time, in a nice suit, and buys lunch. He says all the right words. Will he handle your money correctly?

Almost all of us would be led to believe he would, reasoning that his sharp appearance, timeliness, and generosity point towards his “good character”.

But what the study of history shows us is that appearances are flawed, and that behavior in one context often does not correlate with behavior in other contexts. Judging character becomes complex when we appreciate the situational nature of our actions. The U.S. President Lyndon Johnson was an arrogant bully and a liar who stole an election when he was young. He also fought like hell to pass the Civil Rights Act, something almost no other politician could have done.

Henry Ford standardized and streamlined the modern automobile and made it affordable to the masses, while paying “better than fair” wages to his employees and generally treating them well and with respect, something many “Titans” of business had trouble with in his day. He was also a notorious anti-Semite! If it's true that “He who is moral in one respect is also moral in all respects,” then what are we to make of this?

Jon Elster has some other wonderful examples coming from the world of music, regarding impulsivity versus discipline:

The jazz musician Charlie Parker was characterized by a doctor who knew him as “a man living from moment to moment. A man living for the pleasure principle, music, food, sex, drugs, kicks, his personality arrested at an infantile level.” Another great jazz musician, Django Reinhardt, had an even more extreme present-oriented attitude in his daily life, never saving any of his substantial earnings, but spending them on whims or on expensive cars, which he quickly proceeded to crash. In many ways he was the incarnation of the stereotype of “the Gypsy.” Yet you do not become a musician of the caliber of Parker and Reinhardt if you live in the moment in all respects. Proficiency takes years of utter dedication and concentration. In Reinhardt's case, this was dramatically brought out when he damaged his left hand severely in a fire and retrained himself so that he could achieve more with two fingers than anyone else with four. If these two musicians had been impulsive and carefree across the board — if their “personality” had been consistently “infantile” — they could never have become such consummate artists.

Once we realize this truth, it seems obvious. We begin seeing it everywhere. Dan Ariely wrote a book about situational dishonesty and cheating, which we have written about before. Judith Rich Harris based her theory of child development on the idea that children do not behave the same elsewhere as they do at home, misleading parents into thinking they were molding their children. Good interviewing and hiring is a notoriously difficult problem because we are consistently misled into thinking that what we learn in the interview process is representative of the interviewee's general competence. Books have been written about the Halo Effect, a similar idea that good behavior in one area creates a “halo” around all behavior.

The reason we see this everywhere is because it's how the world works!

The mistaken belief that it works otherwise, that behavior in one context carries over with any consistency into other areas, is called the Fundamental Attribution Error.

Studying the error leads us to conclude that we have a natural tendency to:

A. Overrate some general consideration of “character,” and
B. Underrate the “power of the situation,” and its direct incentives, to compel a variety of behavior.

Elster describes a social psychology experiment that effectively demonstrates how quickly any thought of “morality” can be lost in the right situation:

In another experiment, theology students were told to prepare themselves to give a brief talk in a nearby building. One-half were told to build the talk around the Good Samaritan parable(!), whereas the others were given a more neutral topic. One group was told to hurry since the people in the other building were waiting for them, whereas another was told that they had plenty of time. On their way to the other building, subjects came upon a man slumping in the doorway, apparently in distress. Among the students who were told they were late, only 10 percent offered assistance; in the other group, 63 percent did so. The group that had been told to prepare a talk on the Good Samaritan was not more likely to behave as one. Nor was the behavior of the students correlated with answers to a questionnaire intended to measure whether their interest in religion was due to the desire for personal salvation or to a desire to help others. The situational factor — being hurried or not — had much greater explanatory power than any dispositional factor.

So with a direct incentive in front of them — not wanting to be late when people were waiting for them, which could cause shame — the idea of being a Good Samaritan was thrown right out the window! So much for good character.

What we need to appreciate is that, in the words of Elster, “Behavior is often no more stable than the situations that shape it.” A shy young boy on the playground might be the most outgoing and aggressive boy in his group of friends. A moral authority in the realm of a religious institution might well cheat on their taxes. A woman who treats her friends poorly might treat her family with reverence and care.

We can't throw the baby out with the bathwater, of course. Elster refers to contingent response tendencies that would carry from situation to situation, but they tend to be specific rather than general. If we break down character into specific interactions between person and types of situations, we can understand things a little more accurately.

Instead of calling someone a “liar,” we might understand that they lie on their taxes but are honest with their spouse. Instead of calling someone a “hard worker,” we might come to understand that they drive hard in work situations but simply cannot be bothered to work around the house. And so on. We should pay attention to the interplay between the situation, the incentives, and the nature of the person, rather than just assuming that a broad character trait applies in all situations.

This carries two corollaries:

A. As we learn to think more accurately, we get one step closer to understanding human nature as it really is. We can better understand the people with whom we coexist.

B. We might better understand ourselves! Imagine if you could be the rare individual whose positive traits truly did carry over into all, or at least all important, situations. You would be traveling an uncrowded road.

***

Want More? Check out our ever-growing database of mental models.

Hares, Tortoises, and the Trouble with Genius

“Geniuses are dangerous.”
— James March

The Trouble with Genius

How many organizations would deny that they want more creativity, more genius, and more divergent thinking among their constituents? The great genius leaders of the world are fawned over breathlessly, and much lip service is paid to innovation; given the choice between “mediocrity” and “innovation,” we all choose innovation hands-down.

So why do we act the opposite way?

Stanford's James March might have some insight. His book On Leadership (see our earlier notes here) is a collection of insights derived mostly from the study of great literature, from Don Quixote to Saint Joan to War & Peace. In March's estimation, we can learn more about human nature (of which leadership is merely a subset) from studying literature than we can from studying leadership literature.

March discusses the nature of divergent thinking and “genius” in a way that seems to reflect true reality. We don't seek to cultivate genius, especially in a mature organization, because we're more afraid of the risks than appreciative of the benefits. A classic case of loss aversion. Tolerating genius means tolerating a certain amount of disruption; the upside of genius sounds pretty good until we start understanding its dark side:

Most original ideas are bad ones. Those that are good, moreover, are only seen as such after a long learning period; they rarely are impressive when first tried out. As a result, an organization is likely to discourage both experimentation with deviant ideas and the people who come up with them, thereby depriving itself, in the name of efficient operation, of its main source of innovation.

[…]

Geniuses are dangerous. Othello's instinctive action makes him commit an appalling crime, the fine sentiments of Pierre Bezukhov bring little comfort to the Russian peasants, and Don Quixote treats innocent people badly over and over again. A genius combines the characteristics that produce resounding failures (stubbornness, lack of discipline, ignorance), a few ingredients of success (elements of intelligence, a capacity to put mistakes behind him or her, unquenchable motivation), and exceptional good luck. Genius therefore only appears as a sub-product of a great tolerance for heresy and apparent craziness, which is often the result of particular circumstances (over-abundant resources, managerial ideology, promotional systems) rather than deliberate intention. “Intelligent” organizations will therefore try to create an environment that allows genius to flourish by accepting the risks of inefficiency or crushing failures…within the limits of the risks that they can afford to take.

We've bolded an important component: Exceptional good luck. The kind of genius that rarely surfaces but we desperately pursue needs great luck to make an impact. Truthfully, genius is always recognized in hindsight, with the benefit of positive results in mind. We “cherrypick” the good results of divergent thinkers, but forget that we use the results to decide who's a genius and who isn't. Thus, tolerating divergent, genius-level thinking requires an ability to tolerate failure, loss, and change if it's to be applied prospectively.

Sounds easy enough, in theory. But as Daniel Kahneman and Charlie Munger have so brilliantly pointed out, we become very risk averse when we possess anything, including success; we feel loss more acutely than gain, and we seek to keep the status quo intact. (And it's probably good that we do, on average.)

Compounding the problem, when we do recognize and promote genius, some of our exalting is likely to be based on false confidence, almost by definition:

Individuals who are frequently promoted because they have been successful will have confidence in their own abilities to beat the odds. Since in a selective, and therefore increasingly homogenous, management group the differences in performance that are observed are likely to be more often due to chance events than to any particular individual capacity, the confidence is likely to be misplaced. Thus, the process of selecting on performance results in exaggerated self-confidence and exaggerated risk-taking.

Let's use a current example: Elon Musk. Elon is (justifiably) recognized as a modern genius, leaping tall buildings in a single bound. Yet as Ashlee Vance makes clear in his biography, Musk teetered on the brink several times. It's a near miracle that his businesses have survived (and thrived) to where they're at today. The press would read much differently if SpaceX or Tesla had gone under — he might be considered a brilliant but fatally flawed eccentric rather than a genius. Luck played a fair part in that outcome (which is not to take away from Musk's incredible work).

***

Getting back to organizations, the failure to appropriately tolerate genius is also a problem of homeostasis: The tendency of systems to “stay in place” and avoid disruption of strongly formed past habits. Would an Elon Musk be able to rise in a homeostatic organization? It generally does not happen.

James March has a solution, though, and it's one we've heard echoed by other thinkers like Nassim Taleb and seems to be used fairly well in some modern technology organizations. As with most organizational solutions, it requires realigning incentives, which is the job of a strong and selfless leader.

An analogy of the hare and the tortoise illustrates the solution:

Although one particular hare (who runs fast but sleeps too long) has every chance of being beaten by one particular tortoise, an army of hares in competition with an army of tortoises will almost certainly result in one of the hares crossing the finish line first. The choices of an organization therefore depend on the respective importance that it attaches to its mean performance (in which case it should recruit tortoises) and the achievement of a few dazzling successes (an army of hares, which is inefficient as a whole, but contains some outstanding individuals.)

[…]

In a simple model, a tortoise advances with a constant speed of 1 mile/hour while a hare runs at 5 miles/hour, but in each given 5-minute period a hare has a 90 percent chance of sleeping rather than running. A tortoise will cover the mile of the test in one hour exactly and a hare will have only about an 11 percent chance of arriving faster (the probability that he will be awake for at least three of the 5-minute periods.) If there is a race between the tortoise and one hare, the probability that the hare will win is only 0.11. However, if there are 100 tortoises and 100 hares in the race, the probability that at least one hare will arrive before any tortoise (and thus the race will be won by a hare) is 1 – (0.89)^100, or greater than 0.9999.

The analogy holds up well in the business world. Any one young, aggressive “hare” is unlikely to beat the lumbering “tortoise” that reigns king, but put 100 hares out against 100 tortoises and the result is much different.
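
March's arithmetic is easy to verify. Here is a minimal check in Python using exactly the numbers from his model: twelve 5-minute periods in the tortoise's one-hour finish, and a hare that is awake in any period with probability 0.1 and needs at least three awake periods to cover the mile first.

```python
from math import comb

p_awake = 0.1
periods = 12  # twelve 5-minute periods in the tortoise's one-hour race

# P(a single hare is awake for at least 3 of the 12 periods): at 5 mph it
# covers 5/12 mile per awake period, so 3 awake periods finish the mile early.
p_hare_wins = sum(
    comb(periods, k) * p_awake**k * (1 - p_awake) ** (periods - k)
    for k in range(3, periods + 1)
)
print(f"One hare vs. one tortoise: {p_hare_wins:.3f}")  # ~0.111

# With 100 independent hares, only one needs to stay awake enough:
print(f"At least one of 100 hares wins: {1 - (1 - p_hare_wins)**100:.6f}")  # >0.9999
```

One hare is a bad bet; a hundred hares are nearly a sure thing, which is the whole argument for running many small, failure-tolerant experiments.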

This means that any organization must conduct itself in such a way that hares have a chance to succeed internally. It means becoming open to divergence and allowing erratic genius to rise, while keeping the costs of failure manageable. It means having the courage to create an “army of hares” inside of your own organization rather than letting tortoises have their way, as they will if given the opportunity.

For a small young organization, the cost of failure isn’t all that high, comparatively speaking — you can’t fall far off a pancake. So hares tend to get a lot more leash. But for a large organization, the cost of failure tends to increase to such a pain point that it stops being tolerated! At this point, real innovation ceases.

But if we have the will and ability to create small teams and projects with “hare-like” qualities, in ways that allow the “talent + luck” equation to surface truly better and different work, necessarily tolerating (and encouraging) failure and disruption, then we might have a shot at overcoming homeostasis in the same way that a specific combination of engineering and fuel allows rockets to overcome the equally strong force of gravity.

***

Still Interested? Check out our notes on James March's books On Leadership and The Ambiguities of Experience, and an interview March did on the topic of leadership.

Fun with Logical Fallacies

We came across a cool book recently called Logically Fallacious: The Ultimate Collection of Over 300 Logical Fallacies, by a social psychologist named Bo Bennett. We were a bit skeptical at first — lists like that can lack thoughtfulness and synthesis — but then were hooked by a sentence in the introduction that brought the book near and dear to our hearts:

This book is a crash course, meant to catapult you into a world where you start to see things how they really are, not how you think they are.

We could use the same tag line for Farnam Street. (What was that thing about great artists stealing?)

Logically Fallacious is a fun little reference guide to bad thinking, but let's try to highlight a few fallacies that seem to arise quite often without enough recognition. (To head off any objections at the pass: most of these are not strict logical fallacies in the technical sense, but rather examples of bad reasoning.)

Logical Fallacies

No True Scotsman

This one is a favorite. It arises when someone makes a broad, sweeping claim that a “real” or “true” so-and-so would only do X or would never do Y.

Example: “No true Scotsman would drink an ale like that!”

“I know dyed-in-the-wool Scotsmen who drink many such ales!”

“Well then he's not a True Scotsman!”

Problem: The problem should be obvious: It's a circular definition! A True Scotsman is thus defined as anyone who would not drink such ales, which then makes them a True Scotsman, and so on. It's non-falsifiable. There's a Puritanical aspect to this line of reasoning that almost always leads to circularity.

Genetic Fallacy

This doesn't have to do with genetics per se so much as the genetic origin of an argument. The “genetic fallacy” is when you disclaim someone's argument based solely on some aspect of their background or the motivation of the claim.

Example: “Of course Joe's arguing that unions are good for the world, he's the head of the Local 147 Chapter!”

Problem: Whether or not Joe is the head of his local union chapter has nothing to do with whether unions are good or bad. It certainly may influence his argument, but it doesn’t invalidate it. You must address the merits of the argument, rather than the merits of Joe, to figure out whether it’s true or not.

Failure to Elucidate

This is when someone tries to “explain” something slippery by redefining it in an equally nebulous way, instead of actually explaining it. Hearing something stated this way is usually a strong indicator that the person doesn't know what they're talking about.

Example: “The Secret works because of the vibration of sub-lingual frequencies.”

“What the heck are sub-lingual frequencies?”

“They're waves of energy that exist below the level of our consciousness.”

“…”

Problem: The claimant thinks they have explained the thing in a satisfactory way, but they haven’t — they’ve simply offered another useless definition that does no work in explaining why the claim makes any sense. Too often the challenger will simply accept the follow-up, or worse, repeat it to others, without getting a satisfactory explanation. In a Feynman-like way, you must keep probing, and if the probes reveal more failures to elucidate, it’s likely that you can reject the claim, at least until real evidence is presented.

Causal Reductionism

This one relates closely to Nassim Taleb’s work and the concept of the Narrative Fallacy: an undue simplification of reality into a neat cause-and-effect chain.

Example: “Warren Buffett was successful because his dad was a Congressman. He had a leg up I don't have!”

Problem: This form of argument is used pretty frequently because the claimant wishes it were true or is otherwise comfortable with the narrative. It resolves reality into a neat little box, when actual reality is complicated. To address this particular example, extreme success on the level of a Buffett clearly would have multiple causes acting in the same direction. His father’s political career is probably way down the list.

This fallacy is common in conspiracy theory-type arguments, where the proponent is convinced that because they have some inarguable facts — Howard Buffett was a congressman; being politically connected offers some advantages — their conclusion must also be correct. They ignore other explanations that are likely to be more correct, or refuse to admit that we don't quite know the answer. Reductionism leads to a lot of wrong thinking — the antidote is learning to think more broadly and be skeptical of narratives.

Fallacy of Composition / Fallacy of Division

These two fallacies are two sides of the same coin. The first is thinking that if some part of a greater whole has certain properties, the whole must share them. The second is the reverse: thinking that because a whole is judged to have certain properties, its constituent parts must necessarily share those properties.

Examples: “Your brain is made of molecules, and molecules are not conscious, so your brain must not be the source of consciousness.”

“Wall Street is a dishonest place, and so my neighbor Steve, who works at Goldman Sachs, must be a crook.”

Problem: In the first example, stolen directly from the book, we're ignoring emergent properties: Qualities that emerge upon the combination of various elements with more mundane innate qualities. (Like a great corporate culture.) In the second example, we make the same mistake in a mirrored way: We forget that greed may be emergent in the system itself, even from a group of otherwise fairly honest people. The other mistake is assuming that each constituent part of the system must necessarily share the traits of the whole system. (i.e., because Wall St. is a dishonest system, your neighbor must be dishonest.)

***

Still Interested? Check out the whole book. It's fun to pick up regularly and see which fallacies you can start recognizing all around you.

Eager to Be Wrong

“You know what Kipling said? Treat those two impostors just the same — success and failure. Of course, there’s going to be some failure in making the correct decisions. Nobody bats a thousand. I think it’s important to review your past stupidities so you are less likely to repeat them, but I’m not gnashing my teeth over it or suffering or enduring it. I regard it as perfectly normal to fail and make bad decisions. I think the tragedy in life is to be so timid that you don’t play hard enough so you have some reverses.”
— Charlie Munger

***

When was the last time you said to yourself I hope I’m wrong and really meant it?

Have you ever really meant it?

Here’s the thing: in our search for truth we must realize, thinking along two tracks, that we’re frequently led to wrong solutions by the workings of our natural apparatus. Uncertainty is a mentally demanding, and in a certain way physically demanding, process. The brain uses a lot of energy when it has to process conflicting information. To see this for yourself, try reading up on something contentious like the abortion debate, but with a completely open mind to either side (if you can). Pay attention as your brain starts twisting itself into a very uncomfortable state while you explore completely opposing sides of an argument.

This mental pain is called cognitive dissonance, and it’s really not much fun. Charlie Munger calls the process of resolving this dissonance the doubt-avoidance tendency – the tendency to resolve conflicting information as quickly as possible in order to return to physical and mental comfort. To get back to your happy zone.

Combine this tendency to resolve doubt with the well-known first conclusion bias (something Francis Bacon knew about long ago), and the logical conclusion is that we land on a lot of wrong answers and stay there because it’s easier.

Let that sink in. We don’t stay there because we’re correct, but because it’s physically easier. It's a form of laziness.

Don’t believe me? Spend a single day asking yourself this simple question: Do I know this for sure, or have I simply landed on a comfortable spot?

You’ll be surprised how many things you do and believe just because it’s easy. You might not even know how you landed there. Don’t feel bad about it — it’s as natural as breathing. You were wired that way at birth.

But there is a way to attack this problem.

Munger has a dictum that he won’t allow himself to hold an opinion unless he knows the other side of the argument better than that side does. Such an unforgiving approach means that he’s not often wrong. (It sometimes takes many years to show, but posterity has rarely shown him to be way off.) It’s a tough, wise, and correct solution.

It’s still hard, though, and doesn’t solve the energy expenditure problem. What can we tell ourselves to encourage that kind of work? The answer would be well known to Darwin: train yourself to be eager to be wrong.

Right to be Wrong

The advice isn't simply to be open to being wrong, which you’ve probably been told to do your whole life. That’s nice, and correct in theory, but frequently turns into empty words on a page. Simply being open to being wrong allows you to keep the window cracked when confronted with disconfirming evidence — to say Well, I was open to it! and keep on with your old conclusion.

Eagerness implies something more. Eager implies that you actively hope there is real, true, disconfirming information proving you wrong. It implies you’d be more than glad to find it. It implies that you might even go looking for it. And most importantly, it implies that when you do find yourself in error, you don’t need to feel bad about it. You feel great about it! Imagine how much of the world this unlocks for you.

Why be so eager to prove yourself wrong? Well, do you want to be comfortable or find the truth? Do you want to say you understand the world or do you want to actually understand it? If you’re a truth seeker, you want reality the way it is, so you can live in harmony with it.

Feynman wanted reality. Darwin wanted reality. Einstein wanted reality. Even when they didn’t like it. The way to stand on the shoulders of giants is to start the day by telling yourself I can't wait to correct my bad ideas, because then I’ll be one step closer to reality. 

*** 

Post-script: Make sure you apply this advice to things that matter. As stated above, resolving uncertainty takes great energy. Don’t waste that energy on deciding whether Nike or Reebok sneakers are better. They’re both fine. Pick the ones that feel comfortable and move on. Save your deep introspection for the stuff that matters.

Incentives Gone Wrong: Cobras, Severed Hands, and Shea Butter

“You must have the confidence to override people with more credentials than you whose cognition is impaired by incentive-caused bias or some similar psychological force that is obviously present. But there are also cases where you have to recognize that you have no wisdom to add — and that your best course is to trust some expert.”
— Charlie Munger

***

There's a great little story on incentives which some of you may already know. The tale may be apocryphal, but it instructs so wonderfully that it's worth a repeat.

During British colonial rule of India, the government began to worry about the number of venomous cobras in Delhi, and so instituted a reward for every dead snake brought to officials. In a wonderful demonstration of the importance of second-order thinking, Indian citizens dutifully complied and began breeding venomous snakes to kill and bring to the British. By the time the experiment was over, the snake problem was worse than when it began. The Raj government had gotten exactly what it asked for.

***

There's another story, much more perverse, from the Congolese massacre in the late 19th and early 20th century under Belgian rule — the period Joseph Conrad wrote about in Heart of Darkness. (Some of you might know the tale better as Apocalypse Now, which was a Vietnam retelling of Heart of Darkness.)

As the wickedly evil King Leopold II of Belgium forced the Congolese to produce rubber, he sent in his Force Publique to whip the natives into shape through genocidal murder. (Think of them as a Belgian Congo version of the Nazi SS.) Fearful that his soldiers would waste bullets hunting animals, Leopold ordered that the soldiers bring back the severed hands of dead Congolese as proof that they were enforcing the rubber decree. (Leopold himself never even visited his colony, although he did cause at least 10 million deaths.)

Given that Leopold's quotas were impossible to meet, shortfalls were common. And with the incentives placed on Belgian soldiers, many decided they could collect human hands more easily than they could meet rubber quotas, while still conserving their ammo for hunting. An interesting result ensued, as described by Bertrand Russell in his book Freedom and Organisation, 1814-1914.

Each village was ordered by the authorities to collect and bring in a certain amount of rubber – as much as the men could collect and bring in by neglecting all work for their own maintenance. If they failed to bring the required amount, their women were taken away and kept as hostages in compounds or in the harems of government employees. If this method failed, native troops, many of them cannibals, were sent into the village to spread terror, if necessary by killing some of the men; but in order to prevent a waste of cartridges, they were ordered to bring one right hand for every cartridge used. If they missed, or used cartridges on big game, they cut off the hands of living people to make up the necessary number.

In fact, as Peter Forbath describes in his book The River Congo, the soldiers were paid bonuses explicitly based on the number of hands they collected. So hands came to be in demand.

The baskets of severed hands, set down at the feet of the European post commanders, became the symbol of the Congo Free State. … The collection of hands became an end in itself. Force Publique soldiers brought them to the stations in place of rubber; they even went out to harvest them instead of rubber… They became a sort of currency. They came to be used to make up for shortfalls in rubber quotas, to replace… the people who were demanded for the forced labour gangs; and the Force Publique soldiers were paid their bonuses on the basis of how many hands they collected.

Looking to bolster an economy of rubber, Leopold II got an economy of severed hands. Like the British Raj, he got exactly what he asked for.

***

Joseph Heath describes another case of incentives gone wrong in his book Economics Without Illusions, citing the book Out of Poverty: And Into Something More Comfortable by John Stackhouse.

Stackhouse spent time in Ghana in the 1990s and noticed that the “socially conscious” retailer The Body Shop was an enormous purchaser of shea nuts, which Ghanaians produced in great quantities. The Body Shop used shea butter, made from the nuts, in a variety of skin products, and as part of its socially conscious mission and its role in the Trade, Not Aid campaign, decided it was willing to pay Ghanaian farmers above-market prices, to the tune of an extra 50% on top of the going rate. And on top of that premium price, The Body Shop also decided to throw in a bonus payment for every kilogram of shea butter purchased, to be used for local development projects at the farmers' discretion.

Thinking that the Body Shop's early shea nut orders were a harbinger of a profitable boom, farmers began rapidly increasing their production of shea butter. Stackhouse describes the result in his book:

A shea-nut rush was on, and neither the British chain nor the aid agencies were in a position to absorb the glut. In the first season, the northern villages, which normally produced two tonnes of shea butter a year, churned out twenty tonnes, nearly four times what the Body Shop wanted…. Making matters worse, the Body Shop, after discovering it had overestimated the international market for shea products, quickly scaled back its orders for the next season. In Northern Ghana, it wasn't long before shea butter prices plunged.

Unfortunately, in its desire to do good in a poor part of the world, the Body Shop created a situation worse than the one it started with: massive resources went into shea butter production that turned out not to be needed, and the overproduction of nuts ended up being mostly worthless.

These three cases, and many more like them, lead us to the conclusion that people follow incentives the way ants follow sugar. It's imperative that we think very literally about the incentive systems we create. Remember that incentives are not only financial. Frequently they're something else: prestige, freedom, time, titles, sex, power, admiration… all of these and many other things are powerful incentives. But if we're not careful, we do the equivalent of creating an economy for severed hands.

***

Still Interested? Learn about one company that understood and harnessed incentives correctly, or re-read Munger's discussion on incentive-caused bias in his famous speech on human psychology. Also, check out the Distorting Power of Incentives.