Over 400,000 people visited Farnam Street last month to learn how to make better decisions, create new ideas, and avoid stupid errors. With more than 98,000 subscribers to our popular weekly digest, we've become an online intellectual hub. To learn more about what we do, start here.

Category Archives: Decision Making

The Probability Distribution of the Future

The best colloquial definition of risk may be the following:

“Risk means more things can happen than will happen.”

We found it through the inimitable Howard Marks, but it’s a quote from Elroy Dimson of the London Business School. Doesn’t that capture it pretty well?

Another way to state it is: If only one thing could happen, how much risk would there be, except in an extremely banal sense? You’d know the exact probability distribution of the future. If I told you there was a 100% probability that you’d get hit by a car today if you walked down the street, you simply wouldn’t do it. You wouldn’t call walking down the street a “risky gamble,” right? There’s no gamble at all.

But the truth is that in practical reality, there aren’t many 100% situations to bank on. Way more things can happen than will happen. That introduces great uncertainty into the future, no matter what type of future you’re looking at: An investment, your career, your relationships, anything.

How do we deal with this in a pragmatic way? The investor Howard Marks starts it this way:

Key point number one in this memo is that the future should be viewed not as a fixed outcome that’s destined to happen and capable of being predicted, but as a range of possibilities and, hopefully on the basis of insight into their respective likelihoods, as a probability distribution.

This is the most sensible way to think about the future: A probability distribution where more things can happen than will happen. Knowing that we live in a world of great non-linearity and with the potential for unknowable and barely understandable Black Swan events, we should never become too confident that we know what’s in store, but we can also appreciate that some things are a lot more likely than others. Learning to adjust probabilities on the fly as we get new information is called Bayesian updating.
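As a toy illustration of Bayesian updating (all the numbers below are invented for the example, not taken from Marks), here is a one-function sketch of Bayes’ rule:

```python
# Bayes' rule: revise a prior probability in light of new evidence.
# All numbers below are invented for illustration.

def update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return the posterior P(hypothesis | evidence)."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Say we give a venture a 20% chance of success, then see a strong
# quarter -- something 70% of eventual winners show but only 20% of losers.
posterior = update(prior=0.20, p_evidence_if_true=0.70, p_evidence_if_false=0.20)
print(round(posterior, 3))  # 0.467 -- the 20% estimate roughly doubles
```

Each new piece of evidence feeds the posterior back in as the next prior, which is exactly the “adjusting probabilities on the fly” described above.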

But.

Although the future is certainly a probability distribution, Marks makes another excellent point in the wonderful memo above: In reality, only one thing will happen. So you must make the decision: Are you comfortable if that one thing happens, whatever it might be? Even if it only has a 1% probability of occurring? Echoing the first lesson of biology, Warren Buffett stated that “In order to win, you must first survive.” You have to live long enough to play out your hand.

Which leads to an important second point: Uncertainty about the future does not necessarily equate with risk, because risk has another component: Consequences. The world is a place where “bad outcomes” are only “bad” if you know their (rough) magnitude. So in order to think about the future and about risk, we must learn to quantify.

It’s like the old saying (usually before something terrible happens): What’s the worst that could happen? Let’s say you propose to undertake a six month project that will cost your company \$10 million, and you know there’s a reasonable probability that it won’t work. Is that risky?

It depends on the consequences of losing \$10 million, and the probability of that outcome. It’s that simple! (Simple, of course, does not mean easy.) A company with \$10 billion in the bank might consider that a very low-risk bet even if it only had a 10% chance of succeeding.

In contrast, a company with only \$10 million in the bank might consider it a high-risk bet even if it had only a 10% chance of failing. Maybe five \$2 million projects with uncorrelated outcomes would make more sense to the latter company.

In the real world, risk = probability of failure x consequences. That concept, however, can be looked at through many lenses. Risk of what? Losing money? Losing my job? Losing face? Those things need to be thought through. When we observe others being “too risk averse,” we might want to think about which risks they’re truly avoiding. Sometimes risk is not only financial.
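A back-of-envelope sketch of that formula, using the two hypothetical companies above. The “risk score” metric here (expected loss as a share of available capital) is my own illustrative choice, not a standard measure:

```python
# Illustrative risk arithmetic for the two hypothetical companies above.

def risk_score(p_failure, loss, bankroll):
    """Expected loss relative to available capital (a toy metric)."""
    return (p_failure * loss) / bankroll

# $10B firm: 90% chance the $10M project fails (only 10% chance of success).
big_firm = risk_score(p_failure=0.90, loss=10_000_000, bankroll=10_000_000_000)

# $10M firm: only a 10% chance of failure -- but failure is everything it has.
small_firm = risk_score(p_failure=0.10, loss=10_000_000, bankroll=10_000_000)

print(f"{big_firm:.4f}")    # 0.0009 -- a rounding error for the big company
print(f"{small_firm:.4f}")  # 0.1000 -- 10% of the small company's entire capital

# Five uncorrelated $2M projects instead: total ruin now needs all five to fail.
ruin = 0.10 ** 5
print(f"{ruin:.6f}")  # 0.000010 -- a 1-in-100,000 chance of losing everything, down from 1-in-10
```

Same project, same probabilities: only the consequences relative to each firm’s bankroll differ, and that is where the risk lives.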

***

Let’s cover one more under-appreciated but seemingly obvious aspect of risk, also pointed out by Marks: Knowing the outcome does not teach you about the risk of the decision.

This is an incredibly important concept:

If you make an investment in 2012, you’ll know in 2014 whether you lost money (and how much), but you won’t know whether it was a risky investment – that is, what the probability of loss was at the time you made it.

To continue the analogy, it may rain tomorrow, or it may not, but nothing that happens tomorrow will tell you what the probability of rain was as of today. And the risk of rain is a very good analogue (although I’m sure not perfect) for the risk of loss.

How many times do we see this simple dictum violated? Knowing that something worked out, we argue that it wasn’t that risky after all. But what if, in reality, we were simply fortunate? This is the Fooled by Randomness effect.
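A quick simulation makes the point concrete (the parameters are invented for illustration): in a game with a clearly negative edge, a sizable minority of players still walk away winners, and each of them could easily conclude the bet “wasn’t risky.”

```python
import random

random.seed(42)  # reproducible illustration

# 10,000 gamblers each make 10 even-money bets with only a 45% chance
# of winning -- a losing proposition in expectation.
winners = 0
for _ in range(10_000):
    profit = sum(1 if random.random() < 0.45 else -1 for _ in range(10))
    if profit > 0:
        winners += 1

print(winners)  # roughly a quarter of the gamblers finish ahead anyway
```

Every one of those winners experienced the same probability distribution as the losers; the outcome alone tells them nothing about the risk they took.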

The way to think about it is the following: The worst thing that can happen to a young gambler is that he wins the first time he goes to the casino. He might convince himself he can beat the system.

The truth is that most times we don’t know the probability distribution at all. Because the world is not a predictable casino game — an error Nassim Taleb calls the Ludic Fallacy — the best we can do is guess.

With intelligent estimations, we can work to get the rough order of magnitude right, understand the consequences if we’re wrong, and always be sure to never fool ourselves after the fact.

If you’re into this stuff, check out Howard Marks’ memos to his clients, or check out his excellent book, The Most Important Thing. Nate Silver also has an interesting, similar idea about the difference between risk and uncertainty. And lastly, another guy who understands risk pretty well is Jason Zweig, whom we’ve interviewed on our podcast before.

***

Nassim Taleb on the Notion of Alternative Histories — “The quality of a decision cannot be solely judged based on its outcome.”

The Four Types of Relationships — As Seneca said, “Time discovers truth.”


Breaking the Rules: Moneyball Edition

Most of the book Simple Rules by Donald Sull and Kathleen Eisenhardt talks about identifying a problem area (or an area ripe for “simple rules”) and then walks you through creating your own set of rules. It’s a useful mental process.

An ideal situation for simple rules is something repetitive, giving you constant feedback so you can course correct as you go. But what if your rules stop working and you need to start over completely?

Simple Rules recounts the well-known Moneyball tale in its examination of this process:

The story begins with Sandy Alderson. Alderson, a former Marine with no baseball background, became the A’s general manager in 1983. Unlike baseball traditionalists, Alderson saw scoring runs as a process, not an outcome, and imagined baseball as a factory with a flow of players moving along the bases. This view led Alderson and later his protege and replacement, Billy Beane, to the insight that most teams overvalue batting average (hits only) and miss the relevance of on-base percentage (walks plus hits) to keeping the runners moving. Like many insightful rules, this boundary rule of picking players with a high on-base percentage has subtle second- and third-order effects. Hitters with a high on-base percentage are highly disciplined (i.e., patient, with a good eye for strikes). This means they get more walks, and their reputation for discipline encourages pitchers to throw strikes, which are easier to hit. They tire out pitchers by making them throw more pitches overall, and disciplined hitting does not erode much with age. These and other insights are at the heart of what author Michael Lewis famously described as moneyball.

The Oakland A’s did everything right: they examined the issues, identified the areas that would most benefit from a set of simple rules, and implemented them. The problem was that the rules were easy to copy.

They were operating in a Red Queen Effect world where everyone around them was co-evolving, where running fast was just enough to get ahead temporarily, but not permanently. The Red Sox were the first and most successful club to copy the A’s:

By 2004, a free-spending team, the Boston Red Sox, co-opted the A’s principles and won the World Series for the first time since 1918. In contrast, the A’s went into decline, and by 2007 they were losing more games than they were winning. Moneyball had struck out.

What can we do when the rules stop working?

We must break them.

***

When the A’s had brought in Sandy Alderson, he was an outsider with no baseball background who could look at the problem in a different and new light. So how could that be replicated?

The team decided to bring in Farhan Zaidi as director of baseball operations in 2009. Zaidi spent most of his life with a pretty healthy obsession for baseball but he had a unique background: a PhD in behavioral economics.

He started on the job of breaking the old rules and crafting new ones. Like Andy Grove did once upon a time with Intel, Zaidi helped the team turn and face a new reality. Sull and Eisenhardt consider this a key trait:

To respond effectively to major change, it is essential to investigate the new situation actively, and create a reimagined vision that utilizes radically different rules.

The right choice is often to move to the new rules as quickly as possible. Performance will typically decline in the short run, but the transition to the new reality will be faster and more complete in the long run. In contrast, changing slowly often results in an awkward combination of the past and the future with neither fitting the other or working well.

Beane and Zaidi first did some house cleaning: They fired the team’s manager. Then, they began breaking the old Moneyball rules, things like avoiding drafting high-school players. They also decided to pay more attention to physical skills like speed and throwing.

In the short term, the team performed quite poorly as fan attendance showed a steady decline. Yet, once again, against all odds, the A’s finished first in their division in 2012. Their change worked.

With a new set of Simple Rules, they became a dominant force in their division once again.

Reflecting their formidable analytic skills, the A’s brass had a new mindset that portrayed baseball as a financial market rife with arbitrage possibilities and simple rules to match.

One was a how-to rule that dictated exploiting players with splits. Simply put, players with splits have substantially different performances in two seemingly similar situations. A common split is when a player hits very well against right-handed pitchers and poorly against left-handed pitchers, or vice versa. Players with splits are mediocre when they play every game, and are low paid. In contrast, most superstars play well regardless of the situation, and are paid handsomely for their versatility. The A’s insight was that when a team has a player who can perform one side of the split well and a different player who excels at the opposite split, the two positives can create a cheap composite player. So the A’s started using a boundary rule to pick players with splits and a how-to rule to exploit those splits with platooning – putting different players at the same position to take advantage of their splits against right- or left-handed pitching.

If you’re reading this as a baseball fan, you’re probably thinking that exploiting splits isn’t anything new. So why did it have such an effect on their season? Well, no one had pushed it this hard before, which had some nuanced effects that might not have been immediately apparent.

For example, exploiting these splits keeps players healthier during the long 162-game season because they don’t play every day. The rule keeps everyone motivated because everyone has a role and plays often. It provides versatility when players are injured since players can fill in for each other.
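The composite-player arithmetic is easy to sketch. The batting averages and the share of plate appearances against right-handers below are invented for the example, not the A’s actual data:

```python
# Two cheap platoon players with opposite splits can combine into one
# good "composite" hitter. All numbers are illustrative.

def composite_average(vs_rhp, vs_lhp, share_rhp=0.55):
    """Weighted batting average given the share of at-bats vs right-handers."""
    return vs_rhp * share_rhp + vs_lhp * (1 - share_rhp)

# Player A hits righties well; Player B hits lefties well.
player_a = {"vs_rhp": 0.290, "vs_lhp": 0.210}
player_b = {"vs_rhp": 0.215, "vs_lhp": 0.285}

# Either one alone, playing every day, is mediocre:
print(round(composite_average(**player_a), 3))  # 0.254

# Platooned -- A faces only righties, B only lefties:
platoon = composite_average(vs_rhp=player_a["vs_rhp"], vs_lhp=player_b["vs_lhp"])
print(round(platoon, 3))  # 0.288
```

Two mediocre, cheap hitters add up to one good one, which is exactly the arbitrage the boundary rule was built to find.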

They didn’t stop there. Zaidi and Beane looked at the data and kept rolling out new simple rules that broke with their highly successful Moneyball past.

In 2013 they added a new boundary rule to the player-selection activity: pick fly-ball hitters, meaning hitters who tend to hit the ball in the air and out of the infield (in contrast with ground-ball hitters). Sixty percent of the A’s at-bats were by fly-ball hitters in 2013, the highest percentage in major-league baseball in almost a decade, and the A’s had the highest ratio of fly balls to ground balls, by far. Why fly-ball hitters?

Since one of ten fly balls is a home run, fly-ball hitters hit more home runs: an important factor in winning games. Fly-ball hitters also avoid ground-ball double plays, a rally killer if ever there was one. They are particularly effective against ground-ball pitchers because they tend to swing underneath the ball, taking away the advantage of those pitchers. In fact, the A’s fly-ball hitters batted an all-star caliber .302 against ground-ball pitchers in 2013 on their way to their second consecutive division title despite having the fourth-lowest payroll in major-league baseball.
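The “one of ten” figure implies some simple expected-value arithmetic. The plate-appearance and fly-ball-rate numbers below are my own illustrative assumptions, not real data:

```python
# Back-of-envelope: if ~10% of fly balls leave the park, a hitter who
# lofts more balls converts more balls in play into home runs.

HR_PER_FLY_BALL = 0.10  # the "one of ten" figure quoted above

def expected_home_runs(balls_in_play, fly_ball_rate):
    return balls_in_play * fly_ball_rate * HR_PER_FLY_BALL

ground_ball_hitter = expected_home_runs(balls_in_play=400, fly_ball_rate=0.30)
fly_ball_hitter    = expected_home_runs(balls_in_play=400, fly_ball_rate=0.50)

print(ground_ball_hitter, fly_ball_hitter)  # 12.0 20.0 expected homers
```

Over a full season, that gap of several home runs per hitter compounds across a lineup, which is why the boundary rule moves the needle.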

Unfortunately, the new rules had short-lived effectiveness: In 2014 the A’s fell to second place, and they have struggled over the last two seasons. Two Cinderella stories are a great achievement, but it’s hard to maintain that edge.

This wonderful demonstration of the Red Queen Effect in sports can be described as an “arms race.” As everyone tries to get ahead, a strange equilibrium is created by the simultaneous continual improvement, and those with more limited resources must work even harder as the pack moves ahead.

Even though they have adapted and created some wonderful “Simple Rules” in the past, the A’s (and all of their competitors) must stay in the race in order to return to the top: No “rule” will allow them to rest on their laurels. Second Level Thinking and a little real world experience shows this to be true: Those that prosper consistently will think deeply, reevaluate, adapt, and continually evolve. That is the nature of a competitive world.


The book Simple Rules by Donald Sull and Kathleen Eisenhardt has a very interesting chapter on strategy, which tries to answer the following question: How do you translate your broad objectives into a strategy that can provide guidelines for your employees from day to day?

It’s the last bit there which is particularly important — getting everyone on the same page.

Companies don’t seem to have a problem creating broad objectives (which aren’t really a strategy). Your company might not call them that; it might call them “mission statements” or simply “corporate goals.” They sound all well and good, but very little thought is given to how to actually implement these lofty goals.

As Sull and Eisenhardt put it:

Developing a strategy and implementing it are often viewed as two distinct activities — first you come up with the perfect plan and then you worry about how to make it happen. This approach, common though it is, creates a disconnect between what a company is trying to accomplish and what employees do on a day-to-day basis.

The authors argue that companies can bridge this gap between strategic intent and actual implementation by following three steps:

1. Figure out what will move the needles.
2. Choose a bottleneck.
3. Craft the rules.

1. Moving the Needles

The authors use a dual-needle metaphor to visualize corporate profits. They see it as two parallel needles: an upper needle representing revenues and a lower needle representing costs. The first critical step is to identify which actions will drive a wedge between the needles, increasing revenues and decreasing costs, and to sustain that gap over time.

In other words, as simple as it sounds, we need an actual set of steps to get from where the needles are today to where we want them to be.

What action will become the wedge that will move the needles?

The authors believe the best way to answer this is to sit down with your management team and ask them to work as a group to answer the following three questions:

1. Who will we target as customers?
2. What product or service will we offer?
3. How will we provide this product at a profit?

When you are trying to massage out these answers remember to use inversion as well.

Equally important are the choices on who not to serve and what not to offer.

Steve Jobs once pointed out that Apple was defined as much by what it didn’t do as by what it did.

2. Bottlenecks

Speaking of inversion, in order to complete our goal we must also figure out what’s holding us back from moving the needles — the bottlenecks standing in our way.

When it comes to implementing a strategy of simple rules, pinpointing the precise decision or activity where rules will have the most impact is half the battle. We use the term bottleneck to describe a specific activity or decision that hinders a company from moving the needles.

You may be surprised at the number of bottlenecks you come across, so you’ll have to practice some “triage” of your issues, sorting what’s important from what’s really important.

The authors believe that the best bottlenecks to focus your attention on share three characteristics:

1. They have a direct and significant impact on value creation.
2. They should represent recurrent decisions (as opposed to ‘one off’ choices).
3. They should be obstacles that arise when opportunities exceed available resources.

Once we’ve established what the bottlenecks are, it’s time to craft the rules which will provide you a framework in which to remove them.

3. Craft the Rules

Developing rules from the top down is a big mistake. When leaders rely on their gut instincts, they overemphasize recent events, build in their personal biases, and ignore data that doesn’t fit with their preconceived notions. It is much better to involve a team, typically ranging in size from four to eight members, and use a structured process to harness members’ diverse insights and points of view. When drafting the dream team to develop simple rules, it is critical to include some of the people who will be using them on a day-to-day basis.

This probably seems like common sense but we’re guessing you have worked at least one place where all information and new initiatives came from above, and much of it seemingly came out of nowhere because you weren’t likely involved.

In these situations it’s very hard to get buy-in from the employees — yet they are the ones doing the work, implementing the rules. So we need to think about their involvement from the beginning.

Having users make the rules confers several advantages. First, they are closest to the facts on the ground and best positioned to codify experience into usable rules. Because they will make decisions based on the rules, they can strike the right balance between guidance and discretion, avoiding rules that are overly vague or restrictive. Users can also phrase the rules in language that resonates for them, rather than relying on business jargon. By actively participating in the process, users are more likely to buy into the final rules and therefore apply them in practice. Firsthand knowledge also makes it easier to explain the rules, and their underlying rationale, to colleagues who did not participate in the process.

It’s important to note here that this is a process, a process in which you are never done – there is no real finish line. You must always plan to learn and to iterate as you learn — keep changing the plan as new information comes in. Rigidity to a plan is not a virtue; learning and adapting are.

***

There’s nothing wrong with strategy. In fact, without a strategy, it’s hard to figure out what to do; some strategy or another must guide your actions as an organization. But it’s simply not enough: Detailed execution, at the employee level, is what gets things done. That’s what the Simple Rules are all about.

Strategy, in our view, lives in the simple rules that guide an organization’s most important activities. They allow employees to make on-the-spot decisions and seize unexpected opportunities without losing sight of the big picture.

The process you use to develop simple rules matters as much as the rules themselves. Involving a broad cross-section of employees, for example, injects more points of view into the discussion, produces a shared understanding of what matters for value creation, and increases buy-in to the simple rules. Investing the time up front to clarify what will move the needles dramatically increases the odds that simple rules will be applied where they can have the greatest impact.

***

Still Interested? Read the book, or check out our other post where we cover the details of creating your simple rules.


Moving the Finish Line: The Goal Gradient Hypothesis

Imagine a sprinter running an Olympic race. He’s competing in the 1600 meter run.

The first two laps he runs at a steady but hard pace, trying to keep himself consistently near the head, or at least the middle, of the pack, hoping not to fall too far behind while also conserving energy for the whole race.

About 800 meters in, he feels himself start to fatigue and slow. At 1000 meters, he feels himself consciously expending less energy. At 1200, he’s convinced that he didn’t train enough.

Now watch him approach the last 100 meters, the “mad dash” for the finish. He’s been running what would be an all-out sprint to us mortals for 1500 meters, and yet what happens now, as he feels himself neck and neck with his competitors, the finish line in sight?

He speeds up. That energy drag is done. The goal is right there, and all he needs is one last push. So he pushes.

This is called the Goal Gradient Effect, or more precisely, the Goal Gradient Hypothesis. Its effect on biological creatures is not just a feeling, but a real and measurable thing.

***

The first person to try explaining the goal gradient hypothesis was an early behavioural psychologist named Clark L. Hull.

Hull was a pretty hardcore “behaviourist”: he thought that human behaviour, just like that of other animals, could eventually be reduced to mathematical prediction based on rewards and conditioning. As insane as this sounds now, he had a neat mathematical formula for behaviour.
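For the curious, the version of Hull’s equation most often quoted from his later work looks roughly like this (the terms varied across his books, so treat this as the commonly cited form rather than a definitive statement):

sEr = sHr × D × V × K

where sEr is reaction potential (how likely and vigorous a response is), sHr is habit strength (built up through conditioning), D is drive, V is stimulus-intensity dynamism, and K is incentive motivation.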

Some of his ideas eventually came to be seen as extremely limiting, Procrustean Bed-style models of human behavior, but the Goal Gradient Hypothesis was replicated many times over the years.

Hull himself wrote papers with titles like The Goal-Gradient Hypothesis and Maze Learning to explore the effect of the idea in rats. As Hull put it, “...animals in traversing a maze will move at a progressively more rapid pace as the goal is approached.” Just like the runner above.

Most of Hull’s work focused on animals rather than humans, showing quite clearly that, in the context of approaching a reward, the animals did speed up as the goal approached, enticed by the end of the maze. The idea was, however, resurrected in the human realm in 2006 with a paper entitled The Goal-Gradient Hypothesis Resurrected: Purchase Acceleration, Illusionary Goal Progress, and Customer Retention.

The paper examined consumer behaviour in the “goal gradient” sense and found, alas, it wasn’t just rats that felt the tug of the “end of the race” — we do too. Examining a few different measurable areas of human behaviour, the researchers found that consumers would work harder to earn incentives as the goal came in sight, and that after the reward was earned, they’d slow down their efforts:

We found that members of a café RP accelerated their coffee purchases as they progressed toward earning a free coffee. The goal-gradient effect also generalized to a very different incentive system, in which shorter goal distance led members to visit a song-rating Web site more frequently, rate more songs during each visit, and persist longer in the rating effort. Importantly, in both incentive systems, we observed the phenomenon of post-reward resetting, whereby customers who accelerated toward their first reward exhibited a slowdown in their efforts when they began work (and subsequently accelerated) toward their second reward. To the best of our knowledge, this article is the first to demonstrate unequivocal, systematic behavioural goal gradients in the context of the human psychology of rewards.

Fascinating.

***

If we’re to take the idea seriously, the Goal Gradient Hypothesis has some interesting implications for leaders and decision-makers.

The first and most important is probably that incentive structures should take the idea into account. This is a fairly intuitive (but often unrecognized) idea: Far-away rewards are much less motivating than near-term ones. Given the chance to earn \$1,000 at the end of this month, and each month thereafter, or \$12,000 at the end of the year, which would you be more likely to work hard for?

What if I pushed it back even more but gave you some “interest” to compensate: Would you work harder for the potential to earn \$90,000 five years from now, or to earn \$1,000 this month, followed by \$1,000 the following month, and so on, every single month during that five-year period?
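The totals behind that hypothetical are worth making concrete:

```python
# The two offers above, totaled up.
monthly_total = 1_000 * 12 * 5  # $1,000 a month for five years
lump_sum = 90_000               # one distant payment

print(monthly_total, lump_sum)  # 60000 90000
```

The monthly stream pays \$30,000 less in total, yet the goal-gradient effect predicts many people will work harder for it: the finish line arrives every month instead of once in five years.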

Companies like Nucor take the idea seriously: They pay bonuses to lower-level employees based on monthly production, not letting it wait until the end of the year. Essentially, the end of the maze happens every 30 days rather than once per year. The time between doing the work and the reward is shortened.

The other takeaway comes to consumer behaviour, as referenced in the marketing paper. If you’re offering rewards for a specific action from your customer, do you reward them sooner, or later?

The answer is almost always going to be “sooner”. In fact, the effect may be strong enough that you can get away with a smaller total reward by increasing its velocity.

Lastly, we might be able to harness the Hypothesis in our personal lives.

Let’s say we want to start reading more. Do we set a goal to read 52 books this year and hold ourselves accountable, or to read 1 book a week? What about 25 pages per day?

Not only does moving the finish line closer tend to increase our motivation, but we also repeatedly prove to ourselves that we’re capable of reaching our goals. This is classic behavioural psychology: Instant rewards rather than delayed ones. (Even if they’re only psychological.) Not only that, but it forces us to avoid procrastination — leaving 35 books to be read in the last two months of the year, for example.
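The arithmetic behind those framings, assuming a typical book runs about 300 pages (my assumption; the numbers above don’t specify):

```python
PAGES_PER_BOOK = 300  # assumed average length

# 52 books a year, restated as a daily goal:
pages_per_day_for_52 = 52 * PAGES_PER_BOOK / 365
print(round(pages_per_day_for_52))  # 43 pages a day

# The gentler 25-pages-a-day habit, restated yearly:
books_from_25_pages = 25 * 365 / PAGES_PER_BOOK
print(round(books_from_25_pages, 1))  # 30.4 books a year
```

Either way, the daily framing gives you a finish line to cross every single day rather than one distant deadline in December.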

Those three seem like useful lessons, but here’s a challenge: Try synthesizing a new rule or idea of your own, combining the Goal Gradient Effect with at least one other psychological principle, and start testing it out in your personal life or in your organization. Don’t let useful nuggets sit around; instead, start eating the broccoli.


“Nothing will ever be attempted
if all possible objections must first be overcome.”

— Samuel Johnson

***

In the book Nudge, Richard Thaler and Cass Sunstein coin the terms ‘Choice Architecture’ and ‘Choice Architect.’ For them, if you have the ability to influence the choices other people make, you are a choice architect.

Considering the number of interactions we have every day, it would be quite easy to argue that we are all Choice Architects at some point. But the inverse is also true: we are always wandering around in someone else’s Choice Architecture.

Let’s take a look at a few of the principles of good choice architecture, so we can get a better idea of when someone is trying to nudge us. We can then weigh that knowledge when making our own decisions.

Defaults

Thaler and Sunstein start with a discussion on “defaults” that are commonly offered to us:

For reasons we have discussed, many people will take whatever option requires the least effort, or the path of least resistance. Recall the discussion of inertia, status quo bias, and the ‘yeah, whatever’ heuristic. All these forces imply that if, for a given choice, there is a default option — an option that will obtain if the chooser does nothing — then we can expect a large number of people to end up with that option, whether or not it is good for them. And as we have also stressed, these behavioral tendencies toward doing nothing will be reinforced if the default option comes with some implicit or explicit suggestion that it represents the normal or even the recommended course of action.

When making decisions, people will often take the option that requires the least effort, or the path of least resistance. This makes sense: it’s not just a matter of laziness; we also have only so many hours in a day. Unless you feel particularly strongly about it, if putting little to no effort toward something moves you forward (or at least doesn’t noticeably kick you backwards), that is what you are likely to do. Loss aversion plays a role as well: if we feel the consequences of making a poor choice are high, we may simply decide to do nothing.

Inertia is another reason: If the ship is currently sailing forward, it can often take a lot of time and effort just to slightly change course.

You have likely seen many examples of inertia at play in your work environment and this isn’t necessarily a bad thing.

Sometimes we need that ship to just steadily move forward. The important bit is to realize when this is factoring into your decisions, or more specifically, when this knowledge is being used to nudge you into making specific choices.

Let’s think about some of your monthly recurring bills. While you might not be reading that magazine or going to the gym, you’re still paying for the ability to use that good or service. If you weren’t being auto-renewed monthly, what is the chance that you would put the effort into renewing that subscription or membership? Much lower, right? Publishers and gym owners know this, and they know you don’t want to go through the hassle of cancelling either, so they make that difficult, too. (They understand well our tendency to want to travel the path of least resistance and avoid conflict.)

This is also where they will imply that the default option is the recommended course of action. It sounds like this:

“We’re sorry to hear you no longer want the magazine, Mr. Smith. You know, more than half of the Fortune 500 companies have a monthly subscription to magazine X, but we understand if it’s not something you’d like to do at the moment.”

or

“Mr. Smith we are sorry to hear that you want to cancel your membership at GymX. We understand if you can’t make your health a priority at this point but we’d love to see you back sometime soon. We see this all the time, these days everyone is so busy. But I’m happy to say we are noticing a shift where people are starting to make time for themselves, especially in your demographic…”

(Just cancel them. You’ll feel better. We promise.)

The Structure of Complex Choices

We live in a world of reviews. Product reviews, corporate reviews, movie reviews… When was the last time you bought a phone or a car before checking the reviews? When was the last time that you hired an employee without checking out their references?

Thaler and Sunstein call this Collaborative Filtering and explain it as follows:

You use the judgements of other people who share your tastes to filter through the vast number of books or movies available in order to increase the likelihood of picking one you like. Collaborative filtering is an effort to solve a problem of choice architecture. If you know what people like you tend to like, you might well be comfortable in selecting products you don’t know, because people like you tend to like them. For many of us, collaborative filtering is making difficult choices easier.

While collaborative filtering does a great job of making difficult choices easier, we have to remember that companies know we will use this tool and will try to manipulate it. We just have to look at the information critically, compare multiple sources, and take some time to review the reviewers.
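The mechanism behind collaborative filtering is simple enough to sketch in a few lines: find the raters whose past judgments most resemble yours, and weight their opinions on things you haven't tried. Here is a minimal toy sketch of that idea; the rating data and the similarity measure are made up for illustration, not taken from any real system.

```python
# Toy collaborative filtering: recommend items liked by people
# whose past ratings most resemble yours. (Hypothetical data.)

def similarity(a, b):
    """Count of shared items two people rated the same way (a toy measure)."""
    shared = set(a) & set(b)
    return sum(1 for item in shared if a[item] == b[item])

def recommend(me, others, top_n=1):
    """Score items I haven't rated, weighted by how similar each rater is to me."""
    scores = {}
    for person in others:
        sim = similarity(me, person)
        for item, rating in person.items():
            if item not in me:
                scores[item] = scores.get(item, 0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

me = {"Movie A": 1, "Movie B": 1, "Movie C": 0}
others = [
    {"Movie A": 1, "Movie B": 1, "Movie D": 1},  # tastes like mine
    {"Movie A": 0, "Movie C": 1, "Movie E": 1},  # tastes unlike mine
]
print(recommend(me, others))  # the similar rater's pick wins
```

Note how this also shows the manipulation risk mentioned above: a planted rater who copies your visible ratings would score as "similar" and could push any product onto your list.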

These techniques can be useful for decisions of a certain scale and complexity: when the alternatives are understood and few enough in number. However, once the choice set grows past a certain size, we require additional tools to make the right decision.

One strategy to use is what Amos Tversky (1972) called ‘elimination by aspects.’ Someone using this strategy first decides what aspect is most important (say, commuting distance), establishes a cutoff level (say, no more than a thirty-minute commute), then eliminates all the alternatives that do not come up to this standard. The process is repeated, attribute by attribute (no more than $1,500 per month; at least two bedrooms; dogs permitted), until either a choice is made or the set is narrowed down enough to switch over to a compensatory evaluation of the ‘finalists.’

This is a very useful tool if you have a good idea of which attributes are of most value to you.
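Elimination by aspects is mechanical enough to write down as a procedure: order your cutoffs by importance and filter the alternatives one attribute at a time. The sketch below uses hypothetical apartment data and the exact cutoffs from the quote above; the only addition is a guard so the last cutoff can't eliminate everybody.

```python
# Tversky's "elimination by aspects" as a filter pipeline.
# Apartment data is hypothetical; cutoffs mirror the quoted example.

apartments = [
    {"name": "Apt 1", "commute_min": 20, "rent": 1400, "bedrooms": 2, "dogs_ok": True},
    {"name": "Apt 2", "commute_min": 45, "rent": 1200, "bedrooms": 2, "dogs_ok": True},
    {"name": "Apt 3", "commute_min": 25, "rent": 1600, "bedrooms": 3, "dogs_ok": False},
    {"name": "Apt 4", "commute_min": 10, "rent": 1500, "bedrooms": 1, "dogs_ok": True},
]

# Cutoffs listed in order of importance.
cutoffs = [
    lambda a: a["commute_min"] <= 30,  # no more than a thirty-minute commute
    lambda a: a["rent"] <= 1500,       # no more than $1,500 per month
    lambda a: a["bedrooms"] >= 2,      # at least two bedrooms
    lambda a: a["dogs_ok"],            # dogs permitted
]

finalists = apartments
for passes in cutoffs:
    remaining = [a for a in finalists if passes(a)]
    if not remaining:   # don't eliminate everything; keep the last survivors
        break
    finalists = remaining

print([a["name"] for a in finalists])  # → ['Apt 1']
```

Reordering the cutoffs can change the winner, which is exactly why the strategy only works well once you know which attributes matter most to you.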

When using these techniques, we have to be mindful of the fact that the companies trying to sell us goods have spent a lot of time and money figuring out which attributes are important to us as well.

For example, if you were to shop for an SUV, you would notice that they all now seem to share a specific set of variables (engine options, towing options, seating options, storage options). Manufacturers are trying to nudge you not to eliminate them from your list. This forces you to do additional research or, better yet from their perspective, to walk into dealerships, where they will try to inflate the importance of those attributes (which they do best).

They also give things new names as a way to differentiate themselves and get onto your list. What do you mean our competitors don’t have FLEXfuel?

Incentives

Incentives are so ubiquitous in our lives that it’s very easy to overlook them. Unfortunately, overlooking them can lead us to make poor decisions.

Thaler and Sunstein believe this is tied to how salient the incentive is.

The most important modification that must be made to a standard analysis of incentives is salience. Do the choosers actually notice the incentives they face? In free markets, the answer is usually yes, but in important cases the answer is no.

Consider the example of members of an urban family deciding whether to buy a car. Suppose their choices are to take taxis and public transportation or to spend ten thousand dollars to buy a used car, which they can park on the street in front of their home. The only salient costs of owning this car will be the weekly stops at the gas station, occasional repair bills, and a yearly insurance bill. The opportunity cost of the ten thousand dollars is likely to be neglected. (In other words, once they purchase the car, they tend to forget about the ten thousand dollars and stop treating it as money that could have been spent on something else.) In contrast, every time the family uses a taxi the cost will be in their face, with the meter clicking every few blocks. So behavioral analysis of the incentives of car ownership will predict that people will underweight the opportunity costs of car ownership, and possibly other less salient aspects such as depreciation, and may overweight the very salient costs of using a taxi.

The problems here are relatable and easily solved: if the family above had written down all the numbers for taxis, public transportation, and car ownership, it would have been much harder for them to overlook the less salient costs of any of their choices (at least if cost is the attribute they value most).
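Writing the numbers down can be as simple as the arithmetic below. All the figures are hypothetical, chosen only to illustrate the point: once depreciation and the forgone return on the $10,000 are on paper next to the salient costs, the car-versus-taxi comparison looks much closer than the clicking meter suggests.

```python
# Making the less salient costs of car ownership visible.
# All figures are hypothetical.

purchase_price = 10_000
years_kept = 5
opportunity_rate = 0.05          # what the $10,000 could earn elsewhere

# Salient costs: noticed at the pump, the mechanic, the insurer.
gas_per_year = 40 * 52           # weekly fill-ups
repairs_per_year = 600
insurance_per_year = 1_200

# Less salient costs: depreciation and the forgone return on the $10,000.
depreciation_per_year = purchase_price / years_kept
opportunity_cost_per_year = purchase_price * opportunity_rate

car_per_year = (gas_per_year + repairs_per_year + insurance_per_year
                + depreciation_per_year + opportunity_cost_per_year)

taxi_per_year = 25 * 250         # very salient: the meter clicks on every ride

print(f"Car:  ${car_per_year:,.0f}/year")
print(f"Taxi: ${taxi_per_year:,.0f}/year")
```

With these made-up numbers the car costs $6,380 a year against $6,250 for taxis; roughly 40% of the car's cost comes from the two line items the family would otherwise neglect.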

***

This isn’t an exhaustive list of all the daily nudges we face, but it’s a good start, and some important, translatable themes emerge.

• Realize when you are wandering around someone’s choice architecture.
• Develop strategies to help you make decisions when you are being nudged.

Still Interested? Buy, and most importantly read, the whole book. Also, check out our other post on some of the Biases and Blunders covered in Nudge.


Peter Bevelin on Seeking Wisdom, Mental Models, Learning, and a Lot More

One of the most impactful books we’ve ever come across is the wonderful Seeking Wisdom: From Darwin to Munger, written by the Swedish investor Peter Bevelin. In the spirit of multidisciplinary learning, Seeking Wisdom is a compendium of ideas from biology, psychology, statistics, physics, economics, and human behavior.

Mr. Bevelin is out with a new book full of wisdom from Warren Buffett & Charlie Munger: All I Want to Know is Where I’m Going to Die So I Never Go There. We were fortunate enough to have a chance to interview Peter recently, and the result is the wonderful discussion below.

What was the original impetus for writing these books?

The short answer: To improve my thinking. And when I started writing on what later became Seeking Wisdom I can express it even simpler: “I was dumb and wanted to be less dumb.” As Munger says: “It’s ignorance removal…It’s dishonorable to stay stupider than you have to be.” And I had done some stupid things and I had seen a lot of stupidity being done by people in life and in business.

A seed was first planted when I read Charlie Munger’s worldly wisdom speech and another one where he referred to Darwin as a great thinker. So I said to myself: I am 42 now. Why not take some time off business and spend a year learning, reflecting, and writing about the subject Munger introduced to me – human behavior and judgments.

None of my writings started out as a book project. I wrote my first book – Seeking Wisdom – as a memorandum for myself with the expectation that I could transfer some of its essentials to my children. I learn and write because I want to be a little wiser day by day. I don’t want to be a great problem-solver. I want to avoid problems – prevent them from happening and do right from the beginning. And I focus on consequential decisions. To paraphrase Buffett and Munger – decision-making is not about making brilliant decisions, but avoiding terrible ones. Mistakes and dumb decisions are a fact of life and I’m going to make more, but as long as I can avoid the big or “fatal” ones I’m fine.

So I started to read and write to learn what works and what doesn’t, and why. And I liked Munger’s “All I want to know is where I’m going to die so I’ll never go there” approach. And as he said, “You understand it better if you go at it the way we do, which is to identify the main stupidities that do bright people in and then organize your patterns for thinking and developments, so you don’t stumble into those stupidities.” Then I “only” had to a) understand the central “concept” and its derivatives and describe it in as simple a way as possible for me and b) organize what I learnt in a way that was logical and useful for me.

And what better way was there to learn this from those who already knew this?

After I learnt some things about our brain, I understood that thinking doesn’t come naturally to us humans – most of it is just unconscious automatic reactions. Therefore I needed to set up the environment and design a system that made it easier to know what to do and to prevent and avoid harm. Things like simple rules of thumb, tricks and filters. Of course, I could only do that if I first had the foundation. And as the years have passed, I’ve found that filters are a great way to save time and misery. As Buffett says, “I process information very quickly since I have filters in my mind.” And they have to be simple – as the proverb says, “Beware of the door that has too many keys.” The more complicated a process is, the less effective it is.

Why do I write? Because it helps me understand and learn better. And if I can’t write something down clearly, then I have not really understood it. As Buffett says, “I learn while I think when I write it out. Some of the things, I think I think, I find don’t make any sense when I start trying to write them down and explain them to people … And if it can’t stand applying pencil to paper, you’d better think it through some more.”

My own test is one that a physicist friend of mine told me many years ago, ‘You haven’t really understood an idea if you can’t in a simple way describe it to almost anyone.’ Luckily, I don’t have to understand zillions of things to function well.

And even if some of my and others’ thoughts ended up as books, they are all living documents and new starting points for further learning, un-learning and simplifying/clarifying. To quote Feynman, “A great deal of formulation work is done in writing the paper, organizational work, organization. I think of a better way, a better way, a better way of getting there, of proving it. I never do much — I mean, it’s just cleaner, cleaner and cleaner. It’s like polishing a rough-cut vase. The shape, you know what you want and you know what it is. It’s just polishing it. Get it shined, get it clean, and everything else.”

Which book did you learn the most from the experience of writing/collecting?

Seeking Wisdom because I had to do a lot of research – reading, talking to people etc. Especially in the field of biology and brain science since I wanted to first understand what influences our behavior. I also spent some time at a Neurosciences Institute to get a better understanding of how our anatomy, physiology and biochemistry constrained our behavior.

And I had to work it out my own way and write it down in my own words so I really could understand it. It took a lot of time but it was a lot of fun to figure it out and I learnt much more and it stuck better than if I just had tried to memorize what somebody else had already written. I may not have gotten everything letter perfect but good enough to be useful for me.

As I said, the expectation wasn’t to create a book. In fact, that would have removed a lot of my motivation. I did it because I had an interest in becoming better. It goes back to the importance of intrinsic motivation. As I wrote in Seeking Wisdom: “If we reward people for doing what they like to do anyway, we sometimes turn what they enjoy doing into work. The reward changes their perception. Instead of doing something because they enjoy doing it, they now do it because they are being paid. The key is what a reward implies. A reward for our achievements makes us feel that we are good at something thereby increasing our motivation. But a reward that feels controlling and makes us feel that we are only doing it because we’re paid to do it, decreases the appeal.”

It may sound like a cliché but the joy was in the journey – reading, learning and writing – not the destination – the finished book. Has the book made a difference for some people? Yes, I hope so but often people revert to their old behavior. Some of them are the same people who – to paraphrase something that is attributed to Churchill – occasionally should check their intentions and strategies against their results. But reality is what Munger once said, “Everyone’s experience is that you teach only what a reader almost knows, and that seldom.” But I am happy that my books had an impact and made a difference to a few people. That’s enough.

Why did the new book (All I Want To Know Is Where I’m Going To Die So I’ll Never Go There) have a vastly different format?

It was more fun to write about what works and not in a dialogue format. But also because vivid and hopefully entertaining “lessons” are easier to remember and recall. And you will find a lot of quotes in there that most people haven’t read before.

I wanted to write a book like this to reinforce a couple of concepts in my head. So even if some of the text sometimes comes out like advice to the reader, I always think about what the mathematician Gian-Carlo Rota once said, “The advice we give others is the advice that we ourselves need.”

How do you define Mental Models?

Some kind of representation that describes how reality is (as it is known today) – a principle, an idea, basic concepts, something that works or not – that I have in my head that helps me know what to do or not. Something that has stood the test of time.

For example some timeless truths are:

• Reality is that complete competitors – same product/niche/territory – cannot coexist (Competitive exclusion principle). What works is going where there is no or very weak competition + differentiation/advantages that others can’t copy (assuming of course we have something that is needed/wanted now and in the future)
• Reality is that we get what we reward for. What works is making sure we reward for what we want to achieve.

I favor underlying principles and notions that I can apply broadly to different and relevant situations. Since some models don’t resemble reality, the word “model” for me is more of an illustration/story of an underlying concept, trick, method, what works etc. that agrees with reality (as Munger once said, “Models which underlie reality”) and help me remember and more easily make associations.

But I don’t judge or care how others label it or do it – models, concepts, default positions … The important thing is that whatever we use, it reflects and agrees with reality and that it works for us to help us understand or explain a situation or know what to do or not do. Useful and good enough guide me. I am pretty pragmatic – whatever works is fine. I follow Deng Xiaoping, “I don’t care whether the cat is black or white as long as it catches mice.” As Feynman said, “What is the best method to obtain the solution to a problem? The answer is, any way that works.”

I’ll tell you about a thing Feynman said on education which I remind myself of from time to time in order not to complicate things (from Richard P. Feynman, Michael A. Gottlieb, Ralph Leighton, Feynman’s Tips on Physics: A Problem-Solving Supplement to the Feynman Lectures on Physics):

“There’s a round table on three legs. Where should you lean on it, so the table will be the most unstable?”
The student’s solution was, “Probably on top of one of the legs, but let me see: I’ll calculate how much force will produce what lift, and so on, at different places.”
Then I said, “Never mind calculating. Can you imagine a real table?”
“But that’s not the way you’re supposed to do it!”
“Never mind how you’re supposed to do it; you’ve got a real table here with the various legs, you see? Now, where do you think you’d lean? What would happen if you pushed down directly over a leg?”
“Nothin’!”
I say, “That’s right; and what happens if you push down near the edge, halfway between two of the legs?”
“It flips over!”
I say, “OK! That’s better!”
The point is that the student had not realized that these were not just mathematical problems; they described a real table with legs. Actually, it wasn’t a real table, because it was perfectly circular, the legs were straight up and down, and so on. But it nearly described, roughly speaking, a real table, and from knowing what a real table does, you can get a very good idea of what this table does without having to calculate anything – you know darn well where you have to lean to make the table flip over. So, how to explain that, I don’t know! But once you get the idea that the problems are not mathematical problems but physical problems, it helps a lot.
Anyway, that’s just two ways of solving this problem. There’s no unique way of doing any specific problem. By greater and greater ingenuity, you can find ways that require less and less work, but that takes experience.

Which mental models “carry the most freight?” (Related follow up: Which concepts from Buffett/Munger/Mental Models do you find yourself referring to or appreciating most frequently?)

Ideas from biology and psychology, since many stupidities are caused by not understanding human nature (and you get illustrations of this nearly every day). And most of our tendencies were already known by the classic writers (Publilius Syrus, Seneca, Aesop, Cicero, etc.).

Others that I find very useful, both in business and in private life, are the ideas of Quantification (without the fancy math), Margin of safety, Backups, Trust, Constraints/Weakest link, Good or Bad Economics/Competitive advantage, Opportunity cost, and Scale effects. I also think Keynes’ idea of changing your mind when you get new facts or information is very useful.

But since reality isn’t divided into different categories but involves a lot of factors interacting, I need to synthesize many ideas and concepts.

Are there any areas of the mental models approach you feel are misunderstood or misapplied?

I don’t know about that but what I often see among many smart people agrees with Munger’s comment: “All this stuff is really quite obvious and yet most people don’t really know it in a way where they can use it.”

Anyway, I believe if you really understand an idea and what it means – not only memorizing it – you should be able to work out its different applications and functional equivalents. Take a simple big idea – think on it – and after a while you see its wider applications. To use Feynman’s advice, “It is therefore of first-rate importance that you know how to “triangulate” – that is, to know how to figure something out from what you already know.” As a good friend says, “Learn the basic ideas, and the rest will fill itself in. Either you get it or you don’t.”

Most of us learn and memorize a specific concept or method etc. and learn about its application in one situation. But when the circumstances change we don’t know what to do and we don’t see that the concept may have a wider application and can be used in many situations.

Take, for example, one big and useful idea – Scale effects. The scale of size, time and outcomes changes things – characteristics, proportions, effects, behavior…and what is good or not must be tied to scale. This is a very fundamental idea from math. Munger described some of this idea’s usefulness in his worldly wisdom speech. One effect of this idea I often see people miss, and I believe is important, is group size and behavior. Trust, feelings of affection, and altruistic actions break down as group size increases, which of course is important to know in business settings. I wrote about this in Seeking Wisdom (you can read more if you search for the Dunbar Number on Google). I know of some businesses that understand the importance of this and split up companies into smaller ones when they get too big (one example is Semco).

Another general idea is “Gresham’s Law,” which can be generalized to any process or system where the bad drives out the good. Like natural selection or “We get what we select for” (and as Garrett Hardin writes, “The more general principle is: We get whatever we reward for.”).

While we are on the subject of mental models etc., let me bring up another thing that distinguishes the great thinkers from us ordinary mortals. Their ability to quickly assess and see the essence of a situation – the critical things that really matter and what can be ignored. They have a clear notion of what they want to achieve or avoid and then they have this ability to zoom in on the key factor(s) involved.

One reason why they can do that is that they have a large repertoire of stored personal and vicarious experiences and concepts in their heads. They are masters at pattern recognition and connection. Some call it intuition but as Herbert Simon once said, “The situation has provided a cue; this cue has given the expert access to information stored in memory, and the information provides the answer. Intuition is nothing more and nothing less than recognition.”

It is about making associations. For example, roughly like this:
Situation X → Association (what does this remind me of?) → experience, concept, metaphor, analogy, trick, filter… (assuming, of course, we are able to see the essence of the situation). What counts and what doesn’t? What works or not? What to do or what to explain?

Let’s take employing someone as an example (or looking at a business proposal). This reminds me of one key factor – trustworthiness and Buffett’s story, “If you’re looking for a manager, find someone who is intelligent, energetic and has integrity. If he doesn’t have the last, make sure he lacks the first two.”

I believe Buffett and Munger excel at this – they have seen and experienced so much about what works and not in business and behavior.

Buffett referred to the issue of trust, chain letters and pattern recognition at the latest annual meeting:

You can get into a lot of trouble with management that lacks integrity… If you’ve got an intelligent, energetic guy or woman who is pursuing a course of action which gets put on the front page, it could make you very unhappy. You can get into a lot of trouble…We’ve seen patterns…Pattern recognition is very important in evaluating humans and businesses. Pattern recognition isn’t one hundred percent and none of the patterns exactly repeat themselves, but there are certain things in business and securities markets that we’ve seen over and over and frequently come to a bad end but frequently look extremely good in the short run. One which I talked about last year was the chain letter scheme. You’re going to see chain letters for the rest of your life. Nobody calls them chain letters because that’s a connotation that will scare you off but they’re disguised as chain letters and many of the schemes on Wall Street, which are designed to fool people, have that particular aspect to it…There were patterns at Valeant certainly…if you go and watch the Senate hearings, you will see there are patterns that should have been picked up on.

This is what he wrote on chain letters in the 2014 annual report:

In the late 1960s, I attended a meeting at which an acquisitive CEO bragged of his “bold, imaginative accounting.” Most of the analysts listening responded with approving nods, seeing themselves as having found a manager whose forecasts were certain to be met, whatever the business results might be. Eventually, however, the clock struck twelve, and everything turned to pumpkins and mice. Once again, it became evident that business models based on the serial issuances of overpriced shares – just like chain-letter models – most assuredly redistribute wealth, but in no way create it. Both phenomena, nevertheless, periodically blossom in our country – they are every promoter’s dream – though often they appear in a carefully-crafted disguise. The ending is always the same: Money flows from the gullible to the fraudster. And with stocks, unlike chain letters, the sums hijacked can be staggering.

And of course, the more prepared we are or the more relevant concepts and “experiences” we have in our heads, the better we all will be at this. How do we get there? Reading, learning and practice so we know it “fluently.” There are no shortcuts. We have to work at it and apply it to the real world.

As a reminder to myself so I understand my limitation and “circle”, I keep a paragraph from Munger’s USC Gould School of Law Commencement Address handy so when I deal with certain issues, I don’t fool myself into believing I am Max Planck when I’m really the Chauffeur:

In this world I think we have two kinds of knowledge: One is Planck knowledge, that of the people who really know. They’ve paid the dues, they have the aptitude. Then we’ve got chauffeur knowledge. They have learned to prattle the talk. They may have a big head of hair. They often have fine timbre in their voices. They make a big impression. But in the end what they’ve got is chauffeur knowledge masquerading as real knowledge.

Which concepts from Buffett/Munger/Mental Models do you find most counterintuitive?

One trick or notion I see many of us struggling with, because it goes against our intuition, is the concept of inversion – learning to think “in negatives,” which goes against our normal tendency to concentrate on, for example, what we want to achieve, or on confirmations instead of what we want to avoid and disconfirmations. Another example of this is the importance of missing confirming evidence (I call it the “Sherlock trick”) – that negative evidence, and events that don’t happen, matter when something implies they should be present or happen.

Another example that is counterintuitive is Newton’s third law, that forces work in pairs. One object exerts a force on a second object, and the second object exerts a force on the first that is equal in magnitude and opposite in direction. As Newton wrote, “If you press a stone with your finger, the finger is also pressed by the stone.” Same as revenge (reciprocation).

Who are some of the non-obvious, or under-the-radar thinkers that you greatly admire?

One who immediately comes to mind, someone I have mentioned in the introduction to two of my books, is a man I am fortunate to have as a friend – Peter Kaufman. An outstanding thinker and a great businessman and human being. On a scale of 1 to 10, he is a 15.

What have you come to appreciate more with Buffett/Munger’s lessons as you’ve studied them over the years?

Their ethics and their ethos of clarity, simplicity and common sense. These two gentlemen are outstanding in their instant ability to exclude bad ideas, what doesn’t work, bad people, scenarios that don’t matter, etc. so they can focus on what matters. Also my amazement that their ethics and ideas haven’t been more replicated. But I assume the answer lies in what Munger once said, “The reason our ideas haven’t spread faster is they’re too simple.”

This reminds me of something my father-in-law once told me (a man I learnt a lot from) – the curse of knowledge and the curse of the academic title. My now deceased father-in-law was an inventor and manager. He did not have any formal education but was largely self-taught. Once a big corporation asked for his services to solve a problem their 60 highly educated engineers could not solve. He solved the problem. The engineers said, “It can’t be that simple.” It was as if they were saying, “Here we have 6 years of school, an academic title, lots of follow-up education. Therefore an engineering problem must be complicated.” Like Buffett once said of Ben Graham’s ideas, “I think that it comes down to those ideas – although they sound so simple and commonplace that it kind of seems like a waste to go to school and get a PhD in Economics and have it all come back to that. It’s a little like spending eight years in divinity school and having somebody tell you that the 10 commandments were all that counted. There is a certain natural tendency to overlook anything that simple and important.”

(I must admit that in the past I had a tendency to be drawn to elegant concepts, which distracted me from the simple truths.)

What things have you come to understand more deeply in the past few years?

• That I don’t need hundreds of concepts, methods or tricks in my head – there are a few basic, time-filtered fundamental ones that are good enough. As Munger says, “The more basic knowledge you have the less new knowledge you have to get.” And when I look at something “new”, I try to connect it to something I already understand and if possible get a wider application of an already existing basic concept that I already have in my head.
• Neither do I have to learn everything to cover every single possibility – not only is it impossible, but the big reason is well explained by the British statistician George Box. He said that we shouldn’t be preoccupied with optimal or best procedures but with procedures that are good enough over a range of possibilities likely to happen in practice – the circumstances the world really presents to us.
• The importance of “Picking my battles” and focus on the long-term consequences of my actions. As Munger said, “A majority of life’s errors are caused by forgetting what one is really trying to do.”
• How quick most of us are in drawing conclusions. For example, I am often too quick in being judgmental and forget how I myself behaved or would have behaved if put in another person’s shoes (and the importance of seeing things from many views).
• That I have to “pick my poison” since there is always a set of problems attached to any system or approach – it can’t be perfect. The key is to try to move to a better set of problems, one you can accept after comparing the apparent consequences of each.
• How efficient and simplified life is when you deal with people you can trust. This includes the importance of the right culture.
• The extreme importance of the right CEO – a good operator, business person and investor.
• That luck plays a big role in life.
• That most predictions are wrong and that prevention, robustness and adaptability is way more important. I can’t help myself – I have to add one thing about the people who give out predictions on all kinds of things. Often these are the people who live in a world where their actions have no consequences and where their ideas and theories don’t have to agree with reality.
• That people or businesses that are foolish in one setting often are foolish in another one (“The way you do anything, is the way you do everything”).
• Buffett’s advice that “A checklist is no substitute for thinking.” And that sometimes it is easy to overestimate one’s competency in a) identifying or picking what the dominant or key factors are and b) evaluating them including their predictability. That I believe I need to know factor A when I really need to know B – the critical knowledge that counts in the situation with regards to what I want to achieve.
• Close to this is that I sometimes get too involved in details and can’t see the forest for the trees and I get sent up too many blind alleys. Just as in medicine where a whole body scan sees too much and sends the doctor up blind alleys.
• The wisdom in Buffett’s advice that “You only have to be right on a very, very few things in your lifetime as long as you never make any big mistakes…An investor needs to do very few things right as long as he or she avoids big mistakes.”

What’s the best investment of time/effort/money that you’ve ever made?

The best thing I have done is marrying my wife. As Buffett says, and it is so true, “Choosing a spouse is the most important decision in your life…You need everything to be stable, and if that decision isn’t good, it may affect every other decision in life, including your business decisions…If you are lucky on health and…on your spouse, you are a long way home.”

A good “investment” is taking the time to continuously improve. It just takes curiosity and a desire to know and understand – real interest. And for me this is fun.

What does your typical day look like? (How much time do you spend reading… and when?)

Every day is a little different but I read every day.

What book has most impacted your life?

There is not one single book or one single idea that has done it. I have picked up things from different books (still do). And there are different books and articles that made a difference during different periods of my life. Meeting and learning from certain people, and my own practical experiences, have been more important in my development. As an example: when I was in my 30s, a good friend told me something that has been very useful in looking at products and businesses. He said I should always ask who the real customer is: “Who ultimately decides what to buy, what are their decision criteria, how are they measured and rewarded, and who pays?”

But looking back, if I had had a book like Poor Charlie’s Almanack when I was younger, I would have saved myself some misery. And of course, when it comes to business, managing and investing, nothing beats learning from Warren Buffett’s Letters to Berkshire Hathaway Shareholders.

Another thing I have found is that it is way better to read and reread fewer books but good and timeless ones and then think. Unfortunately many people absorb too many new books and information without thinking.

Let me finish this with some quotes from my new book that I believe we all can learn from:

• “There’s no magic to it…We haven’t succeeded because we have some great, complicated systems or magic formulas we apply or anything of the sort. What we have is just simplicity itself.” – Buffett
• “Our ideas are so simple that people keep asking us for mysteries when all we have are the most elementary ideas…There’s nothing remarkable about it. I don’t have any wonderful insights that other people don’t have. Just slightly more consistently than others, I’ve avoided idiocy…It is remarkable how much long-term advantage people like us have gotten by trying to be consistently not stupid, instead of trying to be very intelligent.” – Munger
• “It really is simple – just avoid doing the dumb things. Avoiding the dumb things is the most important.” – Buffett

Finally, I wish you and your readers an excellent day – Everyday!
