Tag: Decision Making

All Models Are Wrong

How is your journey towards understanding Farnam Street’s latticework of mental models going? Is it proving useful? Changing your view of the world? If the answer is that it’s going well, that’s good. There’s just one tiny hitch.

All models are wrong.

Yep. It's the truth. However, there is another part to that statement:

All models are wrong, some are useful.

Those words come from the British statistician George Box. In a groundbreaking 1976 paper, Box revealed the fallacy of our desire to categorize and organize the world. We create models (a term with many applications), only to confuse them with reality.

Box also stated:

Remember that all models are wrong; the practical question is how wrong do they have to be to not be useful.

What Exactly Is A Model?

First, we should understand precisely what a model is.

The dictionary definition states a model is ‘a representation, generally in miniature, to show the construction or appearance of something’ or ‘a simplified description, especially a mathematical one, of a system or process, to assist calculations and predictions.’

For our purposes here, we are better served by the second definition. A model is a simplification which fosters understanding.

Think of an architectural model. These are typically small-scale models of a building, made before it is built. Their purpose is to show what the building will look like and to help people working on the project develop a clear picture of the overall feel. In the iconic scene from Zoolander, Derek (played by Ben Stiller) looks at the architectural model of his proposed ‘school for kids who can’t read good’ and shouts, “What is this? A center for ants??”

That scene illustrates the wrong way to understand models: Too literally.

Why We Use Models, and Why They Work

At Farnam Street, we believe in using models to build a massive but finite body of fundamental, invariant knowledge about how the world really works. Applying this knowledge is the key to making good decisions and avoiding stupidity.

“Scientists generally agree that no theory is 100 percent correct. Thus, the real test of knowledge is not truth, but utility. Science gives us power. The more useful that power, the better the science.”

— Yuval Noah Harari

Time-tested models allow us to understand how things work in the real world. And understanding how things work prepares us to make better decisions without expending too much mental energy in the process.

Instead of relying on fickle and specialized facts, we can learn versatile concepts. The mental models we cover are intended to be widely applicable.

It's crucial for us to understand as many mental models as possible. As the adage goes, a little knowledge can be dangerous, creating more problems than total ignorance. No single model is universally applicable – we find exceptions for nearly everything. Even hardcore physics has not been totally solved.

“The basic trouble, you see, is that people think that “right” and “wrong” are absolute; that everything that isn't perfectly and completely right is totally and equally wrong.”

— Isaac Asimov

Take a look at almost any comment section on the internet and you are guaranteed to find at least one pedant raging about a minor perceived inaccuracy, throwing out the good with the bad. While ignorance and misinformation are certainly not laudable, neither is an obsession with perfection.

Like heuristics, models work because they are usually helpful in most situations, not because they are always helpful in a small number of them.

Models can assist us in making predictions and forecasting the future. Forecasts are never guaranteed, yet they provide us with a degree of preparedness and comprehension of the future. For example, a weather forecast which claims it will rain today may get that wrong. Still, it's correct often enough to enable us to plan appropriately and bring an umbrella.

Mental Models and Minimum Viable Products

Think of mental models as minimum viable products.

Sure, all of them can be improved. But the only way that can happen is if we try them out, educate ourselves and collectively refine them.

We can apply one of our mental models, Occam’s razor, to this. Occam’s razor states that, other things being equal, the simplest explanation is usually the correct one. In the same way, our simplest mental models tend to be the most useful, because they leave minimal room for error and misapplication.

“The world doesn’t have the luxury of waiting for complete answers before it takes action.”

— Daniel Gilbert

Your kitchen knives are not as sharp as they could be. Does that matter as long as they still cut vegetables? Your bed is not as comfortable as it could be. Does that matter if you can still get a good night’s sleep in it? Your internet is not as fast as it could be. Does that matter as long as you can load this article? Arguably not. Our world runs on the functional, not the perfect. This is what a mental model is – a functional tool. A tool which maybe could be a bit sharper or easier to use, but still does the job.

The statistician David Hand made the following statement in 2014:

In general, when building statistical models, we must not forget that the aim is to understand something about the real world. Or predict, choose an action, make a decision, summarize evidence, and so on, but always about the real world, not an abstract mathematical world: our models are not the reality.

Decades earlier, in 1960, Georg Rasch made much the same point:

When you construct a model you leave out all the details which you, with the knowledge at your disposal, consider inessential…. Models should not be true, but it is important that they are applicable, and whether they are applicable for any given purpose must, of course, be investigated. This also means that a model is never accepted finally, only on trial.

Imagine a world where physics-like precision is prized over usefulness.

We would lack medical care because a medicine or procedure can never be perfect. In a world like this, we would possess little scientific knowledge, because research can never be 100% accurate. We would have no art because a work can never be completed. We would have no technology because there are always little flaws which can be ironed out.

“A model is a simplification or approximation of reality and hence will not reflect all of reality … While a model can never be “truth,” a model might be ranked from very useful, to useful, to somewhat useful to, finally, essentially useless.”

— Ken Burnham and David Anderson

In short, we would have nothing. Everything around us is imperfect and uncertain. Some things are more imperfect than others, but issues are always there. Over time, incremental improvements happen through unending experimentation and research.

The Map is Not the Territory

As we know, the map is not the territory. A map can be seen as a symbol or index of a place, not an icon.

When we look at a map of Paris, we know it is a representation of the actual city. There are bound to be flaws: streets which have been renamed, demolished buildings, perhaps a new Metro line. Even so, the map will help us find our way. It is far more useful to have a map showing the way from Notre Dame to Gare du Nord (a tool) than to know how many meters they are apart (a piece of trivia).

Someone who has spent a lot of time studying a map will be able to use it with greater ease, just like a mental model. Someone who lives in Paris will find the map easier to understand than a tourist, just as someone who uses a mental model in their day to day life will apply it better than a novice. As long as there are no major errors, we can consider the map useful, even if it is by no means a reflection of reality. Gregory Bateson writes in Steps to an Ecology of Mind that the purpose of a map is not to be true, but to have a structure which represents truth within the current context.

“A map is not the territory it represents, but, if correct, it has a similar structure to the territory, which accounts for its usefulness.”

— Alfred Korzybski

Physical maps generally become more accurate as time passes. Not long ago, they often included countries which didn’t exist, omitted some which did, portrayed the world as flat or fudged distances. Nowadays, our maps have come a long way.

The same goes for mental models – they are always evolving, being revised – never really achieving perfection. Certainly, over time, the best models are revised only slightly, but we must never consider our knowledge “set”.

Another factor to consider when using models is what they're used for.

Many mental models (e.g. entropy, critical mass and activation energy) are based upon scientific and mathematical concepts. A person who works in those areas will obviously need a deeper understanding of them than someone who wants to learn to think better when making investment decisions. They will need a different, more detailed map, one showing elements which the rest of us have no need for.

“A model which took account of all the variation of reality would be of no more use than a map at the scale of one to one.”

— Joan Robinson

In Partial Enchantments of the Quixote, Jorge Luis Borges provides an even more interesting analysis of the confusion between models and reality:

Let us imagine that a portion of the soil of England has been leveled off perfectly and that on it a cartographer traces a map of England. The job is perfect; there is no detail of the soil of England, no matter how minute that is not registered on the map; everything has there its correspondence. This map, in such a case, should contain a map of the map, which should contain a map of the map of the map, and so on to infinity. Why does it disturb us that the map be included in the map and the thousand and one nights in the book of the Thousand and One Nights? Why does it disturb us that Don Quixote be a reader of the Quixote and Hamlet a spectator of Hamlet? I believe I have found the reason: these inversions suggest that if the characters of a fictional work can be readers or spectators, we, its readers or spectators, can be fictions.

How Do We Know If A Model Is Useful?

This is a tricky question to answer. When looking at any model, it is helpful to ask some of the following questions:

  • How long has this model been around? As a general rule, mental models which have been around for a long time (such as Occam’s razor) will have been subjected to a great deal of scrutiny. Time is an excellent curator, trimming away inefficient ideas. A mental model which is new may not be particularly refined or versatile. Many of our mental models originate from Ancient Greece and Rome, meaning they have to be functional to have survived this long.
  • Is it a representation of reality? In other words, does it reflect the real world? Or is it based on abstractions?
  • Does this model apply to multiple areas? The more elastic a model is, the more valuable it is to learn about. (Of course, be careful not to apply the model where it doesn't belong. Mind Feynman: “You must not fool yourself, and you're the easiest person to fool.”)
  • How did this model originate? Many mental models arise from scientific or mathematical concepts. The more fundamental the domain, the more likely the model is to be true and lasting.
  • Is it based on first principles? A first principle is a foundational concept which cannot be deduced from any other concept and must be known.
  • Does it require infinite regress? Infinite regress refers to something which is justified by principles, which themselves require justification by other principles. A model based on infinite regress is likely to require extensive knowledge of a particular topic, and have minimal real-world application.

When using any mental model, we must avoid becoming too rigid. There are exceptions to all of them, and situations in which they are not applicable.

Think of the latticework as a toolkit. That's why it pays to do the work up front to put as many models as possible in your toolbox at a deep level. If you only have one or two, you're likely to attempt to use them in places that don't make sense. If you've absorbed them only lightly, you will not be able to use them when the time comes.

If, on the other hand, you have a toolbox full of them and they're sunk in deep, you're more likely to pull out the best ones for the job exactly when they are needed.

Too many people are caught up wasting time on physics-like precision in areas of practical life that do not have such precision available. A better approach is to ask “Is it useful?” and, if yes, “To what extent?”

Mental models are a way of thinking about the world that prepares us to make good decisions in the first place.

Rory Sutherland on The Psychology of Advertising, Complex Evolved Systems, Reading, Decision Making

“There is a huge danger in looking at life as an optimization problem.”

***

Rory Sutherland (@rorysutherland) is the Vice Chairman of Ogilvy & Mather Group, which is one of the largest advertising companies in the world.

Rory started the behavioral insights team and spends his days applying behavioral economics and evolutionary psychology to solve problems that conventional advertising agencies haven't been able to solve.

In this wide-ranging interview we talk about: how advertising agencies are solving airport security problems, what Silicon Valley misses, how to mess with self-driving cars, reading habits, decision making, the intersection of advertising and psychology, and so much more.

This interview was recorded live in London, England.

Enjoy this amazing conversation.

“The problem with economics is not only that it is wrong but that it's incredibly creatively limiting.”

Listen

Transcript
A lot of people like to take notes while listening. A transcription of this conversation is available to members of our learning community or you can purchase one separately.

***

If you liked this, check out all the episodes of The Knowledge Project.

Get Smart: Three Ways of Thinking to Make Better Decisions and Achieve Results

“Give me six hours to chop down a tree and I will spend the first four sharpening the axe.”
— Abraham Lincoln

***

Your ability to think clearly determines the decisions you make and the actions you take.

In Get Smart!: How to Think and Act Like the Most Successful and Highest-Paid People in Every Field, author Brian Tracy presents ten different ways of thinking that enable better decisions. Better decisions free up your time and improve results. At Farnam Street, we believe that a multidisciplinary approach based on mental models allows you to gauge situations from different perspectives and profoundly affect the quality of decisions you make.

Most of us slip into a comfort zone of what Tracy calls “easy thinking and decision-making.” We use less than our cognitive capacity because we become lazy and jump to simple conclusions.

This isn't about being faster. I disagree with the belief that decisions should be, first and foremost, fast and efficient. A better approach is to be effective. If it takes longer to come to a better decision, so be it. In the long run, this will pay for itself over and over with fewer messes, more free time, and less anxiety.

In Get Smart, Tracy does a good job of showing people a series of simple, practical, and powerful ways of examining a situation to improve the odds you're making the best decision.

Let's take a look at a few of them.

1. Long-Time Perspective Versus Short-Time Perspective

Dr. Edward Banfield of Harvard University studied upward economic mobility for almost 50 years. He wondered why some people and families moved from lower socioeconomic classes to higher ones and some didn't. A lot of these people moved from labor jobs to riches in one lifetime. He wanted to know why. His findings are summarized in the controversial book, The Unheavenly City. Banfield offered one simple conclusion that has endured. He concluded that “time perspective” was overwhelmingly the most important factor.

Tracy picks us up here:

At the lowest socioeconomic level, lower-lower class, the time perspective was often only a few hours, or minutes, such as in the case of the hopeless alcoholic or drug addict, who thinks only about the next drink or dose.

At the highest level, those who were second- or third-generation wealthy, their time perspective was many years, decades, even generations into the future. It turns out that successful people are intensely future oriented. They think about the future most of the time.

[…]

The very act of thinking long term sharpens your perspective and dramatically improves the quality of your short-term decision making.

So what should we do about this? Tracy advises:

Resolve today to develop long-time perspective. Become intensely future oriented. Think about the future most of the time. Consider the consequences of your decisions and actions. What is likely to happen? And then what could happen? And then what? Practice self-discipline, self-mastery, and self-control. Be willing to pay the price today in order to enjoy the rewards of a better future tomorrow.

Sounds a lot like Garrett Hardin's three lessons from ecology. But really what we're talking about here is second-level thinking.

2. Slow Thinking 

“If it is not necessary to decide, it is necessary not to decide.” 
— Lord Acton

I don't know many consistently successful people or organizations that are constantly reacting without thinking. And yet most of us are habitually in reactive mode. We react and respond to what's happening around us with little deliberate thought.

“From the first ring of the alarm clock,” Tracy writes, we are “largely reacting and responding to stimuli from [our] environment.” This feeds our impulses and appetites. “The normal thinking process is almost instantaneous: stimulus, then immediate response, with no time in between.”

The superior thinking process is also triggered by stimulus, but between the stimulus and the response there is a moment or more where you think before you respond. Just like your mother told you, “Count to ten before you respond, especially when you are upset or angry.”

The very act of stopping to think before you say or do anything almost always improves the quality of your ultimate response. It is an indispensable requirement for success.

One of the best things we can do to improve the quality of our thinking is to understand when we gain an advantage from slow thinking and when we don't.

Ask yourself “does this decision require fast or slow thinking?” 

Shopping for toothpaste is a situation where we derive little benefit from slow thinking. On the other hand, if we're making an acquisition or an investment, we want to be deliberate. Where do we draw the line? A good shortcut is to consider the consequences. Telling your boss he's an idiot when he says something stupid is going to feel really good in the moment but carry lasting consequences. Don't react.

Pause. Think. Act. 

This sounds easy but it's not. One habit you can develop is to continually ask “How do we know this is true?” for the pieces of information you think are relevant to the decision.

3. Informed Thinking Versus Uninformed Thinking

“Beware of endeavouring to be a great man in a hurry.
One such attempt in ten thousand may succeed: these are fearful odds.”
—Benjamin Disraeli

 

I know a lot of entrepreneurs and most of them religiously say the same two words “due diligence.” In fact, a great friend of mine has a 20+ page due diligence checklist. This means taking the time to make the right decision. You may be wrong but it won't be because you rushed. Of course, most of the people who preach due diligence have skin in the game. It's easier to be cavalier (or stupid) when it's heads I win and tails I don't lose much (hello government).

Harold Geneen, who formed a conglomerate at ITT, said, “The most important elements in business are facts. Get the real facts, not the obvious facts or assumed facts or hoped-for facts. Get the real facts. Facts don’t lie.”

Heck, use the scientific method. Tracy writes:

Create a hypothesis— a yet-to-be-proven theory. Then seek ways to invalidate this hypothesis, to prove that your idea is wrong. This is what scientists do.

This is exactly the opposite of what most people do. They come up with an idea, and then they seek corroboration and proof that their idea is a good one. They practice “confirmation bias.” They only look for confirmation of the validity of the idea, and they simultaneously reject all input or information that is inconsistent with what they have already decided to believe.

Create a negative or reverse hypothesis. This is the opposite of your initial theory. For example, you are Isaac Newton, and the idea of gravity has just occurred to you. Your initial hypothesis would be that “things fall down.” You then attempt to prove the opposite—“things fall up.”

If you cannot prove the reverse or negative hypothesis of your idea, you can then conclude that your hypothesis is correct.

 

***

One of the reasons why Charles Darwin was such an effective thinker is that he relentlessly sought out disconfirming evidence.

As the psychologist Jerry Jampolsky once wrote, “Do you want to be right or do you want to be happy?”

It is amazing how many people come up with a new product or service idea and then fall in love with the idea long before they validate whether or not this is something that a sufficient number of customers are willing to buy and pay for.

Keep gathering information until the proper course of action becomes clear, as it eventually will. Check and double-check your facts. Assume nothing on faith. Ask, “How do we know that this is true?”

Finally, search for the hidden flaw, the one weak area in the decision that could prove fatal to the product or business if it occurred. J. Paul Getty, once the richest man in the world, was famous for his approach to making business decisions. He said, “We first determine that it is a good business opportunity. Then we ask, ‘What is the worst possible thing that could happen to us in this business opportunity?’ We then go to work to make sure that the worst possible outcome does not occur.”

Most importantly, never stop gathering information. One of the reasons that Warren Buffett is so successful is that he spends most of his day reading and thinking. I call this the Buffett Formula.

 

***

If you're a knowledge worker decisions are your product. Milton Friedman, the economist, wrote: “The best measure of quality thinking is your ability to accurately predict the consequences of your ideas and subsequent actions.”

If there were a single message to Get Smart, it's another plus in the Farnam Street mold of being conscious. Stop and think before deciding — especially if the consequences are serious. The more ways you have to look at a problem, the more likely you are to better understand. And when you understand a problem — when you really understand a problem — the solution becomes obvious. A friend of mine has a great expression: “To understand is to know what to do.”

Get Smart goes on to talk about goal- and result-oriented thinking, positive and negative thinking, entrepreneurial vs. corporate thinking, and more.

Do Algorithms Beat Us at Complex Decision Making?

Algorithms are all the rage these days. AI researchers are taking more and more ground from humans in areas like rules-based games, visual recognition, and medical diagnosis. However, the idea that algorithms make better predictive decisions than humans in many fields is a very old one.

In 1954, the psychologist Paul Meehl published a controversial book with a boring sounding name: Clinical vs. Statistical Prediction: A Theoretical Analysis and a Review of the Evidence.

The controversy? After reviewing the data, Meehl claimed that mechanical, data-driven algorithms could better predict human behavior than trained clinical psychologists — and with much simpler criteria. He was right.

The passing of time has not been friendly to humans in this game: Studies continue to show that algorithms do a better job than experts in a range of fields. In Thinking, Fast and Slow, Daniel Kahneman details a selection of fields in which human judgment has proven inferior to algorithms:

The range of predicted outcomes has expanded to cover medical variables such as the longevity of cancer patients, the length of hospital stays, the diagnosis of cardiac disease, and the susceptibility of babies to sudden infant death syndrome; economic measures such as the prospects of success for new businesses, the evaluation of credit risks by banks, and the future career satisfaction of workers; questions of interest to government agencies, including assessments of the suitability of foster parents, the odds of recidivism among juvenile offenders, and the likelihood of other forms of violent behavior; and miscellaneous outcomes such as the evaluation of scientific presentations, the winners of football games, and the future prices of Bordeaux wine.

The connection between them? Says Kahneman: “Each of these domains entails a significant degree of uncertainty and unpredictability.” He called them “low-validity environments”, and in those environments, simple algorithms matched or outplayed humans and their “complex” decision making criteria, essentially every time.

***

A typical case is described in Michael Lewis' book on the relationship between Daniel Kahneman and Amos Tversky, The Undoing Project. He writes of work done at the Oregon Research Institute on radiologists and their x-ray diagnoses:

The Oregon researchers began by creating, as a starting point, a very simple algorithm, in which the likelihood that an ulcer was malignant depended on the seven factors doctors had mentioned, equally weighted. The researchers then asked the doctors to judge the probability of cancer in ninety-six different individual stomach ulcers, on a seven-point scale from “definitely malignant” to “definitely benign.” Without telling the doctors what they were up to, they showed them each ulcer twice, mixing up the duplicates randomly in the pile so the doctors wouldn't notice they were being asked to diagnose the exact same ulcer they had already diagnosed. […] The researchers' goal was to see if they could create an algorithm that would mimic the decision making of doctors.

This simple first attempt, [Lewis] Goldberg assumed, was just a starting point. The algorithm would need to become more complex; it would require more advanced mathematics. It would need to account for the subtleties of the doctors' thinking about the cues. For instance, if an ulcer was particularly big, it might lead them to reconsider the meaning of the other six cues.

But then UCLA sent back the analyzed data, and the story became unsettling. (Goldberg described the results as “generally terrifying”.) In the first place, the simple model that the researchers had created as their starting point for understanding how doctors rendered their diagnoses proved to be extremely good at predicting the doctors' diagnoses. The doctors might want to believe that their thought processes were subtle and complicated, but a simple model captured these perfectly well. That did not mean that their thinking was necessarily simple, only that it could be captured by a simple model.

More surprisingly, the doctors' diagnoses were all over the map: The experts didn't agree with each other. Even more surprisingly, when presented with duplicates of the same ulcer, every doctor had contradicted himself and rendered more than one diagnosis: These doctors apparently could not even agree with themselves.

[…]

If you wanted to know whether you had cancer or not, you were better off using the algorithm that the researchers had created than you were asking the radiologist to study the X-ray. The simple algorithm had outperformed not merely the group of doctors; it had outperformed even the single best doctor.

The fact that doctors (and psychiatrists, and wine experts, and so forth) cannot even agree with themselves is a problem called decision making “noise”: Given the same set of data twice, we make two different decisions. Noise. Internal contradiction.

Algorithms win, at least partly, because they don't do this: The same inputs generate the same outputs every single time. They don't get distracted, they don't get bored, they don't get mad, they don't get annoyed. Basically, they don't have off days. And they don't fall prey to the litany of biases that humans do, like the representativeness heuristic.

The algorithm doesn't even have to be a complex one. As demonstrated above with radiology, simple rules work just as well as complex ones. Kahneman himself addresses this in Thinking, Fast and Slow when discussing Robyn Dawes's research on the superiority of simple algorithms using a few equally-weighted predictive variables:

The surprising success of equal-weighting schemes has an important practical implication: it is possible to develop useful algorithms without prior statistical research. Simple equally weighted formulas based on existing statistics or on common sense are often very good predictors of significant outcomes. In a memorable example, Dawes showed that marital stability is well predicted by a formula: Frequency of lovemaking minus frequency of quarrels.

You don't want your result to be a negative number.

The important conclusion from this research is that an algorithm that is constructed on the back of an envelope is often good enough to compete with an optimally weighted formula, and certainly good enough to outdo expert judgment. This logic can be applied in many domains, ranging from the selection of stocks by portfolio managers to the choices of medical treatments by doctors or patients.
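
To make the equal-weighting idea concrete, here is a minimal Python sketch of an “improper linear model” in the spirit Dawes describes: put each cue on a comparable footing, give every cue the same weight, and add them up. The predictor names, signs, and example values are hypothetical illustrations, not figures from the research.

```python
# A minimal sketch (not code from the studies): an equally weighted
# linear model in the spirit of Dawes. Predictor names are hypothetical.
from statistics import mean, pstdev

def equal_weight_scores(cases, predictors, signs):
    """cases: list of dicts mapping predictor name -> raw value.
    signs: +1 if more of a cue predicts the outcome, -1 if less does.
    Returns one score per case: the sum of equally weighted z-scores."""
    stats = {}
    for p in predictors:
        values = [c[p] for c in cases]
        stats[p] = (mean(values), pstdev(values) or 1.0)  # avoid divide-by-zero
    return [
        sum(signs[p] * (c[p] - stats[p][0]) / stats[p][1] for p in predictors)
        for c in cases
    ]

# Dawes's memorable special case: marital stability predicted by
# frequency of lovemaking minus frequency of quarrels.
def marital_stability(lovemaking_per_week, quarrels_per_week):
    return lovemaking_per_week - quarrels_per_week  # negative is a bad sign

# Hypothetical usage: two cues, both positively related to the outcome.
scores = equal_weight_scores(
    cases=[{"experience": 4, "test": 70}, {"experience": 9, "test": 85}],
    predictors=["experience", "test"],
    signs={"experience": +1, "test": +1},
)
```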

Stock selection, certainly a “low validity environment”, is an excellent example of the phenomenon.

As John Bogle pointed out to the world in the 1970s, a point which has only strengthened with time, the vast majority of human stock-pickers cannot outperform a simple S&P 500 index fund, an investment fund that operates on strict algorithmic rules about which companies to buy and sell and in what quantities. The rules of the index aren't complex, and many people have tried to improve on them with less success than might be imagined.
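
For a sense of how mechanical such rules are, here is a minimal sketch of the simplest version: weight each holding by its market capitalization. The tickers and numbers are hypothetical, and real index methodologies (eligibility screens, float adjustment, scheduled rebalancing) are more involved than this.

```python
# A minimal sketch of a purely mechanical index rule: hold every eligible
# stock in proportion to its market capitalization. Tickers and market
# caps below are hypothetical.

def cap_weights(market_caps):
    """market_caps: dict of ticker -> market capitalization (same currency).
    Returns each ticker's target portfolio weight."""
    total = sum(market_caps.values())
    return {ticker: cap / total for ticker, cap in market_caps.items()}

weights = cap_weights({"AAA": 2.5e12, "BBB": 8.0e11, "CCC": 3.0e11})
# Trade only to keep holdings near these weights; no judgment calls, no noise.
```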

***

Another interesting area where this holds is interviewing and hiring, a notoriously difficult “low-validity” environment. Even elite firms often don't do it that well, as has been well documented.

Fortunately, if we heed the psychologists' advice, there are rules for operating in a low-validity environment that can work very well. In Thinking, Fast and Slow, Kahneman recommends fixing your hiring process by doing the following (or some close variant), in order to replicate the success of the algorithms:

Suppose you need to hire a sales representative for your firm. If you are serious about hiring the best possible person for the job, this is what you should do. First, select a few traits that are prerequisites for success in this position (technical proficiency, engaging personality, reliability, and so on). Don't overdo it — six dimensions is a good number. The traits you choose should be as independent as possible from each other, and you should feel that you can assess them reliably by asking a few factual questions. Next, make a list of questions for each trait and think about how you will score it, say on a 1-5 scale. You should have an idea of what you will call “very weak” or “very strong.”

These preparations should take you half an hour or so, a small investment that can make a significant difference in the quality of the people you hire. To avoid halo effects, you must collect the information on one trait at a time, scoring each before you move on to the next one. Do not skip around. To evaluate each candidate, add up the six scores. […] Firmly resolve that you will hire the candidate whose final score is the highest, even if there is another one whom you like better–try to resist your wish to invent broken legs to change the ranking. A vast amount of research offers a promise: you are much more likely to find the best candidate if you use this procedure than if you do what people normally do in such situations, which is to go into the interview unprepared and to make choices by an overall intuitive judgment such as “I looked into his eyes and liked what I saw.”
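
As a sketch of how little machinery the procedure needs, here is a minimal Python version. The six traits and the example ratings are hypothetical stand-ins; this illustrates the kind of checklist Kahneman describes rather than prescribing one.

```python
# A minimal sketch of the structured-interview procedure described above.
# The traits and the candidate ratings are hypothetical examples.

TRAITS = ["technical proficiency", "engaging personality", "reliability",
          "work ethic", "communication", "judgment"]

def total_score(ratings):
    """ratings: trait -> score on a 1-5 scale, collected one trait at a
    time (to limit halo effects). A candidate's score is the plain sum."""
    assert set(ratings) == set(TRAITS)
    assert all(1 <= s <= 5 for s in ratings.values())
    return sum(ratings.values())

def pick_best(candidates):
    """candidates: name -> ratings dict. Commit to hiring the top total
    score, even if intuition prefers someone else."""
    return max(candidates, key=lambda name: total_score(candidates[name]))

best = pick_best({
    "Candidate A": dict(zip(TRAITS, [4, 3, 5, 4, 3, 4])),
    "Candidate B": dict(zip(TRAITS, [5, 5, 2, 3, 4, 3])),
})
```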

In the battle of man versus algorithm, man often loses; that is precisely the promise of artificial intelligence. So if we're going to be smart humans, we must learn to be humble in situations where our intuitive judgment simply is not as good as a set of simple rules.

Naval Ravikant on Reading, Happiness, Systems for Decision Making, Habits, Honesty and More

Naval Ravikant (@naval) is the CEO and co-founder of AngelList. He’s invested in more than 100 companies, including Uber, Twitter, Yammer, and many others.

Don’t worry, we’re not going to talk about early stage investing. Naval’s an incredibly deep thinker who challenges the status quo on so many things.

In this wide-ranging interview, we talk about reading, habits, decision-making, mental models, and life.

Just a heads up, this is the longest podcast I’ve ever done. While it felt like only thirty minutes, our conversation lasted over two hours!

If you’re like me, you’re going to take a lot of notes so grab a pen and paper. I left some white space on the transcript below in case you want to take notes in the margin.

Enjoy this amazing conversation.

******

Listen

***

Books mentioned

Transcript

Normally only members of our learning community have access to transcripts, however, we wanted to make this one open to everyone. Here's the complete transcript of the interview with Naval.

***

If you liked this, check out all the episodes of The Knowledge Project.

Moving the Finish Line: The Goal Gradient Hypothesis

Imagine a middle-distance runner competing in a championship race: the 1600 meter run.

The first two laps he runs at a steady but hard pace, trying to keep himself consistently near the head, or at least the middle, of the pack, hoping not to fall too far behind while also conserving energy for the whole race.

About 800 meters in, he feels himself start to fatigue and slow. At 1000 meters, he feels himself consciously expending less energy. At 1200, he’s convinced that he didn’t train enough.

Now watch him approach the last 100 meters, the “mad dash” for the finish. He’s been running what would be an all-out sprint to us mortals for 1500 meters, and yet what happens now, as he feels himself neck and neck with his competitors, the finish line in sight?

He speeds up. That energy drag is done. The goal is right there, and all he needs is one last push. So he pushes.

This is called the Goal Gradient Effect, or more precisely, the Goal Gradient Hypothesis. Its effect on biological creatures is not just a feeling, but a real and measurable thing.

***

The first person to try explaining the goal gradient hypothesis was an early behavioural psychologist named Clark L. Hull.

Hull was a pretty hardcore “behaviourist”: he thought that human behaviour, like that of other animals, could eventually be reduced to mathematical prediction based on rewards and conditioning. As insane as this sounds now, he had a neat mathematical formula for behaviour:

[Image: Hull’s mathematical formula for behaviour]

Some of his ideas eventually came to be seen as extremely limiting, Procrustean-bed-style models of human behavior, but the Goal Gradient Hypothesis has been replicated many times over the years.

Hull himself wrote papers with titles like The Goal-Gradient Hypothesis and Maze Learning to explore the effect of the idea in rats. As Hull put it, “...animals in traversing a maze will move at a progressively more rapid pace as the goal is approached.” Just like the runner above.

Most of the work Hull focused on involved animals rather than humans, and it showed fairly unequivocally that, in the context of approaching a reward, the animals did speed up as the goal approached, enticed by the end of the maze. The idea was, however, resurrected in the human realm in 2006 with a paper entitled The Goal-Gradient Hypothesis Resurrected: Purchase Acceleration, Illusionary Goal Progress, and Customer Retention.

The paper examined consumer behaviour in the “goal gradient” sense and found, alas, it wasn’t just rats that felt the tug of the “end of the race” — we do too. Examining a few different measurable areas of human behaviour, the researchers found that consumers would work harder to earn incentives as the goal came in sight, and that after the reward was earned, they'd slow down their efforts:

We found that members of a café RP accelerated their coffee purchases as they progressed toward earning a free coffee. The goal-gradient effect also generalized to a very different incentive system, in which shorter goal distance led members to visit a song-rating Web site more frequently, rate more songs during each visit, and persist longer in the rating effort. Importantly, in both incentive systems, we observed the phenomenon of post-reward resetting, whereby customers who accelerated toward their first reward exhibited a slowdown in their efforts when they began work (and subsequently accelerated) toward their second reward. To the best of our knowledge, this article is the first to demonstrate unequivocal, systematic behavioural goal gradients in the context of the human psychology of rewards.

Fascinating.

***

If we’re to take the idea seriously, the Goal Gradient Hypothesis has some interesting implications for leaders and decision-makers.

The first and most important is probably that incentive structures should take the idea into account. This is a fairly intuitive (but often unrecognized) idea: Far-away rewards are much less motivating than near term ones. Given the chance to earn $1,000 at the end of this month, and each thereafter, or $12,000 at the end of the year, which would you be more likely to work hard for?

What if I pushed it back even more but gave you some “interest” to compensate: Would you work harder for the potential to earn $90,000 five years from now, or to earn $1,000 this month, followed by $1,000 the following month, and so on, every single month for the next five years? (The monthly payments total only $60,000, so the lump sum pays a 50 percent premium for waiting.)

Companies like Nucor take the idea seriously: They pay bonuses to lower-level employees based on monthly production, not letting it wait until the end of the year. Essentially, the end of the maze happens every 30 days rather than once per year. The time between doing the work and the reward is shortened.

The other takeaway concerns consumer behaviour, as referenced in the marketing paper. If you’re offering rewards for a specific action from your customers, do you reward them sooner, or later?

The answer is almost always going to be “sooner”. In fact, the effect may be strong enough that you can get away with smaller total rewards by increasing their velocity.

Lastly, we might be able to harness the Hypothesis in our personal lives.

Let’s say we want to start reading more. Do we set a goal to read 52 books this year and hold ourselves accountable, or to read 1 book a week? What about 25 pages per day?

Not only does moving the finish line closer tend to increase our motivation, but we also repeatedly prove to ourselves that we’re capable of reaching our goals. This is classic behavioural psychology: instant rewards rather than delayed ones (even if they’re only psychological). It also keeps us from procrastinating and, for example, leaving 35 books to be read in the last two months of the year.

Those three seem like useful lessons, but here’s a challenge: Try synthesizing a new rule or idea of your own, combining the Goal Gradient Effect with at least one other psychological principle, and start testing it out in your personal life or in your organization. Don’t let useful nuggets sit around; instead, start eating the broccoli.