Category: Decision Making

Do Algorithms Beat Us at Complex Decision Making?

Algorithms are all the rage these days. AI researchers are taking more and more ground from humans in areas like rules-based games, visual recognition, and medical diagnosis. However, the idea that algorithms make better predictive decisions than humans in many fields is a very old one.

In 1954, the psychologist Paul Meehl published a controversial book with a boring-sounding name: Clinical vs. Statistical Prediction: A Theoretical Analysis and a Review of the Evidence.

The controversy? After reviewing the data, Meehl claimed that mechanical, data-driven algorithms could better predict human behavior than trained clinical psychologists — and with much simpler criteria. He was right.

The passing of time has not been friendly to humans in this game: Studies continue to show that algorithms do a better job than experts in a range of fields. In Thinking, Fast and Slow, Daniel Kahneman details a selection of fields in which human judgment has proven inferior to algorithms:

The range of predicted outcomes has expanded to cover medical variables such as the longevity of cancer patients, the length of hospital stays, the diagnosis of cardiac disease, and the susceptibility of babies to sudden infant death syndrome; economic measures such as the prospects of success for new businesses, the evaluation of credit risks by banks, and the future career satisfaction of workers; questions of interest to government agencies, including assessments of the suitability of foster parents, the odds of recidivism among juvenile offenders, and the likelihood of other forms of violent behavior; and miscellaneous outcomes such as the evaluation of scientific presentations, the winners of football games, and the future prices of Bordeaux wine.

The connection between them? Says Kahneman: “Each of these domains entails a significant degree of uncertainty and unpredictability.” He called them “low-validity environments”, and in those environments, simple algorithms matched or outplayed humans and their “complex” decision making criteria, essentially every time.

***

A typical case is described in Michael Lewis' book on the relationship between Daniel Kahneman and Amos Tversky, The Undoing Project. He writes of work done at the Oregon Research Institute on radiologists and their x-ray diagnoses:

The Oregon researchers began by creating, as a starting point, a very simple algorithm, in which the likelihood that an ulcer was malignant depended on the seven factors doctors had mentioned, equally weighted. The researchers then asked the doctors to judge the probability of cancer in ninety-six different individual stomach ulcers, on a seven-point scale from “definitely malignant” to “definitely benign.” Without telling the doctors what they were up to, they showed them each ulcer twice, mixing up the duplicates randomly in the pile so the doctors wouldn't notice they were being asked to diagnose the exact same ulcer they had already diagnosed. […] The researchers' goal was to see if they could create an algorithm that would mimic the decision making of doctors.

This simple first attempt, [Lewis] Goldberg assumed, was just a starting point. The algorithm would need to become more complex; it would require more advanced mathematics. It would need to account for the subtleties of the doctors' thinking about the cues. For instance, if an ulcer was particularly big, it might lead them to reconsider the meaning of the other six cues.

But then UCLA sent back the analyzed data, and the story became unsettling. (Goldberg described the results as “generally terrifying”.) In the first place, the simple model that the researchers had created as their starting point for understanding how doctors rendered their diagnoses proved to be extremely good at predicting the doctors' diagnoses. The doctors might want to believe that their thought processes were subtle and complicated, but a simple model captured these perfectly well. That did not mean that their thinking was necessarily simple, only that it could be captured by a simple model.

More surprisingly, the doctors' diagnoses were all over the map: The experts didn't agree with each other. Even more surprisingly, when presented with duplicates of the same ulcer, every doctor had contradicted himself and rendered more than one diagnosis: These doctors apparently could not even agree with themselves.

[…]

If you wanted to know whether you had cancer or not, you were better off using the algorithm that the researchers had created than you were asking the radiologist to study the X-ray. The simple algorithm had outperformed not merely the group of doctors; it had outperformed even the single best doctor.

The fact that doctors (and psychiatrists, and wine experts, and so forth) cannot even agree with themselves is a problem called decision making “noise”: Given the same set of data twice, we make two different decisions. Noise. Internal contradiction.

Algorithms win, at least partly, because they don't do this: The same inputs generate the same outputs every single time. They don't get distracted, they don't get bored, they don't get mad, they don't get annoyed. Basically, they don't have off days. And they don't fall prey to the litany of biases that humans do, like the representativeness heuristic.

The algorithm doesn't even have to be a complex one. As demonstrated above with radiology, simple rules work just as well as complex ones. Kahneman himself addresses this in Thinking, Fast and Slow when discussing Robyn Dawes's research on the superiority of simple algorithms using a few equally-weighted predictive variables:

The surprising success of equal-weighting schemes has an important practical implication: it is possible to develop useful algorithms without prior statistical research. Simple equally weighted formulas based on existing statistics or on common sense are often very good predictors of significant outcomes. In a memorable example, Dawes showed that marital stability is well predicted by a formula: Frequency of lovemaking minus frequency of quarrels.

You don't want your result to be a negative number.

The important conclusion from this research is that an algorithm that is constructed on the back of an envelope is often good enough to compete with an optimally weighted formula, and certainly good enough to outdo expert judgment. This logic can be applied in many domains, ranging from the selection of stocks by portfolio managers to the choices of medical treatments by doctors or patients.
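
To make the equal-weighting idea concrete, here is a minimal sketch in Python (the cue ratings and the cutoff are hypothetical, not taken from the Oregon study or from Dawes): score each case on a handful of cues, weight them equally, and compare the total to a threshold.

    def equal_weight_score(cue_ratings):
        """Sum equally weighted cue ratings (e.g., each on a 1-7 scale)."""
        return sum(cue_ratings)

    # Hypothetical case rated on seven cues, echoing the ulcer study's setup
    case = [6, 5, 6, 4, 5, 6, 5]
    THRESHOLD = 35  # illustrative cutoff, not taken from the research

    label = "likely malignant" if equal_weight_score(case) >= THRESHOLD else "likely benign"
    print(label)

Back-of-the-envelope, as Kahneman says: no regression, no optimal weights, just a consistent sum applied the same way every time.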

Stock selection, certainly a “low validity environment”, is an excellent example of the phenomenon.

As John Bogle pointed out to the world in the 1970s, a point which has only strengthened with time, the vast majority of human stock-pickers cannot outperform a simple S&P 500 index fund, an investment fund that operates on strict algorithmic rules about which companies to buy and sell and in what quantities. The rules of the index aren't complex, and many people have tried to improve on them with less success than might be imagined.

***

Another interesting area where this holds is interviewing and hiring, a notoriously difficult “low-validity” environment. Even elite firms often don't do it that well, as has been well documented.

Fortunately, if we heed the psychologists' advice, there are rules for operating in a low-validity environment that work very well. In Thinking, Fast and Slow, Kahneman recommends fixing your hiring process by doing the following (or some close variant), in order to replicate the success of the algorithms:

Suppose you need to hire a sales representative for your firm. If you are serious about hiring the best possible person for the job, this is what you should do. First, select a few traits that are prerequisites for success in this position (technical proficiency, engaging personality, reliability, and so on). Don't overdo it — six dimensions is a good number. The traits you choose should be as independent as possible from each other, and you should feel that you can assess them reliably by asking a few factual questions. Next, make a list of questions for each trait and think about how you will score it, say on a 1-5 scale. You should have an idea of what you will call “very weak” or “very strong.”

These preparations should take you half an hour or so, a small investment that can make a significant difference in the quality of the people you hire. To avoid halo effects, you must collect the information on one trait at a time, scoring each before you move on to the next one. Do not skip around. To evaluate each candidate, add up the six scores. […] Firmly resolve that you will hire the candidate whose final score is the highest, even if there is another one whom you like better–try to resist your wish to invent broken legs to change the ranking. A vast amount of research offers a promise: you are much more likely to find the best candidate if you use this procedure than if you do what people normally do in such situations, which is to go into the interview unprepared and to make choices by an overall intuitive judgment such as “I looked into his eyes and liked what I saw.”
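
As a rough sketch of that procedure (the trait names and scores below are hypothetical; Kahneman specifies only the structure, not any code), the mechanical part is simply a sum of independent 1-5 ratings, followed by a firm commitment to the highest total:

    # Roughly six independent, reliably assessable traits (hypothetical list)
    TRAITS = ["technical proficiency", "engaging personality", "reliability",
              "communication", "work ethic", "judgment"]

    def total_score(ratings):
        """ratings: a dict of trait -> 1-5 score, collected one trait at a time."""
        return sum(ratings[t] for t in TRAITS)

    candidates = {
        "Candidate A": dict(zip(TRAITS, [4, 3, 5, 4, 4, 3])),  # invented scores
        "Candidate B": dict(zip(TRAITS, [3, 5, 3, 4, 3, 4])),
    }

    # Hire the highest total, even if intuition prefers someone else.
    best = max(candidates, key=lambda name: total_score(candidates[name]))
    print(best, total_score(candidates[best]))

The discipline is in the last step: letting the sum, not the gut feeling after the handshake, make the call.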

In the battle of man vs algorithm, unfortunately, man often loses. The promise of Artificial Intelligence is just that. So if we're going to be smart humans, we must learn to be humble in situations where our intuitive judgment simply is not as good as a set of simple rules.

Blog Posts, Book Reviews, and Abstracts: On Shallowness

We’re quite glad that you read Farnam Street, and we hope we’re always offering you a massive amount of value. (If not, email us and tell us what we can do more effectively.)

But there’s a message all of our readers should appreciate: Blog posts are not enough to generate the deep fluency you need to truly understand or get better at something. We offer a starting point, not an end point.

This goes just as well for book reviews, abstracts, cliff's notes, and a good deal of short-form journalism.

This is a hard message for some who want a shortcut. They want the “gist” and the “high level takeaways”, without doing the work or eating any of the broccoli. They think that’s all it takes: Check out a 5-minute read, and instantly their decision making and understanding of the world will improve right-quick. Most blogs, of course, encourage this kind of shallowness. Because it makes you feel that the whole thing is pretty easy.

Here’s the problem: The world is more complex than that. It doesn’t actually work this way. The nuanced detail behind every “high level takeaway” gives you the context needed to use it in the real world. The exceptions, the edge cases, and the contradictions.

Let me give you an example.

A high-level takeaway from reading Kahneman’s Thinking, Fast and Slow would be that we are subject to something he and Amos Tversky call the Representativeness Heuristic. We create models of things in our head, and then fit our real-world experiences to the model, often over-fitting drastically. A very useful idea.

However, that’s not enough. There are so many follow-up questions. Where do we make the most mistakes? Why does our mind create these models? Where is this generally useful? What are the nuanced examples of where this tendency fails us? And so on. Just knowing about the Heuristic, knowing that it exists, won't perform any work for you.

Or take the rise of the human species as laid out by Yuval Harari. It’s great to post on his theory: how myths laid the foundation for our success, how “natural” is probably a useless concept the way it’s typically used, and how biology is the great enabler.

But Harari’s book itself contains the relevant detail that fleshes all of this out. And further, his bibliography is full of resources that demand your attention to get even more backup. How did he develop that idea? You have to look to find out.

Why do all this? Because without the massive, relevant detail, your mind is built on a house of cards.

What Farnam Street and a lot of other great resources give you is something like a brief map of the territory.

Welcome to Colonial Williamsburg! Check out the re-enactors, the museum, and the theatre. Over there is the Revolutionary City. Gettysburg is 4 hours north. Washington D.C. is closer to 2.5 hours.

Great – now you have a lay of the land. Time to dig in and actually learn about the American Revolution. (This book is awesome, if you actually want to do that.)

Going back to Kahneman, one of his and Tversky’s great findings was the concept of the Availability Heuristic. Basically, the mind operates on what it has close at hand.

As Kahneman puts it, “An essential design feature of the associative machine is that it represents only activated ideas. Information that is not retrieved (even unconsciously) from memory might as well not exist. System 1 excels at constructing the best possible story that incorporates ideas currently activated, but it does not (cannot) allow for information it does not have.”

That means that in the moment of decision making, when you’re thinking hard on some complex problem you face, it’s unlikely that your mind is working all that successfully without the details. It doesn't have anything to draw on. It’d be like a chess player who read a book about great chess players, but who hadn’t actually studied all of their moves. Not very effective.

The great difficulty, of course, is that we lack the time to dig deep into everything. Opportunity costs and trade-offs are quite real.

That’s why you must develop excellent filters. What’s worth learning this deeply? We think it’s the first-principle style mental models. The great ideas from physical systems, biological systems, and human systems. The new-new thing you’re studying is probably either A. Wrong or B. Built on one of those great ideas anyways. Farnam Street, in a way, is just a giant filtering mechanism to get you started down the hill.

But don't stop there. Don't stop at the starting line. Resolve to increase your depth and stop thinking you can have it all in 5 minutes or less. Use our stuff, and whoever else's stuff you like, as an entrée to the real thing.

(P.S. If you need to learn how to focus, check this out; if you need to learn how to read more effectively, go with this.)

The Probability Distribution of the Future

The best colloquial definition of risk may be the following:

“Risk means more things can happen than will happen.”

We found it through the inimitable Howard Marks, but it's a quote from Elroy Dimson of the London Business School. Doesn't that capture it pretty well?

Another way to state it is: If there were only one thing that could happen, how much risk would there be, except in an extremely banal sense? You'd know the exact probability distribution of the future. If I told you there was a 100% probability that you'd get hit by a car today if you walked down the street, you simply wouldn't do it. You wouldn't call walking down the street a “risky gamble” right? There's no gamble at all.

But the truth is that in practical reality, there aren't many 100% situations to bank on. Way more things can happen than will happen. That introduces great uncertainty into the future, no matter what type of future you're looking at: An investment, your career, your relationships, anything.

How do we deal with this in a pragmatic way? The investor Howard Marks starts it this way:

Key point number one in this memo is that the future should be viewed not as a fixed outcome that’s destined to happen and capable of being predicted, but as a range of possibilities and, hopefully on the basis of insight into their respective likelihoods, as a probability distribution.

This is the most sensible way to think about the future: A probability distribution where more things can happen than will happen. Knowing that we live in a world of great non-linearity and with the potential for unknowable and barely understandable Black Swan events, we should never become too confident that we know what's in store, but we can also appreciate that some things are a lot more likely than others. Learning to adjust probabilities on the fly as we get new information is called Bayesian updating.
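
A minimal sketch of Bayesian updating (the prior and likelihoods are invented for illustration): start with a probability for some outcome, then revise it as evidence arrives, rather than flipping straight to certainty.

    def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
        """Return P(hypothesis | evidence) given a prior and two likelihoods."""
        numerator = prior * p_evidence_if_true
        return numerator / (numerator + (1 - prior) * p_evidence_if_false)

    # Hypothetical: we give a venture a 30% chance of success; a promising pilot
    # result is twice as likely to show up if the venture is actually sound.
    print(round(bayes_update(prior=0.30, p_evidence_if_true=0.8, p_evidence_if_false=0.4), 2))
    # 0.46 -- the estimate moves, but it does not jump to certainty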

But.

Although the future is certainly a probability distribution, Marks makes another excellent point in the wonderful memo above: In reality, only one thing will happen. So you must make the decision: Are you comfortable if that one thing happens, whatever it might be? Even if it only has a 1% probability of occurring? Echoing the first lesson of biology, Warren Buffett stated that “In order to win, you must first survive.” You have to live long enough to play out your hand.

Which leads to an important second point: Uncertainty about the future does not necessarily equate with risk, because risk has another component: Consequences. The world is a place where “bad outcomes” are only “bad” if you know their (rough) magnitude. So in order to think about the future and about risk, we must learn to quantify.

It's like the old saying (usually before something terrible happens): What's the worst that could happen? Let's say you propose to undertake a six-month project that will cost your company $10 million, and you know there's a reasonable probability that it won't work. Is that risky?

It depends on the consequences of losing $10 million, and the probability of that outcome. It's that simple! (Simple, of course, does not mean easy.) A company with $10 billion in the bank might consider that a very low-risk bet even if it only had a 10% chance of succeeding.

In contrast, a company with only $10 million in the bank might consider it a high-risk bet even if it had only a 10% chance of failing. Maybe five $2 million projects with uncorrelated outcomes would make more sense to the latter company.

In the real world, risk = probability of failure x consequences. That concept, however, can be looked at through many lenses. Risk of what? Losing money? Losing my job? Losing face? Those things need to be thought through. When we observe others being “too risk averse,” we might want to think about which risks they're truly avoiding. Sometimes risk is not only financial. 
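
A back-of-the-envelope sketch of that formula, using the hypothetical figures from the example above: expected loss is the probability of failure times the size of the loss, and it only means something when read against what you can afford.

    def expected_loss(p_failure, loss):
        """A rough rendering of risk ~ probability of failure x consequences."""
        return p_failure * loss

    project_cost = 10_000_000
    big_co_cash, small_co_cash = 10_000_000_000, 10_000_000  # figures from the example

    print(expected_loss(0.90, project_cost) / big_co_cash)    # 0.0009 -- a rounding error
    print(expected_loss(0.10, project_cost) / small_co_cash)  # 0.1 -- and the worst case is ruin

The expected losses differ, but the decisive difference is the consequence: one company can shrug off the worst case; the other cannot.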

***

Let's cover one more under-appreciated but seemingly obvious aspect of risk, also pointed out by Marks: Knowing the outcome does not teach you about the risk of the decision.

This is an incredibly important concept:

If you make an investment in 2012, you’ll know in 2014 whether you lost money (and how much), but you won’t know whether it was a risky investment – that is, what the probability of loss was at the time you made it.

To continue the analogy, it may rain tomorrow, or it may not, but nothing that happens tomorrow will tell you what the probability of rain was as of today. And the risk of rain is a very good analogue (although I’m sure not perfect) for the risk of loss.

How many times do we see this simple dictum violated? Knowing that something worked out, we argue that it wasn't that risky after all. But what if, in reality, we were simply fortunate? This is the Fooled by Randomness effect.

The way to think about it is the following: The worst thing that can happen to a young gambler is that he wins the first time he goes to the casino. He might convince himself he can beat the system.
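
A tiny simulation (the payoffs are invented) makes Marks's point concrete: one observed outcome tells you almost nothing about the probability of loss that existed when the bet was made.

    import random

    def one_bet():
        """A hypothetical bet: 60% chance of losing everything, 40% chance of doubling."""
        return 2.0 if random.random() < 0.4 else 0.0

    print(one_bet())  # one lucky draw can make a bad bet look 'safe'

    # ...but many draws reveal the distribution that was always there
    trials = [one_bet() for _ in range(100_000)]
    print(sum(trials) / len(trials))  # ~0.8: the bet loses about 20% on average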

The truth is that most times we don't know the probability distribution at all. Because the world is not a predictable casino game — an error Nassim Taleb calls the Ludic Fallacy — the best we can do is guess.

With intelligent estimations, we can work to get the rough order of magnitude right, understand the consequences if we're wrong, and always be sure to never fool ourselves after the fact.

If you're into this stuff, check out Howard Marks' memos to his clients, or check out his excellent book, The Most Important Thing. Nate Silver also has an interesting similar idea about the difference between risk and uncertainty. And lastly, another guy that understands risk pretty well is Jason Zweig, who we've interviewed on our podcast before.

***

If you liked this article you'll love:

Nassim Taleb on the Notion of Alternative Histories — “The quality of a decision cannot be solely judged based on its outcome.”

The Four Types of Relationships — As Seneca said, “Time discovers truth.”

Breaking the Rules: Moneyball Edition

Most of the book Simple Rules by Donald Sull and Kathleen Eisenhardt talks about identifying a problem area (or an area ripe for “simple rules”) and then walks you through creating your own set of rules. It's a useful mental process.

An ideal situation for simple rules is something repetitive, giving you constant feedback so you can course correct as you go. But what if your rules stop working and you need to start over completely?

Simple Rules recounts the well-known Moneyball tale in its examination of this process:

The story begins with Sandy Alderson. Alderson, a former Marine with no baseball background, became the A’s general manager in 1983. Unlike baseball traditionalists, Alderson saw scoring runs as a process, not an outcome, and imagined baseball as a factory with a flow of players moving along the bases. This view led Alderson and later his protege and replacement, Billy Beane, to the insight that most teams overvalue batting average (hits only) and miss the relevance of on-base percentage (walks plus hits) to keeping the runners moving. Like many insightful rules, this boundary rule of picking players with a high on-base percentage has subtle second- and third-order effects. Hitters with a high on-base percentage are highly disciplined (i.e., patient, with a good eye for strikes). This means they get more walks, and their reputation for discipline encourages pitchers to throw strikes, which are easier to hit. They tire out pitchers by making them throw more pitches overall, and disciplined hitting does not erode much with age. These and other insights are at the heart of what author Michael Lewis famously described as moneyball.
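
For the arithmetic behind that contrast, here is a rough sketch (the player's numbers are invented, and it uses the simplified "walks plus hits" framing from the passage; the official on-base percentage formula also counts hit-by-pitch and sacrifice flies):

    def batting_average(hits, at_bats):
        return hits / at_bats

    def on_base_pct(hits, walks, plate_appearances):
        # simplified "walks plus hits" version described in the passage
        return (hits + walks) / plate_appearances

    # Hypothetical player: a modest average, but very disciplined at drawing walks
    print(round(batting_average(hits=140, at_bats=520), 3))                  # 0.269
    print(round(on_base_pct(hits=140, walks=85, plate_appearances=610), 3))  # 0.369

A team scanning only batting average sees a mediocre hitter; a team scanning on-base percentage sees a player who keeps the runner conveyor belt moving.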

The Oakland A’s did everything right: they examined the issues, identified the areas that would most benefit from a set of simple rules, and implemented them. The problem was, the rules were easy to copy.

They were operating in a Red Queen Effect world where everyone around them was co-evolving, where running fast was just enough to get ahead temporarily, but not permanently. The Red Sox were the first and most successful club to copy the A's:

By 2004, a free-spending team, the Boston Red Sox, co-opted the A’s principles and won the World Series for the first time since 1918. In contrast, the A’s went into decline, and by 2007 they were losing more games than they were winning. Moneyball had struck out.

What can we do when the rules stop working? 

We must break them.

***

When the A's brought in Sandy Alderson, he was an outsider with no baseball background who could look at the problem in a different and new light. So how could that be replicated?

The team decided to bring in Farhan Zaidi as director of baseball operations in 2009. Zaidi had spent most of his life with a pretty healthy obsession with baseball, but he had a unique background: a PhD in behavioral economics.

He started on the job of breaking the old rules and crafting new ones. Like Andy Grove did once upon a time with Intel, Zaidi helped the team turn and face a new reality. Sull and Eisenhardt consider this a key trait:

To respond effectively to major change, it is essential to investigate the new situation actively, and create a reimagined vision that utilizes radically different rules.

The right choice is often to move to the new rules as quickly as possible. Performance will typically decline in the short run, but the transition to the new reality will be faster and more complete in the long run. In contrast, changing slowly often results in an awkward combination of the past and the future with neither fitting the other or working well.

Beane and Zaidi first did some house cleaning: They fired the team’s manager. Then, they began breaking the old Moneyball rules, things like avoiding drafting high-school players. They also decided to pay more attention to physical skills like speed and throwing.

In the short term, the team performed quite poorly as fan attendance showed a steady decline. Yet, once again, against all odds, the A’s finished first in their division in 2012. Their change worked. 

With a new set of Simple Rules, they became a dominant force in their division once again. 

Reflecting their formidable analytic skills, the A’s brass had a new mindset that portrayed baseball as a financial market rife with arbitrage possibilities and simple rules to match.

One was a how-to rule that dictated exploiting players with splits. Simply put, players with splits have substantially different performances in two seemingly similar situations. A common split is when a player hits very well against right-handed pitchers and poorly against left-handed pitchers, or vice versa. Players with splits are mediocre when they play every game, and are low paid. In contrast, most superstars play well regardless of the situation, and are paid handsomely for their versatility. The A’s insight was that when a team has a player who can perform one side of the split well and a different player who excels at the opposite split, the two positives can create a cheap composite player. So the A’s started using a boundary rule to pick players with splits and a how-to rule to exploit those splits with platooning – putting different players at the same position to take advantage of their splits against right- or left-handed pitching.
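
A quick sketch of that platoon arithmetic, with invented batting averages: two cheap players who each excel on one side of the split can combine into a single "composite" player who produces like a star.

    # Hypothetical batting averages against right- and left-handed pitching
    player_a = {"vs_rhp": 0.290, "vs_lhp": 0.210}  # starts only against righties
    player_b = {"vs_rhp": 0.215, "vs_lhp": 0.285}  # starts only against lefties

    share_vs_rhp = 0.70  # rough share of plate appearances against right-handers

    composite = player_a["vs_rhp"] * share_vs_rhp + player_b["vs_lhp"] * (1 - share_vs_rhp)
    print(round(composite, 3))  # ~0.29: star-level output from two modestly paid players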

If you’re reading this as a baseball fan, you’re probably thinking that exploiting splits isn’t anything new. So why did it have such an effect on their season? Well, no one had pushed it this hard before, which had some nuanced effects that might not have been immediately apparent.

For example, exploiting these splits keeps players healthier during the long 162-game season because they don’t play every day. The rule keeps everyone motivated because everyone has a role and plays often. It provides versatility when players are injured since players can fill in for each other.

They didn't stop there. Zaidi and Beane looked at the data and kept rolling out new simple rules that broke with their highly successful Moneyball past.

In 2013 they added a new boundary rule to the player-selection activity: pick fly-ball hitters, meaning hitters who tend to hit the ball in the air and out of the infield (in contrast with ground-ball hitters). Sixty percent of the A’s at-bats were by fly-ball hitters in 2013, the highest percentage in major-league baseball in almost a decade, and the A’s had the highest ratio of fly balls to ground balls, by far. Why fly-ball hitters?

Since one of ten fly balls is a home run, fly-ball hitters hit more home runs: an important factor in winning games. Fly-ball hitters also avoid ground-ball double plays, a rally killer if ever there was one. They are particularly effective against ground-ball pitchers because they tend to swing underneath the ball, taking away the advantage of those pitchers. In fact, the A’s fly-ball hitters batted an all-star caliber .302 against ground-ball pitchers in 2013 on their way to their second consecutive division title despite having the fourth-lowest payroll in major-league baseball.

Unfortunately, the new rules had a short-lived effectiveness: In 2014 the A's fell to second place, and they have been struggling for the last two seasons. Two Cinderella stories are a great achievement, but it’s hard to maintain that edge.

This wonderful demonstration of the Red Queen Effect in sports can be described as an “arms race.” As everyone tries to get ahead, a strange equilibrium is created by the simultaneous, continual improvement, and those with more limited resources must work even harder just to keep up as the pack moves ahead.

Even though they have adapted and created some wonderful “Simple Rules” in the past, the A's (and all of their competitors) must stay in the race in order to return to the top: No “rule” will allow them to rest on their laurels. Second Level Thinking and a little real world experience shows this to be true: Those that prosper consistently will think deeply, reevaluate, adapt, and continually evolve. That is the nature of a competitive world. 

Simple Rules for Business Strategy

The book Simple Rules by Donald Sull and Kathleen Eisenhardt has a very interesting chapter on strategy, which tries to answer the following question: How do you translate your broad objectives into a strategy that can provide guidelines for your employees from day to day?

It’s the last bit there which is particularly important — getting everyone on the same page. 

Companies don’t seem to have a problem creating broad objectives (which aren’t, by themselves, a strategy). Your company might not call them that; it might call them “mission statements” or simply “corporate goals.” They sound all well and good, but very little thought is given to how these lofty goals will actually be implemented.

As Sull and Eisenhardt put it: 

Developing a strategy and implementing it are often viewed as two distinct activities — first you come up with the perfect plan and then you worry about how to make it happen. This approach, common though it is, creates a disconnect between what a company is trying to accomplish and what employees do on a day-to-day basis.

The authors argue that companies can bridge this gap between strategic intent and actual implementation by following three steps:

  1. Figure out what will move the needles.
  2. Choose a bottleneck.
  3. Craft the rules.

1. Moving the Needles

The authors use a dual-needle metaphor to visualize corporate profits. They see it as two parallel needles: an upper needle representing revenues and a lower needle representing costs. The first critical step is to identify which actions will drive a wedge between the needles, increasing revenues and decreasing costs, and to sustain that wedge over time.

In other words, as simple as it sounds, we need an actual set of steps for getting from needles that run in parallel to needles that have been wedged apart.


What action will become the wedge that will move the needles?

The authors believe the best way to answer this is to sit down with your management team and ask them to work as a group to answer the following three questions:

  1. Who will we target as customers?
  2. What product or service will we offer?
  3. How will we provide this product at a profit?

When you're working out these answers, remember to use inversion as well.

Equally important are the choices on who not to serve and what not to offer.

Steve Jobs once pointed out that Apple was defined as much by what it didn't do as by what it did.

2. Bottlenecks

Speaking of inversion, in order to complete our goal we must also figure out what's holding us back from moving the needles — the bottlenecks standing in our way.

When it comes to implementing a strategy of simple rules, pinpointing the precise decision or activity where rules will have the most impact is half the battle. We use the term bottleneck to describe a specific activity or decision that hinders a company from moving the needles.

You may be surprised at the number of bottlenecks you come across, so you'll have to practice some “triage” of your issues, sorting what's important from what's really important.

The authors believe that the best bottlenecks to focus your attention on share three characteristics:

  1. They have a direct and significant impact on value creation.
  2. They should represent recurrent decisions (as opposed to ‘one off’ choices).
  3. They should be obstacles that arise when opportunities exceed available resources.

Once we’ve established what the bottlenecks are, it’s time to craft the rules, which will provide a framework for removing them.

3. Craft the Rules

Developing rules from the top down is a big mistake. When leaders rely on their gut instincts, they overemphasize recent events, build in their personal biases, and ignore data that doesn’t fit with their preconceived notions. It is much better to involve a team, typically ranging in size from four to eight members, and use a structured process to harness members’ diverse insights and points of view. When drafting the dream team to develop simple rules, it is critical to include some of the people who will be using them on a day-to-day basis.

This probably seems like common sense, but we’re guessing you have worked in at least one place where all information and new initiatives came from above, and much of it seemingly came out of nowhere because you likely weren’t involved.

In these situations it's very hard to get buy-in from the employees — yet they are the ones doing the work, implementing the rules. So we need to think about their involvement from the beginning.

Having users make the rules confers several advantages. First, they are closest to the facts on the ground and best positioned to codify experience into usable rules. Because they will make decisions based on the rules, they can strike the right balance between guidance and discretion, avoiding rules that are overly vague or restrictive. Users can also phrase the rules in language that resonates with them, rather than relying on business jargon. By actively participating in the process, users are more likely to buy into the final rules and therefore apply them in practice. Firsthand knowledge also makes it easier to explain the rules, and their underlying rationale, to colleagues who did not participate in the process.

It’s important to note here that this is a process, a process in which you are never done – there is no real finish line. You must always plan to learn and to iterate as you learn — keep changing the plan as new information comes in. Rigidity to a plan is not a virtue; learning and adapting are virtues.

***

There's nothing wrong with strategy. In fact, without a strategy, it's hard to figure out what to do; some strategy or another must guide your actions as an organization. But it's simply not enough: Detailed execution, at the employee level, is what gets things done. That's what the Simple Rules are all about.

Strategy, in our view, lives in the simple rules that guide an organization’s most important activities. They allow employees to make on-the-spot decisions and seize unexpected opportunities without losing sight of the big picture.

The process you use to develop simple rules matters as much as the rules themselves. Involving a broad cross-section of employees, for example, injects more points of view into the discussion, produces a shared understanding of what matters for value creation, and increases buy-in to the simple rules. Investing the time up front to clarify what will move the needles dramatically increases the odds that simple rules will be applied where they can have the greatest impact.

***

Still Interested? Read the book, or check out our other post where we cover the details of creating your simple rules.

Moving the Finish Line: The Goal Gradient Hypothesis

Imagine a runner in an Olympic race. He’s competing in the 1600 meter run.

The first two laps he runs at a steady but hard pace, trying to keep himself consistently near the head, or at least the middle, of the pack, hoping not to fall too far behind while also conserving energy for the whole race.

About 800 meters in, he feels himself start to fatigue and slow. At 1000 meters, he feels himself consciously expending less energy. At 1200, he’s convinced that he didn’t train enough.

Now watch him approach the last 100 meters, the “mad dash” for the finish. He’s been running what would be an all-out sprint to us mortals for 1500 meters, and yet what happens now, as he feels himself neck and neck with his competitors, the finish line in sight?

He speeds up. That energy drag is done. The goal is right there, and all he needs is one last push. So he pushes.

This is called the Goal Gradient Effect, or more precisely, the Goal Gradient Hypothesis. Its effect on biological creatures is not just a feeling, but a real and measurable thing.

***

The first person to try explaining the goal gradient hypothesis was an early behavioural psychologist named Clark L. Hull.

When it came to humans, as with other animals, Hull was a pretty hardcore “behaviourist”, thinking that human behaviour could eventually be reduced to mathematical prediction based on rewards and conditioning. As insane as this sounds now, he had a neat mathematical formula for human behaviour:

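(The image of the formula hasn't survived here. Hull's best-known formulation, and very likely the one pictured, expressed "reaction potential", the strength of a learned response, as a product of habit strength and motivational factors:)

    {}_{s}E_{r} = {}_{s}H_{r} \times D \times K \times V

where sEr is reaction potential, sHr is habit strength (learning), D is drive, K is incentive motivation, and V is stimulus intensity.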

Some of his ideas eventually came to be seen as extremely limiting Procrustean Bed type models of human behavior, but the Goal Gradient Hypothesis was replicated many times over the years.

Hull himself wrote papers with titles like The Goal-Gradient Hypothesis and Maze Learning to explore the effect of the idea in rats. As Hull put it, “...animals in traversing a maze will move at a progressively more rapid pace as the goal is approached.” Just like the runner above.

Most of Hull's work focused on animals rather than humans, showing somewhat unequivocally that, in the context of approaching a reward, the animals did seem to speed up as the goal approached, enticed by the end of the maze. The idea was, however, resurrected in the human realm in 2006 with a paper entitled The Goal-Gradient Hypothesis Resurrected: Purchase Acceleration, Illusionary Goal Progress, and Customer Retention.

The paper examined consumer behaviour in the “goal gradient” sense and found, alas, it wasn’t just rats that felt the tug of the “end of the race” — we do too. Examining a few different measurable areas of human behaviour, the researchers found that consumers would work harder to earn incentives as the goal came in sight, and that after the reward was earned, they'd slow down their efforts:

We found that members of a café RP accelerated their coffee purchases as they progressed toward earning a free coffee. The goal-gradient effect also generalized to a very different incentive system, in which shorter goal distance led members to visit a song-rating Web site more frequently, rate more songs during each visit, and persist longer in the rating effort. Importantly, in both incentive systems, we observed the phenomenon of post-reward resetting, whereby customers who accelerated toward their first reward exhibited a slowdown in their efforts when they began work (and subsequently accelerated) toward their second reward. To the best of our knowledge, this article is the first to demonstrate unequivocal, systematic behavioural goal gradients in the context of the human psychology of rewards.

Fascinating.

***

If we’re to take the idea seriously, the Goal Gradient Hypothesis has some interesting implications for leaders and decision-makers.

The first and most important is probably that incentive structures should take the idea into account. This is a fairly intuitive (but often unrecognized) idea: Far-away rewards are much less motivating than near-term ones. Given the chance to earn $1,000 at the end of this month, and each month thereafter, or $12,000 at the end of the year, which would you be more likely to work hard for?

What if I pushed it back even more but gave you some “interest” to compensate: Would you work harder for the potential to earn $90,000 five years from now, or to earn $1,000 this month, followed by $1,000 the following month, and so on, every single month over that five-year period?
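
The raw arithmetic, using those hypothetical figures, favors waiting; the Goal Gradient Hypothesis predicts that most of us will nonetheless work harder for the near-term payments.

    monthly_total = 1_000 * 12 * 5   # $60,000 over five years of monthly rewards
    lump_sum = 90_000                # paid once, five years out
    print(lump_sum - monthly_total)  # 30,000: a 50% premium offered just for waiting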

Companies like Nucor take the idea seriously: They pay bonuses to lower-level employees based on monthly production, not letting it wait until the end of the year. Essentially, the end of the maze happens every 30 days rather than once per year. The time between doing the work and the reward is shortened.

The other takeaway concerns consumer behaviour, as referenced in the marketing paper. If you’re offering rewards for a specific action from your customer, do you reward them sooner, or later?

The answer is almost always going to be “sooner”. In fact, the effect may be strong enough that you can get away with a smaller total reward by increasing its velocity.

Lastly, we might be able to harness the Hypothesis in our personal lives.

Let’s say we want to start reading more. Do we set a goal to read 52 books this year and hold ourselves accountable, or to read 1 book a week? What about 25 pages per day?

Not only does moving the finish line closer tend to increase our motivation, but we repeatedly prove to ourselves that we’re capable of accomplishing our goals. This is classic behavioural psychology: Instant rewards rather than delayed. (Even if they’re psychological.) Not only that, but it forces us to avoid procrastination — leaving 35 books to be read in the last two months of the year, for example.

Those three seem like useful lessons, but here’s a challenge: Try synthesizing a new rule or idea of your own, combining the Goal Gradient Effect with at least one other psychological principle, and start testing it out in your personal life or in your organization. Don’t let useful nuggets sit around; instead, start eating the broccoli.