Tag: Predictions

Philip Tetlock on The Art and Science of Prediction


This is the sixth episode of The Knowledge Project, a podcast aimed at acquiring wisdom through interviews with fascinating people, exploring how they think, live, and connect ideas.

***

On this episode, I'm happy to have Philip Tetlock, professor at the University of Pennsylvania. He's a co-leader of The Good Judgment Project, a multi-year forecasting study, and the author of Superforecasting: The Art and Science of Prediction and Expert Political Judgment: How Good Is It? How Can We Know?

The subject of this interview is how we can get better at the art and science of prediction. We dive into what makes some people better at making predictions and how we can improve our ability to forecast the future. I hope you enjoy the conversation as much as I did.

***


The Wisdom of Crowds and The Expert Squeeze

As networks harness the wisdom of crowds, the ability of experts to add value in their predictions is steadily declining. This is the expert squeeze.

In Think Twice: Harnessing the Power of Counterintuition, Michael Mauboussin, the first guest on my podcast, The Knowledge Project, explains the expert squeeze and its implications for how we make decisions.

As networks harness the wisdom of crowds and computing power grows, the ability of experts to add value in their predictions is steadily declining. I call this the expert squeeze, and evidence for it is mounting. Despite this trend, we still pine for experts— individuals with special skill or know-how— believing that many forms of knowledge are technical and specialized. We openly defer to people in white lab coats or pinstripe suits, believing they hold the answers, and we harbor misgivings about computer-generated outcomes or the collective opinion of a bunch of tyros.

The expert squeeze means that people stuck in old habits of thinking are failing to use new means to gain insight into the problems they face. Knowing when to look beyond experts requires a totally fresh point of view, and one that does not come naturally. To be sure, the future for experts is not all bleak. Experts retain an advantage in some crucial areas. The challenge is to know when and how to use them.

The Value of Experts

So how can we manage this in our role as decision maker? The first step is to classify the problem.

The figure above, The Value of Experts, helps guide this process. The second column from the left covers problems that have rules-based solutions with limited possible outcomes. Here, someone can investigate the problem based on past patterns and write down rules to guide decisions. Experts do well with these tasks, but once the principles are clear and well defined, computers are cheaper and more reliable. Think of tasks such as credit scoring or simple forms of medical diagnosis. Experts agree about how to approach these problems because the solutions are transparent and for the most part tried and true.

[…]

Now let’s go to the opposite extreme, the column on the far right that deals with probabilistic fields with a wide range of outcomes. Here there are no simple rules. You can only express possible outcomes in probabilities, and the range of outcomes is wide. Examples include economic and political forecasts. The evidence shows that collectives outperform experts in solving these problems.

[…]

The middle two columns are the remaining province for experts. Experts do well with rules-based problems with a wide range of outcomes because they are better than computers at eliminating bad choices and making creative connections between bits of information.

Once you've classified the problem, you can turn to the best method for solving it.
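
To make the classification concrete, here is a minimal sketch of how Mauboussin's two dimensions, whether a problem is rules-based and how wide its range of outcomes is, map to the approach the excerpt recommends. The function and the exact mapping are my own illustration, not something taken from the book.

```python
# Illustrative sketch (not from Mauboussin's book): mapping the two
# dimensions from the figure -- rules-based or not, narrow or wide range
# of outcomes -- to the decision aid the excerpt suggests.

def suggest_approach(rules_based: bool, wide_outcomes: bool) -> str:
    """Return the decision aid suggested for a given problem type."""
    if rules_based and not wide_outcomes:
        # Clear rules, few outcomes: codify the rules (e.g., credit scoring).
        return "computer / algorithm"
    if not rules_based and wide_outcomes:
        # Probabilistic with a wide range: aggregate many independent views
        # (e.g., economic and political forecasts).
        return "collective / crowd"
    if rules_based and wide_outcomes:
        # Rules exist but outcomes branch widely: experts prune bad options
        # and make creative connections between bits of information.
        return "expert"
    # Probabilistic but with a narrow range of outcomes: experts can help,
    # but their estimates should be checked against base rates.
    return "expert, checked against base rates"

if __name__ == "__main__":
    print(suggest_approach(rules_based=True, wide_outcomes=False))   # computer / algorithm
    print(suggest_approach(rules_based=False, wide_outcomes=True))   # collective / crowd
```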

… computers and collectives remain underutilized guides for decision making across a host of realms including medicine, business, and sports. That said, experts remain vital in three capacities. First, experts must create the very systems that replace them. … Of course, the experts must stay on top of these systems, improving the market or equation as need be.

Next, we need experts for strategy. I mean strategy broadly, including not only day-to-day tactics but also the ability to troubleshoot by recognizing interconnections as well as the creative process of innovation, which involves combining ideas in novel ways. Decisions about how best to challenge a competitor, which rules to enforce, or how to recombine existing building blocks to create novel products or experiences are jobs for experts.

Finally, we need people to deal with people. A lot of decision making involves psychology as much as it does statistics. A leader must understand others, make good decisions, and encourage others to buy in to the decision.

So what practical steps can you take to make the expert squeeze work for you instead of against you? Mauboussin offers three tips.

1. Match the problem you face with the most appropriate solution.

What we know is that experts do a poor job in many settings, suggesting that you should try to supplement expert views with other approaches.

2. Seek diversity.

(Philip) Tetlock’s work shows that while expert predictions are poor overall, some are better than others. What distinguishes predictive ability is not who the experts are or what they believe, but rather how they think. Borrowing from Archilochus— through Isaiah Berlin— Tetlock sorted experts into hedgehogs and foxes. Hedgehogs know one big thing and try to explain everything through that lens. Foxes tend to know a little about a lot of things and are not married to a single explanation for complex problems. Tetlock finds that foxes are better predictors than hedgehogs. Foxes arrive at their decisions by stitching “together diverse sources of information,” lending credence to the importance of diversity. Naturally, hedgehogs are periodically right— and often spectacularly so— but do not predict as well as foxes over time. For many important decisions, diversity is the key at both the individual and collective levels.

3. Use technology to sidestep the squeeze when possible.

Flooded with candidates and aware of the futility of most interviews, Google decided to create algorithms to identify attractive potential employees. First, the company asked seasoned employees to fill out a three-hundred-question survey, capturing details about their tenure, their behavior, and their personality. The company then compared the survey results to measures of employee performance, seeking connections. Among other findings, Google executives recognized that academic accomplishments did not always correlate with on-the-job performance. This novel approach enabled Google to sidestep problems with ineffective interviews and to start addressing the discrepancy.
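
As a rough illustration of the kind of analysis described above, the sketch below correlates made-up survey answers with made-up performance ratings to see which attributes track on-the-job success. The column names and numbers are invented; Google's actual data and method are not public in this form.

```python
# Hypothetical sketch of correlating survey answers with job performance.
# The attributes, column names, and data are invented for illustration.
import pandas as pd

employees = pd.DataFrame({
    "gpa":              [3.4, 3.5, 3.1, 3.6, 3.8, 3.2],
    "years_experience": [1,   6,   3,   8,   2,   5],
    "ownership_score":  [2,   5,   4,   5,   3,   4],   # survey item: sense of ownership, 1-5
    "performance":      [2.9, 4.6, 3.9, 4.5, 3.2, 4.0], # manager rating, 1-5
})

# Correlate each candidate attribute with measured performance.
correlations = employees.drop(columns="performance").corrwith(employees["performance"])
print(correlations.sort_values(ascending=False))
# In this toy data, the survey-based ownership score tracks performance far
# more closely than gpa does, echoing the finding that academic
# accomplishments did not always correlate with on-the-job performance.
```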

Learning when experts help and when they hurt can go a long way toward avoiding stupidity. Start by identifying the type of problem you're facing, then weigh the pros and cons of the various approaches to solving it.

Still curious? Follow up by reading Generalists vs. Specialists, Think Twice: Harnessing the Power of Counterintuition, and reviewing the work of Philip Tetlock on why how you think matters more than what you think.

Daniel Kahneman’s Favorite Approach For Making Better Decisions


Bob Sutton's book, Scaling Up Excellence: Getting to More Without Settling for Less, contains an interesting section towards the end on looking back from the future, which talks about “a mind trick that goads and guides people to act on what they know and, in turn, amplifies their odds of success.”

We build on Nobel winner Daniel Kahneman's favorite approach for making better decisions. This may sound weird, but it's a form of imaginary time travel.

It's called the premortem. And, while it may be Kahneman's favorite, he didn't come up with it. A fellow by the name of Gary Klein invented the premortem technique.

A premortem works something like this. When you're on the verge of making a decision, not just any decision but a big decision, you call a meeting. At the meeting you ask each member of your team to imagine that it's a year later.

Split them into two groups. Have one group imagine that the effort was an unmitigated disaster. Have the other pretend it was a roaring success. Ask each member to work independently and generate reasons, or better yet, write a story, about why the success or failure occurred. Instruct them to be as detailed as possible, and, as Klein emphasizes, to identify causes that they wouldn't usually mention “for fear of being impolite.” Next, have each person in the “failure” group read their list or story aloud, and record and collate the reasons. Repeat this process with the “success” group. Finally, use the reasons from both groups to strengthen your … plan. If you uncover overwhelming and impassable roadblocks, then go back to the drawing board.

Premortems encourage people to use “prospective hindsight,” or, more accurately, to talk in “future perfect tense.” Instead of thinking, “we will devote the next six months to implementing a new HR software initiative,” for example, we travel to the future and think “we have devoted six months to implementing a new HR software package.”

You imagine that a concrete success or failure has occurred and look “back from the future” to tell a story about the causes.

Pretending that a success or failure has already occurred—and looking back and inventing the details of why it happened—seems almost absurdly simple. Yet renowned scholars including Kahneman, Klein, and Karl Weick supply compelling logic and evidence that this approach generates better decisions, predictions, and plans. Their work suggests several reasons why. …

1. This approach helps people overcome blind spots

As … upcoming events become more distant, people develop more grandiose and vague plans and overlook the nitty-gritty daily details required to achieve their long-term goals.

2. This approach helps people bridge short-term and long-term thinking

Weick argues that this shift is effective, in part, because it is far easier to imagine the detailed causes of a single outcome than to imagine multiple outcomes and try to explain why each may have occurred. Beyond that, analyzing a single event as if it has already occurred rather than pretending it might occur makes it seem more concrete and likely to actually happen, which motivates people to devote more attention to explaining it.

3. Looking back dampens excessive optimism

As Kahneman and other researchers show, most people overestimate the chances that good things will happen to them and underestimate the odds that they will face failures, delays, and setbacks. Kahneman adds that “in general, organizations really don't like pessimists” and that when naysayers raise risks and drawbacks, they are viewed as “almost disloyal.”

Max Bazerman, a Harvard professor, believes that we're less prone to irrational optimism when we predict the fate of projects that are not our own. For example, when it comes to friends' home renovation projects, most people estimate the costs will run 25 to 50 percent over budget. When it comes to our own projects, however, we assume they will be “completed on time and near the project costs.”

4. A premortem challenges the illusion of consensus

Most of the time, not everyone on a team agrees with the course of action. Even when there is enough cognitive diversity in the room, people still keep their mouths shut, because people in power tend to reward those who agree with them while punishing those who have the courage to speak up with a dissenting view.

The resulting corrosive conformity is evident when people don't raise private doubts, known risks, and inconvenient facts. In contrast, as Klein explains, a premortem can create a competition where members feel accountable for raising obstacles that others haven't. “The whole dynamic changes from trying to avoid anything that might disrupt harmony to trying to surface potential problems.”

Nate Silver: Confidence Kills Predictions

Best known for accurate election predictions, statistician Nate Silver is also the author of The Signal and the Noise: Why So Many Predictions Fail—But Some Don’t. Heather Bell, Managing Editor of Journal of Indexes, recently spoke with Silver.

IU: What do you see as the common theme among bad predictions? What most often leads people astray?
Silver: A lot of it is overconfidence. People tend to underestimate what the uncertainty that is intrinsic to a problem actually is. If you have someone estimate what they think a confidence interval is that’s supposed to cover 90 percent of all outcomes, it usually only covers 50 percent. You have upside outcomes and downside outcomes in the market certainly more often than people realize.

There are a variety of reasons for this. Part of it is that we can sometimes get stuck in the recent past and examples that are most familiar to us, kind of what Daniel Kahneman called “the availability heuristic,” where we assume that the current trend will always perpetuate itself, when actually it can be an anomaly or a fluke, or where we always think that the period we’re living through is the “signal,” so to speak. That’s often not true—sometimes you’re living in the outlier period, like when you have a housing bubble period that you haven’t historically had before.

Overconfidence is the core linkage between most of the failures of predictions that we’ve looked at. Obviously, you can look at that in a more technical sense and see where sometimes people are fitting models where they don’t have as much data as they think, but the root of it comes down to a failure to understand that it’s tough to be objective and that we often come at a problem with different biases and perverse incentives—and if we don’t check those, we tend to get ourselves into trouble.
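
Silver's point about “90 percent” intervals covering only 50 percent of outcomes is a claim about calibration, and it is easy to see how such a gap arises. The sketch below simulates an overconfident forecaster whose intervals assume far less variability than actually exists; the numbers are illustrative, not Silver's data.

```python
# Minimal calibration check with simulated data (not Silver's data):
# the forecaster states 90% intervals, but the intervals are too narrow,
# so actual coverage comes out far below 90%.
import random

random.seed(42)
N = 10_000
hits = 0
for _ in range(N):
    outcome = random.gauss(0, 1.0)   # what actually happens (true sd = 1.0)
    # An overconfident forecaster builds a "90%" interval as if the
    # standard deviation were 0.4 instead of 1.0.
    half_width = 1.645 * 0.4         # 90% half-width for the *assumed* sd
    if -half_width <= outcome <= half_width:
        hits += 1

print(f"Stated coverage: 90%   Actual coverage: {hits / N:.0%}")
# With the variability understated by this much, actual coverage lands
# near 50%, matching the pattern Silver describes.
```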

IU: What standards or conditions must be met, in your opinion, for something to be considered “predictable”?
Silver: I tend not to think in terms of black and white absolutes. There are two ways to define “predictable,” I’d say. One is by asking, How well are we able to model the system? The other is more of a cosmic predictability: How intrinsically random is something over the long run?

I look at baseball as an example. Even the best teams only win about two-thirds of their games. Even the best hitters only get on base about 40 percent of the time. In that sense, baseball is highly unpredictable. In another sense though, baseball is very easy to measure relative to a lot of other things. It’s easy to set up models for it, and the statistics are of very high quality. A lot of smart people have worked on the problem. As a result, we are able to measure and quantify the uncertainty pretty accurately. We still can’t predict who’s going to win every game, but we are doing a pretty good job with that. Things are predictable in theory, but our capabilities are not nearly as strong.

Predictability is a tricky question, but I always say we almost always have some notion of what’s going to happen next, but it’s just never a perfect notion. The question is more, Where do you sit along that spectrum?

What Matters More in Decisions: Analysis or Process?

Think of the last major decision your company made.

Maybe it was an acquisition, a large purchase, or perhaps it was whether to launch a new product.

Odds are three things went into that decision: (1) It probably relied on the insights of a few key executives; (2) it involved some sort of fact gathering and analysis; and (3) it was likely enveloped in some sort of decision process—whether formal or informal—that translated the analysis into a decision.

Now how would you rate the quality of your organization's strategic decisions?

If you're like most executives, the answer wouldn't be positive:

In a recent McKinsey Quarterly survey of 2,207 executives, only 28 percent said that the quality of strategic decisions in their companies was generally good, 60 percent thought that bad decisions were about as frequent as good ones, and the remaining 12 percent thought good decisions were altogether infrequent.

How could it be otherwise?

Product launches are frequently behind schedule and over budget. Strategic plans often ignore even the anticipated response of competitors. Mergers routinely fail to live up to the promises made in press releases.

The persistence of these problems across time and organizations, both large and small, suggests that there is room to make better decisions.

Looking at how organizations make decisions is a good place to start if we're trying to improve the quality of decisions and remove cognitive biases.

While we often make decisions with our gut, those decisions leave us susceptible to biases. To counter gut decisions, many organizations gather data and analyze their decisions.

The widespread belief is that analysis reduces biases. But does it?

Is putting your faith in analysis any better than using your gut? What does the evidence say? Is there a better way?

Decisions: Analysis or Process

Dan Lovallo and Olivier Sibony set out to find out.

Lovallo is a professor at the University of Sydney and Sibony is a director at McKinsey & Company. Together they studied 1,048 “major” business decisions over five years. The results are surprising.

Most business decisions were not made on “gut calls” but rather on rigorous analysis.

In short, most people did all the legwork we think we're supposed to do: they delivered large quantities of detailed analysis.

Yet this wasn't enough. “Our research indicates that, contrary to what one might assume, good analysis in the hands of managers who have good judgment won’t naturally yield good decisions.”

***

These two quotes by Warren Buffett and Charlie Munger explain how analysis can easily go astray.

“I have no use whatsoever for projections or forecasts. They create an illusion of apparent precision. The more meticulous they are, the more concerned you should be. We never look at projections …”
— Warren Buffett

“[Projections] are put together by people who have an interest in a particular outcome, have a subconscious bias, and its apparent precision makes it fallacious. They remind me of Mark Twain's saying, ‘A mine is a hole in the ground owned by a liar.' Projections in America are often a lie, although not an intentional one, but the worst kind because the forecaster often believes them himself.”
— Charlie Munger

***

Lovallo and Sibony didn't only look at analysis; they also asked executives about the process.

Did they, for example, “explicitly explore and discuss major uncertainties or discuss viewpoints that contradicted the senior leader’s?”

So what matters more, process or analysis? After comparing the results, they determined that “process mattered more than analysis—by a factor of six.”

This finding does not mean that analysis is unimportant, as a closer look at the data reveals: almost no decisions in our sample made through a very strong process were backed by very poor analysis. Why? Because one of the things an unbiased decision-making process will do is ferret out poor analysis. The reverse is not true; superb analysis is useless unless the decision process gives it a fair hearing.
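
For a sense of how such a comparison can be run, the sketch below scores made-up decisions on process quality and analysis quality and asks how much of the variation in outcomes each explains. It is a generic illustration on synthetic data, constructed so the gap is roughly a factor of six, and is not Lovallo and Sibony's actual method or data.

```python
# Hypothetical sketch: comparing how much "process quality" vs. "analysis
# quality" explains decision outcomes. The data is synthetic and the weights
# are chosen for illustration; this is not the study's method or results.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000
process = rng.normal(size=n)    # quality of the decision process
analysis = rng.normal(size=n)   # quality of the analysis
# Construct outcomes in which process carries most of the weight.
outcome = 0.5 * process + 0.2 * analysis + rng.normal(scale=0.5, size=n)

def r_squared(x, y):
    """Share of variance in y explained by a single predictor x."""
    return np.corrcoef(x, y)[0, 1] ** 2

print(f"Variance explained by process:  {r_squared(process, outcome):.2f}")
print(f"Variance explained by analysis: {r_squared(analysis, outcome):.2f}")
# With these made-up weights, process explains roughly six times as much
# variance as analysis, mirroring the direction of the study's finding.
```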

To illustrate the weakness of how most organizations make decisions, Sibony used an interesting analogy: the legal system.

Imagine walking into a courtroom where the trial consists of a prosecutor presenting PowerPoint slides. In 20 pretty compelling charts, he demonstrates why the defendant is guilty. The judge then challenges some of the facts of the presentation, but the prosecutor has a good answer to every objection. So the judge decides, and the accused man is sentenced.

That wouldn’t be due process, right? So if you would find this process shocking in a courtroom, why is it acceptable when you make an investment decision? Now of course, this is an oversimplification, but this process is essentially the one most companies follow to make a decision. They have a team arguing only one side of the case. The team has a choice of what points it wants to make and what way it wants to make them. And it falls to the final decision maker to be both the challenger and the ultimate judge. Building a good decision-making process is largely ensuring that these flaws don’t happen.

Understanding biases doesn't make you immune to them. A disciplined decision process is the best place to improve the quality of decisions and guard against common decision-making biases.

Still curious? Read this next: A process to make better decisions.

The inspiration for this post comes from Chip and Dan Heath in Decisive.