Gradually Getting Closer to the Truth

You can use a big idea without a physics-like need for exact precision. The key to remember is moving closer to reality by updating.

Consider this excerpt from Philip Tetlock and Dan Gardner in Superforecasting:

The superforecasters are a numerate bunch: many know about Bayes' theorem and could deploy it if they felt it was worth the trouble. But they rarely crunch the numbers so explicitly. What matters far more to the superforecasters than Bayes' theorem is Bayes' core insight of gradually getting closer to the truth by constantly updating in proportion to the weight of the evidence.

So they know the numbers. This numerate filter is the second of Garrett Hardin's three filters for thinking through problems.

Hardin writes:

The numerate temperament is one that habitually looks for approximate dimensions, ratios, proportions, and rates of change in trying to grasp what is going on in the world.

[…]

Just as “literacy” is used here to mean more than merely reading and writing, so also will “numeracy” be used to mean more than measuring and counting. Examination of the origins of the sciences shows that many major discoveries were made with very little measuring and counting. The attitude science requires of its practitioners is respect, bordering on reverence, for ratios, proportions, and rates of change.

Rough and ready back-of-the-envelope calculations are often sufficient to reveal the outline of a new and important scientific discovery … In truth, the essence of many of the major insights of science can be grasped with no more than a child’s ability to measure, count, and calculate.

 

We can find another example in investing. Charlie Munger, commenting at the 1996 Berkshire Hathaway Annual Meeting, said: “Warren often talks about these discounted cash flows, but I’ve never seen him do one. If it isn’t perfectly obvious that it’s going to work out well if you do the calculation, then he tends to go on to the next idea.” Buffett retorted: “It's true. If (the value of a company) doesn't just scream out at you, it's too close.”
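To make the arithmetic concrete, here is a minimal back-of-the-envelope discounted cash flow sketched in Python. Every figure is invented for illustration; the point, in Munger's spirit, is that if a crude calculation like this doesn't make the answer obvious, more precision won't rescue the idea.

```python
# A rough, back-of-the-envelope discounted cash flow.
# All figures are hypothetical; the exercise is crude on purpose.

def dcf_value(cash_flows, discount_rate):
    """Present value of a series of annual future cash flows."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

# Hypothetical business: $100M of owner earnings, growing 5% a year, for 10 years.
flows = [100 * 1.05 ** t for t in range(10)]

print(f"Rough value of ten years of cash flows: ${dcf_value(flows, 0.10):,.0f}M")
# If the asking price isn't far below a crude estimate like this,
# the decision is "too close" and it's time to move on to the next idea.
```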

Precision is easy to teach, but it misses the point.

Philip Tetlock: Ten Commandments for Aspiring Superforecasters

The Knowledge Project interview with Philip Tetlock deconstructs our ability to make accurate predictions into specific components, drawing on what Tetlock learned through his work on The Good Judgment Project.

In Superforecasting: The Art and Science of Prediction, Tetlock and his co-author Dan Gardner set out to distill the ten key themes that have been “experimentally demonstrated to boost accuracy” in the real world.

1. Triage

Focus on questions where your hard work is likely to pay off. Don’t waste time either on easy “clocklike” questions (where simple rules of thumb can get you close to the right answer) or on impenetrable “cloud-like” questions (where even fancy statistical models can’t beat the dart-throwing chimp). Concentrate on questions in the Goldilocks zone of difficulty, where effort pays off the most.

For instance, don't ask, “Who will win the World Series in 2050?” That's unknowable and impossible to forecast. The question becomes more interesting when we bring it closer to home. Asking in April who will win the World Series in the upcoming season, and how much justifiable confidence we can have in that answer, is a different proposition. While our confidence in the answer is still low, it is far higher than anything we could muster for the 2050 winner. At worst, we can narrow the range of outcomes. This allows us to move back along the continuum from uncertainty to risk.

Certain classes of outcomes have well-deserved reputations for being radically unpredictable (e.g., oil prices, currency markets). But we usually don’t discover how unpredictable outcomes are until we have spun our wheels for a while trying to gain analytical traction. Bear in mind the two basic errors it is possible to make here. We could fail to try to predict the potentially predictable or we could waste our time trying to predict the unpredictable. Which error would be worse in the situation you face?

2. Break seemingly intractable problems into tractable sub-problems.

This is Fermi-style thinking. Enrico Fermi designed the first nuclear reactor. When he wasn't doing that, he loved to tackle challenging questions such as “How many piano tuners are there in Chicago?” At first glance this seems very difficult. Fermi started by decomposing the problem into smaller parts and sorting them into buckets of knowable and unknowable. By working at a problem this way, you expose what you don't know or, as Tetlock and Gardner put it, you “flush ignorance into the open.” It's better to air your assumptions and discover your errors quickly than to hide behind jargon and fog. Superforecasters are excellent at Fermi-izing, even when it comes to seemingly unquantifiable things like love.

The surprise is how often remarkably good probability estimates arise from a remarkably crude series of assumptions and guesstimates.
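To see what Fermi-izing looks like in practice, here is a minimal sketch of the classic piano-tuner decomposition in Python. Every input below is a guess, aired in the open so it can be challenged, which is precisely the point of the exercise.

```python
# A minimal Fermi-style decomposition of "How many piano tuners
# are there in Chicago?" Every input is a guesstimate, not a
# researched figure; the value is in exposing the assumptions.

population = 3_000_000          # people in Chicago (rough)
people_per_household = 2.5      # average household size (rough)
share_with_piano = 1 / 20       # households owning a piano (guess)
tunings_per_piano = 1           # tunings per piano per year (guess)
tunings_per_tuner = 2 * 5 * 50  # 2 a day, 5 days a week, 50 weeks a year

households = population / people_per_household
pianos = households * share_with_piano
annual_tunings = pianos * tunings_per_piano
tuners = annual_tunings / tunings_per_tuner

print(f"Estimated piano tuners in Chicago: {tuners:.0f}")  # ~120
```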

3. Strike the right balance between inside and outside views.

Echoing Michael Mauboussin, who cautioned that we should pay attention to what's the same, Tetlock and Gardner add a historical perspective:

Superforecasters know that there is nothing new under the sun. Nothing is 100% “unique.” Language purists be damned: uniqueness is a matter of degree. So superforecasters conduct creative searches for comparison classes even for seemingly unique events, such as the outcome of a hunt for a high-profile terrorist (Joseph Kony) or the standoff between a new socialist government in Athens and Greece’s creditors. Superforecasters are in the habit of posing the outside-view question: How often do things of this sort happen in situations of this sort?

The planning fallacy, our tendency to underestimate how long a project will take, is what happens when the inside view crowds out the outside view.

4. Strike the right balance between under- and overreacting to evidence.

Belief updating is to good forecasting as brushing and flossing are to good dental hygiene. It can be boring, occasionally uncomfortable, but it pays off in the long term. That said, don’t suppose that belief updating is always easy because it sometimes is. Skillful updating requires teasing subtle signals from noisy news flows—all the while resisting the lure of wishful thinking.

Savvy forecasters learn to ferret out telltale clues before the rest of us. They snoop for nonobvious lead indicators, about what would have to happen before X could occur, where X might be anything from an expansion of Arctic sea ice to a nuclear war on the Korean peninsula. Note the fine line here between picking up subtle clues before everyone else and getting suckered by misleading clues.

The key here is rational Bayesian updating of your beliefs. This is the same ethos behind Charlie Munger's advice to kill your best-loved ideas. The world doesn't work the way we want it to, but it does signal to us when things change. If we pay attention and adapt, we let the world do most of the work for us.
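For the numerate, here is a minimal sketch of that core insight in Python: Bayes' rule in its simplest form, applied to an invented scenario. The likelihoods are made up; what matters is that strongly diagnostic evidence moves the probability a lot, while weak evidence moves it only a little.

```python
# A sketch of Bayes' core insight: shift a probability in proportion
# to the weight (diagnosticity) of new evidence. Numbers are illustrative.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior probability after observing one piece of evidence."""
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator

p = 0.30  # prior: 30% chance the hypothesis is true

# Evidence twice as likely if the hypothesis is true: a modest nudge upward.
p = bayes_update(p, likelihood_if_true=0.6, likelihood_if_false=0.3)
print(f"After weak evidence:   {p:.2f}")   # ~0.46

# Evidence ten times as likely if true: a much larger, but still partial, shift.
p = bayes_update(p, likelihood_if_true=0.5, likelihood_if_false=0.05)
print(f"After strong evidence: {p:.2f}")   # ~0.90
```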

5. Look for the clashing causal forces at work in each problem.

For every good policy argument, there is typically a counterargument that is at least worth acknowledging. For instance, if you are a devout dove who believes that threatening military action never brings peace, be open to the possibility that you might be wrong about Iran. And the same advice applies if you are a devout hawk who believes that soft “appeasement” policies never pay off. Each side should list, in advance, the signs that would nudge them toward the other.

There are no paint-by-number rules here. Synthesis is an art that requires reconciling irreducibly subjective judgments. If you do it well, engaging in this process of synthesizing should transform you from a cookie-cutter dove or hawk into an odd hybrid creature, a dove-hawk, with a nuanced view of when tougher or softer policies are likelier to work.

If you really want to have fun at meetings (and simultaneously decrease your popularity with your bosses), start asking what would cause them to change their minds. Never forget that having an opinion is hard work. You really need to concentrate and wrestle with the problem.

6. Strive to distinguish as many degrees of doubt as the problem permits but no more.

This commandment could easily be called “nuance matters.” The more degrees of uncertainty you can distinguish, the better.

As in poker, you have an advantage if you are better than your competitors at separating 60/40 bets from 40/60—or 55/45 from 45/55. Translating vague-verbiage hunches into numeric probabilities feels unnatural at first but it can be done. It just requires patience and practice.
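A small, invented illustration of why those distinctions pay: the expected value of a hypothetical even-money bet at a few different probability estimates. Misreading 55/45 as 45/55 flips a profitable bet into a losing one.

```python
# Expected value of a hypothetical even-money bet (win $1 or lose $1)
# at various probability estimates. Purely illustrative numbers.

for p_win in (0.40, 0.45, 0.55, 0.60):
    ev = p_win * 1 + (1 - p_win) * (-1)
    print(f"P(win) = {p_win:.2f} -> expected value per $1 bet: {ev:+.2f}")

# P(win) = 0.40 -> -0.20   a clear pass
# P(win) = 0.45 -> -0.10   still a pass, but only if you can tell it from 0.55
# P(win) = 0.55 -> +0.10   a thin but real edge
# P(win) = 0.60 -> +0.20   a bet worth making
```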

7. Strike the right balance between under- and overconfidence, between prudence and decisiveness.

Superforecasters understand the risks both of rushing to judgment and of dawdling too long near “maybe.” They routinely manage the trade-off between the need to take decisive stands (who wants to listen to a waffler?) and the need to qualify their stands (who wants to listen to a blowhard?). They realize that long-term accuracy requires getting good scores on both calibration and resolution—which requires moving beyond blame-game ping-pong. It is not enough just to avoid the most recent mistake. They have to find creative ways to tamp down both types of forecasting errors—misses and false alarms—to the degree a fickle world permits such uncontroversial improvements in accuracy.
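Calibration and resolution are not vague aspirations; in the Good Judgment Project they are rolled up into Brier scores. Here is a minimal sketch of one common single-category form of the score, using made-up forecasts: the mean squared gap between your stated probabilities and what actually happened, so lower is better and 0 is perfect.

```python
# A minimal sketch of a Brier score, the kind of accuracy measure used
# to grade forecasters. Forecasts and outcomes below are made up.

def brier_score(forecasts, outcomes):
    """Mean squared error between probabilities and 0/1 outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

forecasts = [0.9, 0.7, 0.2, 0.6]   # stated probabilities that each event occurs
outcomes  = [1,   1,   0,   0]     # what actually happened

print(f"Brier score: {brier_score(forecasts, outcomes):.3f}")  # 0.125
```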

8. Look for the errors behind your mistakes but beware of rearview-mirror hindsight biases.

It's easy to justify or rationalize your failure. Don't. Own it and keep score with a decision journal. You want to learn where you went wrong and determine ways to get better. And don't just look at failures. Evaluate successes as well so you can determine when you were just plain lucky.

9. Bring out the best in others and let others bring out the best in you.

Master the fine art of team management, especially perspective taking (understanding the arguments of the other side so well that you can reproduce them to the other’s satisfaction), precision questioning (helping others to clarify their arguments so they are not misunderstood), and constructive confrontation (learning to disagree without being disagreeable). Wise leaders know how fine the line can be between a helpful suggestion and micromanagerial meddling or between a rigid group and a decisive one or between a scatterbrained group and an open-minded one.

10. Master the error-balancing bicycle.

Implementing each commandment requires balancing opposing errors. Just as you can’t learn to ride a bicycle by reading a physics textbook, you can’t become a superforecaster by reading training manuals. Learning requires doing, with good feedback that leaves no ambiguity about whether you are succeeding—“I’m rolling along smoothly!”—or whether you are failing—“crash!”

As with anything, doing more of it doesn't mean you're getting better at it. You need to do more than just go through the motions. The way to get better is deliberate practice.

And finally …

“It is impossible to lay down binding rules,” Helmuth von Moltke warned, “because two cases will never be exactly the same.” Guidelines (or maps) are the best we can do in a world where nothing represents the whole. As George Box said: “All models are wrong, but some are useful.”

***

Mark Steed, a former member of The Good Judgment Project, offered us 13 ways to make better decisions.

Philip Tetlock on The Art and Science of Prediction

This is the sixth episode of The Knowledge Project, a podcast aimed at acquiring wisdom through interviews with fascinating people to gain insights into how they think, live, and connect ideas.

***

On this episode, I'm happy to have Philip Tetlock, a professor at the University of Pennsylvania. He's the co-leader of The Good Judgment Project, a multi-year forecasting study. He's also the author of Superforecasting: The Art and Science of Prediction and Expert Political Judgment: How Good Is It? How Can We Know?

The subject of this interview is how we can get better at the art and science of prediction. We dive into what makes some people better at making predictions and how we can learn to improve our ability to guess the future. I hope you enjoy the conversation as much as I did.

***


“the truth is that prediction is hard, often impossible.”

Philip Tetlock, author of Expert Political Judgment, co-authors an interesting article in Foreign Policy.

Academic research suggests that predicting events five years into the future is so difficult that most experts perform only marginally better than dart-throwing chimps.

The best way to become a better-calibrated appraiser of long-term futures is to get in the habit of making quantitative probability estimates that can be objectively scored for accuracy over long stretches of time. Explicit quantification enables explicit accuracy feedback, which enables learning. This requires extraordinary organizational patience — an investment that may span decades — but the stakes are high enough to merit a long-term investment.

Still curious? Expert Political Judgment explores what constitutes good judgment in predicting future events, and looks at why experts are often wrong in their forecasts.

Generalists vs. Specialists (And the Specialist’s Dilemma)

Animal species reside on a scale with “generalist” on one end and “specialist” on the other. Specialists can live only in a narrow range of conditions: diet, climate, camouflage, etc. Generalists are able to survive a wide variety of conditions and changes in the environment: food, climate, predators, etc.

Specialists thrive when conditions are just right. They fulfill a niche and are very effective at competing with other organisms. They have good mechanisms for coping with “known” risks. But when the specific conditions change, they are much more likely to go extinct. Generalists respond much better to change and uncertainty. These species usually survive for very long periods because they deal with unanticipated risks better. They have very coarse behavior: they eat any food available, survive in many climates, and use simple mechanisms to defend against a wide range of predators. But unlike specialists, they don’t maximize for their current environment, because they don’t fill a niche where they could be more successful. It’s tough being a generalist—there’s more competition.

An environment with more competition breeds more specialists. Rainforests have huge diversity and competition, and therefore many specialist species.

…This is what I call the Specialist’s Dilemma. The stronger your competitive position, the more vulnerable you are to eventually being disrupted and replaced.

Hrm. This reminds me of the (artificial) distinction between foxes and hedgehogs.

Among the surviving fragments of the Greek poet Archilochus is one that says, ‘The fox knows many things, but the hedgehog knows one big thing.'

In The Hedgehog and the Fox, Isaiah Berlin writes:

Scholars have differed about the correct interpretation of these dark words, which may mean no more than that the fox, for all his cunning, is defeated by the hedgehog's one defence. But, taken figuratively, the words can be made to yield a sense in which they mark one of the deepest differences which divide writers and thinkers, and, it may be, human beings in general. For there exists a great chasm between those, on one side, who relate everything to a single central vision, one system, less or more coherent or articulate, in terms of which they understand, think and feel—a single, universal, organizing principle in terms of which alone all that they are and say has significance—and, on the other side, those who pursue many ends, often unrelated and even contradictory, connected, if at all, only in some de facto way, for some psychological or physiological cause, related by no moral or aesthetic principle.

Philip Tetlock, author of Expert Political Judgment: How Good Is It? How Can We Know?, researched whether foxes or hedgehogs are better able to make predictions. He concludes that we should maintain serious skepticism about experts' ability to predict the future. Tetlock says, “There's quite a range. Some experts are so out of touch with reality, they're borderline delusional. Other experts are only slightly out of touch. And a few experts are surprisingly nuanced and well-calibrated.” So what distinguishes the impressive from the out of touch? How they think. Tetlock dubbed his experts “foxes” and “hedgehogs,” and the data couldn't be clearer: in terms of predictions, foxes beat hedgehogs.*

Berlin warns, “Of course, like all over-simple classifications of this type, the dichotomy becomes, if pressed, artificial, scholastic and ultimately absurd. But if it is not an aid to serious criticism, neither should it be rejected as being merely superficial or frivolous; like all distinctions which embody any degree of truth, it offers a point of view from which to look and compare, a starting-point for genuine investigation.”

Continue Reading Generalists vs. Specialists.

* There is a good discussion of Tetlock in Dan Gardner's book Future Babble (pp. 25-27); see my review here.