Everybody’s An Expert

A reader recently passed along a link to Louis Menand’s 2005 New Yorker review of Philip Tetlock’s book, “Expert Political Judgment: How Good Is It? How Can We Know?” While I’ve read the book, I’d never read Menand’s excellent review.

Prediction is one of the pleasures of life. Conversation would wither without it. “It won’t last. She’ll dump him in a month.” If you’re wrong, no one will call you on it, because being right or wrong isn’t really the point. The point is that you think he’s not worthy of her, and the prediction is just a way of enhancing your judgment with a pleasant prevision of doom. Unless you’re putting money on it, nothing is at stake except your reputation for wisdom in matters of the heart. If a month goes by and they’re still together, the deadline can be extended without penalty. “She’ll leave him, trust me. It’s only a matter of time.” They get married: “Funny things happen. You never know.” You still weren’t wrong. Either the marriage is a bad one—you erred in the right direction—or you got beaten by a low-probability outcome.

It is the somewhat gratifying lesson of Philip Tetlock’s new book, “Expert Political Judgment: How Good Is It? How Can We Know?” (Princeton; $35), that people who make prediction their business—people who appear as experts on television, get quoted in newspaper articles, advise governments and businesses, and participate in punditry roundtables—are no better than the rest of us. When they’re wrong, they’re rarely held accountable, and they rarely admit it, either. They insist that they were just off on timing, or blindsided by an improbable event, or almost right, or wrong for the right reasons. They have the same repertoire of self-justifications that everyone has, and are no more inclined than anyone else to revise their beliefs about the way the world works, or ought to work, just because they made a mistake. No one is paying you for your gratuitous opinions about other people, but the experts are being paid, and Tetlock claims that the better known and more frequently quoted they are, the less reliable their guesses about the future are likely to be. The accuracy of an expert’s predictions actually has an inverse relationship to his or her self-confidence, renown, and, beyond a certain point, depth of knowledge. People who follow current events by reading the papers and newsmagazines regularly can guess what is likely to happen about as accurately as the specialists whom the papers quote. Our system of expertise is completely inside out: it rewards bad judgments over good ones.

One of Tetlock’s counter-intuitive findings is that specialists are not significantly more reliable than non-specialists in predicting what is going to happen, even in the field they study.

Knowing a little might make someone a more reliable forecaster, but Tetlock found that knowing a lot can actually make a person less reliable. “We reach the point of diminishing marginal predictive returns for knowledge disconcertingly quickly,” he reports. “In this age of academic hyperspecialization, there is no reason for supposing that contributors to top journals—distinguished political scientists, area study specialists, economists, and so on—are any better than journalists or attentive readers of the New York Times in ‘reading’ emerging situations.” And the more famous the forecaster the more overblown the forecasts. “Experts in demand,” Tetlock says, “were more overconfident than their colleagues who eked out existences far from the limelight.”

Experts get beaten by simple statistical formulas. But it’s not hopeless. There are things we can do to make better predictions, and it turns out that how we think matters more than what we think.

Tetlock illustrates the difference with Isaiah Berlin’s metaphor of the hedgehog and the fox, which Berlin borrowed from Archilochus for his essay on Tolstoy, “The Hedgehog and the Fox.” Tetlock writes:

Low scorers look like hedgehogs: thinkers who “know one big thing,” aggressively extend the explanatory reach of that one big thing into new domains, display bristly impatience with those who “do not get it,” and express considerable confidence that they are already pretty proficient forecasters, at least in the long term. High scorers look like foxes: thinkers who know many small things (tricks of their trade), are skeptical of grand schemes, see explanation and prediction not as deductive exercises but rather as exercises in flexible “ad hocery” that require stitching together diverse sources of information, and are rather diffident about their own forecasting prowess.

One summary of Tetlock’s findings puts it this way:

The aggregate success rate of Foxes is significantly greater, Tetlock found, especially in short-term forecasts. And Hedgehogs routinely fare worse than Foxes, especially in long-term forecasts. They even fare worse than normal attention-paying dilettantes, apparently blinded by their extensive expertise and beautiful theory. Furthermore, Foxes win not only in the accuracy of their predictions but also in the accuracy of the likelihood they assign to their predictions; in this they are closer to the admirable discipline of weather forecasters.

The value of Hedgehogs is that they occasionally get right the farthest-out predictions: civil war in Yugoslavia, Saddam’s invasion of Kuwait, the collapse of the Internet Bubble. But that comes at the cost of a great many wrong far-out predictions: Dow 36,000, global depression, nuclear attack by developing nations.

Hedgehogs annoy only their political opposition, while Foxes annoy across the political spectrum, in part because the smartest Foxes cherry-pick idea fragments from the whole array of Hedgehogs.

Bottom line… The political expert who bores you with a cloud of “howevers” is probably right about what’s going to happen. The charismatic expert who exudes confidence and has a great story to tell is probably wrong.
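
The “accuracy of the likelihood they assign to their predictions” is what forecasting researchers call calibration, and Tetlock measures it with Brier scores: the average squared gap between the probability a forecaster quoted and what actually happened. Here is a minimal sketch of how that scoring works; the forecasts and outcomes are invented for illustration, and only the scoring rule itself comes from the forecasting literature.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between quoted probabilities and what happened.
    0.0 is perfect; 0.25 is what always saying 50% earns; higher is worse."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A "fox" hedges with moderate probabilities; a "hedgehog" goes all-in.
outcomes = [1, 0, 0, 1, 0]            # 1 = the event happened
fox      = [0.7, 0.3, 0.2, 0.6, 0.4]
hedgehog = [1.0, 0.0, 0.9, 1.0, 0.9]  # confident, and wrong on the 3rd and 5th calls

print(f"fox:      {brier_score(fox, outcomes):.3f}")       # 0.108 — lower is better
print(f"hedgehog: {brier_score(hedgehog, outcomes):.3f}")  # 0.324 — punished for misplaced confidence
```

Scored this way, the fox’s hedged calls beat the hedgehog’s all-or-nothing ones even though both called the same events, which is exactly the pattern the summary above describes.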

“The upside of being a hedgehog,” Menand concludes, “is that when you’re right you can be really and spectacularly right. Great scientists, for example, are often hedgehogs.”