Tag: Philip Tetlock

The Generalized Specialist: How Shakespeare, Da Vinci, and Kepler Excelled

“What do you want to be when you grow up?” Do you ever ask kids this question? Did adults ask you this when you were a kid?

Even if you managed to escape this question until high school, by the time you got there you were probably expected to answer it, if only to choose a college and a major. Maybe you took aptitude tests in high school, along with the standard academic ones. This is when the pressure to pick a path toward a job begins. Increasingly, the education system seems to want to shorten the time it takes for us to become productive members of the workforce, so instead of exploring more options, we are encouraged to start narrowing them.

Any field you go into, from finance to engineering, requires some degree of specialization. Once you land a job, the process of specialization only intensifies. You become a specialist in certain aspects of the organization you work for.

Then something happens. Maybe your specialty is no longer needed or gets replaced by technology. Or perhaps you get promoted. As you go up the ranks of the organization, your specialty becomes less and less important, and yet the tendency is to hold on to it longer and longer. If it’s the only subject or skill you know better than anything else, you tend to see it everywhere. Even where it doesn’t exist.

Every problem is a nail and you just happen to have a hammer.

Only this approach doesn’t work. Because you don’t know the big ideas from other disciplines, you start making decisions that don’t account for how the world really works. These decisions ripple outward, and you have to spend time correcting your mistakes. If you’re not careful about self-reflection, you won’t learn, and you’ll make some version of the same mistakes over and over.

Should we become specialists or polymaths? Is there a balance we should pursue?

There is no single answer.

The decision is personal. And most of the time we fail to see the life-changing implications of it. Whether we’re conscious of this or not, it’s also a decision we have to make and re-make over and over again. Every day, we have to decide where to invest our time — do we become better at what we do or learn something new?

There is another way to think about this question, though.

Around 2700 years ago, the Greek poet Archilochus wrote: “the fox knows many things; the hedgehog one big thing.” In the 1950s, philosopher Isaiah Berlin used that sentence as the basis of his essay “The Hedgehog and the Fox.” In it, Berlin divides great thinkers into two categories: hedgehogs, who have one perspective on the world, and foxes, who have many different viewpoints. Although Berlin later claimed the essay was not intended to be serious, it has become a foundational part of thinking about the distinction between specialists and generalists.

Berlin wrote that “…there exists a great chasm between those, on one side, who relate everything to a single central vision, one system … in terms of which they understand, think and feel … and, on the other hand, those who pursue many ends, often unrelated and even contradictory, connected, if at all, only in some de facto way.”

A generalist is a person who is a competent jack of all trades, with lots of divergent useful skills and capabilities. This is the handyman who can fix your boiler, unblock the drains, replace a door hinge, or paint a room. The general practitioner doctor whom you see for any minor health problem (and who refers you to a specialist for anything major). The psychologist who works with the media, publishes research papers, and teaches about a broad topic.

A specialist is someone with distinct knowledge and skills related to a single area. This is the cardiologist who spends their career treating and understanding heart conditions. The scientist who publishes and teaches about a specific protein for decades. The developer who works with a particular program.

In his original essay, Berlin writes that generalists, his foxes, “lead lives, perform acts and entertain ideas that are centrifugal rather than centripetal; their thought is scattered or diffused, moving on many levels, seizing upon the essence of a vast variety of experiences and objects for what they are in themselves, without, consciously or unconsciously, seeking to fit them into, or exclude them from, any one unchanging, all embracing … unitary inner vision.”

The generalist and the specialist are on the same continuum; there are degrees of specialization in a subject. There’s a difference between someone who specializes in teaching history and someone who specializes in teaching the history of the American Civil War, for example. Likewise, there is a spectrum for how generalized or specialized a certain skill is.

Some skills — like the ability to focus, to read critically, or to make rational decisions — are of universal value. Others are a little more specialized but can be used in many different careers. Examples of these skills would be design, project management, and fluency in a foreign language.

The distinction between generalists and specialists also appears in biology, where species are classified as one or the other, as with the hedgehog and the fox.

A generalist species can live in a range of environments, utilizing whatever resources are available. Often, these critters eat an omnivorous diet. Raccoons, mice, and cockroaches are generalists. They live all over the world and can eat almost anything. If a city is built in their habitat, then no problem; they can adapt.

A specialist species needs particular conditions to survive. In some cases, they are able to live only in a discrete area or eat a single food. Pandas are specialists, needing a diet of bamboo to survive. Specialist species can thrive if the conditions are correct. Otherwise, they are vulnerable to extinction.

The distinction between generalist and specialist species is useful as a point of comparison. Generalist animals (including humans) can be less efficient, yet they are less fragile amidst change. If you can’t adapt, changes become threats instead of opportunities.

While it’s not very glamorous to take career advice from a raccoon or a panda, we can learn something from them about the dilemmas we face. Do we want to be like a raccoon, able to survive anywhere, although never maximizing our potential in a single area? Or like a panda, unstoppable in the right context, but struggling in an inappropriate one?

Costs and Benefits

Generalists have the advantage of interdisciplinary knowledge, which fosters creativity and a firmer understanding of how the world works. They have a better overall perspective and can generally perform second-order thinking in a wider range of situations than the specialist can.

Generalists often possess transferable skills, allowing them to be flexible with their career choices and adapt to a changing world. They can do a different type of work and adapt to changes in the workplace. Gatekeepers tend to cause fewer problems for generalists than for specialists.

Managers and leaders are often generalists because they need a comprehensive perspective of their entire organization. And an increasing number of companies are choosing to keep a core group of generalists on staff and hire freelance specialists only when necessary.

The métiers at the lowest risk of future automation tend to be those that require a diverse, nuanced skill set: construction vehicle operators, blue-collar workers, therapists, dentists, and teachers, for example.

When their particular skills are in demand, specialists experience substantial upsides. The scarcity of their expertise means higher salaries, less competition, and more leverage. Nurses, doctors, programmers, and electricians are currently in high demand where I live, for instance.

Specialists get to be passionate about what they do — not in the usual “follow your passion!” way, but in the sense that they can go deep and derive the satisfaction that comes from expertise. Garrett Hardin offers his perspective on the value of specialists: 

…we cannot do without experts. We accept this fact of life, but not without anxiety. There is much truth in the definition of the specialist as someone who “knows more and more about less and less.” But there is another side to the coin of expertise. A really great idea in science often has its birth as apparently no more than a particular answer to a narrow question; it is only later that it turns out that the ramifications of the answer reach out into the most surprising corners. What begins as knowledge about very little turns out to be wisdom about a great deal.

Hardin cites the development of probability theory as an example. When Blaise Pascal and Pierre de Fermat sought to devise a means of dividing the stakes in an interrupted gambling game, their expertise created a theory with universal value.
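To see how narrow the original question was, here is a minimal sketch (an illustration, not Hardin's) of the division Pascal and Fermat worked out: split the stakes in proportion to each player's chance of winning had the interrupted game been played to the end.

```python
from math import comb

def fair_split(a_needs: int, b_needs: int, pot: float = 1.0):
    """Pascal-Fermat 'problem of points' for a fair, 50/50 game.

    The game is decided within a_needs + b_needs - 1 further rounds, so
    player A's share is the probability of winning at least a_needs of them.
    """
    n = a_needs + b_needs - 1
    p_a = sum(comb(n, k) for k in range(a_needs, n + 1)) / 2 ** n
    return pot * p_a, pot * (1 - p_a)

# Example: A needs 2 more wins, B needs 3. A is entitled to 11/16 of the pot.
print(fair_split(2, 3))  # (0.6875, 0.3125)
```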

The same goes for many mental models and unifying theories. Specialists come up with them, and generalists make use of them in surprising ways.

The downside is that specialists are vulnerable to change. Many specialist jobs are disappearing as technology changes. Stockbrokers, for example, face the possibility of replacement by AI in the coming years. That doesn’t mean no one will hold those jobs, but demand will decrease. Many people will need to learn new work skills, and starting over in a new field can cost them decades of accumulated progress. That’s a serious knock, both psychologically and financially.

Specialists are also subject to “man with a hammer” syndrome. Their area of expertise can become the lens they see everything through.

As Michael Mauboussin writes in Think Twice:

…people stuck in old habits of thinking are failing to use new means to gain insight into the problems they face. Knowing when to look beyond experts requires a totally fresh point of view and one that does not come naturally. To be sure, the future for experts is not all bleak. Experts retain an advantage in some crucial areas. The challenge is to know when and how to use them.

Understanding and staying within their circle of competence is even more important for specialists. A specialist who is outside of their circle of competence and doesn’t know it is incredibly dangerous.

Philip Tetlock performed an 18-year study to look at the quality of expert predictions. Could people who are considered specialists in a particular area forecast the future with greater accuracy than a generalist? Tetlock tracked 284 experts from a range of disciplines, recording the outcomes of 28,000 predictions.

The results were stark: predictions coming from generalist thinkers were more accurate. Experts who stuck to their specialized areas and ignored interdisciplinary knowledge fared worse. The specialists tended to be more confident in their erroneous predictions than the generalists. The specialists made definite assertions — which we know from probability theory to be a bad idea. It seems that generalists have an edge when it comes to Bayesian updating, recognizing probability distributions, and long-termism.

Organizations, industries, and the economy need both generalists and specialists. And when we lack the right balance, it creates problems. Millions of jobs remain unfilled, while millions of people lack employment. Many of the empty positions require specialized skills. Many of the unemployed have skills which are too general to fill those roles. We need a middle ground.

The Generalized Specialist

The economist, philosopher, and writer Henry Hazlitt sums up the dilemma:

In the modern world knowledge has been growing so fast and so enormously, in almost every field, that the probabilities are immensely against anybody, no matter how innately clever, being able to make a contribution in any one field unless he devotes all his time to it for years. If he tries to be the Rounded Universal Man, like Leonardo da Vinci, or to take all knowledge for his province, like Francis Bacon, he is most likely to become a mere dilettante and dabbler. But if he becomes too specialized, he is apt to become narrow and lopsided, ignorant on every subject but his own, and perhaps dull and sterile even on that because he lacks perspective and vision and has missed the cross-fertilization of ideas that can come from knowing something of other subjects.

What’s the safest option, the middle ground?

By many accounts, it’s being a specialist in one area while retaining a few general, transferable skills. That might sound like a contradiction, but it isn’t: specialist and generalist aren’t mutually exclusive categories; they sit on a continuum.

A generalizing specialist has a core competency which they know a lot about. At the same time, they are always learning and have a working knowledge of other areas. While a generalist has roughly the same knowledge of multiple areas, a generalizing specialist has one deep area of expertise and a few shallow ones. We have the option of developing a core competency while building a base of interdisciplinary knowledge.

“The fox knows many things, but the hedgehog knows one big thing.”

— Archilochus

As Tetlock’s research shows, for us to understand how the world works, it’s not enough to home in on one tiny area for decades. We need to pull ideas from everywhere, remaining open to having our minds changed, always looking for disconfirming evidence. Joseph Tussman put it this way: “If we do not let the world teach us, it teaches us a lesson.”

Many great thinkers are (or were) generalizing specialists.

Shakespeare specialized in writing plays, but his experiences as an actor, poet, and part owner of a theater company informed what he wrote. So did his knowledge of Latin, agriculture, and politics. Indeed, the earliest known reference to his work comes from a critic who accused him of being “an absolute Johannes factotum” (jack of all trades).

Leonardo Da Vinci was a famous generalizing specialist. As well as the art he is best known for, Da Vinci dabbled in engineering, music, literature, mathematics, botany, and history. These areas informed his art — note, for example, the rigorous application of botany and mathematics in his paintings. Some scholars consider Da Vinci to be the first person to combine interdisciplinary knowledge in this way or to recognize that a person can branch out beyond their defining trade.

Johannes Kepler revolutionized our knowledge of planetary motion by combining physics and optics with his main focus, astronomy. Military strategist John Boyd designed aircraft and developed new tactics, using insights from divergent areas he studied, including thermodynamics and psychology. He could think in a different manner from his peers, who remained immersed in military knowledge for their entire careers.

Shakespeare, Da Vinci, Kepler, and Boyd excelled by branching out from their core competencies. These men knew how to learn fast, picking up the key ideas and then returning to their specialties. Unlike their forgotten peers, they didn’t continue studying one area past the point of diminishing returns; they got back to work — and the results were extraordinary.

Many people seem to do work which is unrelated to their area of study or their prior roles. But dig a little deeper and it’s often the case that knowledge from the past informs their present. Marcel Proust put it best: “the real act of discovery consists not in finding new lands, but in seeing with new eyes.”

Interdisciplinary knowledge is what allows us to see with new eyes.

When Charlie Munger was asked at the Daily Journal’s 2017 shareholder meeting whether to become a polymath or a specialist, his answer surprised a lot of people. Many expected the obvious answer: of course he would recommend becoming a generalist. Only that is not what he said.

Munger remarked:

I don’t think operating over many disciplines, as I do, is a good idea for most people. I think it’s fun, that’s why I’ve done it. And I’m better at it than most people would be, and I don’t think I’m good at being the very best at handling differential equations. So, it’s been a wonderful path for me, but I think the correct path for everybody else is to specialize and get very good at something that society rewards, and then to get very efficient at doing it. But even if you do that, I think you should spend 10 to 20% of your time [on] trying to know all the big ideas in all the other disciplines. Otherwise … you’re like a one-legged man in an ass-kicking contest. It’s not going to work very well. You have to know the big ideas in all the disciplines to be safe if you have a life lived outside a cave. But no, I think you don’t want to neglect your business as a dentist to think great thoughts about Proust.

In his comments, we can find the underlying approach most likely to yield exponential results: Specialize most of the time, but spend time understanding the broader ideas of the world.

This approach isn’t what most organizations and educational institutions provide. Branching out isn’t in many job descriptions or in many curricula. It’s a project we have to undertake ourselves, by reading a wide range of books, experimenting with different areas, and drawing ideas from each one.

Still curious? Check out the biographies of Leonardo da Vinci and Ben Franklin.



Gradually Getting Closer to the Truth

You can use a big idea without a physics-like need for exact precision. The key is to keep moving closer to reality by updating your beliefs.

Consider this excerpt from Philip Tetlock and Dan Gardner’s Superforecasting:

The superforecasters are a numerate bunch: many know about Bayes' theorem and could deploy it if they felt it was worth the trouble. But they rarely crunch the numbers so explicitly. What matters far more to the superforecasters than Bayes' theorem is Bayes' core insight of gradually getting closer to the truth by constantly updating in proportion to the weight of the evidence.

So they know the numbers. This numerate filter is the second of Garrett Hardin’s three filters for thinking about problems.

Hardin writes:

The numerate temperament is one that habitually looks for approximate dimensions, ratios, proportions, and rates of change in trying to grasp what is going on in the world.

[…]

Just as “literacy” is used here to mean more than merely reading and writing, so also will “numeracy” be used to mean more than measuring and counting. Examination of the origins of the sciences shows that many major discoveries were made with very little measuring and counting. The attitude science requires of its practitioners is respect, bordering on reverence, for ratios, proportions, and rates of change.

Rough and ready back-of-the-envelope calculations are often sufficient to reveal the outline of a new and important scientific discovery … In truth, the essence of many of the major insights of science can be grasped with no more than a child’s ability to measure, count, and calculate.

 

We can find another example in investing. Charlie Munger, commenting at the 1996 Berkshire Hathaway Annual Meeting, said: “Warren often talks about these discounted cash flows, but I’ve never seen him do one. If it isn’t perfectly obvious that it’s going to work out well if you do the calculation, then he tends to go on to the next idea.” Buffett retorted: “It's true. If (the value of a company) doesn't just scream out at you, it's too close.”

Precision is easy to teach, but precision alone misses the point.
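For illustration only (this is not Buffett's method, and every input is an assumption), a back-of-the-envelope discounted cash flow fits in a few lines. The point of such a sketch is the one Munger makes: if the answer isn't obviously attractive under rough inputs, you move on rather than refine the precision.

```python
def rough_intrinsic_value(free_cash_flow: float, growth: float,
                          discount: float, years: int = 10,
                          terminal_multiple: float = 12.0) -> float:
    """Crude DCF: discount a decade of growing free cash flow, then apply a
    terminal multiple to the final year. All inputs are rough guesses."""
    value = 0.0
    cash = free_cash_flow
    for t in range(1, years + 1):
        cash *= 1 + growth
        value += cash / (1 + discount) ** t
    value += cash * terminal_multiple / (1 + discount) ** years
    return value

# Hypothetical company: $1B of free cash flow, 5% growth, 9% discount rate.
print(f"{rough_intrinsic_value(1e9, 0.05, 0.09):,.0f}")
```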

Philip Tetlock: Ten Commandments for Aspiring Superforecasters

The Knowledge Project interview with Philip Tetlock deconstructs our ability to make accurate predictions into specific components, drawing on what he learned through his work on The Good Judgment Project.

In Superforecasting: The Art and Science of Prediction, Tetlock and his co-author Dan Gardner set out to distill the ten key themes that have been “experimentally demonstrated to boost accuracy” in the real world.

1. Triage

Focus on questions where your hard work is likely to pay off. Don’t waste time either on easy “clocklike” questions (where simple rules of thumb can get you close to the right answer) or on impenetrable “cloud-like” questions (where even fancy statistical models can’t beat the dart-throwing chimp). Concentrate on questions in the Goldilocks zone of difficulty, where effort pays off the most.

For instance, don't ask, “Who will win the World Series in 2050?” That's unknowable. The question becomes more interesting when we come closer to home. Asking in April who will win the World Series in the upcoming season, and how much justifiable confidence we can have in that answer, is a different proposition. While our confidence in who will win is low, it is far higher than anything we could claim about the 2050 winner. At worst, we can narrow the range of outcomes. That lets us move back along the continuum from uncertainty toward risk.

Certain classes of outcomes have well-deserved reputations for being radically unpredictable (e.g., oil prices, currency markets). But we usually don’t discover how unpredictable outcomes are until we have spun our wheels for a while trying to gain analytical traction. Bear in mind the two basic errors it is possible to make here. We could fail to try to predict the potentially predictable or we could waste our time trying to predict the unpredictable. Which error would be worse in the situation you face?

2. Break seemingly intractable problems into tractable sub-problems.

This is Fermi-style thinking. Enrico Fermi designed the first atomic reactor. When he wasn't doing that, he loved to tackle challenging questions such as “How many piano tuners are in Chicago?” At first glance, this seems very difficult. Fermi started by decomposing the problem into smaller parts and sorting them into the buckets of knowable and unknowable. By working at a problem this way, you expose what you don't know or, as Tetlock and Gardner put it, you “flush ignorance into the open.” It's better to air your assumptions and discover your errors quickly than to hide behind jargon and fog. Superforecasters are excellent at Fermi-izing, even when it comes to seemingly unquantifiable things like love.

The surprise is how often remarkably good probability estimates arise from a remarkably crude series of assumptions and guesstimates.
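A sketch of what Fermi-izing the piano-tuner question looks like; every number below is a deliberately crude, assumed guess, because the decomposition rather than the precision is what does the work.

```python
# Fermi-style decomposition of "How many piano tuners are in Chicago?"
# Every input is an assumed round number; adjust any of them and see how
# little the order of magnitude moves.
population = 2_500_000          # people in Chicago (rough)
people_per_household = 2.5
share_with_piano = 0.05         # ~1 household in 20 owns a piano
tunings_per_piano_per_year = 1
tunings_per_tuner_per_day = 4
working_days_per_year = 250

households = population / people_per_household
pianos = households * share_with_piano
tunings_needed = pianos * tunings_per_piano_per_year
tuner_capacity = tunings_per_tuner_per_day * working_days_per_year

print(round(tunings_needed / tuner_capacity))  # roughly 50 tuners
```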

3. Strike the right balance between inside and outside views.

Echoing Michael Mauboussin, who cautioned that we should pay attention to what's the same, Tetlock and Gardner add a historical perspective:

Superforecasters know that there is nothing new under the sun. Nothing is 100% “unique.” Language purists be damned: uniqueness is a matter of degree. So superforecasters conduct creative searches for comparison classes even for seemingly unique events, such as the outcome of a hunt for a high-profile terrorist (Joseph Kony) or the standoff between a new socialist government in Athens and Greece’s creditors. Superforecasters are in the habit of posing the outside-view question: How often do things of this sort happen in situations of this sort?

The planning fallacy, our tendency to underestimate how long projects will take and what they will cost, is what happens when we neglect that outside view.

4. Strike the right balance between under- and overreacting to evidence.

Belief updating is to good forecasting as brushing and flossing are to good dental hygiene. It can be boring, occasionally uncomfortable, but it pays off in the long term. That said, don’t suppose that belief updating is always easy because it sometimes is. Skillful updating requires teasing subtle signals from noisy news flows—all the while resisting the lure of wishful thinking.

Savvy forecasters learn to ferret out telltale clues before the rest of us. They snoop for nonobvious lead indicators, about what would have to happen before X could, where X might be anything from an expansion of Arctic sea ice to a nuclear war in the Korean peninsula. Note the fine line here between picking up subtle clues before everyone else and getting suckered by misleading clues.

The key here is rational Bayesian updating of your beliefs. This is the same ethos behind Charlie Munger's thoughts on killing your best-loved ideas. The world doesn't work the way we want it to, but it does signal to us when things change. If we pay attention and adapt, we let the world do most of the work for us.
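For a sense of what updating "in proportion to the weight of the evidence" means in numbers, here is a minimal sketch of Bayes' rule applied to a forecast; the prior and likelihoods are hypothetical.

```python
def bayes_update(prior: float, p_evidence_if_true: float,
                 p_evidence_if_false: float) -> float:
    """Return P(hypothesis | evidence) via Bayes' rule."""
    joint_true = prior * p_evidence_if_true
    joint_false = (1 - prior) * p_evidence_if_false
    return joint_true / (joint_true + joint_false)

# Hypothetical forecast: start at 30%, then observe a signal that is twice
# as likely to appear if the event is coming than if it isn't.
p = 0.30
p = bayes_update(p, 0.6, 0.3)   # first signal  -> ~0.46
p = bayes_update(p, 0.6, 0.3)   # second signal -> ~0.63
print(round(p, 2))
```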

5. Look for the clashing causal forces at work in each problem.

For every good policy argument, there is typically a counterargument that is at least worth acknowledging. For instance, if you are a devout dove who believes that threatening military action never brings peace, be open to the possibility that you might be wrong about Iran. And the same advice applies if you are a devout hawk who believes that soft “appeasement” policies never pay off. Each side should list, in advance, the signs that would nudge them toward the other.

There are no paint-by-number rules here. Synthesis is an art that requires reconciling irreducibly subjective judgments. If you do it well, engaging in this process of synthesizing should transform you from a cookie-cutter dove or hawk into an odd hybrid creature, a dove-hawk, with a nuanced view of when tougher or softer policies are likelier to work.

If you really want to have fun at meetings (and simultaneously decrease your popularity with your bosses), start asking what would cause them to change their minds. Never forget that having an opinion is hard work. You really need to concentrate and wrestle with the problem.

6. Strive to distinguish as many degrees of doubt as the problem permits but no more.

This could easily be called “nuance matters.” The more degrees of uncertainty you can distinguish, the better.

As in poker, you have an advantage if you are better than your competitors at separating 60/40 bets from 40/60—or 55/45 from 45/55. Translating vague-verbiage hunches into numeric probabilities feels unnatural at first but it can be done. It just requires patience and practice.

7. Strike the right balance between under- and overconfidence, between prudence and decisiveness.

Superforecasters understand the risks both of rushing to judgment and of dawdling too long near “maybe.” They routinely manage the trade-off between the need to take decisive stands (who wants to listen to a waffler?) and the need to qualify their stands (who wants to listen to a blowhard?). They realize that long-term accuracy requires getting good scores on both calibration and resolution—which requires moving beyond blame-game ping-pong. It is not enough just to avoid the most recent mistake. They have to find creative ways to tamp down both types of forecasting errors—misses and false alarms—to the degree a fickle world permits such uncontroversial improvements in accuracy.
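The bookkeeping behind calibration and resolution in Tetlock's tournaments is the Brier score: the mean squared error between probability forecasts and what actually happened, where lower is better. A toy sketch:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and binary outcomes
    (1 = it happened, 0 = it didn't). 0.0 is perfect; always answering 50%
    scores 0.25; confident calls that turn out wrong are punished hardest."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical record: a waffler who always says 50% versus a forecaster
# willing to take calibrated stands on the same five events.
outcomes = [1, 0, 1, 1, 0]
waffler = [0.5, 0.5, 0.5, 0.5, 0.5]
decisive = [0.8, 0.2, 0.7, 0.9, 0.3]
print(brier_score(waffler, outcomes))   # 0.25
print(brier_score(decisive, outcomes))  # 0.054
```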

8. Look for the errors behind your mistakes but beware of rearview-mirror hindsight biases.

It's easy to justify or rationalize your failure. Don't. Own it and keep score with a decision journal. You want to learn where you went wrong and determine ways to get better. And don't just look at failures. Evaluate successes as well so you can determine when you were just plain lucky.

9. Bring out the best in others and let others bring out the best in you.

Master the fine art of team management, especially perspective taking (understanding the arguments of the other side so well that you can reproduce them to the other’s satisfaction), precision questioning (helping others to clarify their arguments so they are not misunderstood), and constructive confrontation (learning to disagree without being disagreeable). Wise leaders know how fine the line can be between a helpful suggestion and micromanagerial meddling or between a rigid group and a decisive one or between a scatterbrained group and an open-minded one.

10. Master the error-balancing bicycle.

Implementing each commandment requires balancing opposing errors. Just as you can’t learn to ride a bicycle by reading a physics textbook, you can’t become a superforecaster by reading training manuals. Learning requires doing, with good feedback that leaves no ambiguity about whether you are succeeding—“I’m rolling along smoothly!”—or whether you are failing—“crash!”

As with anything, doing more of it doesn't mean you're getting better at it. You need to do more than just go through the motions. The way to get better is deliberate practice.

And finally …

“It is impossible to lay down binding rules,” Helmuth von Moltke warned, “because two cases will never be exactly the same.” Guidelines (or maps) are the best we can do in a world where nothing represents the whole. As George Box said: “All models are wrong, but some are useful.”

***

Mark Steed, a former member of The Good Judgment Project, offered us 13 ways to make better decisions.

Philip Tetlock on The Art and Science of Prediction


This is the sixth episode of The Knowledge Project, a podcast aimed at acquiring wisdom through interviews with fascinating people to gain insights into how they think, live, and connect ideas.

***

On this episode, I'm happy to have Philip Tetlock, a professor at the University of Pennsylvania. He's the co-leader of The Good Judgment Project, a multi-year forecasting study, and the author of Superforecasting: The Art and Science of Prediction and Expert Political Judgment: How Good Is It? How Can We Know?

The subject of this interview is how we can get better at the art and science of prediction. We dive into what makes some people better at making predictions and how we can learn to improve our ability to guess the future. I hope you enjoy the conversation as much as I did.

***


“the truth is that prediction is hard, often impossible.”

Philip Tetlock, author of Expert Political Judgment, co-authored an interesting article in Foreign Policy.

Academic research suggests that predicting events five years into the future is so difficult that most experts perform only marginally better than dart-throwing chimps.

The best way to become a better-calibrated appraiser of long-term futures is to get in the habit of making quantitative probability estimates that can be objectively scored for accuracy over long stretches of time. Explicit quantification enables explicit accuracy feedback, which enables learning. This requires extraordinary organizational patience — an investment that may span decades — but the stakes are high enough to merit a long-term investment.

Still curious? Expert Political Judgment explores what constitutes good judgment in predicting future events, and looks at why experts are often wrong in their forecasts.