Making Decisions in a Complex Adaptive System

(Image: the three parts of a complex adaptive system)

In Think Twice: Harnessing the Power of Counterintuition, Mauboussin does a good job adding to the work we’ve already done on complex adaptive systems:

You can think of a complex adaptive system in three parts (see the image at the top of this post). First, there is a group of heterogeneous agents. These agents can be neurons in your brain, bees in a hive, investors in a market, or people in a city. Heterogeneity means each agent has different and evolving decision rules that both reflect the environment and attempt to anticipate change in it. Second, these agents interact with one another, and their interactions create structure— scientists often call this emergence. Finally, the structure that emerges behaves like a higher-level system and has properties and characteristics that are distinct from those of the underlying agents themselves. … The whole is greater than the sum of the parts.

***

The inability to understand the system based on its components prompted Nobel Prize winner and physicist Philip Anderson to write the essay “More Is Different.” Anderson wrote, “The behavior of large and complex aggregates of elementary particles, it turns out, is not to be understood in terms of the simple extrapolation of the properties of a few particles. Instead, at each level of complexity entirely new properties appear.”

Mauboussin comments that we are fooled by randomness:

The problem goes beyond the inscrutable nature of complex adaptive systems. Humans have a deep desire to understand cause and effect, as such links probably conferred humans with evolutionary advantage. In complex adaptive systems, there is no simple method for understanding the whole by studying the parts, so searching for simple agent-level causes of system-level effects is useless. Yet our minds are not beyond making up a cause to relieve the itch of an unexplained effect. When a mind seeking links between cause and effect meets a system that conceals them, accidents will happen.

***
Misplaced Focus on the Individual

One mistake we make is extrapolating the behaviour of an individual component, say a single person, to explain the entire system. Yet when we have to solve a problem in a complex system, we often address only an individual component. In so doing, we ignore Garrett Hardin’s First Law of Ecology, that you can never do merely one thing, and we become fragilistas.

That unintended system-level consequences arise from even the best-intentioned individual-level actions has long been recognized. But the decision-making challenge remains for a couple of reasons. First, our modern world has more interconnected systems than before. So we encounter these systems with greater frequency and, most likely, with greater consequence. Second, we still attempt to cure problems in complex systems with a naïve understanding of cause and effect.

***

When I speak with executives from around the world going through a period of poor performance, it doesn’t take long for them to mention they want to hire a star from another company. “If only we had Kate,” they’ll say, “we could smash the competition and regain our footing.”

At first, poaching stars from competitors, or even from other teams within the same organization, seems like a winning strategy. But once the star comes over, the results often fail to materialize.

What we fail to grasp is that the star’s performance is part of an ecosystem, and isolating that individual performance from the ecosystem that produced it is incredibly hard. (Reversion to the mean likely accounts for some of the star’s fading as well.)

Three Harvard professors concluded, “When a company hires a star, the star’s performance plunges, there is a sharp decline in the functioning of the group or team the person works with, and the company’s market value falls.”

If it sounds like a lot of work to think this through at many levels, it should be. Why should it be easy?

Another example of this at an organizational level has to do with innovation. Most people want to solve the innovation problem. Setting aside for a second that this is the wrong framing, how do most organizations go about it? They copy what the most successful organizations do. I can’t count the number of times the solution to an organization’s “innovation problem” is to be more like Google. Well-intentioned executives blindly copy approaches used by others, such as 20 percent innovation time, without giving an ounce of thought to the role the ecosystem plays.

Isolating and focusing on an individual part of a complex adaptive system without an appreciation and understanding of that system itself is sure to lead to disaster.

***
What Should We Do?

So this raises the question: what should we do when we find ourselves dealing with a complex adaptive system? Mauboussin provides three pieces of advice:

1. Consider the system at the correct level.

Remember the phrase “more is different.” The most prevalent trap is extrapolating the behavior of individual agents to gain a sense of system behavior. If you want to understand the stock market, study it at the market level. Consider what you see and read from individuals as entertainment, not as education. Similarly, be aware that the function of an individual agent outside the system may be very different from that function within the system. For instance, mammalian cells have the same metabolic rates in vitro, whether they are from shrews or elephants. But the metabolic rate of cells in small mammals is much higher than the rate of those in large mammals. The same structural cells work at different rates, depending on the animals they find themselves in.

2. Watch for tightly coupled systems.

A system is tightly coupled when there is no slack between items, allowing a process to go from one stage to the next without any opportunity to intervene. Aircraft, space missions, and nuclear power plants are classic examples of complex, tightly coupled systems. Engineers try to build in buffers or redundancies to avoid failure, but frequently don’t anticipate all possible contingencies. Most complex adaptive systems are loosely coupled, where removing or incapacitating one or a few agents has little impact on the system’s performance. For example, if you randomly remove some investors, the stock market will continue to function fine. But when the agents lose diversity and behave in a coordinated fashion, a complex adaptive system can behave in a tightly coupled fashion. Booms and crashes in financial markets are an illustration.

3. Use simulations to create virtual worlds.

Dealing with complex systems is inherently tricky because the feedback is equivocal, information is limited, and there is no clear link between cause and effect. Simulation is a tool that can help our learning process. Simulations are low cost, provide feedback, and have proved their value in other domains like military planning and pilot training.
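
To make the simulation advice concrete, here is a minimal sketch in Python (my illustration, not Mauboussin’s): a tiny “virtual world” for a single decision, where the uncertain inputs are drawn thousands of times so you see a distribution of outcomes rather than a single forecast. All of the numbers are invented for the example.

```python
import random

def simulate_project_once():
    """One 'virtual world': draw the uncertain inputs and compute the outcome."""
    demand = max(0.0, random.gauss(1000, 300))   # uncertain demand, in units
    unit_margin = random.uniform(5, 15)          # uncertain margin per unit, in dollars
    fixed_cost = 8000                            # known fixed cost, in dollars
    return demand * unit_margin - fixed_cost

def simulate_project(trials=100_000):
    """Run many virtual worlds and summarize the distribution of outcomes."""
    outcomes = [simulate_project_once() for _ in range(trials)]
    expected = sum(outcomes) / trials
    prob_loss = sum(o < 0 for o in outcomes) / trials
    return expected, prob_loss

expected, prob_loss = simulate_project()
print(f"Expected profit: ${expected:,.0f}")
print(f"Chance of losing money: {prob_loss:.0%}")
```

The point is not the particular numbers but the feedback: the simulation makes the range of outcomes, not just the average, part of the decision.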

Still Curious? Think Twice: Harnessing the Power of Counterintuition.

Books Everyone Should Read on Psychology and Behavioral Economics


Earlier this year, a prominent friend of mine was tasked with coming up with a list of behavioral economics book recommendations for the military leaders of a G7 country, and I was on the limited email list asked for input.

Yikes.

While I read a lot and I’ve offered up books to sports teams and Fortune 100 management teams, I’ve never contributed to something as broad as educating a nation’s military leaders. While I have a huge behavioral economics reading list, this wasn’t where I started.

Not only did I want to contribute, but I wanted to choose books that these military leaders wouldn’t normally have come across in everyday life. Books they were unlikely to have read. Books that offered perspective.

Given that I couldn’t talk to them outright, I was really trying to answer the question ‘what would I like to communicate to military leaders through non-fiction books?’ There were no easy answers.

I needed to offer something timeless. Not so outside the box that they wouldn’t approach it, and not so hard to find that those purchasing the books would give up and move on to the next one on the list. And it couldn’t be so big that they’d be intimidated by the commitment to read. On top of that, the book needed to start strong because, in my experience dealing with C-level executives, they stop paying attention after about 20 pages if it’s not relevant or challenging them in the right way.

In short, there is no one-size-fits-all list, but to make the biggest impact you have to consider all of these factors.

While the justifications for why people chose the books below are confidential, I can tell you what books were on the final email that I saw. I left one book off the list, which I thought was a little too controversial to post.

These books have nothing to do with the military per se; rather, they deal with enduring concepts like ecology, intuition, game theory, strategy, biology, second-order thinking, and behavioral psychology. In short, these books would benefit most people who want to improve their ability to think, which is why I’m sharing them with you.

If you’re so inclined you can try to guess which ones I recommended in the comments. Read wisely.

In no order and with no attribution:

  1. Risk Savvy: How to Make Good Decisions by Gerd Gigerenzer
  2. The Righteous Mind: Why Good People Are Divided by Politics and Religion by Jonathan Haidt
  3. The Checklist Manifesto: How to Get Things Right by Atul Gawande
  4. The Darwin Economy: Liberty, Competition, and the Common Good by Robert H. Frank
  5. David and Goliath: Underdogs, Misfits, and the Art of Battling Giants by Malcolm Gladwell
  6. Predictably Irrational, Revised and Expanded Edition: The Hidden Forces That Shape Our Decisions by Dan Ariely
  7. Thinking, Fast and Slow by Daniel Kahneman
  8. The Folly of Fools: The Logic of Deceit and Self-Deception in Human Life by Robert Trivers
  9. The Hour Between Dog and Wolf: Risk Taking, Gut Feelings and the Biology of Boom and Bust by John Coates
  10. Adapt: Why Success Always Starts with Failure by Tim Harford
  11. The Lessons of History by Will & Ariel Durant
  12. Poor Charlie’s Almanack
  13. Passions Within Reason: The Strategic Role of the Emotions by Robert H. Frank
  14. The Signal and the Noise: Why So Many Predictions Fail–but Some Don’t by Nate Silver
  15. Sex at Dawn: How We Mate, Why We Stray, and What It Means for Modern Relationships by Christopher Ryan & Cacilda Jetha
  16. The Red Queen: Sex and the Evolution of Human Nature by Matt Ridley
  17. Introducing Evolutionary Psychology by Dylan Evans & Oscar Zarate
  18. Filters Against Folly: How To Survive Despite Economists, Ecologists, and the Merely Eloquent by Garrett Hardin
  19. Games of Strategy (Fourth Edition) by Avinash Dixit, Susan Skeath & David H. Reiley, Jr.
  20. The Theory of Political Coalitions by William H. Riker
  21. The Evolution of War and its Cognitive Foundations (PDF) by John Tooby & Leda Cosmides.
  22. Fight the Power: Lanchester’s Laws of Combat in Human Evolution by Dominic D.P. Johnson & Niall J. MacKay.

How Situations Influence Decisions

Michael Mauboussin, the first guest on my podcast, The Knowledge Project, explains how our situations influence our decisions enormously in Think Twice: Harnessing the Power of Counterintuition.

Mistakes born out of situations are difficult to avoid, in part because the influences on us are operating at a subconscious level. “Making good decisions in the face of subconscious pressure,” Mauboussin writes, “requires a very high degree of background knowledge and self-awareness.”

How do you feel when you read the word “treasure”? Do you feel good? What images come to mind? If you are like most people, just ruminating on “treasure” gives you a little lift. Our minds naturally make connections and associate ideas. So if someone introduces a cue to you— a word, a smell, a symbol— your mind often starts down an associative path. And you can be sure the initial cue will color a decision that waits at the path’s end. All this happens outside of your perception.

People around us also influence our decisions, often with good reason. Social influence arises for a couple of reasons. The first is asymmetric information, a fancy phrase meaning someone knows something you don’t. In those cases, imitation makes sense because the information upgrade allows you to make better decisions.

Peer pressure, or the desire to be part of the in-group, is a second source of social influence. For good evolutionary reasons, humans like to be part of a group— a collection of interdependent individuals— and naturally spend a good deal of time assessing who is “in” and who is “out.” Experiments in social psychology have repeatedly confirmed this.

We explain behavior based on an individual’s choices and disposition and not the situation. That is, we associate bad behaviour with the person and not the situation. Unless, of course, we’re talking about ourselves. This is “the fundamental attribution error”, a phrase coined by Lee Ross, a social psychologist at Stanford University.

This sword has two edges: the power of situations can work for good as well as for evil. “Some of the greatest atrocities known to mankind,” Mauboussin writes, “resulted from putting normal people into bad situations.”

We believe our choices are independent of circumstance; however, the evidence points in another direction.

***
Some Wine With Your Music?

Consider how something as simple as the music playing in a store influences what wine we purchase.

Imagine strolling down the supermarket aisle and coming upon a display of French and German wines, roughly matched for price and quality. You do some quick comparisons, place a German wine in your cart, and continue shopping. After you check out, a researcher approaches and asks why you bought the German wine. You mention the price, the wine’s dryness, and how you anticipate it will go nicely with a meal you are planning. The researcher then asks whether you noticed the German music playing and whether it had any bearing on your decision. Like most, you would acknowledge hearing the music and avow that it had nothing to do with your selection.

But this isn’t a hypothetical; it’s an actual study, and the results affirm that the environment influences our decisions.

In this test, the researchers placed the French and German wines next to each other, along with small national flags. Over two weeks, the scientists alternated playing French accordion music and German Bierkeller pieces and watched the results. When French music played, French wines represented 77 percent of the sales. When German music played, consumers selected German wines 73 percent of the time. (See the image below) The music made a huge difference in shaping purchases. But that’s not what the shoppers thought.

While the customers acknowledged that the music made them think of either France or Germany, 86 percent denied the tunes had any influence on their choice.

(Image: wine sales under French vs. German music)

This is an example of priming, which psychologists formally define as “the incidental activation of knowledge structures by the current situational context.”1 Priming happens all the time. For priming to be most effective, it must have a strong connection to our situation’s goals.

Another example of how situations influence us is the default. In a fast-moving world of non-stop bits and bytes, the default is the path of least resistance: it’s the System 1 option. Moving away from the default is labour-intensive for our brains. Studies have repeatedly shown that most people go with defaults.

This applies to a wide array of choices, from insignificant issues like the ringtone on a new cell phone to consequential issues like financial savings, educational choice, and medical alternatives. Richard Thaler, an economist, and Cass Sunstein, a law professor, call the relationship between choice presentation and the ultimate decision “choice architecture.” They convincingly argue that we can easily nudge people toward a particular decision based solely on how we arrange the choices for them.

One context for decision making is how choices are structured. Knowing that many people opt for the default option, we can influence (for better or worse) large groups of people.

Mauboussin relates a story about a prominent psychologist popular on the speaking circuit that “underscores how underappreciated choice architecture remains.”

When companies call to invite him to speak, he offers them two choices. Either they can pay him his set fee and get a standard presentation, or they can pay him nothing in exchange for the opportunity to work with him on an experiment to improve choice architecture (e.g., redesign a form or Web site). Of course, the psychologist benefits by getting more real-world results on choice architecture, but it seems like a pretty good deal for the company as well, because an improved architecture might translate into financial benefits vastly in excess of his speaking fee. He noted ruefully that so far not one company has taken him up on his experiment offer.

(As a brief aside, I engage in public speaking on a fairly regular basis. I’ve toyed with similar ideas. Once I even went as far as offering to speak for no pre-set fee, only “value added” as judged by the client. They opted for the fee.)

Another great example of how environments affect behaviour is Stanley Milgram’s famous experiment on obedience to authority. “Ordinary people, simply doing their jobs, and without any particular hostility on their part, can become agents in a terrible destructive process,” wrote Milgram. The Stanford Prison Experiment is yet another example.

***

The key point is that situations are generally more powerful than we think, but we can do things to resist the pull of “unwelcome social influence.” Mauboussin offers four tips.

1. Be aware of your situation.

You can think of this in two parts. There is the conscious element, where you can create a positive environment for decision making in your own surroundings by focusing on process, keeping stress to an acceptable level, being a thoughtful choice architect, and making sure to diffuse the forces that encourage negative behaviors.

Then there is coping with the subconscious influences. Control over these influences requires awareness of the influence, motivation to deal with it, and the willingness to devote attention to address possible poor decisions. In the real world, satisfying all three control conditions is extremely difficult, but the path starts with awareness.

2. Consider the situation first and the individual second.

This concept, called attributional charity, insists that you evaluate the decisions of others by starting with the situation and then turning to the individuals, not the other way around. While easier for Easterners than Westerners, most of us consistently underestimate the role of the situation in assessing the decisions we see others make. Try not to make the fundamental attribution error.

3. Watch out for the institutional imperative.

Warren Buffett, the celebrated investor and chairman of Berkshire Hathaway, coined the term institutional imperative to explain the tendency of organizations to “mindlessly” imitate what peers are doing. There are typically two underlying drivers of the imperative. First, companies want to be part of the in-group, much as individuals do. So if some companies in an industry are doing mergers, chasing growth, or expanding geographically, others will be tempted to follow. Second are incentives. Executives often reap financial rewards by following the group. When decision makers make money from being part of the crowd, the draw is nearly inescapable.

One example comes from a Financial Times interview with the former chief executive officer of Citigroup Chuck Prince in 2007, before the brunt of the financial crisis. “When the music stops, things will be complicated,” offered Prince, demonstrating that he had some sense of what was to come. “But as long as the music is playing, you’ve got to get up and dance.” The institutional imperative is rarely a good dance partner.

4. Avoid inertia.

Periodically revisit your processes and ask whether they are serving their purpose. Organizations sometimes adopt routines and structures that become crystallized, impeding positive change. Efforts to reform education in the United States, for example, have been met with resistance from teachers and administrators who prefer the status quo.

We like to think that we’re better than the situation, that we follow the decision-making process and rationally weigh the facts, consider alternatives, and determine the best course of action. Others, we assume, are easily influenced; we are not. This is precisely how we’re wrong.

Decision making is fundamentally a social exercise, something I cover in my Re:Think Decision Making workshop.

1. John A. Bargh, Mark Chen, and Lara Burrows, “Automaticity of Social Behavior: Direct Effects of Trait Construct and Stereotype Activation on Action”

Simple Rules: How to Thrive in a Complex World


“Simple rules are shortcut strategies that save time and effort by focusing our attention and simplifying the way we process information. The rules aren’t universal— they’re tailored to the particular situation and the person using them.”

We use simple rules to guide decision making every day. In fact, without them, we’d be paralyzed by the sheer brainpower required to sift through the complicated messiness of our world. You can think of them as heuristics: most of the time they work, yet some of the time they don’t.

Simple Rules: How to Thrive in a Complex World, a book by Donald Sull and Kathleen Eisenhardt, explores the understated power that comes from using simple rules. As they define them, simple rules refer to “a handful of guidelines tailored to the user and the task at hand, which balance concrete guidance with the freedom to exercise judgment.” These rules “provide a powerful weapon against the complexity that threatens to overwhelm individuals, organizations, and society as a whole. Complexity arises whenever a system— technical, social, or natural— has multiple interdependent parts.”

They work, the authors argue, because they do three things well.

First, they confer the flexibility to pursue new opportunities while maintaining some consistency. Second, they can produce better decisions. When information is limited and time is short, simple rules make it fast and easy for people, organizations, and governments to make sound choices. They can even outperform complicated decision-making approaches in some situations. Finally, simple rules allow the members of a community to synchronize their activities with one another on the fly.

Effective simple rules share four common traits …

First, they are limited to a handful. Capping the number of rules makes them easy to remember and maintains a focus on what matters most. Second, simple rules are tailored to the person or organization using them. College athletes and middle-aged dieters may both rely on simple rules to decide what to eat, but their rules will be very different. Third, simple rules apply to a well-defined activity or decision, such as prioritizing injured soldiers for medical care. Rules that cover multiple activities or choices end up as vague platitudes, such as “Do your best” and “Focus on customers.” Finally, simple rules provide clear guidance while conferring the latitude to exercise discretion.

***
Simple Rules for a Complex World

People often attempt to address complex problems with complex solutions. For example, governments tend to manage complexity by trying to anticipate every possible scenario that might arise, and then promulgate regulations to cover every case.

Consider how central bankers responded to increased complexity in the global banking system. In 1988 bankers from around the world met in Basel, Switzerland, to agree on international banking regulations, and published a 30-page agreement (known as Basel I). Sixteen years later, the Basel II accord was an order of magnitude larger, at 347 pages, and Basel III was twice as long as its predecessor. When it comes to the sheer volume of regulations generated, the U.S. Congress makes the central bankers look like amateurs. The Glass-Steagall Act, a law passed during the Great Depression, which guided U.S. banking regulation for seven decades, totaled 37 pages. Its successor, Dodd-Frank, is expected to weigh in at over 30,000 pages when all supporting legislation is complete.

Meeting complexity with complexity can create more confusion than it resolves. The policies governing U.S. income taxes totaled 3.8 million words as of 2010. Imagine a book that is seven times as long as War and Peace, but without any characters, plot points, or insight into the human condition. That book is the U.S. tax code.

[…]

Applying complicated solutions to complex problems is an understandable approach, but flawed. The parts of a complex system can interact with one another in many different ways, which quickly overwhelms our ability to envision all possible outcomes.

[…]

Complicated solutions can overwhelm people, thereby increasing the odds that they will stop following the rules. A study of personal income tax compliance in forty-five countries found that the complexity of the tax code was the single best predictor of whether citizens would dodge or pay their taxes. The complexity of the regulations mattered more than the highest marginal tax rate, average levels of education or income, how fair the tax system was perceived to be, and the level of government scrutiny of tax returns.

***
Overfitting

Simple rules do not trump complicated ones all the time, but they work more often than we think. Gerd Gigerenzer, a key contributor in this space, argues that simple rules can allow for better decision making.

Why can simpler models outperform more complex ones? When underlying cause-and-effect relationships are poorly understood, decision makers often look for patterns in historical data under the assumption that past events are a good indicator of future trends. The obvious problem with this approach is that the future may be genuinely different from the past. But a second problem is subtler. Historical data includes not only useful signal, but also noise— happenstance correlations between variables that do not reveal an enduring cause-and-effect relationship. Fitting a model too closely to historical data hardwires error into the model, which is known as overfitting. The result is a precise prediction of the past that may tell us little about what the future holds.
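
As a quick illustration of the overfitting point (mine, not the book’s, and assuming NumPy is available): fit a noisy linear relationship with a simple model and an overly flexible one, then compare how each performs on data it has not seen. The flexible model “explains” the past better but typically predicts the future worse.

```python
import numpy as np

rng = np.random.default_rng(0)

# The true relationship is linear (y = 2x + 1) plus noise; the models don't know that.
x_train = np.linspace(0, 1, 15)
y_train = 2 * x_train + 1 + rng.normal(0, 0.5, x_train.size)
x_test = np.linspace(0, 1, 50)                       # fresh data from the same process
y_test = 2 * x_test + 1 + rng.normal(0, 0.5, x_test.size)

for degree in (1, 9):                                # simple fit vs. wildly flexible fit
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: in-sample error {train_mse:.2f}, out-of-sample error {test_mse:.2f}")
```

The high-degree polynomial hardwires the noise of the past into the model, which is exactly the failure mode the passage above describes.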

Simple rules focus on the critical variables that govern a situation and help you ignore the peripheral ones. Of course, in order to identify the key variables, you need to be operating in your circle of competence. When we pay too much attention to irrelevant or otherwise unimportant information, we fail to grasp the power of the most important variables and to give them the weight they deserve. Simple rules also make it more likely people will act on them. This is something Napoleon intuitively understood.

When instructing his troops, Napoleon realized that complicated instructions were difficult to understand, explain, and execute. So, rather than complicated strategies he passed along simple ones, such as: Attack.

***
Making Better Decisions

The book mentions three types of rules that “improve decision making by structuring choices and centering on what to do (and what not to do)”: boundary, prioritizing, and stopping rules.

Boundary Rules cover what to do …

Boundary rules guide the choice of what to do (and not do) without requiring a lot of time, analysis, or information. Boundary rules work well for categorical choices, like a judge’s yes-or-no decision on a defendant’s bail, and decisions requiring many potential opportunities to be screened quickly. These rules also come in handy when time, convenience, and cost matter.

Prioritizing rules rank options to help decide which of multiple paths to pursue.

Prioritizing rules can help you rank a group of alternatives competing for scarce money, time, or attention. … They are especially powerful when applied to a bottleneck, an activity or decision that keeps individuals or organizations from reaching their objectives. Bottlenecks represent pinch-points in companies, where the number of opportunities swamps available resources, and prioritizing rules can ensure that these resources are deployed where they can have the greatest impact. In business settings, prioritizing rules can be used to assign engineers to new-product-development projects, focus sales representatives on the most promising customers, and allocate advertising expenditure across multiple products, to name only a few possibilities.

Stopping rules help you know when to reverse a decision. Nobel Prize-winning economist Herbert Simon argued that we lack the information, time, and mental horsepower to determine the single best path when faced with a slew of options. Instead we rely on a heuristic that tells us to stop searching when we find something that’s good enough. Simon called this satisficing. If you think that’s hard, it’s even harder to stop doing something we’re already doing. Yet when it comes to our key investments of time, money, and energy, we have to know when to pull the plug.

Sometimes we pursue goals at all costs and ignore our self-imposed stopping rules. This goal-induced blindness can be deadly.

A cross-continental team of researchers matched 145 Chicagoans with demographically similar Parisians. Both the Chicagoans and Parisians used stopping rules to decide when to finish eating, but the rules themselves were very different. The Parisians employed rules like “Stop eating when I start feeling full,” linking their decision to internal cues about satiation. The Chicagoans, in contrast, were more likely to follow rules linked to external factors, such as “Stop eating when I run out of a beverage,” or “Stop eating when the TV show I’m watching is over.” Stopping rules that rely on internal cues— like when the food stops tasting good or you feel full— decrease the odds that people eat more than their body needs or even wants.

Stopping rules are particularly critical in situations when people tend to double down on a losing hand.

These three decision rules (boundary, prioritizing, and stopping) provide guidelines on “what is acceptable to do, what is more important to do, and what to stop doing.”

***
Doing Things Better

Process rules, in contrast to boundary rules, focus on how to do things better.

Process rules work because they steer a middle path between the chaos of too few rules that can result in confusion and mistakes, and the rigidity of so many rules that there is little ability to adapt to the unexpected or take advantage of new opportunities. Simply put, process rules are useful whenever flexibility trumps consistency.

The most widely used process rule is the how-to rule. How-to rules guide the basics of executing tasks, from playing golf to designing new products. The other process rules, coordination and timing, are special cases of how-to rules that apply in particular situations. Coordination rules center on getting something done when multiple actors— people, organizations, or nations— have to work together. These rules orchestrate the behaviors of, for example, schooling fish, Zipcar members, and content contributors at Wikipedia. In contrast, timing rules center on getting things done in situations where temporal factors such as rhythms, sequences, and deadlines are relevant. These rules set the timing of, for example, when to get up every day and when dragonflies migrate.

While I was skeptical, the book is well worth reading. I suggest you check it out.

The Wisdom of Crowds and The Expert Squeeze

As networks harness the wisdom of crowds, the ability of experts to add value in their predictions is steadily declining. This is the expert squeeze.

In Think Twice: Harnessing the Power of Counterintuition, Michael Mauboussin, the first guest on my podcast, The Knowledge Project, explains the expert squeeze and its implications for how we make decisions.

As networks harness the wisdom of crowds and computing power grows, the ability of experts to add value in their predictions is steadily declining. I call this the expert squeeze, and evidence for it is mounting. Despite this trend, we still pine for experts— individuals with special skill or know-how— believing that many forms of knowledge are technical and specialized. We openly defer to people in white lab coats or pinstripe suits, believing they hold the answers, and we harbor misgivings about computer-generated outcomes or the collective opinion of a bunch of tyros.

The expert squeeze means that people stuck in old habits of thinking are failing to use new means to gain insight into the problems they face. Knowing when to look beyond experts requires a totally fresh point of view, and one that does not come naturally. To be sure, the future for experts is not all bleak. Experts retain an advantage in some crucial areas. The challenge is to know when and how to use them.

(Figure: The Value of Experts)

So how can we manage this in our role as decision maker? The first step is to classify the problem.

(The figure above — The Value of Experts) helps to guide this process. The second column from the left covers problems that have rules-based solutions with limited possible outcomes. Here, someone can investigate the problem based on past patterns and write down rules to guide decisions. Experts do well with these tasks, but once the principles are clear and well defined, computers are cheaper and more reliable. Think of tasks such as credit scoring or simple forms of medical diagnosis. Experts agree about how to approach these problems because the solutions are transparent and for the most part tried and true.

[…]

Now let’s go to the opposite extreme, the column on the far right that deals with probabilistic fields with a wide range of outcomes. Here there are no simple rules. You can only express possible outcomes in probabilities, and the range of outcomes is wide. Examples include economic and political forecasts. The evidence shows that collectives outperform experts in solving these problems.

[…]

The middle two columns are the remaining province for experts. Experts do well with rules-based problems with a wide range of outcomes because they are better than computers at eliminating bad choices and making creative connections between bits of information.

Once you’ve classified the problem, you can turn to the best method for solving it.

… computers and collectives remain underutilized guides for decision making across a host of realms including medicine, business, and sports. That said, experts remain vital in three capacities. First, experts must create the very systems that replace them. … Of course, the experts must stay on top of these systems, improving the market or equation as need be.

Next, we need experts for strategy. I mean strategy broadly, including not only day-to-day tactics but also the ability to troubleshoot by recognizing interconnections as well as the creative process of innovation, which involves combining ideas in novel ways. Decisions about how best to challenge a competitor, which rules to enforce, or how to recombine existing building blocks to create novel products or experiences are jobs for experts.

Finally, we need people to deal with people. A lot of decision making involves psychology as much as it does statistics. A leader must understand others, make good decisions, and encourage others to buy in to the decision.

So what practical steps can you take to make the expert squeeze work for you instead of against you? Here Mauboussin offers three tips.

1. Match the problem you face with the most appropriate solution.

What we know is that experts do a poor job in many settings, suggesting that you should try to supplement expert views with other approaches.

2. Seek diversity.

(Philip) Tetlock’s work shows that while expert predictions are poor overall, some are better than others. What distinguishes predictive ability is not who the experts are or what they believe, but rather how they think. Borrowing from Archilochus— through Isaiah Berlin— Tetlock sorted experts into hedgehogs and foxes. Hedgehogs know one big thing and try to explain everything through that lens. Foxes tend to know a little about a lot of things and are not married to a single explanation for complex problems. Tetlock finds that foxes are better predictors than hedgehogs. Foxes arrive at their decisions by stitching “together diverse sources of information,” lending credence to the importance of diversity. Naturally, hedgehogs are periodically right— and often spectacularly so— but do not predict as well as foxes over time. For many important decisions, diversity is the key at both the individual and collective levels.

3. Leverage technology to sidestep the squeeze when possible.

Flooded with candidates and aware of the futility of most interviews, Google decided to create algorithms to identify attractive potential employees. First, the company asked seasoned employees to fill out a three-hundred-question survey, capturing details about their tenure, their behavior, and their personality. The company then compared the survey results to measures of employee performance, seeking connections. Among other findings, Google executives recognized that academic accomplishments did not always correlate with on-the-job performance. This novel approach enabled Google to sidestep problems with ineffective interviews and to start addressing the discrepancy.

Learning when experts help and when they hurt can go a long way toward avoiding stupidity. This starts with identifying the type of problem you’re facing and then weighing the pros and cons of the various approaches to solving it.

Still curious? Follow up by reading Generalists vs. Specialists, Think Twice: Harnessing the Power of Counterintuition, and reviewing the work of Philip Tetlock on why how you think matters more than what you think.

13 Practical Ideas That Have Helped Me Make Better Decisions

This article is a collaboration between Mark Steed and myself. He did most of the work. Mark was a participant at the last Re:Think Decision Making event as well as a member of the Good Judgment Project. I asked him to put together something on making better predictions. This is the result.

We all face decisions. Sometimes we think hard about a specific decision, other times, we make decisions without thinking. If you’ve studied the genre you’ve probably read Taleb, Tversky, Kahneman, Gladwell, Ariely, Munger, Tetlock, Mauboussin and/or Thaler. These pioneers write a lot about “rationality” and “biases”.

Rationality dictates selecting the best choice among the available options. Biases of a cognitive or emotional nature creep in and can prevent us from identifying the “rational” choice. These biases can exist in our DNA or can be formed through life experiences. The authors mentioned above consider biases extensively, and, lucky for us, their writings are eye-opening and entertaining.

Rather than rehash what brighter minds have discussed, I’ll focus on practical ideas that have helped me make better decisions. I think of this as a list of “lessons learned (so far)” from my work in asset management and as a forecaster for the Good Judgment Project. I’ve held back on submitting this given the breadth and depth of the FS readers, but, rather than expect perfection, I wanted to put something on the table because I suspect many of you have useful ideas that will help move the conversation forward.

1. This is a messy business. Studying decision science can easily motivate self-loathing. There are over one hundred cognitive biases that might prevent us from making calculated and “rational” decisions. What, you can’t create a decision tree with 124 decision nodes, complete with assorted probabilities, in a split second? I asked around, and it turns out not many people can. Since there is no way to eliminate all the potential cognitive biases, and I don’t possess the mental faculties of Mr. Spock or C-3PO, I might as well live with the fact that some decisions will be more elegant than others.

2. We live and work in dynamic environments. Dynamic environments adapt; static environments do not. Financial markets, geopolitical events, team sports, etc., are examples of dynamic “environments” because relationships between agents evolve and problems are often unpredictable. Changes in one period are conditional on what happened in the previous period. Casinos, or rather the games inside them, are more representative of static environments. If you play roulette, your odds of winning are always the same, and it doesn’t matter what happened on the previous spin.

3. Good explanatory models are not necessarily good predictive models. Dynamic environments have a habit of desecrating rigid models. While blindly following an elegant model may be ill-advised, strong explanatory models are excellent guideposts when paired with sound judgment and intuition. Just as I’m not comfortable with the automatic pilot flying a plane without a human in the cockpit, I’m also not comfortable with a human flying a plane without the help of technology. It has been said before, people make models better and models make people better.

4. Instinct is not always irrational. Rules of thumb, otherwise known as heuristics, can provide better results than more complicated analytical techniques. Gerd Gigerenzer is the thought leader here, and his book Risk Savvy: How to Make Good Decisions is worth reading. Much of the literature despises heuristics, but he asserts that intuition can prove superior because optimization is sometimes mathematically impossible or exposed to sampling error. He often uses the example of Harry Markowitz, who won a Nobel Prize in Economics in 1990 for his work on Modern Portfolio Theory. Markowitz discovered a method for determining the “optimal” mix of assets. However, Markowitz himself did not follow his Nobel Prize-winning mean-variance theory but instead used a 1/N heuristic, spreading his dollars equally across N investments. He concluded that the 1/N strategy would perform better than mean-variance optimization unless the optimization model had 500 years of data to compete with. Our intuition is more likely to be accurate if it is preceded by rigorous analysis and introspection. And simple rules are more effective at communicating winning strategies in complex environments: when coaching a child’s soccer team, it is far easier to teach a few basic principles than to articulate the nuances of every possible situation.

5. Decisions are not evaluated in ways that help us reduce mistakes in the future. Our tendency is to critique only the decisions where the desired outcome was not achieved, while uncritically accepting positive outcomes even if luck, or another factor, produced the result. At the end of the day, I understand that all we care about are results, but good processes are more indicative of future success than good results.

6. Success is ill-defined. In some cases this is relatively straightforward. If the outcome is binary, either it did, or did not happen, success is easy to identify. But this is more difficult in situations where the outcome can take a range of potential values, or when individuals differ on what the values should be.

7. We should care a lot more about calibration. Confidence, not just the decision, should be recorded (and to be clear, decisions should be recorded). Next time you have a major decision, ask yourself how confident you are that the desired outcome will be achieved. Are you 50% confident? 90%? Write it down. This helps with calibration. For all decisions in which you are 50% confident, half should be successes, and you should be right nine out of ten times for all decisions in which you are 90% confident. If you are 100% confident, you should never be wrong. If you don’t know anything about a specific subject, then you should be no more confident than a coin flip. It’s amazing how often we assign high confidence to an event we know nothing about. Turns out this idea is pretty helpful. Let’s say someone brings an idea to you and you know nothing about it. Your default should be 50/50; you might as well flip a coin. Then you just need to worry about the costs and payouts. (A small calibration-tracking sketch follows this list.)

8. Probabilities are one thing, payouts are another. You might feel 50/50 about your chances, but you need to know your payout if you are right. This is where expected value comes in handy. It’s the probability of being right multiplied by the payout if you are right, plus the probability of being wrong multiplied by the cost: E = 0.50(x) + 0.50(y) for a 50/50 call. Say someone on your team has an idea for a project and you decide there is a 50% chance it succeeds; if it does, you double your money, and if it doesn’t, you lose what you invested. If the project requires $10mm, the expected outcome is 0.50 × $20mm + 0.50 × $0 = $10mm. If you repeat this process a number of times, approving only projects with a 2:1 payout and a 50% probability of success, you would likely end up with the same amount you started with. Binary outcomes that have a 50/50 probability should have a double-or-nothing payout. This is even more helpful given #7 above. If you were tracking this employee’s calibration, you would have a sense of whether their forecasts are accurate. As a team member or manager, you would want to know if a specific employee is 90% confident all the time but only 50% accurate. More importantly, you would want to know if a certain team member is usually right when they express 90% or 100% confidence. Use a Brier score to track colleagues, but provide an environment that encourages discussion and openness. (See the expected-value and Brier-score sketch after this list.)

9. We really are overconfident. Starting from the assumption that we are probably only 50% accurate is not a bad idea. Phil Tetlock, a professor at UPenn, team leader for the Good Judgment Project, and author of Expert Political Judgment: How Good Is It? How Can We Know?, suggests political pundits are about 53% accurate in their political forecasts, while CXO Advisory tracks investment gurus and finds they are, in aggregate, about 48% accurate. These are experts making predictions about their core area of expertise. Consider the rate of divorce in the U.S., currently around 40-50%, as additional evidence that sometimes we don’t know as much as we think. Experts are helpful in explaining a specific discipline, but they are less helpful in dynamic environments. If you need something fixed, like a car, a clock, or an appliance, then experts can be very helpful. Same for tax and accounting advice. It’s not because this stuff is simple; it’s because the environment is static.

10. Improving estimations of probabilities and payouts is about polishing our 1) subject matter expertise and 2) cognitive processing abilities. Learning more about a given subject reduces uncertainty and allows us to move from the lazy 50/50 forecast. Say you travel to Arizona and get stung by a scorpion. Rather than assume a 50% probability of death, you can do a quick internet search and learn that no one has died from a scorpion sting in Arizona since the 1960s. Overly simplistic, but you get the picture. Second, data needs to be interpreted in a cogent way. Let’s say you work in asset management and one of your portfolio managers has made three investments that returned -5%, -12%, and 22%. What can you say about the manager (other than that two of the three investments lost money)? Does the information allow you to claim the portfolio manager is a bad manager? Does it allow you to confidently predict his or her average rate of return? Unless you’ve had some statistics, it might not be entirely clear what conclusions you can draw. What if you flipped a coin three times and came up with tails on two of them? That wouldn’t seem so strange. Two-thirds is the same as 66%. If you tossed the coin one hundred times and got 66 tails, that would be a little more interesting. The more observations, the higher our confidence should be. A 95% confidence interval for the portfolio manager’s average return would be a range between roughly -43% and 45%. Is that enough to take action? (A sketch of this calculation follows the list.)

11. Bayesian analysis is more useful than we think. Bayesian thinking helps direct decisions given false/true positives and false/true negatives. It’s the probability of a hypothesis given some observed data. For example, what’s the likelihood of X (this new hire will place in the top 10% of the firm) given Y (they graduated from an Ivy League school)? A certain percentage of employees are top performers, some Ivy League grads will be top performers (others not), and some non-Ivy League grads will be top performers (others not). If I’m staring at a random employee trying to guess whether they are a top performer, all I have are the starting odds, and, if only the top 10% qualify, I know my chances are 1 in 10. But I can update my odds if supplied information about their education. Here’s another example: what is the likelihood a project will be successful (X) given it missed one of the first two milestones (Y)? There are lots of helpful resources online if you want to learn more, but think of it this way (hat tip to Kalid Azad at Better Explained): original odds × evidence adjustment = new odds. The actual equation is more complicated, but that is the intuition behind it. (See the Bayes sketch after this list.) Bayesian analysis has its naysayers. In the examples provided, the prior odds of success are known, or could easily be obtained, but this isn’t always true. Most of the time subjective prior probabilities are required, and this type of tomfoolery is generally discouraged. There are ways around that, but no time to explain it here.

12. A word about crowds. Is there a wisdom of crowds? Some say yes, others say no. My view is that crowds can be very useful if individual members of the crowd are able to vote independently or if the environment is such that there are few repercussions for voicing disagreement. Otherwise, I think signaling effects from seeing how others are “voting” is too much evolutionary force to overcome with sheer rational willpower. Our earliest ancestors ran when the rest of the tribe ran. Not doing so might have resulted in an untimely demise.

13. Analyze your own motives. Jonathan Haidt, author of The Righteous Mind: Why Good People Are Divided by Politics and Religion, is credited with teaching that logic isn’t used to find truth; it’s used to win arguments. Logic may not be the only source of truth (and I have no basis for that claim). Keep this in mind as it has to do with the role of intuition in decision making.
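
To make the calibration idea in #7 concrete, here is a minimal sketch in Python (my illustration, not Mark’s method): record each decision’s stated confidence and whether the desired outcome happened, then compare stated confidence with the realized hit rate in each bucket. The decision records below are invented.

```python
from collections import defaultdict

# Hypothetical decision journal: (stated confidence, did the desired outcome happen?)
decisions = [
    (0.9, True), (0.9, True), (0.9, False), (0.9, True),
    (0.5, True), (0.5, False), (0.5, False), (0.5, True),
]

# Group outcomes by stated confidence and compare with the realized hit rate.
buckets = defaultdict(list)
for confidence, outcome in decisions:
    buckets[confidence].append(outcome)

for confidence in sorted(buckets):
    outcomes = buckets[confidence]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"stated {confidence:.0%} -> actual {hit_rate:.0%} over {len(outcomes)} decisions")
```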
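
A small sketch of the two calculations in #8, expected value and the Brier score, using the $10mm project from the text; the colleague who is always 90% confident is hypothetical.

```python
def expected_value(p_right, payout_if_right, payout_if_wrong):
    """Probability-weighted payout, as described in #8."""
    return p_right * payout_if_right + (1 - p_right) * payout_if_wrong

# The project from the text: invest $10mm, 50% chance it doubles, 50% chance it goes to zero.
print(expected_value(0.50, 20, 0))   # -> 10.0 ($mm): on average you just get your money back

def brier_score(forecasts):
    """Mean squared error between forecast probabilities and outcomes (1 = happened, 0 = didn't).
    Lower is better; always saying 50% about coin flips scores 0.25."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Hypothetical colleague who is 90% confident every time but right only half the time.
print(brier_score([(0.9, 1), (0.9, 0), (0.9, 1), (0.9, 0)]))   # -> 0.41
```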
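
The article doesn’t say how the interval in #10 was computed; a standard two-sided 95% t-interval on the three returns, sketched below, reproduces roughly the quoted range (allowing for rounding).

```python
from math import sqrt
from statistics import mean, stdev

returns = [-0.05, -0.12, 0.22]        # the three observed returns from #10

n = len(returns)
m = mean(returns)
se = stdev(returns) / sqrt(n)         # standard error of the mean
t_crit = 4.303                        # t critical value: 95% two-sided, n - 1 = 2 degrees of freedom

low, high = m - t_crit * se, m + t_crit * se
print(f"mean {m:.1%}, 95% confidence interval roughly ({low:.0%}, {high:.0%})")
# -> mean 1.7%, 95% confidence interval roughly (-43%, 46%)
```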
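
Finally, a sketch of the odds-form update described in #11. The 10% prior comes from the text; the two likelihoods (how often top performers versus everyone else turn out to be Ivy grads) are made-up numbers, purely for illustration.

```python
def bayes_update(prior_prob, p_evidence_if_true, p_evidence_if_false):
    """Odds form of Bayes' rule: posterior odds = prior odds x likelihood ratio."""
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * (p_evidence_if_true / p_evidence_if_false)
    return posterior_odds / (1 + posterior_odds)   # convert odds back to a probability

# Prior from the text: 10% of employees are top performers. The likelihoods below are
# invented: assume 30% of top performers are Ivy grads vs. 10% of everyone else.
print(bayes_update(prior_prob=0.10,
                   p_evidence_if_true=0.30,
                   p_evidence_if_false=0.10))   # -> 0.25: the evidence lifts 10% to 25%
```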

Just a few closing thoughts.

We are pretty hard on ourselves. My process is to make the best decisions I can, realizing not all of them will be optimal. I have a method to track my decisions and to score how accurate I am. Sometimes I use heuristics, but I try to keep those within my area of competency, as Munger says. I don’t do lists of pros and cons because I feel like I’m just trying to convince myself either way.

If I have to make a big decision, in an unfamiliar area, I try to learn as much as I can about the issue on my own and from experts, assess how much randomness could be present, formulate my thesis, look for contradictory information, try and build downside protection (risking as little as possible) and watch for signals that may indicate a likely outcome. Many of my decisions have not worked out, but most of them have. As the world changes, so will my process, and I look forward to that.

Have something to say? Become a member: join the slack conversation and chat with Mark directly.