
The Psychology of Risk and Reward


An excerpt from The Aspirational Investor: Taming the Markets to Achieve Your Life's Goals that I think you'd enjoy.

Most of us have a healthy understanding of risk in the short term.

When crossing the street, for example, you would no doubt speed up to avoid an oncoming car that suddenly rounds the corner.

Humans are wired to survive: it’s a basic instinct that takes command almost instantly, enabling our brains to resolve ambiguity quickly so that we can take decisive action in the face of a threat.

The impulse to resolve ambiguity manifests itself in many ways and in many contexts, even those less fraught with danger. Glance at the (above) picture for no more than a couple of seconds. What do you see?

Some observers perceive the profile of a young woman with flowing hair, an elegant dress, and a bonnet. Others see the image of a woman stooped in old age with a wart on her large nose. Still others—in the gifted minority—are able to see both of the images simultaneously.

What is interesting about this illusion is that our brains instantly decide what image we are looking at, based on our first glance. If your initial glance was toward the vertical profile on the left-hand side, you were all but destined to see the image of the elegant young woman: it was just a matter of your brain interpreting every line in the picture according to the mental image that you had already formed, even though each line can be interpreted in two different ways. Conversely, if your first glance fell on the central dark horizontal line that emphasizes the mouth and chin, your brain quickly formed an image of the older woman.

Regardless of your interpretation, your brain wasn’t confused. It simply decided what the picture was and filled in the missing pieces. Your brain resolved ambiguity and extracted order from conflicting information.

What does this have to do with decision making? Every bit of information can be interpreted differently according to our perspective. Ashvin Chhabra applies this insight to investing; I suggest reframing it in the context of decision making in general.

Every trade has a seller and a buyer: your state of mind is paramount. If you are in a risk-averse mental framework, then you are likely to interpret a further fall in stocks as additional confirmation of your sell bias. If instead your framework is positive, you will interpret the same event as a buying opportunity.

The challenge of investing is compounded by the fact that our brains, which excel at resolving ambiguity in the face of a threat, are less well equipped to navigate the long term intelligently. Since none of us can predict the future, successful investing requires planning and discipline.

Unfortunately, when reason is in apparent conflict with our instincts—about markets or a “hot stock,” for example—it is our instincts that typically prevail. Our “reptilian brain” wins out over our “rational brain,” as it so often does in other facets of our lives. And as we have seen, investors trade too frequently, and often at the wrong time.

One way our brains resolve conflicting information is to seek out safety in numbers. In the animal kingdom, this is called “moving with the herd,” and it serves a very important purpose: helping to ensure survival. Just as a buffalo will try to stay with the herd in order to minimize its individual vulnerability to predators, we tend to feel safer and more confident investing alongside equally bullish investors in a rising market, and we tend to sell when everyone around us is doing the same. Even the so-called smart money falls prey to a herd mentality: one study, aptly titled “Thy Neighbor’s Portfolio,” found that professional mutual fund managers were more likely to buy or sell a particular stock if other managers in the same city were also buying or selling.

This comfort is costly. The surge in buying activity and the resulting bullish sentiment is self-reinforcing, propelling markets to react even faster. That leads to overvaluation and the inevitable crash when sentiment reverses. As we shall see, such booms and busts are characteristic of all financial markets, regardless of size, location, or even the era in which they exist.

Even though the role of instinct and human emotions in driving speculative bubbles has been well documented in popular books, newspapers, and magazines for hundreds of years, these factors were virtually ignored in conventional financial and economic models until the 1970s.

This is especially surprising given that, in 1952, a young PhD student from the University of Chicago, Harry Markowitz, published two very important papers. The first, entitled “Portfolio Selection,” published in the Journal of Finance, led to the creation of what we call modern portfolio theory, together with the widespread adoption of its important ideas such as asset allocation and diversification. It earned Harry Markowitz a Nobel Prize in Economics.

The second paper, entitled “The Utility of Wealth” and published in the prestigious Journal of Political Economy, was about the propensity of people to hold insurance (safety) and to buy lottery tickets at the same time. It delved deeper into the psychological aspects of investing but was largely forgotten for decades.

The field of behavioral finance really came into its own through the pioneering work of two academic psychologists, Amos Tversky and Daniel Kahneman, who challenged conventional wisdom about how people make decisions involving risk. Their work garnered Kahneman the Nobel Prize in Economics in 2002. Behavioral finance and neuroeconomics are relatively new fields of study that seek to identify and understand human behavior and decision making with regard to choices involving trade-offs between risk and reward. Of particular interest are the human biases that prevent individuals from making fully rational financial decisions in the face of uncertainty.

As behavioral economists have documented, our propensity for herd behavior is just the tip of the iceberg. Kahneman and Tversky, for example, showed that people who were asked to choose between a certain loss and a gamble, in which they could either lose more money or break even, would tend to choose to double down (that is, to gamble to avoid the prospect of losses), a behavior the authors called “loss aversion.” Building on this work, Hersh Shefrin and Meir Statman, professors at Santa Clara University's Leavey School of Business, have linked the propensity for loss aversion to investors’ tendency to hold losing investments too long and to sell winners too soon. They called this bias the disposition effect.
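
To make the trade-off concrete, here is a minimal sketch of the kind of choice Kahneman and Tversky studied. The dollar amounts and probabilities below are illustrative assumptions, not the figures from their experiments: a certain loss versus a gamble with a worse expected value that most people nonetheless prefer.

```python
# Illustrative loss-aversion choice (the numbers are assumptions, not
# Kahneman and Tversky's exact figures): a certain loss of $750 versus an
# 80% chance of losing $1,000 and a 20% chance of breaking even.
import random

certain_loss = -750
p_loss, gamble_loss = 0.80, -1000

# The gamble is worse on average...
expected_gamble = p_loss * gamble_loss  # the break-even branch contributes 0
print(f"Certain loss:          {certain_loss}")
print(f"Gamble expected value: {expected_gamble:.0f}")   # -800

# ...and a simulation converges to that average, yet loss aversion leads
# most people to take the gamble rather than lock in the smaller sure loss.
trials = [gamble_loss if random.random() < p_loss else 0 for _ in range(100_000)]
print(f"Simulated average:     {sum(trials) / len(trials):.0f}")
```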

The lengthy list of behaviorally driven market effects often converges in an investor’s tale of woe. Overconfidence causes investors to hold concentrated portfolios and to trade excessively, behaviors that can destroy wealth. The illusion of control causes investors to overestimate the probability of success and underestimate risk because of familiarity—for example, causing investors to hold too much employer stock in their 401(k) plans, resulting in under-diversification. Cognitive dissonance causes us to ignore evidence that is contrary to our opinions, leading to myopic investing behavior. And the representativeness bias leads investors to assess risk and return based on superficial characteristics—for example, by assuming that shares of companies that make products you like are good investments.

Several other key behavioral biases come into play in the realm of investing. Framing can cause investors to make a decision based on how the question is worded and the choices presented. Anchoring often leads investors to unconsciously create a reference point, say for securities prices, and then adjust decisions or expectations with respect to that anchor. This bias might impede your ability to sell a losing stock, for example, in the false hope that you can earn your money back. Similarly, the endowment bias might lead you to overvalue a stock that you own and thus hold on to the position too long. And regret aversion may lead you to avoid taking a tough action for fear that it will turn out badly. This can lead to decision paralysis in the wake of a market crash, even though, statistically, it is a good buying opportunity.

Behavioral finance has generated plenty of debate. Some observers have hailed the field as revolutionary; others bemoan the discipline’s seeming lack of a transcendent, unifying theory. This much is clear: behavioral finance treats biases as mistakes that, in academic parlance, prevent investors from thinking “rationally” and cause them to hold “suboptimal” portfolios.

But is that really true? In investing, as in life, the answer is more complex than it appears. Effective decision making requires us to balance our “reptilian brain,” which governs instinctive thinking, with our “rational brain,” which is responsible for strategic thinking. Instinct must integrate with experience.

Put another way, behavioral biases are nothing more than a series of complex trade-offs between risk and reward. When the stock market is taking off, for example, a failure to rebalance by selling winners is considered a mistake. The same goes for a failure to add to a position in a plummeting market. That’s because conventional finance theory assumes markets to be inherently stable, or “mean-reverting,” so most deviations from the historical rate of return are viewed as fluctuations that will revert to the mean, or self-correct, over time.

But what if a precipitous market drop is slicing into your peace of mind, affecting your sleep, your relationships, and your professional life? What if that assumption about markets reverting to the mean doesn’t hold true and you cannot afford to hold on for an extended period of time? In both cases, it might just be “rational” to sell and accept your losses precisely when investment theory says you should be buying. A concentrated bet might also make sense, if you possess the skill or knowledge to exploit an opportunity that others might not see, even if it flies in the face of conventional diversification principles.

Of course, the time to create decision rules for extreme market scenarios and concentrated bets is when you are building your investment strategy, not in the middle of a market crisis or at the moment a high-risk, high-reward opportunity from a former business partner lands on your desk and gives you an adrenaline jolt. A disciplined process for managing risk in relation to a clear set of goals will enable you to use the insights offered by behavioral finance to your advantage, rather than fall prey to the common pitfalls. This is one of the central insights of the Wealth Allocation Framework. But before we can put these insights to practical use, we need to understand the true nature of financial markets.

Countering the Inside View and Making Better Decisions


You can reduce the number of mistakes you make by thinking about problems more clearly.

In his book Think Twice: Harnessing the Power of Counterintuition, Michael Mauboussin discusses how we can “fall victim to simplified mental routines that prevent us from coping with the complex realities inherent in important judgment calls.” One of those routines is the inside view, which we're going to talk about in this article. But first, let's get a bit of context.

No one wakes up thinking, “I am going to make bad decisions today.” Yet we all make them. What is particularly surprising is that some of the biggest mistakes are made by people who are, by objective standards, very intelligent. Smart people make big, dumb, and consequential mistakes.

[…]

Mental flexibility, introspection, and the ability to properly calibrate evidence are at the core of rational thinking and are largely absent on IQ tests. Smart people make poor decisions because they have the same factory settings on their mental software as the rest of us, and that software isn’t designed to cope with many of today’s problems.

We don't spend enough time thinking about and learning from the process. Generally, we're pretty indifferent to the process by which we make decisions.

… typical decision makers allocate only 25 percent of their time to thinking about the problem properly and learning from experience. Most spend their time gathering information, which feels like progress and appears diligent to superiors. But information without context is falsely empowering.

That reminds me of what Daniel Kahneman wrote in Thinking, Fast and Slow:

A remarkable aspect of your mental life is that you are rarely stumped … The normal state of your mind is that you have intuitive feelings and opinions about almost everything that comes your way. You like or dislike people long before you know much about them; you trust or distrust strangers without knowing why; you feel that an enterprise is bound to succeed without analyzing it.

So we're not really gathering information so much as seeking support for our existing intuition, the very thing a good decision process should help root out.

***
Ego-Induced Blindness

One prevalent error we make is that we tend to favour the inside view over the outside view.

An inside view considers a problem by focusing on the specific task and by using information that is close at hand, and makes predictions based on that narrow and unique set of inputs. These inputs may include anecdotal evidence and fallacious perceptions. This is the approach that most people use in building models of the future and is indeed common for all forms of planning.

[…]

The outside view asks if there are similar situations that can provide a statistical basis for making a decision. Rather than seeing a problem as unique, the outside view wants to know if others have faced comparable problems and, if so, what happened. The outside view is an unnatural way to think, precisely because it forces people to set aside all the cherished information they have gathered.

When your inside view is more positive than the outside view, you're effectively making an argument against the base rate. You're saying (knowingly or, more likely, unknowingly) that this time is different. Our brains are all too happy to help us construct this argument.

Mauboussin argues that we embrace the inside view for a few primary reasons. First, we're optimistic by nature. Second, there's the “illusion of optimism” (we see our future as brighter than that of others). Finally, there's the illusion of control (we think that chance events are subject to our control).

One interesting point is that while we're bad at looking at the outside view when it comes to ourselves, we're better at it when it comes to other people.

In fact, the planning fallacy embodies a broader principle. When people are forced to look at similar situations and see the frequency of success, they tend to predict more accurately. If you want to know how something is going to turn out for you, look at how it turned out for others in the same situation. Daniel Gilbert, a psychologist at Harvard University, ponders why people don’t rely more on the outside view, “Given the impressive power of this simple technique, we should expect people to go out of their way to use it. But they don’t.” The reason is most people think of themselves as different, and better, than those around them.

So it's mostly ego. I'm better than the people tackling this problem before me. We see the differences between situations and use those as rationalizations as to why things are different this time.

Consider this:

We incorrectly think that differences are more valuable than similarities.

After all, anyone can see what’s the same but it takes true insight to see what’s different, right? We’re all so busy trying to find differences that we forget to pay attention to what is the same.

***
How to Incorporate the Outside View into Your Decisions

In Think Twice, Mauboussin distills the work of Kahneman and Tversky into four steps and adds some commentary.

1. Select a reference class.

Find a group of situations, or a reference class, that is broad enough to be statistically significant but narrow enough to be useful in analyzing the decision that you face. The task is generally as much art as science, and is certainly trickier for problems that few people have dealt with before. But for decisions that are common—even if they are not common for you—identifying a reference class is straightforward. Mind the details. Take the example of mergers and acquisitions. We know that the shareholders of acquiring companies lose money in most mergers and acquisitions. But a closer look at the data reveals that the market responds more favorably to cash deals and those done at small premiums than to deals financed with stock at large premiums. So companies can improve their chances of making money from an acquisition by knowing what deals tend to succeed.

2. Assess the distribution of outcomes.

Once you have a reference class, take a close look at the rate of success and failure. … Study the distribution and note the average outcome, the most common outcome, and extreme successes or failures.

[…]

Two other issues are worth mentioning. The statistical rate of success and failure must be reasonably stable over time for a reference class to be valid. If the properties of the system change, drawing inference from past data can be misleading. This is an important issue in personal finance, where advisers make asset allocation recommendations for their clients based on historical statistics. Because the statistical properties of markets shift over time, an investor can end up with the wrong mix of assets.

Also keep an eye out for systems where small perturbations can lead to large-scale change. Since cause and effect are difficult to pin down in these systems, drawing on past experiences is more difficult. Businesses driven by hit products, like movies or books, are good examples. Producers and publishers have a notoriously difficult time anticipating results, because success and failure are based largely on social influence, an inherently unpredictable phenomenon.

3. Make a prediction.

With the data from your reference class in hand, including an awareness of the distribution of outcomes, you are in a position to make a forecast. The idea is to estimate your chances of success and failure. For all the reasons that I’ve discussed, the chances are good that your prediction will be too optimistic.

Sometimes when you find the right reference class, you see the success rate is not very high. So to improve your chance of success, you have to do something different than everyone else.

4. Assess the reliability of your prediction and fine-tune.

How good we are at making decisions depends a great deal on what we are trying to predict. Weather forecasters, for instance, do a pretty good job of predicting what the temperature will be tomorrow. Book publishers, on the other hand, are poor at picking winners, with the exception of those books from a handful of best-selling authors. The worse the record of successful prediction is, the more you should adjust your prediction toward the mean (or other relevant statistical measure). When cause and effect is clear, you can have more confidence in your forecast.
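
Mauboussin's four steps lend themselves to a simple calculation. Below is a minimal sketch with made-up numbers: the reference-class outcomes, the inside-view forecast, and the `reliability` weight are all assumptions you would replace with your own data. The shrinkage rule in the last step (move your estimate toward the class mean in proportion to how predictable the activity is) is one standard way to operationalize step 4.

```python
# A toy outside-view adjustment (all numbers are assumptions for illustration).
from statistics import mean, median, stdev

# Step 1: a hypothetical reference class, e.g. returns of comparable deals.
reference_class = [0.05, -0.10, 0.12, 0.30, -0.25, 0.08, 0.02, -0.04]

# Step 2: assess the distribution of outcomes.
base = mean(reference_class)
print(f"mean={base:+.3f}  median={median(reference_class):+.3f}  "
      f"stdev={stdev(reference_class):.3f}")

# Step 3: make a prediction (likely too optimistic, as Mauboussin warns).
inside_view = 0.25

# Step 4: shrink toward the mean. `reliability` is how well outcomes can be
# predicted in this domain: 0 means pure luck (use the base rate alone),
# 1 means fully predictable (keep your forecast unchanged).
reliability = 0.30
adjusted = base + reliability * (inside_view - base)
print(f"adjusted forecast: {adjusted:+.3f}")
```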

***

The main lesson we can take from this is that we tend to focus on what's different whereas the best decisions often focus on just the opposite: what's the same. While this situation seems a little different, it's almost always the same.

As Charlie Munger has said: “if you notice, the plots are very similar. The same plot comes back time after time.”

Particulars may vary but, unless those particulars are the variables that govern the outcome of the situation, the pattern remains. If we're going to focus on what's different rather than what's the same, we'd best be sure the variables we're clinging to matter.

Certainty Is an Illusion

We all try to avoid uncertainty, even if it means being wrong. We take comfort in certainty and we demand it of others, even when we know it's impossible.

Gerd Gigerenzer argues in Risk Savvy: How to Make Good Decisions that life would be pretty dull without uncertainty.

If we knew everything about the future with certainty, our lives would be drained of emotion. No surprise and pleasure, no joy or thrill— we knew it all along. The first kiss, the first proposal, the birth of a healthy child would be about as exciting as last year’s weather report. If our world ever turned certain, life would be mind-numbingly dull.

***
The Illusion of Certainty

We demand certainty of others. We ask our bankers, doctors, and political leaders (among others) to give it to us. What they deliver, however, is the illusion of certainty. We feel comfortable with this.

Many of us smile at old-fashioned fortune-tellers. But when the soothsayers work with computer algorithms rather than tarot cards, we take their predictions seriously and are prepared to pay for them. The most astounding part is our collective amnesia: Most of us are still anxious to see stock market predictions even if they have been consistently wrong year after year.

Technology changes how we see things – it amplifies the illusion of certainty.

When an astrologer calculates an expert horoscope for you and foretells that you will develop a serious illness and might even die at age forty-nine, will you tremble when the date approaches? Some 4 percent of Germans would; they believe that an expert horoscope is absolutely certain.

But when technology is involved, the illusion of certainty is amplified. Forty-four percent of people surveyed think that the result of a screening mammogram is certain. In fact, mammograms fail to detect about ten percent of cancers, and the younger the women being tested, the more error-prone the results, because their breasts are denser.

“Not understanding a new technology is one thing,” Gigerenzer writes, “believing that it delivers certainty is another.”

It's best to remember Ben Franklin: “In this world nothing can be said to be certain, except death and taxes.”

***
The Security Blanket

Where does our need for certainty come from?

People with a high need for certainty are more prone to stereotypes than others and are less inclined to remember information that contradicts their stereotypes. They find ambiguity confusing and have a desire to plan out their lives rationally. First get a degree, a car, and then a career, find the most perfect partner, buy a home, and have beautiful babies. But then the economy breaks down, the job is lost, the partner has an affair with someone else, and one finds oneself packing boxes to move to a cheaper place. In an uncertain world, we cannot plan everything ahead. Here, we can only cross each bridge when we come to it, not beforehand. The very desire to plan and organize everything may be part of the problem, not the solution. There is a Yiddish joke: “Do you know how to make God laugh? Tell him your plans.”

To be sure, illusions have their function. Small children often need security blankets to soothe their fears. Yet for the mature adult, a high need for certainty can be a dangerous thing. It prevents us from learning to face the uncertainty pervading our lives. As hard as we try, we cannot make our lives risk-free the way we make our milk fat-free.

At the same time, a psychological need is not entirely to blame for the illusion of certainty. Manufacturers of certainty play a crucial role in cultivating the illusion. They delude us into thinking that our future is predictable, as long as the right technology is at hand.

***
Risk and Uncertainty

Two magnificently dressed young women sit upright on their chairs, calmly facing each other. Yet neither takes notice of the other. Fortuna, the fickle, wheel-toting goddess of chance, sits blindfolded on the left while human figures desperately climb, cling to, or tumble off the wheel in her hand. Sapientia, the calculating and vain deity of science, gazes into a hand-mirror, lost in admiration of herself. These two allegorical figures depict a long-standing polarity: Fortuna brings good or bad luck, depending on her mood, but science promises certainty.

Fortuna, the wheel-toting goddess of chance (left), facing Sapientia, the divine goddess of science (right).

This sixteenth-century woodcut was carved a century before one of the greatest revolutions in human thinking, the “probabilistic revolution,” colloquially known as the taming of chance. Its domestication began in the mid-seventeenth century. Since then, Fortuna’s opposition to Sapientia has evolved into an intimate relationship, not without attempts to snatch each other’s possessions. Science sought to liberate people from Fortuna’s wheel, to banish belief in fate, and replace chances with causes. Fortuna struck back by undermining science itself with chance and creating the vast empire of probability and statistics. After their struggles, neither remained the same: Fortuna was tamed, and science lost its certainty.

I explain more on the difference between risk and uncertainty here, but this diagram helps simplify things.
[Diagram: certainty, risk, and uncertainty]

***
The value of heuristics

The twilight of uncertainty comes in different shades and degrees. Beginning in the seventeenth century, the probabilistic revolution gave humankind the skills of statistical thinking to triumph over Fortuna, but these skills were designed for the palest shade of uncertainty, a world of known risk, in short, risk. I use this term for a world where all alternatives, consequences, and probabilities are known. Lotteries and games of chance are examples. Most of the time, however, we live in a changing world where some of these are unknown: where we face unknown risks, or uncertainty. The world of uncertainty is huge compared to that of risk. … In an uncertain world, it is impossible to determine the optimal course of action by calculating the exact risks. We have to deal with “unknown unknowns.” Surprises happen. Even when calculation does not provide a clear answer, however, we have to make decisions. Thankfully we can do much better than frantically clinging to and tumbling off Fortuna’s wheel. Fortuna and Sapientia had a second brainchild alongside mathematical probability, which is often passed over: rules of thumb, known in scientific language as heuristics.

***
How decisions change based on risk/uncertainty

When making decisions, two sets of mental tools are required:
1. RISK: If risks are known, good decisions require logic and statistical thinking.
2. UNCERTAINTY: If some risks are unknown, good decisions also require intuition and smart rules of thumb.

Most of the time we need a combination of the two.

***

Risk Savvy: How to Make Good Decisions is a great read throughout.

Michael Mauboussin Explains How Cognitive Biases Lead You Astray

In this video, Michael Mauboussin, Chief Investment Strategist at Legg Mason, shares three ways cognitive biases can lead your brain astray.

One tip Mauboussin offers to counter hindsight bias is to keep a decision-making journal.

Michael Mauboussin is the author of More Than You Know: Finding Financial Wisdom in Unconventional Places and, more recently, Think Twice: Harnessing the Power of Counterintuition.

Future Babble: Why expert predictions fail and why we believe them anyway

Future Babble has come out to mixed reviews. I think the book would interest anyone seeking wisdom.

Here are some of my notes:

First a little background: Predictions fail because the world is too complicated to be predicted with accuracy and we're wired to avoid uncertainty. However, we shouldn't blindly believe experts. The world is divided into two: foxes and hedgehogs. The fox knows many things whereas the hedgehog knows one big thing. Foxes beat hedgehogs when it comes to making predictions.

  • What we should ask is why, in a non-linear world, we would think oil prices can be predicted. Practically since the dawn of the oil industry in the nineteenth century, experts have been forecasting the price of oil. They've been wrong ever since. And yet this dismal record hasn't caused us to give up on the enterprise of forecasting oil prices.
  • One of psychology's fundamental insights, wrote psychologist Daniel Gilbert, is that judgements are generally the products of non-conscious systems that operate quickly, on the basis of scant evidence, and in a routine manner, and then pass their hurried approximations to consciousness, which slowly and deliberately adjusts them. … (one consequence of this is that) Appearance equals reality. In the ancient environment in which our brains evolved, that was a good rule, which is why it became hard-wired into the brain and remains there to this day. (an example of this) As psychologists have shown, people often stereotype “baby-faced” adults as innocent, helpless, and needy.
  • We have a hard time with randomness. If we try, we can understand it intellectually, but as countless experiments have shown, we don't get it intuitively. This is why someone who plunks one coin after another into a slot machine without winning will have a strong and growing sense—the gambler's fallacy—that a jackpot is “due,” even though every result is random and therefore unconnected to what came before. … and people believe that a sequence of random coin tosses that goes “THTHHT” is far more likely than the sequence “THTHTH” even though they are equally likely. (A quick simulation after these notes confirms the two sequences are equally likely.)
  • People are particularly disinclined to see randomness as the explanation for an outcome when their own actions are involved. Gamblers rolling dice tend to concentrate and throw harder for higher numbers, softer for lower. Psychologists call this the “illusion of control.” … they also found the illusion is stronger when it involves prediction. In a sense, the “illusion of control” should be renamed the “illusion of prediction.”
  • Overconfidence is a universal human trait closely related to an equally widespread phenomenon known as “optimism bias.” Ask smokers about the risk of getting lung cancer from smoking and they'll say it's high. But their risk? Not so high. … The evolutionary advantage of this bias is obvious: It encourages people to take action and makes them more resilient in the face of setbacks.
  • … How could so many experts have been so wrong? … A crucial component of the answer lies in psychology. For all the statistics and reasoning involved, the experts derived their judgements, to one degree or another, from what they felt to be true. And in doing so they were fooled by a common bias. … This tendency to take current trends and project them into the future is the starting point of most attempts to predict. Very often, it's also the end point. That's not necessarily a bad thing. After all, tomorrow typically is like today. Current trends do tend to continue. But not always. Change happens. And the further we look into the future, the more opportunity there is for current trends to be modified, bent, or reversed. Predicting the future by projecting the present is like driving with no hands. It works while you are on a long stretch of straight road but even a gentle curve is trouble, and a sharp turn always ends in a flaming wreck.
  • When people attempt to judge how common something is—or how likely it is to happen in the future—they attempt to think of an example of that thing. If an example is recalled easily, it must be common. If it's harder to recall, it must be less common. … Again, this is not a conscious calculation. The “availability heuristic” is a tool of the unconscious mind.
  • “deviating too far from consensus leaves one feeling potentially ostracized from the group, with the risk that one may be terminated.” (Robert Shiller) … It's tempting to think that only ordinary people are vulnerable to conformity, that esteemed experts could not be so easily swayed. Tempting, but wrong. As Shiller demonstrated, “groupthink” is very much a disease that can strike experts. In fact, psychologist Irving Janis coined the term “groupthink” to describe expert behavior. In his 1972 classic, Victims of Groupthink, Janis investigated four high-level disasters: the defence of Pearl Harbour, the Bay of Pigs invasion, and the escalation of the wars in Korea and Vietnam, and demonstrated that conformity among highly educated, skilled, and successful people working in their fields of expertise was a root cause in each case.
  • (On corporate use of scenario planning) … Scenarios are not predictions, emphasizes Peter Schwartz, the guru of scenario planning. “They are tools for testing choices.” The idea is to have a clever person dream up a number of very different futures, usually three to four. … Managers then consider the implications of each, forcing them out of the rut of the status quo, and thinking about what they would do if confronted with real change. The ultimate goal is to make decisions that would stand up well in a wide variety of contexts. No one denies there may be some value in such exercises. But how much value? The consultants who offer scenario planning services are understandably bullish, but ask them for evidence and they typically point to examples of scenarios that accurately foreshadowed the future. That is silly, frankly. For one thing, it contradicts their claim that scenarios are not predictions; for another, all the misses would have to be considered, and the misses vastly outnumber the hits. … Consultants also cite the enormous popularity of scenario planning as proof of its enormous value… Lack of evidence aside, there are more disturbing reasons to be wary of scenarios. Remember that what drives the availability heuristic is not how many examples the mind can recall but how easily they are recalled. … and what are scenarios? Vivid, colourful, dramatic stories. Nothing could be easier to remember or recall. And so being exposed to a dramatic scenario about (whatever)… will make the depicted events feel much more likely to happen.
  • (on not having control) At its core, torture is a process of psychological destruction, and that process almost always begins with the torturer explicitly telling the victim he is powerless. “I decide when you can eat and sleep. I decide when you suffer, how you suffer, if it will end. I decide if you live or die.” … Knowing what will happen in the future is a form of control, even if we cannot change what will happen. … Uncertainty is potent… people who experienced the mild-but-unpredictable shocks experienced much more fear than those who got the strong-but-predictable shocks.
  • Our profound aversion to uncertainty helps explain what would otherwise be a riddle: Why do people pay so much attention to dark and scary predictions? Why do gloomy forecasts so often outnumber optimistic predictions, take up more media space, and sell more books? Part of this predilection for gloom is simply an outgrowth of what is sometimes called negativity bias: our attention is drawn more swiftly by bad news or images, and we are more likely to remember them than cheery information…. People whose brains gave priority to bad news were much less likely to be eaten by lions or die some other unpleasant death. … (negative) predictions are supported by our intuitive pessimism, so they feel right to us. And that conclusion is bolstered by our attraction to certainty. As strange as it sounds, we want to believe the expert predicting a dark future: knowing is less tormenting than suspecting. Certainty is always preferable to uncertainty, even when what's certain is disaster.
  • Researchers have also shown that financial advisors who express considerable confidence in their stock forecasts are more trusted than those who are less confident, even when their objective records are the same. … This “confidence heuristic” like the availability heuristics, isn't necessarily a conscious decision path. We may not actually say to ourselves “she's so sure of herself she must be right”…
  • (on our love for stories) Confirmation bias also plays a critical role for the very simple reason that none of us is a blank slate. Every human brain is a vast warehouse of beliefs and assumptions about the world and how it works. Psychologists call these “schemas.” We love stories that fit our schemas; they're the cognitive equivalent of beautiful music. But a story that doesn't fit – a story that contradicts basic beliefs – is dissonant.
  • … What makes this mass delusion possible is the different emphasis we put on predictions that hit and those that miss. We ignore misses, even when they lie scattered by the dozen at our feet; we celebrate hits, even when we have to hunt for them and pretend there was more to them than luck.
  • By giving us the sense that we should have predicted what is now the present, or even that we actually did predict it when we did not, it strongly suggests that we can predict the future. This is an illusion, and yet it seems only logical – which makes it a particularly persuasive illusion.
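
The coin-toss point above is easy to verify. Here is a quick simulation (a fair coin is the only assumption) showing that “THTHHT” and “THTHTH” occur equally often, even though the second looks too orderly to be random.

```python
# Both six-flip sequences have probability 0.5**6 ≈ 0.0156.
import random

def flip6() -> str:
    return "".join(random.choice("HT") for _ in range(6))

n = 500_000
counts = {"THTHHT": 0, "THTHTH": 0}
for _ in range(n):
    seq = flip6()
    if seq in counts:
        counts[seq] += 1

for seq, c in counts.items():
    print(f"{seq}: observed {c / n:.5f}  (theory: {0.5 ** 6:.5f})")
```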

If you like the notes you should buy Future Babble. Like the book summaries? Check out my notes from Adapt: Why Success Always Starts With Failure, The Ambiguities of Experience, On Leadership.


Can You Lose On Purpose? The Role of Skill and Luck

An excellent paper by Michael Mauboussin, author of Think Twice, on the role of skill and luck. Mauboussin talks about reversion to the mean, insensitivity to sample size, and transitivity.

There's a simple and elegant test of whether there is skill in an activity: ask whether you can lose on purpose. If you can't lose on purpose, or if it's really hard, luck likely dominates that activity. If it's easy to lose on purpose, skill is more important.

In this report, we will discuss why unraveling skill and luck is so important, provide a framework for thinking about the contribution of skill and luck, offer some methods to help sort skill and luck in various domains, and define the key features of skill in the investment business.

Reversion to the mean
Your initial reaction may be that more luck means less predictable outcomes—which is true. But there is also an important insight that follows from knowing the relative contribution of skill and luck: the ratio of skill to luck shapes the rate of reversion to the mean. Specifically, the outcomes of activities laden with luck revert to the mean faster than activities with little luck. Reversion to the mean is present in all activities that have even a dash of luck, but the rapidity of the process hinges largely on how important a role luck plays. In fact, you can infer the relationship between skill and luck by analyzing past patterns of reversion to the mean….

You can think of skill as a drag on the reversion process for the activities that combine skill and luck. An above-average shooter in basketball, for example, may go through good or bad stretches of shooting. But she will not revert back to the average over time because of her skill. The same would be true of a below-average player. Luck may help or hinder in the short term, but reversion is limited because of the level of skill.

Humans, as natural pattern seekers, have a very difficult time dealing with reversion to the mean. The main challenge with the concept is that change within the system occurs at the same time as no change to the system. Change and no change operate side-by-side, causing a lot of confusion. The change part is reversion to the mean. As we will see, the results of the groups that have done really well or poorly in one period tend to move toward the average in future periods. Most people find it hard to internalize reversion to the mean because it is more natural to extrapolate the performance of the recent past. So if a stock, or an asset class, has done well the natural inclination is to assume it will continue to do well and to act accordingly….
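
Mauboussin's claim (the skill-to-luck ratio sets the speed of reversion) can be seen in a toy simulation. The additive model below, outcome = skill + luck, is my assumption for illustration, not the paper's method: skill persists between periods while luck is redrawn, and the more luck matters, the more the top decile's edge evaporates in the next period.

```python
# Toy model: outcome = skill + luck_weight * luck. Skill persists; luck doesn't.
import random

def retained_edge(luck_weight: float, n: int = 50_000) -> float:
    skill = [random.gauss(0, 1) for _ in range(n)]
    period1 = [s + luck_weight * random.gauss(0, 1) for s in skill]
    period2 = [s + luck_weight * random.gauss(0, 1) for s in skill]
    # Take the top 10% of period-1 performers and see how they do next period.
    top = sorted(range(n), key=lambda i: period1[i], reverse=True)[: n // 10]
    avg1 = sum(period1[i] for i in top) / len(top)
    avg2 = sum(period2[i] for i in top) / len(top)
    return avg2 / avg1  # share of the period-1 edge that survives

for w in (0.2, 1.0, 3.0):  # mostly skill, even mix, mostly luck
    print(f"luck weight {w}: top decile keeps {retained_edge(w):.0%} of its edge")
```

With little luck the top decile keeps most of its edge; when luck dominates, it reverts almost all the way to the mean.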

Insensitivity to sample size
About 40 years ago, Amos Tversky and Daniel Kahneman identified a common decision-making bias they called the “belief in the law of small numbers.”

The idea is that we tend to view a relatively small sample of outcomes of a population as representative of the broad population of outcomes. The magnitude of this mistake grows larger as the luck-to-skill ratio rises….There are a number of factors that shift activities toward the luck side of the continuum. One, naturally, is simply sample size. A small number of observations make it very difficult to sort skill and luck….People often assume that building sample size is a matter of time. Within an activity that is true, but what really matters is the number of trials. Some activities pack a lot of trials into a short time, and others reflect a few trials over a long time. You can evaluate a high-frequency trading strategy a lot quicker than a buy-and-hold investment approach.
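
A toy example (my own, not from the paper) makes the sample-size point vivid: a player with a true 53% success rate versus one at 50%. Over a handful of trials the weaker player frequently looks as good or better; only a large number of trials reliably separates skill from luck.

```python
# How often does a 50% player match or beat a 53% player over n trials?
import random

def wins(p: float, n: int) -> int:
    return sum(random.random() < p for _ in range(n))

reps = 1_000
for n in (20, 200, 2000):
    upsets = sum(wins(0.50, n) >= wins(0.53, n) for _ in range(reps)) / reps
    print(f"n={n:>4}: weaker player looks at least as good {upsets:.0%} of the time")
```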

You don't make money by being smarter than the bettor next to you or by knowing which horse has the best odds of winning. You make money when the collective misprices the odds. The important idea is that you are competing not against other individuals, but against the wisdom of the crowd.

The illusion of control also comes into play here. This illusion says that when we perceive ourselves to be in control of a situation, we deem our probabilities of success to be higher than what chance dictates. Saying it differently, when we are in control we think our ratio of skill to luck is higher than it really is. Remarkably, this illusion even holds for activities that are all chance. For example, some people throw dice hard when they want a high number, and gently when they seek a small one. Like the belief in small numbers, this illusion is not a problem in high skill, low luck activities but becomes more problematic as the contribution of luck grows. Here again, our minds are poor at differentiating between activities, so what works in one setting fails miserably in another.

The best way to ensure satisfactory long-term results is to constantly improve skill, which often means enhancing a process. Gaining skill requires deliberate practice, which has a very specific meaning: it includes actions designed to improve performance, has repeatable tasks, incorporates high-quality feedback, and is not much fun.

Transitivity
Transitivity holds when there's a clear pecking order of skill and the better competitor always wins. In reality, matchups between individuals, teams, competitors, or strategies generally yield a lack of transitivity. What allows you to win in one environment may not work in another. As a general rule, transitivity tends to decrease as the complexity of the interaction increases. It is hard to characterize the degree of transitivity thinking only of skill and luck, but the idea remains very useful for decision makers…

Conclusion
The two main ways to assess skill and luck are through an analysis of persistence of performance (with streaks being a particularly useful subset of this approach) and its alter ego, reversion to the mean. The research shows evidence for persistence of performance in sports, business, and investing, although the evidence is strongest in sports. Studies of business and investing point to skill in both domains, although the percentage of companies or investors with skill is small. Reversion to the mean is also clear in each realm.

The central insight is that the more the outcomes of an activity rely on luck (or randomness), the more powerful reversion to the mean will be. As important, it is clear that many decision makers do not behave as if they understand reversion to the mean, and predictably make decisions that are, as a consequence, harmful to their long-term outcomes. This is particularly pronounced in the investment industry…

Michael Mauboussin is the author of More Than You Know: Finding Financial Wisdom in Unconventional Places and, more recently, Think Twice: Harnessing the Power of Counterintuition.
