
The Code of Hammurabi: The Best Rule To Manage Risk


Almost 4,000 years ago, King Hammurabi of Babylon, Mesopotamia, laid out one of the first sets of laws.

Hammurabi’s Code is among the oldest translatable writings. It consists of 282 laws, most concerning punishment. Each law takes into account the perpetrator’s status. The code also includes the earliest known construction laws, designed to align the incentives of builder and occupant to ensure that builders created safe homes:

  1. If a builder builds a house for a man and does not make its construction firm, and the house which he has built collapses and causes the death of the owner of the house, that builder shall be put to death.
  2. If it causes the death of the son of the owner of the house, they shall put to death a son of that builder.
  3. If it causes the death of a slave of the owner of the house, he shall give to the owner of the house a slave of equal value.
  4. If it destroys property, he shall restore whatever it destroyed, and because he did not make the house which he builds firm and it collapsed, he shall rebuild the house which collapsed at his own expense.
  5. If a builder builds a house for a man and does not make its construction meet the requirements and a wall falls in, that builder shall strengthen the wall at his own expense.

Hammurabi became ruler of Babylon in 1792 BC and held the position for 43 years. In the era of city-states, Hammurabi grew his modest kingdom (somewhere between 60 and 160 square kilometers) by conquering several neighboring states. Satisfied, then, with the size of the area he controlled, Hammurabi settled down to rule his people.

“This world of ours appears to be separated by a slight and precarious margin of safety from a most singular and unexpected danger.”

— Arthur Conan Doyle

Hammurabi was a fair leader and concerned with the well-being of his people. He transformed the area, ordering the construction of irrigation ditches to improve agricultural productivity, as well as supplying cities with protective walls and fortresses. Hammurabi also renovated temples and religious sites.

By today’s standards, Hammurabi was a dictator. Far from abusing his power, however, he considered himself the “shepherd” of his people. Although the Babylonians kept slaves, they too had rights. Slaves could marry other people of any status, start businesses, and purchase their freedom, and they were protected from mistreatment.

At first glance, it might seem as if we have little to learn from Hammurabi. I mean, why bother learning about the ancient Babylonians? They were just barbaric farmers, right?

It seems we’re not as different as it appears. Our modern beliefs are not separate from those of people in Hammurabi’s time; they are a continuation of them. Early legal codes are the ancestors of the ones we now put our faith in.

Whether a country is a dictatorship or democracy, one of the keys to any effective legal system is the ability for anyone to understand its laws. We’re showing cracks in ours and we can learn from the simplicity of Hammurabi’s Code, which concerned itself with practical justice and not lofty principles. To even call it a set of laws is misleading. The ancient Babylonians did not appear to have an equivalent term.

Three important concepts are implicit in Hammurabi’s Code: reciprocity, accountability, and incentives.

We have no figures for how often Babylonian houses fell down before and after the implementation of the Code. We have no idea how many (if any) people were put to death as a result of failing to adhere to Hammurabi’s construction laws. But we do know that human self-preservation instincts are strong. More than strong, they underlie most of our behavior. Wanting to avoid death is the most powerful incentive we have. If we assume that people felt and thought the same way 4,000 years ago, we can guess at the impact of the Code.

Imagine yourself as a Babylonian builder. Each time you construct a house, there is a risk it will collapse if you make any mistakes. So, what do you do? You allow for the widest possible margin of safety. You plan for any potential risks. You don’t cut corners or try to save a little bit of money. No matter what, you are not going to allow any known flaws in the construction. It wouldn’t be worth it. You want to walk away certain that the house is solid.

Now contrast that with modern engineers or builders.

They don’t have much skin in the game. The worst they face if they cause a death is a fine. We saw this with Hurricane Katrina, when roughly 1,600 people died in flooding caused in part by the poor design of New Orleans’s hurricane protection systems. Hindsight analysis showed that the city’s floodwalls, levees, pumps, and gates were poorly designed and poorly maintained. The death toll was far worse than it needed to be. And yet no one was held accountable.

Hurricane Katrina is regarded as a disaster that was part natural and part man-made. In recent months, in the Grenfell Tower fire in London, we saw the effects of negligent construction. At least 80 people died in a blaze that is believed to have started accidentally but that, according to expert analysis, was accelerated by the conscious use of cheap building materials that had failed safety tests.

The portions of Hammurabi’s Code that deal with construction, as brutal as they are (and as uncertain as we are of their short-term effects), illustrate an important concept: margins of safety. When we construct a system, ensuring that it can handle the expected pressures is insufficient.

A Babylonian builder would not have been content to make a house that was strong enough to handle just the anticipated stressors. A single Black Swan event — such as abnormal weather — could cause its collapse and in turn the builder’s own death, so builders had to allow for a generous margin of safety. The larger the better. In 59 mph winds, we do not want to be in a house built to withstand 60 mph winds.

But our current financial systems do not incentivize people to create wide margins of safety. Instead, they do the opposite — they encourage dangerous risk-taking.

Nassim Taleb referred to Hammurabi’s Code in a New York Times opinion piece in which he described a way to prevent bankers from threatening the public well-being. His solution? Stop offering bonuses for the risky behavior of people who will not be the ones paying the price if the outcome is bad. Taleb wrote:

…it’s time for a fundamental reform: Any person who works for a company that, regardless of its current financial health, would require a taxpayer-financed bailout if it failed should not get a bonus, ever. In fact, all pay at systemically important financial institutions — big banks, but also some insurance companies and even huge hedge funds — should be strictly regulated.

The issue, in Taleb’s opinion, is not the usual complaint of income inequality or overpay. Instead, he views bonuses as asymmetric incentives. They reward risks but do not punish the subsequent mistakes that cause “hidden risks to accumulate in the financial system and become a catalyst for disaster.” It’s a case of “heads, I win; tails, you lose.”

Bonuses encourage bankers to ignore the potential for Black Swan events, with the 2008 financial crisis being a prime (or rather, subprime) example. Rather than ignoring these events, banks should seek to minimize the harm caused.

Some career fields have a strict system of incentives and disincentives, both official and unofficial. Doctors get promotions and respect if they do their jobs well, and risk heavy penalties for medical malpractice. With the exception of experiments in which patients are fully informed of and consent to the risks, doctors don’t get a free pass for taking risks that cause harm to patients.

The same goes for military and security personnel. As Taleb wrote, “we trust the military and homeland security personnel with our lives, yet we don’t give them lavish bonuses. They get promotions and the honor of a job well done if they succeed, and the severe disincentive of shame if they fail.”

Hammurabi and his advisors were unconcerned with complex laws and legalese. Instead, they wanted the Code to produce results and to be understandable by everyone. And Hammurabi understood how incentives work — a lesson we’d be well served to learn.

When you align incentives of everyone in both positive and negative ways, you create a system that takes care of itself. Taleb describes Law 229 of Hammurabi’s Code as “the best risk-management rule ever.” Although barbaric to modern eyes, it took into account certain truisms. Builders typically know more about construction than their clients do and can take shortcuts in ways that aren’t obvious. After completing construction, a builder can walk away with a little extra profit, while the hapless client is unknowingly left with an unsafe house.

The little extra profit that builders can generate is analogous to the bonus system in some of today’s industries. It rewards those who take unwise risks, trick their customers, and harm other people for their own benefit. Hammurabi’s system had the opposite effect; it united the interests of the person getting paid and the person paying. Rather than the builder being motivated to earn as much profit as possible and the homeowner being motivated to get a safe house, they both shared the latter goal.

The Code illustrates the efficacy of using self-preservation as an incentive. We feel safer in airplanes that are flown by a person and not by a machine because, in part, we believe that pilots want to protect their own lives along with ours.

When we lack an incentive to protect ourselves, we are far more likely to risk the safety of other people. This is why bankers are willing to harm their customers if it means the bankers get substantial bonuses. And why male doctors prescribed contraceptive pills to millions of female patients in the 1960s, without informing them of the risks (which were high at the time). This is why companies that market harmful products, such as fast food and tobacco, are content to play down the risks. Or why the British initiative to reduce the population of Indian cobras by compensating those who caught the snakes had the opposite effect. Or why Wells Fargo employees opened millions of fake accounts to reach sales targets.

Incentives backfire when there are no negative consequences for those who exploit them. External incentives are based on extrinsic motivation, which easily goes awry.

When we have real skin in the game—when we have upsides and downsides—we care about outcomes in a way that we wouldn’t otherwise. We act in a different way. We take our time. We use second-order thinking and inversion. We look for evidence or a way to disprove it.

Four thousand years ago, the Babylonians understood the power of incentives, yet we seem to have since forgotten about the flaws in human nature that make it difficult to resist temptation.

The Probability Distribution of the Future

The best colloquial definition of risk may be the following:

“Risk means more things can happen than will happen.”

We found it through the inimitable Howard Marks, but it's a quote from Elroy Dimson of the London Business School. Doesn't that capture it pretty well?

Another way to state it is: If there were only one thing that could happen, how much risk would there be, except in an extremely banal sense? You'd know the exact probability distribution of the future. If I told you there was a 100% probability that you'd get hit by a car today if you walked down the street, you simply wouldn't do it. You wouldn't call walking down the street a “risky gamble,” right? There's no gamble at all.

But the truth is that in practical reality, there aren't many 100% situations to bank on. Way more things can happen than will happen. That introduces great uncertainty into the future, no matter what type of future you're looking at: An investment, your career, your relationships, anything.

How do we deal with this in a pragmatic way? The investor Howard Marks puts it this way:

Key point number one in this memo is that the future should be viewed not as a fixed outcome that’s destined to happen and capable of being predicted, but as a range of possibilities and, hopefully on the basis of insight into their respective likelihoods, as a probability distribution.

This is the most sensible way to think about the future: A probability distribution where more things can happen than will happen. Knowing that we live in a world of great non-linearity and with the potential for unknowable and barely understandable Black Swan events, we should never become too confident that we know what's in store, but we can also appreciate that some things are a lot more likely than others. Learning to adjust probabilities on the fly as we get new information is called Bayesian updating.
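
Marks's framing lends itself to a quick sketch. Here is a minimal illustration of Bayesian updating in Python; the prior and the likelihoods are invented for the example, not drawn from the text:

```python
# Bayesian updating: revise a probability estimate as new evidence arrives.
# All numbers below are illustrative, not from the article.

def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return P(H | evidence) given P(H) and the two likelihoods."""
    numerator = prior * p_evidence_given_h
    denominator = numerator + (1 - prior) * p_evidence_given_not_h
    return numerator / denominator

# Suppose we think a project has a 30% chance of success (the prior).
# A promising pilot result shows up in 80% of projects that succeed,
# but also in 20% of projects that fail.
posterior = bayes_update(0.30, 0.80, 0.20)
print(round(posterior, 3))  # 0.632
```

One piece of middling evidence moves the estimate from 30% to roughly 63%; each new observation would be folded in the same way, with the posterior becoming the next prior.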

But.

Although the future is certainly a probability distribution, Marks makes another excellent point in the wonderful memo above: In reality, only one thing will happen. So you must make the decision: Are you comfortable if that one thing happens, whatever it might be? Even if it only has a 1% probability of occurring? Echoing the first lesson of biology, Warren Buffett stated that “In order to win, you must first survive.” You have to live long enough to play out your hand.

Which leads to an important second point: Uncertainty about the future does not necessarily equate with risk, because risk has another component: Consequences. The world is a place where “bad outcomes” are only “bad” if you know their (rough) magnitude. So in order to think about the future and about risk, we must learn to quantify.

It's like the old saying (usually before something terrible happens): What's the worst that could happen? Let's say you propose to undertake a six month project that will cost your company $10 million, and you know there's a reasonable probability that it won't work. Is that risky?

It depends on the consequences of losing $10 million, and the probability of that outcome. It's that simple! (Simple, of course, does not mean easy.) A company with $10 billion in the bank might consider that a very low-risk bet even if it only had a 10% chance of succeeding.

In contrast, a company with only $10 million in the bank might consider the same project a high-risk bet even if it had only a 10% chance of failing. Maybe five $2 million projects with uncorrelated outcomes would make more sense to the latter company.

In the real world, risk = probability of failure x consequences. That concept, however, can be looked at through many lenses. Risk of what? Losing money? Losing my job? Losing face? Those things need to be thought through. When we observe others being “too risk averse,” we might want to think about which risks they're truly avoiding. Sometimes risk is not only financial. 
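
The two-company comparison above reduces to simple arithmetic. This sketch uses the article's illustrative figures (a $10 million project with a 10% chance of success) and measures expected loss against each company's bankroll:

```python
# Risk as probability-of-failure times consequences, measured against
# what you can afford to lose. Figures follow the article's example.

def expected_loss(p_failure, cost):
    return p_failure * cost

def risk_ratio(p_failure, cost, bankroll):
    """Expected loss as a fraction of available capital."""
    return expected_loss(p_failure, cost) / bankroll

cost = 10_000_000        # the $10M project
p_failure = 0.90         # a 10% chance of success

big_company = risk_ratio(p_failure, cost, 10_000_000_000)   # $10B in the bank
small_company = risk_ratio(p_failure, cost, 10_000_000)     # $10M in the bank

print(f"{big_company:.4f}")    # 0.0009 -- a rounding error for the giant
print(f"{small_company:.4f}")  # 0.9000 -- near-ruin for the small firm
```

Same project, same probabilities, wildly different risk: the consequences term only has meaning relative to what survival requires.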

***

Let's cover one more under-appreciated but seemingly obvious aspect of risk, also pointed out by Marks: Knowing the outcome does not teach you about the risk of the decision.

This is an incredibly important concept:

If you make an investment in 2012, you’ll know in 2014 whether you lost money (and how much), but you won’t know whether it was a risky investment – that is, what the probability of loss was at the time you made it.

To continue the analogy, it may rain tomorrow, or it may not, but nothing that happens tomorrow will tell you what the probability of rain was as of today. And the risk of rain is a very good analogue (although I’m sure not perfect) for the risk of loss.

How many times do we see this simple dictum violated? Knowing that something worked out, we argue that it wasn't that risky after all. But what if, in reality, we were simply fortunate? This is the Fooled by Randomness effect.

The way to think about it is the following: the worst thing that can happen to a young gambler is that he wins the first time he goes to the casino. He might convince himself he can beat the system.
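
The young gambler's trap can be made concrete with a small simulation. The bet below is hypothetical (win $1 with probability 0.49, lose $1 otherwise); the point is that a meaningful fraction of gamblers finish ahead on a losing proposition, and each of them could mistake luck for skill:

```python
import random

# A negative-expected-value bet still produces plenty of winners,
# which is why outcomes alone don't reveal the risk of a decision.

def play(rounds, p_win=0.49, seed=None):
    """Net result of a series of even-money $1 bets with win probability p_win."""
    rng = random.Random(seed)
    return sum(1 if rng.random() < p_win else -1 for _ in range(rounds))

# Simulate 1,000 gamblers, each playing 10 rounds.
winners = sum(1 for seed in range(1000) if play(10, seed=seed) > 0)
print(winners)  # roughly a third of the 1,000 gamblers come out ahead
```

Every one of those winners played the same bad bet as the losers; nothing in their individual outcomes reveals that the odds were against them all along.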

The truth is that most times we don't know the probability distribution at all. Because the world is not a predictable casino game — an error Nassim Taleb calls the Ludic Fallacy — the best we can do is guess.

With intelligent estimations, we can work to get the rough order of magnitude right, understand the consequences if we're wrong, and always be sure to never fool ourselves after the fact.

If you're into this stuff, check out Howard Marks' memos to his clients, or check out his excellent book, The Most Important Thing. Nate Silver also has an interesting similar idea about the difference between risk and uncertainty. And lastly, another guy that understands risk pretty well is Jason Zweig, who we've interviewed on our podcast before.

***

If you liked this article you'll love:

Nassim Taleb on the Notion of Alternative Histories — “The quality of a decision cannot be solely judged based on its outcome.”

The Four Types of Relationships — As Seneca said, “Time discovers truth.”

The Psychology of Risk and Reward

The Psychology of Risk and Reward

An excerpt from The Aspirational Investor: Taming the Markets to Achieve Your Life's Goals that I think you'd enjoy.

Most of us have a healthy understanding of risk in the short term.

When crossing the street, for example, you would no doubt speed up to avoid an oncoming car that suddenly rounds the corner.

Humans are wired to survive: it’s a basic instinct that takes command almost instantly, enabling our brains to resolve ambiguity quickly so that we can take decisive action in the face of a threat.

The impulse to resolve ambiguity manifests itself in many ways and in many contexts, even those less fraught with danger. Glance at the (above) picture for no more than a couple of seconds. What do you see?

Some observers perceive the profile of a young woman with flowing hair, an elegant dress, and a bonnet. Others see the image of a woman stooped in old age with a wart on her large nose. Still others—in the gifted minority—are able to see both of the images simultaneously.

What is interesting about this illusion is that our brains instantly decide what image we are looking at, based on our first glance. If your initial glance was toward the vertical profile on the left-hand side, you were all but destined to see the image of the elegant young woman: it was just a matter of your brain interpreting every line in the picture according to the mental image that you already formed, even though each line can be interpreted in two different ways. Conversely, if your first glance fell on the central dark horizontal line that emphasizes the mouth and chin, your brain quickly formed an image of the older woman.

Regardless of your interpretation, your brain wasn’t confused. It simply decided what the picture was and filled in the missing pieces. Your brain resolved ambiguity and extracted order from conflicting information.

What does this have to do with decision making? Every bit of information can be interpreted differently according to our perspective. Ashvin Chhabra directs us to investing. I suggest you reframe this in the context of decision making in general.

Every trade has a seller and a buyer: your state of mind is paramount. If you are in a risk-averse mental framework, then you are likely to interpret a further fall in stocks as additional confirmation of your sell bias. If instead your framework is positive, you will interpret the same event as a buying opportunity.

The challenge of investing is compounded by the fact that our brains, which excel at resolving ambiguity in the face of a threat, are less well equipped to navigate the long term intelligently. Since none of us can predict the future, successful investing requires planning and discipline.

Unfortunately, when reason is in apparent conflict with our instincts—about markets or a “hot stock,” for example—it is our instincts that typically prevail. Our “reptilian brain” wins out over our “rational brain,” as it so often does in other facets of our lives. And as we have seen, investors trade too frequently, and often at the wrong time.

One way our brains resolve conflicting information is to seek out safety in numbers. In the animal kingdom, this is called “moving with the herd,” and it serves a very important purpose: helping to ensure survival. Just as a buffalo will try to stay with the herd in order to minimize its individual vulnerability to predators, we tend to feel safer and more confident investing alongside equally bullish investors in a rising market, and we tend to sell when everyone around us is doing the same. Even the so-called smart money falls prey to a herd mentality: one study, aptly titled “Thy Neighbor’s Portfolio,” found that professional mutual fund managers were more likely to buy or sell a particular stock if other managers in the same city were also buying or selling.

This comfort is costly. The surge in buying activity and the resulting bullish sentiment is self-reinforcing, propelling markets to react even faster. That leads to overvaluation and the inevitable crash when sentiment reverses. As we shall see, such booms and busts are characteristic of all financial markets, regardless of size, location, or even the era in which they exist.

Even though the role of instinct and human emotions in driving speculative bubbles has been well documented in popular books, newspapers, and magazines for hundreds of years, these factors were virtually ignored in conventional financial and economic models until the 1970s.

This is especially surprising given that, in 1952, a young PhD student from the University of Chicago, Harry Markowitz, published two very important papers. The first, entitled “Portfolio Selection,” published in the Journal of Finance, led to the creation of what we call modern portfolio theory, together with the widespread adoption of its important ideas such as asset allocation and diversification. It earned Harry Markowitz a Nobel Prize in Economics.

The second paper, entitled “The Utility of Wealth” and published in the prestigious Journal of Political Economy, was about the propensity of people to hold insurance (safety) and to buy lottery tickets at the same time. It delved deeper into the psychological aspects of investing but was largely forgotten for decades.

The field of behavioral finance really came into its own through the pioneering work of two academic psychologists, Amos Tversky and Daniel Kahneman, who challenged conventional wisdom about how people make decisions involving risk. Their work garnered Kahneman the Nobel Prize in Economics in 2002. Behavioral finance and neuroeconomics are relatively new fields of study that seek to identify and understand human behavior and decision making with regard to choices involving trade-offs between risk and reward. Of particular interest are the human biases that prevent individuals from making fully rational financial decisions in the face of uncertainty.

As behavioral economists have documented, our propensity for herd behavior is just the tip of the iceberg. Kahneman and Tversky, for example, showed that people asked to choose between a certain loss and a gamble, in which they could either lose more money or break even, would tend to choose the gamble (that is, they gamble to avoid the prospect of a sure loss), a behavior the authors tied to “loss aversion.” Building on this work, Hersh Shefrin and Meir Statman, professors at Santa Clara University’s Leavey School of Business, have linked the propensity for loss aversion to investors’ tendency to hold losing investments too long and to sell winners too soon. They called this bias the disposition effect.

The lengthy list of behaviorally driven market effects often converges in an investor’s tale of woe. Overconfidence causes investors to hold concentrated portfolios and to trade excessively, behaviors that can destroy wealth. The illusion of control causes investors to overestimate the probability of success and underestimate risk because of familiarity—for example, causing investors to hold too much employer stock in their 401(k) plans, resulting in under-diversification. Cognitive dissonance causes us to ignore evidence that is contrary to our opinions, leading to myopic investing behavior. And the representativeness bias leads investors to assess risk and return based on superficial characteristics—for example, by assuming that shares of companies that make products you like are good investments.

Several other key behavioral biases come into play in the realm of investing. Framing can cause investors to make a decision based on how the question is worded and the choices presented. Anchoring often leads investors to unconsciously create a reference point, say for securities prices, and then adjust decisions or expectations with respect to that anchor. This bias might impede your ability to sell a losing stock, for example, in the false hope that you can earn your money back. Similarly, the endowment bias might lead you to overvalue a stock that you own and thus hold on to the position too long. And regret aversion may lead you to avoid taking a tough action for fear that it will turn out badly. This can lead to decision paralysis in the wake of a market crash, even though, statistically, it is a good buying opportunity.

Behavioral finance has generated plenty of debate. Some observers have hailed the field as revolutionary; others bemoan the discipline’s seeming lack of a transcendent, unifying theory. This much is clear: behavioral finance treats biases as mistakes that, in academic parlance, prevent investors from thinking “rationally” and cause them to hold “suboptimal” portfolios.

But is that really true? In investing, as in life, the answer is more complex than it appears. Effective decision making requires us to balance our “reptilian brain,” which governs instinctive thinking, with our “rational brain,” which is responsible for strategic thinking. Instinct must integrate with experience.

Put another way, behavioral biases are nothing more than a series of complex trade-offs between risk and reward. When the stock market is taking off, for example, a failure to rebalance by selling winners is considered a mistake. The same goes for a failure to add to a position in a plummeting market. That’s because conventional finance theory assumes markets to be inherently stable, or “mean-reverting,” so most deviations from the historical rate of return are viewed as fluctuations that will revert to the mean, or self-correct, over time.

But what if a precipitous market drop is slicing into your peace of mind, affecting your sleep, your relationships, and your professional life? What if that assumption about markets reverting to the mean doesn’t hold true and you cannot afford to hold on for an extended period of time? In both cases, it might just be “rational” to sell and accept your losses precisely when investment theory says you should be buying. A concentrated bet might also make sense, if you possess the skill or knowledge to exploit an opportunity that others might not see, even if it flies in the face of conventional diversification principles.

Of course, the time to create decision rules for extreme market scenarios and concentrated bets is when you are building your investment strategy, not in the middle of a market crisis or at the moment a high-risk, high-reward opportunity from a former business partner lands on your desk and gives you an adrenaline jolt. A disciplined process for managing risk in relation to a clear set of goals will enable you to use the insights offered by behavioral finance to your advantage, rather than fall prey to the common pitfalls. This is one of the central insights of the Wealth Allocation Framework. But before we can put these insights to practical use, we need to understand the true nature of financial markets.

Five Techniques to Improve Your Luck


It isn't enough to be good. You need luck.

We tend to think that smart people make good decisions and stupid people make bad decisions and that luck plays very little role. That is until we're one of those smart people who has a bad outcome because of luck.

You can't ignore luck and you really can't plan for it. Yet much of life is the combination, to varying degrees, of skill and luck. This continuum is also what makes watching sports fun: the most talented team doesn't always win, because luck plays a role.

However elusive, luck is something that we can cultivate. While we can't control it, we can improve it. In How to Get Lucky: 13 techniques for discovering and taking advantage of life's good breaks, Max Gunther shows us how.

It turns out that lucky people characteristically organize their lives in such a way that they are in position to experience good luck and to avoid bad luck.

Technique 1: Acknowledge The Role of Luck

When losers lose, they blame luck. When winners win, it's because they were smart.

Via How to Get Lucky:

If you want to be a winner, you must stay keenly aware of the role luck plays in your life. When a desired outcome is brought about by luck, you must acknowledge that fact. Don’t try to tell yourself the outcome came about because you were smart. Never confuse luck with planning. If you do that, you all but guarantee that your luck, in the long run, will be bad.

When you see that luck plays a role, you're more likely to be aware that the situation can change. You don't expect good results to continue indefinitely; that expectation belongs to the people who don't acknowledge the role of luck, because they mix up planning and luck.

Via How to Get Lucky:

The process begins when a good result occurs once or a few times. The loser studies it, ascribes it to planning, and concludes that the same planning will produce the same result in the future. And the loser loses again.

The lucky personality avoids getting trapped in that way. This isn’t to say he or she avoids taking risks. Quite the contrary, as we will see later. What it does mean is that the lucky personality, entering a situation and perceiving it to be ruled or heavily influenced by luck, deliberately stays light-footed, ready to jump this way or that as events unfold.

[…]

Planning may be more important than luck in much of what you do. The trick is to know what kind of situation you are in at any given time. Can you rely on your own or others’ planning, or will the outcome be determined by luck?

Technique 2: Find the Fast Flow

The idea here is to be where things are happening and surround yourself with a lot of people and interactions. The theory being that if you're a hermit, nothing will ever happen.

Via How to Get Lucky:

The lucky personality gets to know everybody in sight: the rich and the poor, the famous, the humble, the sociable and even the friendless and the cranky.

When you meet these people, use these tips to quickly build rapport.

You never want to become isolated. Make contact with people and get involved. Never sit on the sidelines.

Via How to Get Lucky:

Eric Wachtel, a New York management consultant and executive recruiter, has watched literally hundreds of men and women climbing career ladders. In his observation, people who get dead-ended are very often people who allow themselves to become isolated.

… The worst thing you can do is withdraw from the network of friendships and acquaintanceships at home and at work. If you aren’t in the network, nobody is ever going to steer anything your way.

People make things happen. Not necessarily friends, just contacts. But for this to happen, people need to know what you're trying to do – or where you want to go. Few things make us happier than steering a lucky break toward someone else.

In the words of Eric Wachtel, the consultant recruiter mentioned above: “It really is very pleasant to pick up the phone and say, ‘Hey, Charlie, there’s a job opening that sounds as if it might be your kind of thing.’”

Via How to Get Lucky:

Consistently lucky people are nearly always to be found in the fast flow. I never met one who was a recluse or even reclusive.

Technique 3: Risk Spooning

You have to invite things to happen. This means you have to stick your neck out.

Via How to Get Lucky:

There are two ways to be an almost sure loser in life. One is to take goofy risks; that is, risks that are out of proportion to the rewards being sought. And the other is to take no risks at all. Lucky people characteristically avoid both extremes. They cultivate the technique of taking risks in carefully measured spoonfuls.

Here is what generally happens in life. Someone sticks their neck out and the speculation pays off. They become rich and famous. Newspapers interview them, asking, “How can we do the same thing you did?” And the newfound sage replies not that they got lucky, no, but that they were smart and hardworking. And we eat this stuff up.

In part this is because, culturally, we hate the gambler – largely because we don't like that we can't take risks ourselves. The gambler represents what we are not, and it's this motivated reasoning that makes it easy to find ways to dislike him.

Via How to Get Lucky:

It is essential to take risks. Examine the life of any lucky man or woman, and you are all but certain to find that he or she was willing, at some point, to take a risk. Without that willingness, hardly anything interesting is likely to happen to you.

[…]

[T]he need to take risks extends into all areas of life. Falling in love, for instance. If you want to experience the joys of such a relationship, you must be willing to take the possible hurts, too. You must be willing to make an emotional commitment that has the capacity to wound you. But it is exactly like playing a lottery: If you don’t bet, you are not in position to win.

Risk — smart risk — is a key element of getting lucky. Going to the track and betting on the 99-1 long shot is just stupid.
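To make the arithmetic behind "goofy risks" concrete, here is a minimal sketch (the win probabilities are hypothetical): a $1 bet at 99-1 odds only breaks even if your true chance of winning is at least 1 percent. When the reward is out of proportion to the real probability, the expected value is negative.

```python
# Expected value of a $1 bet paying 99-1, for a few hypothetical
# win probabilities p: EV = p * 99 - (1 - p) * 1.
for p in (0.005, 0.01, 0.02):
    ev = p * 99 - (1 - p) * 1
    print(p, round(ev, 3))
# The bet only breaks even at p = 1%; below that, you lose on average.
```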

Technique 4: Run Cutting

“Don't push your luck.” My parents used to repeat that ancient maxim whenever I scored a 30-minute curfew extension and, rather than being happy with it, tried to push for longer.

Via How to Get Lucky:

As nearly all lucky people realize instinctively or learn through experience, runs of luck always end sooner than you wish. Sometimes they are long runs; much more often they are short. Since you can never tell in advance when a given run is going to end, the only sensible thing to do is preserve your gains by jumping off early in the game. Always assume the run is going to be short. Never try to ride a run to its very peak. Don’t push your luck.

The key here is to always assume that you're in the average case.

Via How to Get Lucky:

The simplest way to illustrate this is to calculate the mathematics of probability in tossing a coin. If you toss it 1,024 times, the odds are there will be one long run in which heads comes up nine times in a row. But there will be thirty-two short runs in which heads comes up four times in a row. Which is the way to bet?

On the short runs, of course.

[…]

Always cut runs short. Sure, there will be times when you regret doing this. A run will continue without you, and you will be left enviously watching all the happy players who stayed aboard. But statistically, such gloomy outcomes are not likely to happen often.
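The book's coin figures can be checked with back-of-the-envelope arithmetic (a sketch, assuming the book counts maximal runs of at least k heads): such a run must begin with a tail followed by k heads, which happens with probability 1/2^(k+1) at any given position, so in n tosses we expect roughly n / 2^(k+1) of them.

```python
# Expected number of runs of at least k heads in n fair-coin tosses,
# approximated as n / 2**(k + 1): a tail followed by k heads.
n = 1024
for k in (4, 9):
    print(k, n / 2 ** (k + 1))  # → 4 32.0, then 9 1.0
```

This reproduces the book's numbers: about thirty-two runs of four-or-more heads, but only one run of nine-or-more.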

One of the problems is that long runs of luck are highly visible.

Via How to Get Lucky:

One problem is that long, high runs of luck make news and get talked about. If you go to a racetrack and have a so-so day, you will forget it quickly. But if you have one of those days when every horse runs for your benefit, you will undoubtedly bore your friends with the story for a long time. We hear more about big wins than about the vastly more common little wins. This can delude us into thinking the big wins are more attainable than they really are. We think: “Well, if all these stories are true, maybe there’s a big win waiting out there for me.”

Casinos publicize big wins, which are usually the result of long runs of luck, for two reasons. First, it's a good story, and it makes us think we can win more than we actually can. Second, it encourages people who are winning to keep their bets riding so they can become one of the big winners. Of course, the longer you play, the more your results converge to the odds. And the odds favor the house.

We never know how long luck will last, but we do know that short runs of luck are much more common than long ones.
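The claim that longer play lets the odds take over can be illustrated with a small Monte Carlo sketch (the game here is an assumed even-money, roulette-style bet that wins with probability 18/38):

```python
import random

random.seed(0)

# Monte Carlo sketch: simulate many gambling sessions of n_bets
# even-money bets, each winning with probability 18/38 (a slight
# house edge), and measure how often a session ends in profit.
def fraction_ahead(n_bets, trials=2000):
    ahead = 0
    for _ in range(trials):
        bankroll = 0
        for _ in range(n_bets):
            bankroll += 1 if random.random() < 18 / 38 else -1
        if bankroll > 0:
            ahead += 1
    return ahead / trials

short, long_ = fraction_ahead(10), fraction_ahead(2000)
print(short)   # short session: a fair chance of walking away ahead
print(long_)   # long session: almost no chance
```

The short sessions leave plenty of room for luck; over the long sessions, the house edge dominates.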

Technique 5: Luck Selection

At what point should you “cut your losses”?

Via How to Get Lucky:

As you enter any new venture – an investment, a job, a love affair – you cannot know how it will work out. No matter how carefully you lay your plans, you cannot know how those plans will be affected by the unforeseeable and uncontrollable events that we call luck. If the luck is good, then you stay with the venture and enjoy it. But what if the luck is bad? What if the bottom drops out of the stock market? Or the seemingly limitless promise of that new job vanishes in a corporate upheaval? Or your love affair sours when a rival suddenly appears?

The lucky reaction is to wait a short time and see if the problems can be fixed or will go away, and then, if the answer is no, bail out. Cut losses short. This is what lucky people habitually do. To put it another way, they have the ability to select their own luck. Hit with bad luck, they discard it, freeing themselves to seek better luck in another venture.

The inability to cut losses is one of the traits of the born loser according to psychiatrists Stanley Block and Samuel Correnti in their book Psyche, Sex, and Stocks.

Sunk costs are hard to overcome, in part because cutting your losses often means admitting you were wrong.

Via How to Get Lucky:

It is hard because it requires a kind of pessimism, or unsentimental realism, that doesn’t come naturally to many. What makes it still harder is that there are times when, in retrospect, you wish you hadn’t applied it.

How to Get Lucky: 13 techniques for discovering and taking advantage of life's good breaks goes on to explore 7 other techniques to cultivate your luck.

Miracles Happen — The Simple Heuristic That Saved 150 Lives

“In an uncertain world, statistical thinking and risk communication alone are not sufficient. Good rules of thumb are essential for good decisions.”

Three minutes after taking off from LaGuardia Airport in New York City, US Airways Flight 1549 ran into a flock of Canada geese. At 2,800 feet, passengers and crew heard loud bangs as the geese collided with the engines, rendering both inoperable.

Gerd Gigerenzer picks up the story in his book Risk Savvy: How to Make Good Decisions:

When it dawned on the passengers that they were gliding toward the ground, it grew quiet on the plane. No panic, only silent prayer. Captain Chesley Sullenberger called air traffic control: “Hit birds. We’ve lost thrust in both engines. We’re turning back towards LaGuardia.”

But landing short of the airport would have catastrophic consequences, for passengers, crew, and the people living below. The captain and the copilot had to make a good judgment. Could the plane actually make it to LaGuardia, or would they have to try something more risky, such as a water landing in the Hudson River? One might expect the pilots to have measured speed, wind, altitude, and distance and fed this information into a calculator. Instead, they simply used a rule of thumb:

Fix your gaze on the tower: If the tower rises in your windshield, you won’t make it.

No estimation of the trajectory of the gliding plane is necessary. No time is wasted. And the rule is immune to calculation errors. In the words of copilot Jeffrey Skiles: “It’s not so much a mathematical calculation as visual, in that when you are flying in an airplane, things that—a point that you can’t reach will actually rise in your windshield. A point that you are going to overfly will descend in your windshield.” This time the point they were trying to reach did not descend but rose. They went for the Hudson.

In the cabin, the passengers were not aware of what was going on in the cockpit. All they heard was: “This is the captain: Brace for impact.” Flight attendants shouted: “Heads down! Stay down!” Passengers and crew later recalled that they were trying to grasp what death would be like, and the anguish of their kids, husbands, and wives. Then the impact happened, and the plane stopped. When passengers opened the emergency doors, sunlight streamed in. Everyone got up and rushed toward the openings. Only one passenger headed to the overhead bin to get her carry-on but was immediately stopped. The wings of the floating but slowly sinking plane were packed with people in life jackets hoping to be rescued. Then they saw the ferry coming. Everyone survived.

All this happened within the three minutes between the geese hitting the plane and the ditching in the river. During that time, the pilots began to run through the dual-engine failure checklist, a three-page list designed to be used at thirty thousand feet, not at three thousand feet: turn the ignition on, reset flight control computer, and so on. But they could not finish it. Nor did they have time to even start on the ditching checklist. While the evacuation was underway, Skiles remained in the cockpit and went through the evacuation checklist to safeguard against potential fire hazards and other dangers. Sullenberger went back to check on passengers and left the cabin only after making sure that no one was left behind. It was the combination of teamwork, checklists, and smart rules of thumb that made the miracle possible.

***

Say what? They used a heuristic?

Heuristics enable us to make fast, highly (but not perfectly) accurate decisions without spending too much time searching for information. They allow us to focus on only a few pieces of information and ignore the rest.

“Experts,” Gigerenzer writes, “often search for less information than novices do.”

We do the same thing, intuitively, to catch a baseball — the gaze heuristic.

Fix your gaze on an object, and adjust your speed so that the angle of gaze remains constant.

Professionals and amateurs alike rely on this rule.

… If a fly ball comes in high, the player fixates his eyes on the ball, starts running, and adjusts his running speed so that the angle of gaze remains constant. The player does not need to calculate the trajectory of the ball. To select the right parabola, the player’s brain would have to estimate the ball’s initial distance, velocity, and angle, which is not a simple feat. And to make things more complicated, real-life balls do not fly in parabolas. Wind, air resistance, and spin affect their paths. Even the most sophisticated robots or computers today cannot correctly estimate a landing point during the few seconds a ball soars through the air. The gaze heuristic solves this problem by guiding the player toward the landing point, not by calculating it mathematically. That’s why players don’t know exactly where the ball will land, and often run into walls and over the stands in their pursuit.

The gaze heuristic is an example of how the mind can discover simple solutions to very complex problems.
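A toy simulation makes this concrete (an illustrative sketch with assumed, idealized dynamics: a drag-free parabola, discrete time steps, and no limit on the fielder's running speed). The fielder never computes the trajectory; he simply moves so that the tangent of the gaze angle — the ball's height divided by its horizontal distance from him — stays constant, and ends up at the landing point.

```python
# Toy simulation of the gaze heuristic.
dt = 0.01           # time step, seconds
g = 9.8             # gravity, m/s^2
bx, bh = 0.0, 20.0  # ball position: horizontal (m) and height (m), at apex
vx, vh = 10.0, 0.0  # ball velocity components, m/s
fielder = 25.0      # fielder's horizontal position, m

target_tan = bh / (fielder - bx)  # tangent of the initial gaze angle

while True:
    # advance the ball along its parabola
    bx += vx * dt
    vh -= g * dt
    bh += vh * dt
    if bh <= 0:
        break  # ball has landed at (approximately) bx
    # keep the gaze angle constant: height / distance == target_tan
    fielder = bx + bh / target_tan

# without ever computing the trajectory, the fielder converges
# on the landing point
print(round(abs(fielder - bx), 2))
```

As the ball's height shrinks to zero, keeping the angle constant forces the fielder's position to shrink toward the ball's landing spot, which is exactly the behavior the quote describes.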

***

The broader point of Gigerenzer's book is that while rational thinking works well for risks, you need a combination of rational and heuristic thinking to make decisions under uncertainty.

Certainty Is an Illusion

We all try to avoid uncertainty, even if it means being wrong. We take comfort in certainty and we demand it of others, even when we know it's impossible.

Gerd Gigerenzer argues in Risk Savvy: How to Make Good Decisions that life would be pretty dull without uncertainty.

If we knew everything about the future with certainty, our lives would be drained of emotion. No surprise and pleasure, no joy or thrill—we knew it all along. The first kiss, the first proposal, the birth of a healthy child would be about as exciting as last year’s weather report. If our world ever turned certain, life would be mind-numbingly dull.

***
The Illusion of Certainty

We demand certainty of others. We ask our bankers, doctors, and political leaders (among others) to give it to us. What they deliver, however, is the illusion of certainty. We feel comfortable with this.

Many of us smile at old-fashioned fortune-tellers. But when the soothsayers work with computer algorithms rather than tarot cards, we take their predictions seriously and are prepared to pay for them. The most astounding part is our collective amnesia: Most of us are still anxious to see stock market predictions even if they have been consistently wrong year after year.

Technology changes how we see things – it amplifies the illusion of certainty.

When an astrologer calculates an expert horoscope for you and foretells that you will develop a serious illness and might even die at age forty-nine, will you tremble when the date approaches? Some 4 percent of Germans would; they believe that an expert horoscope is absolutely certain.

But when technology is involved, the illusion of certainty is amplified. Forty-four percent of people surveyed think that the result of a screening mammogram is certain. In fact, mammograms fail to detect about ten percent of cancers, and the younger the women being tested, the more error-prone the results, because their breasts are denser.

“Not understanding a new technology is one thing,” Gigerenzer writes, “believing that it delivers certainty is another.”

It's best to remember Ben Franklin: “In this world nothing can be said to be certain, except death and taxes.”

***
The Security Blanket

Where does our need for certainty come from?

People with a high need for certainty are more prone to stereotypes than others and are less inclined to remember information that contradicts their stereotypes. They find ambiguity confusing and have a desire to plan out their lives rationally. First get a degree, a car, and then a career, find the most perfect partner, buy a home, and have beautiful babies. But then the economy breaks down, the job is lost, the partner has an affair with someone else, and one finds oneself packing boxes to move to a cheaper place. In an uncertain world, we cannot plan everything ahead. Here, we can only cross each bridge when we come to it, not beforehand. The very desire to plan and organize everything may be part of the problem, not the solution. There is a Yiddish joke: “Do you know how to make God laugh? Tell him your plans.”

To be sure, illusions have their function. Small children often need security blankets to soothe their fears. Yet for the mature adult, a high need for certainty can be a dangerous thing. It prevents us from learning to face the uncertainty pervading our lives. As hard as we try, we cannot make our lives risk-free the way we make our milk fat-free.

At the same time, a psychological need is not entirely to blame for the illusion of certainty. Manufacturers of certainty play a crucial role in cultivating the illusion. They delude us into thinking that our future is predictable, as long as the right technology is at hand.

***
Risk and Uncertainty

Two magnificently dressed young women sit upright on their chairs, calmly facing each other. Yet neither takes notice of the other. Fortuna, the fickle, wheel-toting goddess of chance, sits blindfolded on the left while human figures desperately climb, cling to, or tumble off the wheel in her hand. Sapientia, the calculating and vain deity of science, gazes into a hand-mirror, lost in admiration of herself. These two allegorical figures depict a long-standing polarity: Fortuna brings good or bad luck, depending on her mood, but science promises certainty.

Fortuna, the wheel-toting goddess of chance (left), facing Sapientia, the divine goddess of science (right).

This sixteenth-century woodcut was carved a century before one of the greatest revolutions in human thinking, the “probabilistic revolution,” colloquially known as the taming of chance. Its domestication began in the mid-seventeenth century. Since then, Fortuna’s opposition to Sapientia has evolved into an intimate relationship, not without attempts to snatch each other’s possessions. Science sought to liberate people from Fortuna’s wheel, to banish belief in fate, and replace chances with causes. Fortuna struck back by undermining science itself with chance and creating the vast empire of probability and statistics. After their struggles, neither remained the same: Fortuna was tamed, and science lost its certainty.

I explain more on the difference between risk and uncertainty here. In short: under risk, all alternatives, consequences, and probabilities are known; under uncertainty, some of them are not.

***
The Value of Heuristics

The twilight of uncertainty comes in different shades and degrees. Beginning in the seventeenth century, the probabilistic revolution gave humankind the skills of statistical thinking to triumph over Fortuna, but these skills were designed for the palest shade of uncertainty, a world of known risk, in short, risk. I use this term for a world where all alternatives, consequences, and probabilities are known. Lotteries and games of chance are examples. Most of the time, however, we live in a changing world where some of these are unknown: where we face unknown risks, or uncertainty. The world of uncertainty is huge compared to that of risk. … In an uncertain world, it is impossible to determine the optimal course of action by calculating the exact risks. We have to deal with “unknown unknowns.” Surprises happen. Even when calculation does not provide a clear answer, however, we have to make decisions. Thankfully we can do much better than frantically clinging to and tumbling off Fortuna’s wheel. Fortuna and Sapientia had a second brainchild alongside mathematical probability, which is often passed over: rules of thumb, known in scientific language as heuristics.

***
How Decisions Change Based on Risk/Uncertainty

When making decisions, two sets of mental tools are required:
1. RISK: If risks are known, good decisions require logic and statistical thinking.
2. UNCERTAINTY: If some risks are unknown, good decisions also require intuition and smart rules of thumb.

Most of the time we need a combination of the two.

***

Risk Savvy: How to Make Good Decisions is a great read throughout.
