Tag: Probability

Philip Tetlock on The Art and Science of Prediction


This is the sixth episode of The Knowledge Project, a podcast aimed at acquiring wisdom through interviews with fascinating people to gain insights into how they think, live, and connect ideas.

***

On this episode, I'm happy to have Philip Tetlock, professor at the University of Pennsylvania. He's the co-leader of The Good Judgment Project, a multi-year forecasting study, and the author of Superforecasting: The Art and Science of Prediction and Expert Political Judgment: How Good Is It? How Can We Know?

The subject of this interview is how we can get better at the art and science of prediction. We dive into what makes some people better at making predictions and how we can learn to improve our ability to guess the future. I hope you enjoy the conversation as much as I did.

***

Mental Model: Misconceptions of Chance

Misconceptions of Chance

We expect the immediate outcome of events to represent the broader outcomes expected from a large number of trials. We believe that chance events will immediately self-correct and that small sample sizes are representative of the populations from which they are drawn. All of these beliefs lead us astray.

***

 

Our understanding of the world around us is imperfect, and when dealing with chance our brains tend to come up with ways to cope with its unpredictable nature.

“We tend,” writes Peter Bevelin in Seeking Wisdom, “to believe that the probability of an independent event is lowered when it has happened recently or that the probability is increased when it hasn't happened recently.”

In short, we believe an outcome is due and that chance will self-correct.

The problem with this view is that nature has neither a sense of fairness nor a memory. We only fool ourselves when we believe that past independent events influence, or offer meaningful predictive power over, future ones.

Furthermore, we mistakenly believe that we can control chance events. This applies to risky and uncertain events alike.

Chance events coupled with positive or negative reinforcement can be a dangerous thing. Sometimes we become optimistic and think our luck will change, and sometimes we become overly pessimistic or risk-averse.

How do you know if you're dealing with chance? A good heuristic is to ask yourself if you can lose on purpose. If you can't, you're likely far into the chance side of the skill vs. luck continuum. No matter how hard you practice, the probability of chance events won't change.

“We tend,” writes Nassim Taleb in The Black Swan, “to underestimate the role of luck in life in general (and) overestimate it in games of chance.”

Note that we are only discussing independent events. If events are dependent, meaning the outcome of one depends on the outcome of another, all bets are off.

 

***

Misconceptions of Chance

Daniel Kahneman coined the term misconceptions of chance to describe our tendency to expect the patterns of large samples to show up in samples of a much smaller size. Our trouble navigating the sometimes counterintuitive laws of probability, randomness, and statistics leads to misconceptions of chance.

Kahneman found that “people expect that a sequence of events generated by a random process will represent the essential characteristics of that process even when the sequence is short.”

In the paper Belief in the Law of Small Numbers, Kahneman and Tversky reflect on the results of an experiment, where subjects were instructed to generate a random sequence of hypothetical tosses of a fair coin.

They [the subjects] produce sequences where the proportion of heads in any short segment stays far closer to .50 than the laws of chance would predict. Thus, each segment of the response sequence is highly representative of the “fairness” of the coin.

Unsurprisingly, the same kinds of errors occurred when the subjects, instead of being asked to generate sequences themselves, were simply asked to distinguish between random and human-generated sequences. It turns out that when considering tosses of a coin for heads or tails, people regard the sequence H-T-H-T-T-H as more likely than the sequence H-H-H-T-T-T, which does not appear random, and also more likely than the sequence H-H-H-H-T-H. In reality, each of those sequences has exactly the same probability of occurring. This is a misconception of chance.

The aspect that most of us find so hard to grasp is that any specific pattern of the same length is just as likely to occur in a random sequence. For example, the probability of getting 5 tails in a row is 0.03125, or simply 0.5 (the probability of a specific outcome at each trial) to the power of 5 (the number of trials).

The same rule applies to the specific sequences HHTHT or THTHT: the probability of each is obtained by once again taking 0.5 (the probability of a specific outcome at each trial) to the power of 5 (the number of trials), which equals 0.03125.

This holds for any specific sequence, but it implies nothing about whether a short sequence will reflect the true proportion of heads and tails.

Yet it’s still surprising. This is because people expect that the single event odds will be reflected not only in the proportion of events as a whole but also in the specific short sequences we encounter. But this is not the case. A perfectly alternating sequence is just as extraordinary as a sequence with all tails or all heads.
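To make the arithmetic concrete, here is a minimal Python sketch (the target sequences are simply the ones used above) that computes the probability of a few specific five-toss sequences and checks by simulation that each turns up about equally often:

```python
import random
from collections import Counter

def sequence_probability(seq):
    """Probability of one specific sequence of fair-coin tosses: 0.5 ** length."""
    return 0.5 ** len(seq)

# Every specific five-toss sequence has the same probability: 0.5 ** 5 = 0.03125.
for seq in ["TTTTT", "HHTHT", "THTHT"]:
    print(seq, sequence_probability(seq))

# Simulation: each target sequence shows up in roughly 3.1% of five-toss runs.
trials = 200_000
counts = Counter()
targets = {"TTTTT", "HHTHT", "THTHT"}
for _ in range(trials):
    tosses = "".join(random.choice("HT") for _ in range(5))
    if tosses in targets:
        counts[tosses] += 1

for seq in sorted(targets):
    print(seq, counts[seq] / trials)
```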

In comparison, “a locally representative sequence,” Kahneman writes, in Thinking, Fast and Slow, “deviates systematically from chance expectation: it contains too many alternations and too few runs. Another consequence of the belief in local representativeness is the well-known gambler’s fallacy.”

***

Gambler’s Fallacy

There is a specific variation of the misconceptions of chance that Kahneman calls the Gambler’s fallacy (elsewhere also called the Monte Carlo fallacy).

The gambler's fallacy implies that when we come across a local imbalance, we expect future events to smooth it out. We act as if every segment of the random sequence must reflect the true proportion and, if the sequence has deviated from the population proportion, we expect the imbalance to soon be corrected.

Kahneman explains that this is unreasonable – coins, unlike people, have no sense of equality and proportion:

The heart of the gambler's fallacy is a misconception of the fairness of the laws of chance. The gambler feels that the fairness of the coin entitles him to expect that any deviation in one direction will soon be cancelled by a corresponding deviation in the other. Even the fairest of coins, however, given the limitations of its memory and moral sense, cannot be as fair as the gambler expects it to be.

He illustrates this with an example of the roulette wheel and our expectations when a reasonably long sequence of repetition occurs.

After observing a long run of red on the roulette wheel, most people erroneously believe that black is now due, presumably because the occurrence of black will result in a more representative sequence than the occurrence of an additional red.

In reality, of course, roulette is a random process with no memory, in which the chance of getting a red or a black never depends on the past sequence. The probabilities reset with every spin, yet we still seem to take the past outcomes into account.

Contrary to our expectations, the universe keeps no running account of a random process, so streaks are not nudged back toward the true proportion. As long as the wheel is fair, your chance of getting a red after a series of blacks is exactly the same as your chance of getting another black.
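A quick simulation makes the same point for the wheel. The sketch below assumes a single-zero wheel with 18 red, 18 black, and 1 green pocket, and estimates the chance of red immediately after a run of five blacks:

```python
import random

def spin():
    """One spin of a single-zero wheel: 18 red, 18 black, 1 green pocket."""
    return random.choices(["red", "black", "green"], weights=[18, 18, 1])[0]

spins = [spin() for _ in range(500_000)]

runs = 0        # how many times we just saw five blacks in a row
red_next = 0    # how often the very next spin was red
for i in range(5, len(spins)):
    if all(s == "black" for s in spins[i - 5:i]):
        runs += 1
        if spins[i] == "red":
            red_next += 1

print("P(red), unconditional:      ", spins.count("red") / len(spins))
print("P(red | five blacks before):", red_next / runs)
# Both estimates converge on 18/37, about 0.486; the streak changes nothing.
```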

The gambler's fallacy is not confined to the casino. Many of us commit it frequently by thinking that a small, random sample will tend to correct itself.

For example, assume that the average IQ in a specific country is known to be 100, and that to assess intelligence in a specific district we draw a random sample of 50 people. The first person in our sample happens to have an IQ of 150. What would you expect the mean IQ to be for the whole sample?

The correct answer is (100*49 + 150*1)/50 = 101. Yet without knowing the correct answer, it is tempting to say it is still 100 – the same as in the country as a whole.
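The expectation is easy to check in a couple of lines: the 49 remaining draws are still expected to average the population mean, so the high first draw is diluted rather than corrected.

```python
# Expected mean of the 50-person sample once the first IQ of 150 is known.
population_mean = 100
first_observation = 150
sample_size = 50

expected_sample_mean = (population_mean * (sample_size - 1) + first_observation) / sample_size
print(expected_sample_mean)  # 101.0
```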

According to Kahneman and Tversky, such an expectation could only be justified by the belief that a random process is self-correcting and that sample variation is always proportional. They explain:

Idioms such as “errors cancel each other out” reflect the image of an active self-correcting process. Some familiar processes in nature obey such laws: a deviation from a stable equilibrium produces a force that restores the equilibrium.

Indeed, this may be true in thermodynamics, chemistry, and arguably economics. These, however, are false analogies. It is important to realize that processes governed by chance are not guided by principles of equilibrium: the random outcomes in a sequence are not pulled back toward some common balance.

“Chance,” Kahneman writes in Thinking, Fast and Slow, “is commonly viewed as a self-correcting process in which a deviation in one direction induces a deviation in the opposite direction to restore the equilibrium. In fact, deviations are not “corrected” as a chance process unfolds, they are merely diluted.”

 

***

The Law of Small Numbers

Misconceptions of chance are not limited to gambling. In fact, most of us fall for them all the time because we intuitively believe (and there is a whole best-seller section at the bookstore to prove it) that inferences drawn from small sample sizes are highly representative of the populations from which they are drawn.

By illustrating people's expectations of random heads-and-tails sequences, we have already established that we hold preconceived notions of what randomness looks like. This, coupled with the unfortunate tendency to believe in a self-correcting process within a random sample, generates expectations about sample characteristics and representativeness which are not necessarily true. The expectation that the patterns and characteristics of a small sample will be representative of the population as a whole is called the law of small numbers.

Consider the sequence:

1, 2, 3, _, _, _

What do you think are the next three digits?

The task almost seems laughable, because the pattern is so familiar and obvious: 4, 5, 6. However, there are endless rules that would still fit the first three numbers, such as the Fibonacci sequence (5, 8, 13), a repeating sequence (1, 2, 3), a random sequence (5, 8, 2), and many others. The truth is, in this case there simply is not enough information to say, with any reliability, what rule governs the sequence.

The same rule applies to sampling problems – sometimes we feel we have gathered enough data to tell a real pattern from an illusion. Let me illustrate this fallacy with yet another example.

Imagine that you face a tough decision between investing in the development of two different product opportunities. Let’s call them Product A or Product B. You are interested in which product would appeal to the majority of the market, so you decide to conduct customer interviews. Out of the first five pilot interviews, four customers show a preference for Product A. While the sample size is quite small, given the time pressure involved, many of us would already have some confidence in concluding that the majority of customers would prefer Product A.

However, a quick statistical test will tell you that the probability of a result at least as extreme is in fact 3/8, assuming customers have no preference at all. In simple terms, even if customers had no preference between Products A and B, roughly 3 samples out of 8 would still show four or more of the five customers favoring one of the two products.
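If you want to verify that figure, here is a minimal sketch of the binomial calculation behind it:

```python
from math import comb

n = 5      # pilot interviews
p = 0.5    # null hypothesis: customers have no real preference

# Probability that at least 4 of the 5 favor one particular product (say A):
p_at_least_4_for_A = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in (4, 5))

# A result "as extreme" could favor either product, so double it:
p_as_extreme = 2 * p_at_least_4_for_A

print(p_at_least_4_for_A)  # 0.1875 (3/16)
print(p_as_extreme)        # 0.375  (3/8)
```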

Basically, a study of such size has little to no predictive validity – these results could easily be obtained from a population with no preference for one or the other product. This, of course, does not mean that talking to customers is of no value. Quite the contrary – the more random cases we examine, the more reliable and accurate the results of the true proportion will be. If we want absolute certainty we must be prepared for a lot of work.

There will always be cases where a guesstimate based on a small sample will be enough because we have other critical information guiding the decision-making process or we simply do not need a high degree of confidence. Yet rather than assuming that the samples we come across are always perfectly representative, we must treat random selection with the suspicion it deserves. Accepting the role imperfect information and randomness play in our lives and being actively aware of what we don’t know already makes us better decision makers.

13 Practical Ideas That Have Helped Me Make Better Decisions

This article is a collaboration between Mark Steed and myself. He did most of the work. Mark was a participant at the last Re:Think Decision Making event as well as a member of the Good Judgment Project. I asked him to put together something on making better predictions. This is the result.

We all face decisions. Sometimes we think hard about a specific decision, other times, we make decisions without thinking. If you've studied the genre you’ve probably read Taleb, Tversky, Kahneman, Gladwell, Ariely, Munger, Tetlock, Mauboussin and/or Thaler. These pioneers write a lot about “rationality” and “biases”.

Rationality dictates the selection of the best choice among however many options. Biases of a cognitive or emotional nature creep in and are capable of preventing the identification of the “rational” choice. These biases can exist in our DNA or can be formed through life experiences. The mentioned authors consider biases extensively, and, lucky for us, their writings are eye-opening and entertaining.

Rather than rehash what brighter minds have discussed, I’ll focus on practical ideas that have helped me make better decisions. I think of this as a list of “lessons learned (so far)” from my work in asset management and as a forecaster for the Good Judgment Project. I’ve held back on submitting this given the breadth and depth of the FS readers, but, rather than expect perfection, I wanted to put something on the table because I suspect many of you have useful ideas that will help move the conversation forward.

1. This is a messy business. Studying decision science can easily motivate self-loathing. There are over one hundred cognitive biases that might prevent us from making calculated and “rational” decisions. What, you can't create a decision tree with 124 decision nodes, complete with assorted probabilities, in a split second? I asked around, and it turns out not many people can. Since there is no way to eliminate all the potential cognitive biases, and I don't possess the mental faculties of Mr. Spock or C-3PO, I might as well live with the fact that some decisions will be more elegant than others.

2. We live and work in dynamic environments. Dynamic environments adapt. The opposite of a dynamic environment is a static one. Financial markets, geopolitical events, team sports, etc. are examples of dynamic “environments” because relationships between agents evolve and problems are often unpredictable. Changes in one period are conditional on what happened the previous period. Casinos are more representative of static environments. Not casinos necessarily, but the games inside. If you play roulette, your odds of winning are always the same, and it doesn't matter what happened on the previous spin.

3. Good explanatory models are not necessarily good predictive models. Dynamic environments have a habit of desecrating rigid models. While blindly following an elegant model may be ill-advised, strong explanatory models are excellent guideposts when paired with sound judgment and intuition. Just as I’m not comfortable with the automatic pilot flying a plane without a human in the cockpit, I’m also not comfortable with a human flying a plane without the help of technology. It has been said before, people make models better and models make people better.

4. Instinct is not always irrational. Rules of thumb, otherwise known as heuristics, can provide better results than more complicated analytical techniques. Gerd Gigerenzer is the thought leader here, and his book Risk Savvy: How to Make Good Decisions is worth reading. Most of the literature disparages heuristics, but he asserts that intuition can prove superior because optimization is sometimes mathematically impossible or exposed to sampling error. He often uses the example of Harry Markowitz, who won a Nobel Prize in Economics in 1990 for his work on Modern Portfolio Theory. Markowitz discovered a method for determining the “optimal” mix of assets. However, Markowitz himself did not follow his Nobel prize-winning mean-variance theory but instead used a 1/N heuristic, spreading his dollars equally across N investments. Gigerenzer concluded that the 1/N strategy would perform better than mean-variance optimization unless the optimizer had something like 500 years of data to work with. Our intuition is more likely to be accurate if it is preceded by rigorous analysis and introspection. And simple rules are more effective at communicating winning strategies in complex environments. When coaching a child's soccer team, it is far easier to teach a few basic principles than to articulate the nuances of every possible situation.

5. Decisions are not evaluated in ways that help us reduce mistakes in the future. Our tendency is to only critique decisions where the desired outcome was not achieved while uncritically accepting positive outcomes even if luck, or another factor, produced the desired result. At the end of the day I understand all we care about are results, but good processes are more indicative of future success than good results.

6. Success is ill-defined. In some cases this is relatively straightforward. If the outcome is binary, either it did, or did not happen, success is easy to identify. But this is more difficult in situations where the outcome can take a range of potential values, or when individuals differ on what the values should be.

7. We should care a lot more about calibration. Confidence, not just a decision, should be recorded (and to be clear, decisions should be recorded). Next time you have a major decision, ask yourself how confident you are that the desired outcome will be achieved. Are you 50% confident? 90%? Write it down. This helps with calibration. For all decisions in which you are 50% confident, half should be successes. And you should be right nine out of ten times for all decisions in which you are 90% confident. If you are 100% confident, you should never be wrong. If you don't know anything about a specific subject, then you should be no more confident than a coin flip. It's amazing how often we assign high confidence to an event we know nothing about. Turns out this idea is pretty helpful. Let's say someone brings an idea to you and you know nothing about it. Your default should be 50/50; you might as well flip a coin. Then you just need to worry about the costs and payouts. (A sketch of this kind of calibration tracking appears after this list.)

8. Probabilities are one thing, payouts are another. You might feel 50/50 about your chances, but you need to know your payout if you are right. This is where expected value comes in handy. It's the probability of being right multiplied by the payout if you are right, plus the probability of being wrong multiplied by the cost: EV = 0.50(x) + 0.50(y) for a 50/50 call. Say someone on your team has an idea for a project and you decide there is a 50% chance that it succeeds; if it does, you double your money, and if it doesn't, you lose what you invested. If the project required $10mm, then the expected outcome is calculated as 0.50*20 + 0.50*0 = 10, or $10mm. If you repeat this process a number of times, approving only projects with a 2:1 payout and a 50% probability of success, you would likely end up with about the same amount you started with. Binary outcomes that have a 50/50 probability should have a double-or-nothing payout. This is even more helpful given #7 above. If you were tracking this employee's calibration, you would have a sense of whether their forecasts are accurate. As a team member or manager, you would want to know if a specific employee is 90% confident all the time but only 50% accurate. More importantly, you would want to know if a certain team member is usually right when they express 90% or 100% confidence. Use a Brier score to track colleagues, but provide an environment that encourages discussion and openness. (Both the expected-value arithmetic and the Brier score are sketched after this list.)

9. We really are overconfident. Starting from the assumption that we are probably only 50% accurate is not a bad idea. Phil Tetlock, a professor at UPenn, Team Leader for the Good Judgment Project and author of Expert Political Judgment: How Good Is It? How Can We Know?, suggested political pundits are about 53% accurate regarding political forecasts while CXO Advisory tracks investment gurus and finds they are, in aggregate, about 48% accurate. These are experts making predictions about their core area of expertise. Consider the rate of divorce in the U.S., currently around 40%-50%, as additional evidence that sometimes we don’t know as much as we think. Experts are helpful in explaining a specific discipline but they are less helpful in dynamic environments. If you need something fixed, like a car, a clock or an appliance then experts can be very helpful. Same for tax and accounting advice. It's not because this stuff is simple, it's because the environment is static.

10. Improving estimations of probabilities and payouts is about polishing our 1) subject matter expertise and 2) cognitive processing abilities. Learning more about a given subject reduces uncertainty and lets us move away from the lazy 50/50 forecast. Say you travel to Arizona and get stung by a scorpion. Rather than assume a 50% probability of death, you can do a quick internet search and learn no one has died from a scorpion sting in Arizona since the 1960s. Overly simplistic, but you get the picture. Second, data needs to be interpreted in a cogent way. Let's say you work in asset management and one of your portfolio managers has made three investments that returned -5%, -12% and 22%. What can you say about the manager (other than that two of the three investments lost money)? Does the information allow you to claim the portfolio manager is a bad manager? Does it allow you to confidently predict his or her average rate of return? Unless you've had some statistics, it might not be entirely clear what conclusions you can reliably draw. What if you flipped a coin three times and came up with tails on two of them? That wouldn't seem so strange. Two-thirds is about 66%. If you tossed the coin one hundred times and got 66 tails, that would be a little more interesting. The more observations, the higher our confidence should be. A 95% confidence interval for the portfolio manager's average return would be a range of roughly -43% to +46%. Is that enough to take action? (The interval is reproduced in the sketch after this list.)

11. Bayesian analysis is more useful than we think. Bayesian thinking helps us update our judgments given true/false positives and true/false negatives. It's the probability of a hypothesis given some observed data. For example, what's the likelihood of X (this new hire will place in the top 10% of the firm) given Y (they graduated from an Ivy League school)? A certain percentage of employees are top performers, some Ivy League grads will be top performers (others not), and some non-Ivy League grads will be top performers (others not). If I'm staring at a random employee trying to guess whether they are a top performer, all I have are the starting odds, and, if only the top 10% qualify, I know my chances are 1 in 10. But I can update my odds if supplied information about their education. Here's another example: what is the likelihood a project will be successful (X) given it missed one of the first two milestones (Y)? There are lots of helpful resources online if you want to learn more, but think of it this way (hat tip to Kalid Azad at Better Explained): original odds x the evidence adjustment = your new odds. The actual equation is more complicated, but that is the intuition behind it. Bayesian analysis has its naysayers. In the examples provided, the prior odds of success are known, or could easily be obtained, but this isn't always true. Most of the time subjective prior probabilities are required, and this type of tomfoolery is generally discouraged. There are ways around that, but no time to explain them here. (A worked Bayes update with made-up conditional probabilities appears after this list.)

12. A word about crowds. Is there a wisdom of crowds? Some say yes, others say no. My view is that crowds can be very useful if individual members of the crowd are able to vote independently or if the environment is such that there are few repercussions for voicing disagreement. Otherwise, I think signaling effects from seeing how others are “voting” is too much evolutionary force to overcome with sheer rational willpower. Our earliest ancestors ran when the rest of the tribe ran. Not doing so might have resulted in an untimely demise.

13. Analyze your own motives. Jonathan Haidt, author of The Righteous Mind: Why Good People Are Divided by Politics and Religion, is credited with teaching that logic isn’t used to find truth, it’s used to win arguments. Logic may not be the only source of truth (and I have no basis for that claim). Keep this in mind as it has to do with the role of intuition in decision making.
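To make ideas 7 and 8 concrete, here is a minimal sketch; the decision log and the colleague's forecast record are made up purely for illustration. It buckets decisions by stated confidence, compares each bucket's hit rate with the confidence claimed, computes the expected value of the $10mm project, and scores the forecast record with a Brier score.

```python
from collections import defaultdict

# Hypothetical decision log: (stated confidence, did the desired outcome happen?)
decisions = [
    (0.9, True), (0.9, True), (0.9, False), (0.9, True),
    (0.7, True), (0.7, False), (0.7, True),
    (0.5, False), (0.5, True), (0.5, False),
]

by_confidence = defaultdict(list)
for confidence, outcome in decisions:
    by_confidence[confidence].append(outcome)

for confidence in sorted(by_confidence, reverse=True):
    outcomes = by_confidence[confidence]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"stated {confidence:.0%} -> actual {hit_rate:.0%} over {len(outcomes)} decisions")
# A well-calibrated record shows the two percentages lining up as the log grows.

def expected_value(p_success, value_if_right, value_if_wrong):
    """EV = p * payout if right + (1 - p) * payout if wrong."""
    return p_success * value_if_right + (1 - p_success) * value_if_wrong

# The $10mm project: 50% chance it doubles to $20mm, 50% chance it goes to zero.
print(expected_value(0.5, 20, 0))  # 10.0, i.e. in expectation you get your $10mm back

def brier_score(forecasts):
    """Mean squared error between stated probabilities and outcomes; 0 is perfect."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Hypothetical colleague: always 90% confident, right only half the time.
overconfident = [(0.9, 1), (0.9, 0), (0.9, 1), (0.9, 0)]
print(brier_score(overconfident))  # 0.41, worse than always saying 50/50 (0.25)
```

And a sketch for ideas 10 and 11: the 95% interval for the portfolio manager's three returns (with the t critical value for two degrees of freedom hard-coded rather than pulled from a statistics library), followed by a Bayes update for the hiring example. The 10% base rate comes from the text; the two conditional probabilities are invented for illustration.

```python
from math import sqrt
from statistics import mean, stdev

returns = [-5, -12, 22]        # the three investments, in percent
n = len(returns)
m = mean(returns)              # about 1.7
s = stdev(returns)             # sample standard deviation, about 17.9

t_crit = 4.303                 # two-sided 95% t value for n - 1 = 2 degrees of freedom
margin = t_crit * s / sqrt(n)  # about 44.6
print(f"95% CI: {m - margin:.0f}% to {m + margin:.0f}%")  # roughly -43% to +46%

def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """P(H | E) = P(E | H) P(H) / [P(E | H) P(H) + P(E | not H) P(not H)]."""
    numerator = prior * p_evidence_given_h
    return numerator / (numerator + (1 - prior) * p_evidence_given_not_h)

prior = 0.10                # 10% of employees are top performers (the starting odds)
p_ivy_given_top = 0.40      # assumed: 40% of top performers are Ivy League grads
p_ivy_given_not_top = 0.15  # assumed: 15% of everyone else is an Ivy League grad

posterior = bayes_update(prior, p_ivy_given_top, p_ivy_given_not_top)
print(f"{posterior:.0%}")   # about 23%, better than 1 in 10 but far from a sure thing
```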

Just a few closing thoughts.

We are pretty hard on ourselves. My process is to make the best decisions I can, realizing not all of them will be optimal. I have a method to track my decisions and to score how accurate I am. Sometimes I use heuristics, but I try to keep those within my circle of competence, as Munger says. I don't do lists of pros and cons because I feel like I'm just trying to convince myself one way or the other.

If I have to make a big decision, in an unfamiliar area, I try to learn as much as I can about the issue on my own and from experts, assess how much randomness could be present, formulate my thesis, look for contradictory information, try and build downside protection (risking as little as possible) and watch for signals that may indicate a likely outcome. Many of my decisions have not worked out, but most of them have. As the world changes, so will my process, and I look forward to that.


The Lucretius Problem


It's always good to re-read books and to dip back into them periodically. When reading a new book, I often miss out on crucial information (especially books that are hard to categorize with one descriptive sentence). When you come back to a book after reading hundreds of others you can't help but make new connections with the old book and see it anew.

It has been a while since I read Antifragile. In the past I've talked about an Antifragile Way of Life, Learning to Love Volatility, the Definition of Antifragility, the Antifragile life of an economy, and the Noise and the Signal.

But upon re-reading Antifragile I came across the Lucretius Problem and I thought I'd share an excerpt. (Titus Lucretius Carus was a Roman poet and philosopher, best-known for his poem On the Nature of Things). Taleb writes:

Indeed, our bodies discover probabilities in a very sophisticated manner and assess risks much better than our intellects do. To take one example, risk management professionals look in the past for information on the so-called worst-case scenario and use it to estimate future risks – this method is called “stress testing.” They take the worst historical recession, the worst war, the worst historical move in interest rates, or the worst point in unemployment as an exact estimate for the worst future outcome. But they never notice the following inconsistency: this so-called worst-case event, when it happened, exceeded the worst [known] case at the time.

I have called this mental defect the Lucretius problem, after the Latin poetic philosopher who wrote that the fool believes that the tallest mountain in the world will be equal to the tallest one he has observed. We consider the biggest object of any kind that we have seen in our lives or hear about as the largest item that can possibly exist. And we have been doing this for millennia.

Taleb brings up an interesting point, which is that our documented history can blind us. All we know is what we have been able to record.

We think that because we have sophisticated data-collecting techniques, we can capture all the data necessary to make decisions. We think we can use our current statistical techniques to draw historical trends from historical data without acknowledging that past data recorders had fewer tools to capture the dark figure of unreported events. We also overestimate the validity of what has been recorded before; the trends we draw might tell a different story if that unreported data were included.

Taleb continues:

The same can be seen in the Fukushima nuclear reactor, which experienced a catastrophic failure in 2011 when a tsunami struck. It had been built to withstand the worst past historical earthquake, with the builders not imagining much worse— and not thinking that the worst past event had to be a surprise, as it had no precedent. Likewise, the former chairman of the Federal Reserve, Fragilista Doctor Alan Greenspan, in his apology to Congress offered the classic “It never happened before.” Well, nature, unlike Fragilista Greenspan, prepares for what has not happened before, assuming worse harm is possible.

So what do we do and how do we deal with the blindness?

Taleb's answer is to develop layers of redundancy to act as a buffer against ourselves. We overvalue what we have recorded and assume it tells us the worst and best possible outcomes. Redundant layers are a buffer against our tendency to think that what has been recorded is a map of the whole terrain. One example of a redundant feature is a rainy day fund, which acts as an insurance policy against something catastrophic such as a job loss, allowing you to survive and fight another day.

Antifragile is a great book, and you might learn something about yourself and the world you live in by reading it (or, in my case, re-reading it).

Fooled By Randomness


I don't want you to make the same mistake I did.

I waited too long before reading Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets by Nassim Taleb. He wrote it before The Black Swan and Antifragile, the books that propelled him into intellectual celebrity. Interestingly, Fooled by Randomness contains semi-explored gems of the ideas that would later become those best-sellers.

***
Hindsight Bias

Part of the argument that Fooled by Randomness presents is that when we look back at things that have happened we see them as less random than they actually were.

It is as if there were two planets: the one in which we actually live and the one, considerably more deterministic, on which people are convinced we live. It is as simple as that: Past events will always look less random than they were (it is called the hindsight bias). I would listen to someone’s discussion of his own past realizing that much of what he was saying was just backfit explanations concocted ex post by his deluded mind.

***
The Courage of Montaigne

Writing on Montaigne as the role model for the modern thinker, Taleb also addresses his courage:

It certainly takes bravery to remain skeptical; it takes inordinate courage to introspect, to confront oneself, to accept one’s limitations— scientists are seeing more and more evidence that we are specifically designed by mother nature to fool ourselves.

***
Probability

Fooled by Randomness is about probability, not in a mathematical way but as skepticism.

In this book probability is principally a branch of applied skepticism, not an engineering discipline. …

Probability is not a mere computation of odds on the dice or more complicated variants; it is the acceptance of the lack of certainty in our knowledge and the development of methods for dealing with our ignorance. Outside of textbooks and casinos, probability almost never presents itself as a mathematical problem or a brain teaser. Mother nature does not tell you how many holes there are on the roulette table, nor does she deliver problems in a textbook way (in the real world one has to guess the problem more than the solution).

“Outside of textbooks and casinos, probability almost never presents itself as a mathematical problem,” which is fascinating given how we tend to solve problems. In decisions under uncertainty, I discussed how risk and uncertainty are different things, which creates two types of ignorance.

Most decisions are not risk-based; they are uncertainty-based, and you either know you are ignorant or you have no idea that you are. There is a big distinction between the two. Trust me, you'd rather know you are ignorant.

***
Randomness Disguised as Non-Randomness

The core of the book is about luck that we understand as skill or “randomness disguised as non-randomness (that is determinism).”

This problem manifests itself most frequently in the lucky fool, “defined as a person who benefited from a disproportionate share of luck but attributes his success to some other, generally very precise, reason.”

Such confusion crops up in the most unexpected areas, even science, though not in such an accentuated and obvious manner as it does in the world of business. It is endemic in politics, as it can be encountered in the shape of a country’s president discoursing on the jobs that “he” created, “his” recovery, and “his predecessor’s” inflation.

These lucky fools are often fragilistas — they have no idea they are lucky fools. For example:

[W]e often have the mistaken impression that a strategy is an excellent strategy, or an entrepreneur a person endowed with “vision,” or a trader a talented trader, only to realize that 99.9% of their past performance is attributable to chance, and chance alone. Ask a profitable investor to explain the reasons for his success; he will offer some deep and convincing interpretation of the results. Frequently, these delusions are intentional and deserve to bear the name “charlatanism.”

This does not mean that all success is luck or randomness. There is a difference between “it is more random than we think” and “it is all random.”

Let me make it clear here: Of course chance favors the prepared! Hard work, showing up on time, wearing a clean (preferably white) shirt, using deodorant, and some such conventional things contribute to success— they are certainly necessary but may be insufficient as they do not cause success. The same applies to the conventional values of persistence, doggedness and perseverance: necessary, very necessary. One needs to go out and buy a lottery ticket in order to win. Does it mean that the work involved in the trip to the store caused the winning? Of course skills count, but they do count less in highly random environments than they do in dentistry.

No, I am not saying that what your grandmother told you about the value of work ethics is wrong! Furthermore, as most successes are caused by very few “windows of opportunity,” failing to grab one can be deadly for one’s career. Take your luck!

That last paragraph connects to something Charlie Munger once said: “Really good investment opportunities aren't going to come along too often and won't last too long, so you've got to be ready to act. Have a prepared mind.”

Taleb thinks of success in terms of degrees: mild success might be explained by skill and labor, but outrageous success “is attributable to variance.”

***
Luck Makes You Fragile

One thing Taleb hits on that really stuck with me is that “that which came with the help of luck could be taken away by luck (and often rapidly and unexpectedly at that). The flipside, which deserves to be considered as well (in fact it is even more of our concern), is that things that come with little help from luck are more resistant to randomness.” How Antifragile.

Taleb argues this is the problem of induction, “it does not matter how frequently something succeeds if failure is too costly to bear.”

***
Noise and Signal

We often confuse noise with signal.

…the literary mind can be intentionally prone to the confusion between noise and meaning, that is, between a randomly constructed arrangement and a precisely intended message. However, this causes little harm; few claim that art is a tool of investigation of the Truth— rather than an attempt to escape it or make it more palatable. Symbolism is the child of our inability and unwillingness to accept randomness; we give meaning to all manner of shapes; we detect human figures in inkblots.

All my life I have suffered the conflict between my love of literature and poetry and my profound allergy to most teachers of literature and “critics.” The French thinker and poet Paul Valery was surprised to listen to a commentary of his poems that found meanings that had until then escaped him (of course, it was pointed out to him that these were intended by his subconscious).

If we're concerned about situations where randomness is confused with non-randomness, should we also be concerned with situations where non-randomness is mistaken for randomness, which would result in signal being ignored?

First, I am not overly worried about the existence of undetected patterns. We have been reading lengthy and complex messages in just about any manifestation of nature that presents jaggedness (such as the palm of a hand, the residues at the bottom of Turkish coffee cups, etc.). Armed with home supercomputers and chained processors, and helped by complexity and “chaos” theories, the scientists, semiscientists, and pseudoscientists will be able to find portents. Second, we need to take into account the costs of mistakes; in my opinion, mistaking the right column for the left one is not as costly as an error in the opposite direction. Even popular opinion warns that bad information is worse than no information at all.

If you haven't yet, pick up a copy of Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets. Don't make the same mistake I did and wait to read this important book.


Leonard Mlodinow: The Three Laws of Probability

"These three laws, simple as they are, form much of the basis of probability theory. Properly applied, they can give us much insight into the workings of nature and the everyday world. "
“These three laws, simple as they are, form much of the basis of probability theory. Properly applied, they can give us much insight into the workings of nature and the everyday world.”

 

In his book, The Drunkard's Walk, Leonard Mlodinow outlines the three key “laws” of probability.

The first law of probability is the most basic of all. But before we get to that, let's look at this question.

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.
Which is more probable?

Linda is a bank teller.
Linda is a bank teller and is active in the feminist movement.

To Kahneman and Tversky's surprise, 87 percent of the subjects in the study believed that the probability of Linda being a bank teller and active in the feminist movement was higher than the probability that Linda is a bank teller.

1. The probability that two events will both occur can never be greater than the probability that each will occur individually.

This is the conjunction fallacy.

Mlodinow explains:

Why not? Simple arithmetic: the chances that event A will occur = the chances that events A and B will occur + the chance that event A will occur and event B will not occur.

The interesting thing that Kahneman and Tversky discovered was that we don't tend to make this mistake unless we know something about the subject.

“For example,” Mlodinow muses, “suppose Kahneman and Tversky had asked which of these statements seems most probable:”

Linda owns an International House of Pancakes franchise.
Linda had a sex-change operation and is now known as Larry.
Linda had a sex-change operation, is now known as Larry, and owns an International House of Pancakes franchise.

In this case it's unlikely you would choose the last option.

Via The Drunkard's Walk:

If the details we are given fit our mental picture of something, then the more details in a scenario, the more real it seems and hence the more probable we consider it to be—even though any act of adding less-than-certain details to a conjecture makes the conjecture less probable.

Or as Kahneman and Tversky put it, “A good story is often less probable than a less satisfactory… [explanation].”
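A tiny sketch of the first law applied to the Linda problem; both probabilities below are invented for illustration, since the inequality holds whatever the real numbers are:

```python
# Hypothetical numbers for the Linda problem, just to illustrate the first law.
p_teller = 0.05                  # assumed: P(Linda is a bank teller)
p_feminist_given_teller = 0.30   # assumed: P(active feminist | bank teller)

p_teller_and_feminist = p_teller * p_feminist_given_teller

print(p_teller)                  # 0.05
print(p_teller_and_feminist)     # 0.015, necessarily no larger than 0.05
# P(A) = P(A and B) + P(A and not B), so P(A and B) can never exceed P(A).
```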

2. If two possible events, A and B, are independent, then the probability that both A and B will occur is equal to the product of their individual probabilities.

Via The Drunkard's Walk:

Suppose a married person has on average roughly a 1 in 50 chance of getting divorced each year. On the other hand, a police officer has about a 1 in 5,000 chance each year of being killed on the job. What are the chances that a married police officer will be divorced and killed in the same year? According to the above principle, if those events were independent, the chances would be roughly 1⁄50 × 1⁄5,000, which equals 1⁄250,000. Of course the events are not independent; they are linked: once you die, darn it, you can no longer get divorced. And so the chance of that much bad luck is actually a little less than 1 in 250,000.

Why multiply rather than add? Suppose you make a pack of trading cards out of the pictures of those 100 guys you’ve met so far through your Internet dating service, those men who in their Web site photos often look like Tom Cruise but in person more often resemble Danny DeVito. Suppose also that on the back of each card you list certain data about the men, such as honest (yes or no) and attractive (yes or no). Finally, suppose that 1 in 10 of the prospective soul mates rates a yes in each case. How many in your pack of 100 will pass the test on both counts? Let’s take honest as the first trait (we could equally well have taken attractive). Since 1 in 10 cards lists a yes under honest, 10 of the 100 cards will qualify. Of those 10, how many are attractive? Again, 1 in 10, so now you are left with 1 card. The first 1 in 10 cuts the possibilities down by 1⁄10, and so does the next 1 in 10, making the result 1 in 100. That’s why you multiply. And if you have more requirements than just honest and attractive, you have to keep multiplying, so . . . well, good luck.
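The arithmetic in that passage is easy to reproduce; here is a minimal sketch of both the divorced-and-killed calculation and the trading-card filtering:

```python
# Independent events multiply.
p_divorce = 1 / 50     # chance a married person divorces in a given year
p_killed = 1 / 5000    # chance a police officer is killed on the job in a year
print(p_divorce * p_killed)  # 0.000004, i.e. 1 in 250,000 (if the events were independent)

# The trading-card example: two independent 1-in-10 filters leave about 1 card in 100.
cards = 100
honest = cards * (1 / 10)                  # about 10 cards pass the first cut
honest_and_attractive = honest * (1 / 10)  # about 1 card passes both
print(honest_and_attractive)               # 1.0
```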

And there are situations where probabilities should be added. That's the next law.

“These occur when we want to know the chances of either one event or another occurring, as opposed to the earlier situation, in which we wanted to know the chance of one event and another event happening.”

3. If an event can have a number of different and distinct possible outcomes, A, B, C, and so on, then the probability that either A or B will occur is equal to the sum of the individual probabilities of A and B, and the sum of the probabilities of all the possible outcomes (A, B, C, and so on) is 1 (that is, 100 percent).

Via The Drunkard's Walk:

When you want to know the chances that two independent events, A and B, will both occur, you multiply; if you want to know the chances that either of two mutually exclusive events, A or B, will occur, you add. Back to our airline: when should the gate attendant add the probabilities instead of multiplying them? Suppose she wants to know the chances that either both passengers or neither passenger will show up. In this case she should add the individual probabilities, which according to what we calculated above, would come to 55 percent.
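The 55 percent refers to a calculation made earlier in the book and not reproduced in this excerpt; a plausible reconstruction, assuming each of the two passengers independently shows up with probability 2/3, looks like this:

```python
p_show = 2 / 3   # assumed show-up probability for each of the two passengers

p_both_show = p_show * p_show         # independent events both happen: multiply
p_neither_shows = (1 - p_show) ** 2   # multiply again for the other joint outcome

# "Both show" and "neither shows" are mutually exclusive, so their probabilities add.
print(f"{p_both_show + p_neither_shows:.0%}")  # about 56%, close to the figure quoted above
```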

These three simple laws form the basis of probability. “Properly applied,” Mlodinow writes, “they can give us much insight into the workings of nature and the everyday world.” We use them all the time, we just don't use them properly.