Tag: Anchoring

A Simple Checklist to Improve Decisions

We owe thanks to the publishing industry. Its ability to take a single concept and fill an entire category of books around it is one reason more people are talking about biases.

Unfortunately, talk alone will not eliminate biases, but it is possible to take steps to counteract them. Reducing biases can make a huge difference in the quality of any decision, and it is easier than you think.

In a recent article for Harvard Business Review, Daniel Kahneman and his co-authors describe a simple way to detect bias and minimize its effects in the most common type of decision people make: determining whether to accept, reject, or pass on a recommendation.

The Munger two-step process for making decisions is a more complete framework, but Kahneman's approach is a good way to help reduce biases in our decision-making.

If you're short on time, here is a simple checklist that will get you started on the path toward improving your decisions:

Preliminary Questions: Ask yourself

1. Check for Self-interested Biases

  • Is there any reason to suspect the team making the recommendation of errors motivated by self-interest?
  • Review the proposal with extra care, especially for overoptimism.

2. Check for the Affect Heuristic

  • Has the team fallen in love with its proposal?
  • Rigorously apply all the quality controls on the checklist.

3. Check for Groupthink

  • Were there dissenting opinions within the team?
  • Were they explored adequately?
  • Solicit dissenting views, discreetly if necessary.

Challenge Questions: Ask the recommenders

4. Check for Saliency Bias

  • Could the diagnosis be overly influenced by an analogy to a memorable success?
  • Ask for more analogies, and rigorously analyze their similarity to the current situation.

5. Check for Confirmation Bias

  • Are credible alternatives included along with the recommendation?
  • Request additional options.

6. Check for Availability Bias

  • If you had to make this decision again in a year’s time, what information would you want, and can you get more of it now?
  • Use checklists of the data needed for each kind of decision.

7. Check for Anchoring Bias

  • Do you know where the numbers came from? Can there be…
    • …unsubstantiated numbers?
    • …extrapolation from history?
    • …a motivation to use a certain anchor?
  • Reanchor with figures generated by other models or benchmarks, and request new analysis.

8. Check for Halo Effect

  • Is the team assuming that a person, organization, or approach that is successful in one area will be just as successful in another?
  • Eliminate false inferences, and ask the team to seek additional comparable examples.

9. Check for Sunk-Cost Fallacy, Endowment Effect

  • Are the recommenders overly attached to a history of past decisions?
  • Consider the issue as if you were a new CEO.

Evaluation Questions: Ask about the proposal

10. Check for Overconfidence, Planning Fallacy, Optimistic Biases, Competitor Neglect

  • Is the base case overly optimistic?
  • Have the team build a case taking an outside view; use war games.

11. Check for Disaster Neglect

  • Is the worst case bad enough?
  • Have the team conduct a premortem: Imagine that the worst has happened, and develop a story about the causes.

12. Check for Loss Aversion

  • Is the recommending team overly cautious?
  • Realign incentives to share responsibility for the risk or to remove risk.

If you're looking to dramatically improve your decision-making, here is a great list of books to get started:

Nudge: Improving Decisions About Health, Wealth, and Happiness by Richard H. Thaler and Cass R. Sunstein

Think Twice: Harnessing the Power of Counterintuition by Michael J. Mauboussin

Think Again: Why Good Leaders Make Bad Decisions and How to Keep It from Happening to You by Sydney Finkelstein, Jo Whitehead, and Andrew Campbell

Predictably Irrational: The Hidden Forces That Shape Our Decisions by Dan Ariely

Thinking, Fast and Slow by Daniel Kahneman

Judgment and Managerial Decision Making by Max Bazerman

Scientifically Proven Ways to Increase Tips in the Service Industry

Tipping is a $40 billion industry in the United States. Yet from the traditional economic perspective, which sees us as rational agents operating in our own interest, tipping waiters, barbers, taxi drivers, and other service workers is crazy.

Many food servers depend on tips to make their living. Understanding the variables that affect tipping behavior can make a huge difference to income. Of course, many of the variables that influence tipping behavior are outside the servers' control. For example, customers tend to leave larger tips when the weather is pleasant, when there is desirable background music, and when the restaurant is elegant or in an urban area. Food quality has a large impact. Research also suggests that tipping behavior is affected by customers' characteristics, including the size of the dining party, the amount of alcohol consumed, customers' gender and ethnic background, and method of payment.

However, research also indicates that servers can control some variables that influence tipping behavior.

If you're a food server try—and if you're a customer watch out for—the following:

Have your restaurant prime the customer with examples of what a tip would be at 20%
This study examined the role of gratuity guidelines on tipping behavior in restaurants. When diners were finished with their meals, they were given checks that either did or did not include calculated examples informing them what various percentages of their bill would amount to. Results indicated that parties who received the gratuity examples left significantly higher tips than did those receiving no examples. These results and their implications are discussed.
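To make the anchor concrete, here is a minimal sketch (in Python; the function name and bill amount are invented for illustration, not taken from the study) of the kind of gratuity guideline such a check might print:

```python
# Hypothetical illustration: the calculated gratuity examples a check
# might include, anchoring diners at specific percentages.
def gratuity_examples(bill: float, rates=(0.15, 0.18, 0.20)) -> str:
    return "\n".join(f"{rate:.0%} gratuity: ${bill * rate:.2f}" for rate in rates)

print(gratuity_examples(64.00))
# 15% gratuity: $9.60
# 18% gratuity: $11.52
# 20% gratuity: $12.80
```

Once the examples are printed on the check, they work as anchors: a diner who might have tipped 12% is now comparing against 15-20%.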

Introduce yourself by name
…wait staff who introduced themselves by name received an average tip of $5.44, or 23 percent of the total bill, and those who used no name were tipped on average about $3.49, or 15 percent of the total bill.

Squat when you first get to the table
Waiters should squat down next to the table upon their first trip to the station, Lynn recommended, based on his study at an unnamed casual Mexican restaurant in Houston. Those who squatted received an average tip that was 20 percent higher — $6.40 as compared with $5.18 — than those who stood on their initial visit to the station.

Smile
Servers should give their customers a large smile — with mouth open and teeth showing — while they work.

Approach The Table Often
Wait staff should visit their tables more often, Lynn suggested as another technique, based on a study conducted in the 1970s at a Chicago steak house. Those who approached their tables — just to check up on the customers and not to deliver anything — two times or more received average tips that were 15.6 percent of the total bill, while those who approached the table only to deliver something or take something away received tips that were on average 13.8 percent of the total bill.

Touch your customers
He recommends that servers touch customers lightly on the shoulder when delivering the bill or when returning with change or a credit card. His study of the technique was conducted at a Bennigan's in Houston. Those who touched their customers for an average of two to four seconds received an average tip of 18 percent, or $3.20, of the total bill, and those who didn't received a 12 percent average tip, or $2.52.

Give your customers something extra, like a mint
Another controversial technique Lynn recommended is for wait staff to give their customers after-dinner mints, based on his study at a Philadelphia restaurant. Servers who left their customers mints received tips that were on average about 28 percent, or $5.98, of the total bill, and those who left no mints received average tips of 19 percent, or $4.64, of the total bill.

“The technique didn't work at steak houses with a [per-person] check average over $30,” Lynn said. He added that leaving mints for customers at upscale restaurants seems to have no impact on tips. “But at casual restaurants, it does increase tips,” he insisted. “It works because customers got something free, so they want to repay their servers.”

Compliment your customers
The present study examined the role of ingratiation on tipping behavior in restaurants. In the study, 2 female food servers waited on 94 couples eating dinner, and either complimented or did not compliment the couples on their dinner selections. Results indicated that food servers received significantly higher tips when complimenting their customers than when not complimenting them. These results and their implications are discussed.

Other ideas? Write "thank you" or simply draw a smiley face on the bill (to create a likeable impression); print a picture of someone smiling on the bill, or the American flag (something most people associate with happiness); make paying by Visa the default; if you're a guy, give the bill to the woman, and if you're a woman, give the bill to a guy; and tell customers the weather is supposed to be nice tomorrow (to take away a subconscious worry).

If you're interested in learning more about gratuities, try reading Keep the Change: A Clueless Tipper's Quest to Become the Guru of the Gratuity. The best book out there on working in a restaurant is Kitchen Confidential.

Sources
– Ingratiation and Gratuity: The Effect of Complimenting Customers on Tipping Behavior in Restaurants
– Tip gratuity scale in server's favor with simple techniques
– http://www.allacademic.com/meta/p_mla_apa_research_citation/3/0/7/3/0/p307300_index.html

How Williams-Sonoma Inadvertently Sold More Bread Machines

Paying attention to what your customers and clients see can be a very effective way to increase your influence and, subsequently, your business.

Steve Martin, co-author of Yes! 50 Secrets from the Science of Persuasion, tells the story:

A few years ago a well-known US kitchen retailer released its latest bread-making machine. Like any company bringing a new and improved product to market, it was excited about the extra sales revenues the product might deliver. And, like most companies, it was a little nervous about whether it had done everything to get its product launch right.

It needn’t have worried. Within a few weeks, sales had almost doubled. Surprisingly, though, it wasn’t the new product that generated the huge sales growth but an older model.

Yet there was no doubt about the role that its brand new product had played in persuading customers to buy its older and cheaper version.

Persuasion researchers suggest that when people consider a particular set of choices, they often favour alternatives that are ‘compromise choices’. That is, choices that compromise between what is needed at a minimum and what they could possibly spend at a maximum.

A key factor that often drives compromise choices is price. In the case of the bread-making machine, when customers saw the newer, more expensive product, the original, cheaper product immediately seemed a wiser, more economical and attractive choice in comparison.

Paying attention to what your customers and clients see first can be a very effective way to increase your influence and, subsequently, your business. It is useful to remember that high-end and high-priced products provide two crucial benefits. Firstly, they often serve to meet the needs of customers who are attracted to high-price offerings. A second, and perhaps less recognised benefit is that the next-highest options are often seen as more attractively priced.

Bars and hotels often present wine lists in order of price, cheapest (most often the house wine) first. But doing so may mean that customers never consider some of the more expensive and potentially more profitable wines towards the end of the list. The ‘compromise’ approach suggests that reversing the order and placing more expensive wines at the top of the list would immediately make the next most expensive wines a more attractive choice — potentially increasing sales.

Original source: http://www.babusinesslife.com/Tools/Persuasion/How-compromise-choices-can-make-you-money.html

Michael Mauboussin: Getting More out of Your Brain

As Warren Buffett says, temperament is more important than IQ, because otherwise you take all the horsepower of your brain and dramatically reduce it. “A lot of people start out with 400-horsepower motors but only get a hundred horsepower of output,” he said. “It’s way better to have a 200-horsepower motor and get it all into output.”

With that in mind, Michael Mauboussin, the very first guest on our podcast, writes about five pitfalls to avoid that reduce the horsepower of your brain.

Five Pitfalls to Avoid

1. Overconfidence

The first pitfall that Mauboussin mentions is overconfidence. Acting as though you are smarter than you are is a recipe for disaster.

Researchers have found that people consistently overrate their abilities, knowledge, and skill. This is especially true in areas outside of their expertise. For example, professional securities analysts and money managers were presented with ten requests for information that they were unlikely to know (e.g., the total area of Lake Michigan in square miles). They were asked to respond to each question with both an answer and a “confidence range”—high and low boundaries within which they were 90% certain that the true number resides. On average, the analysts chose ranges wide enough to accommodate the correct answer only 64% of the time. Money managers were even less successful, at 50%.
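You can run the same calibration test on yourself. Here is a minimal sketch (Python, with invented data) of the scoring: state a 90% confidence range for each question, then count how often the true value actually lands inside it:

```python
# Toy calibration check with invented data: what fraction of your
# "90% confident" ranges actually contain the true value?
estimates = [
    # (low bound, high bound, true value)
    (15_000, 30_000, 22_404),  # a hit: e.g., area of Lake Michigan in sq. miles
    (100, 300, 212),           # a hit: truth falls inside the range
    (1_500, 2_500, 5_895),     # a miss: truth falls outside the range
]

hits = sum(low <= truth <= high for low, high, truth in estimates)
print(f"Hit rate: {hits / len(estimates):.0%} (a calibrated judge would be near 90%)")
```

If your hit rate comes in well under 90%, your ranges are too narrow; you are more confident than your knowledge warrants.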

Edward Russo and Paul Schoemaker, in “Managing Overconfidence,” argue that this confidence quiz measures how well we recognize the gap between what we think we know and what we do know. They point out that good decision-making means knowing the limits of your knowledge. Warren Buffett echoes the point with his circle of competence concept. He argues that investors should define their circle of competence, or area of expertise, and stay within it. Overconfidence in our expertise can lead to poor decisions. In the words of Will Rogers, “It’s not what we don’t know that gets us into trouble, it’s what we know that ain’t so.”

Mauboussin suggests you know your circle of competence and avoid overestimating your abilities. To compensate for the inevitable errors, he suggests adding a margin of safety. Another idea is to do the work necessary to have an opinion and to test it by seeking feedback from other people.

2. Anchoring and Adjustment

This is when we are influenced by data or information, even if it's not relevant, that impacts our future judgments. “In considering a decision,” Mauboussin writes, “we often give disproportionate weight to the first information we receive. As a result, initial impressions, ideas, estimates, or data anchor our subsequent thoughts.”

A good decision process helps counter this by ensuring you look at decisions from various angles, consider a wide variety of sources, start from zero, and adapt to reality.

3. Improper Framing

I'll let Mauboussin explain this before I take over.

People’s decisions are affected by how a problem, or set of circumstances, is presented. Even the same problem framed in different—and objectively equal—ways can cause people to make different choices. One example is what Richard Thaler calls “mental accounting.” Say an investor buys a stock at $50 per share that surges to $100. Many investors divide the value of the stock into two distinct parts—the initial investment and the quick profit. And each is treated differently—the original investment with caution and the profit portion with considerably less discipline. Thaler and Eric Johnson call this “the house money effect.”

This effect is not limited to individuals. Hersh Shefrin documents how the committee in charge of Santa Clara University’s endowment portfolio succumbed to this effect. Because of strong market performance, the endowment crossed a preordained absolute level (the frame) ahead of the time line set by the university president. The result? The university took some of the “house money” and added riskier investment classes to its portfolio, including venture capital, hedge funds, and private placements. Classic economic theory assumes frame independence: all money is treated the same. But empirical evidence shows that the frame indeed shapes decisions.

One of the most significant insights from prospect theory is that people exhibit significant aversion to losses when making choices between risky outcomes, no matter how small the stakes. In fact, Kahneman and Tversky find that a loss has about two and a half times the impact of a gain of the same size. In other words, people feel a lot worse about losses of a given size than they feel good about gains of similar magnitude. This leads to loss aversion.

To describe this loss aversion, Shefrin and Meir Statman coined the term “disposition effect,” which they amusingly suggest is shorthand for “predisposition toward get-evenitis.” Since it is difficult for investors to make peace with their losses, they tend to sell their winners too soon and hold on to their losers too long. This is because they don’t want to take a loss on a stock. They want to at least get even despite the fact that the original rationale for purchasing the stock no longer appears valid. This is an important insight for all investors, including those that adopt the expectations investing approach.

This is often influenced by incentives as well, because we tend to see the world through our incentives and not how it really is.

Consider this example from Charlie Munger of how Lloyd's Insurance rewarded people:

They were paid a percentage of the gross volume that went through. And paying everybody a percentage of the gross, when what you're really interested in is the net, is a system — given the natural bias of human beings toward doing what's in their own interest even though it has terrible consequences for other people — that really did Lloyd's in.

People were rewarded for doing stupid things because from their frame of reference they acted rationally. It's hard to see the reality of a system you are a part of. To counter this bias, you need to step outside the system. This can be done by adjusting time frames (thinking about the long term and short term), considering second and third order effects, looking at the issue in different ways, and seeking out conflicting opinions.

4. Irrational Escalation of a Commitment

Sometimes we just keep digging when the best thing to do is cut our losses. Past decisions create what economists call sunk costs, which are past investments that cannot be recovered. And if we're the ones who made the decisions for those investments, we're likely to rationalize that the future is brighter. Otherwise, we'd have to admit that we were wrong and that's painful.

While the irrational escalation of a commitment can sometimes pay off, it's not a probabilistic way to think about things.

Sticking to a good decision process is a good way to avoid irrational escalation of a commitment. Other ideas include considering only future benefits and looking inside yourself to see if you're trying to avoid admitting a mistake. As Daniel Dennett says, “you should become a connoisseur of your own mistakes.”

5. Confirmation Trap

This trap comes from seeking to justify your existing point of view, and it is one we can easily observe in others yet fail to see in ourselves (again, this comes down to relativity).

Mauboussin writes:

Investors tend to seek out information that supports their existing point of view while avoiding information that contradicts their opinion. This trap not only affects where investors go for information but also how they interpret the information they receive—too much weight is given to confirming evidence and not enough to disconfirming evidence. Investors often fall into the confirmation trap after making an investment decision. For example, once investors purchase a stock, they seek evidence that confirms their thesis and dismiss or discount information that disconfirms it. This leads to a loss of objectivity.

It's this loss of objectivity that kills us. The way to counter this is to “ask questions and conduct analysis that challenges your most cherished and firmly held beliefs” and seek out disconfirming evidence, something Charles Darwin mastered.

***

Still Curious? Follow your brain over to Grey Thinking.

Mental Model: Anchoring

We often pay attention to irrelevant information. We develop estimates by starting from an initial anchor, based on whatever information is provided, and adjusting from that anchor (often our adjustments are insufficient). This is called anchoring.

More problematic perhaps is that the existence of an anchor leads people to think of information consistent with that anchor (commitment and consistency) rather than access information that is inconsistent with that anchor.

Anchoring is commonly observed in real estate and the stock market. Many buyers tend to negotiate based on the listed price of a house — and many sellers tend to set the list price by adjusting from their own purchase price.

Some interesting points on anchoring: (1) Experts and non-experts are affected similarly by an anchor; (2) Anchoring-adjustment may occur in any task requiring a numerical response, provided an initial estimate is available; and (3) One study of particular importance for investors, by Joyce and Biddle (1981), found support for the presence of the anchoring effect among practicing auditors of major accounting firms.

* * *

Anchoring and adjustment was first theorized by Tversky and Kahneman. The pair demonstrated that when asked to guess the percentage of African nations that are members of the UN, people who were first asked “was it more or less than 35%?” guessed lower values than those who had been asked whether it was more or less than 65%. Subjects were biased by the numbers 35 or 65, and this had a meaningful influence on their judgment. This bias has since been shown in numerous experiments. Interestingly, paying participants based on their accuracy did not reduce the magnitude of the anchoring effect.
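A toy simulation (my own illustration, not the original experiment; the distributions and adjustment range are invented) shows how starting from an anchor and adjusting insufficiently biases the final estimates:

```python
import random

# Toy model of anchoring-and-adjustment: each subject starts at the
# anchor and adjusts toward a private guess, but stops short of the
# full distance -- the "insufficient adjustment".
def anchored_estimate(anchor: float, own_guess: float) -> float:
    adjustment = random.uniform(0.4, 0.8)  # never 1.0: adjustment is incomplete
    return anchor + adjustment * (own_guess - anchor)

random.seed(1)
private_guesses = [random.gauss(30, 10) for _ in range(10_000)]
low_group = [anchored_estimate(35, g) for g in private_guesses]
high_group = [anchored_estimate(65, g) for g in private_guesses]

print(f"mean estimate after low anchor (35):  {sum(low_group) / len(low_group):.1f}")
print(f"mean estimate after high anchor (65): {sum(high_group) / len(high_group):.1f}")
# Both groups share the same private beliefs, yet the high-anchor
# group's answers land systematically higher.
```

The two groups differ only in the number they were shown first, yet their average answers diverge, which is the signature of the anchoring effect.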

The power of anchoring can be explained by the confirmation heuristic and by the limitations of our own mind. We selectively access hypothesis-consistent information without realizing it. Availability may also play a role in anchoring.

There are numerous examples of anchoring in everyday life:

  • Children are tracked by schools that categorize them by ability at an early age, and based on this initial “anchor,” teachers derive expectations. Teachers tend to expect children assigned to the lower group to achieve little and have much higher expectations of children in the top group (for more, see Darley and Gross, 1983). Malcolm Gladwell talks more about anchoring in his book Outliers.
  • First impressions are a form of anchoring.
  • Minimum payments on credit card bills.
  • Posted interest rates at banks.
  • Prices on a menu in restaurants.
  • Race can also be an anchor with respect to our expectations (Duncan, 1976).

* * *

Heuristics and Biases: The Psychology of Intuitive Judgment offers:

“To examine this heuristic, Tversky and Kahneman (1974) developed a paradigm in which participants are given an irrelevant number and asked if the answer to the question is greater or less than that value. After this comparative assessment, participants provide an absolute answer. Countless experiments have shown that people's absolute answers are influenced by initial comparison with the irrelevant anchor. People estimate that Gandhi lived to be roughly 67 years old, for example, if they first decided whether he died before or after the age of 140, but only 50 years old if they first decided whether he died before or after the age of 9.

Anchoring effects have traditionally been interpreted as a result of insufficient adjustment from an irrelevant value, but recent evidence casts doubt on this account. Instead, anchoring effects observed in the standard paradigm appear to be produced by the increased accessibility of anchor-consistent information.”

* * *

In Judgment and Decision Making, David Hardman says:

Anchoring effects have been observed in a variety of domains including pricing, negotiation, legal judgment, lotteries and gambles, probability estimates, and general knowledge. In one of these studies, Northcraft and Neale (1987) demonstrated anchoring effects in the pricing estimates of estate agents…

Despite the robustness of the anchoring effect, there has been little agreement as to the true nature of the underlying processes. One theory that has been proposed is that of selective anchoring (Mussweiler and Strack, 1997). According to this account, the comparative question task activates information in memory that is subsequently more accessible when making an absolute judgment….

Epley (2004) listed four findings that are consistent with the selective memory account: (1) People attend to shared features between the anchor and target more than to unique features; (2) Completion of a standard anchoring task speeds identification of words consistent with implications of an anchor value rather than words inconsistent with it; (3) The size of anchoring effects can be influenced by altering the hypothesis tested in the comparative assessment (for example, asking whether the anchor is less than a target value has a different effect to asking whether it is more than a target value); (4) People with greater domain knowledge are less susceptible to the effects of irrelevant anchors.

* * *

In Fooled by Randomness, Nassim Nicholas Taleb writes:

Anchoring to a number is the reason people do not react to their total wealth, but rather to differences of wealth from whatever number they are currently anchored to. This is in major conflict with economic theory, as according to economists, someone with $1 million in the bank would be more satisfied than if he had $500 thousand, but this is not necessarily the case.

* * *

Tversky and Kahneman (1974)

In many situations, people make estimates by starting from an initial value that is adjusted to yield the final answer. The initial value, or starting point, may be suggested by the formulation of the problem, or it may be the result of a partial computation. In either case, adjustments are typically insufficient (Slovic & Lichtenstein, 1971). That is, different starting points yield different estimates, which are biased toward the initial values. We call this phenomenon anchoring.

* * *

Russell Fuller writes:

Psychologists have documented that when people make quantitative estimates, their estimates may be heavily influenced by previous values of the item. For example, it is not an accident that a used car salesman always starts negotiating with a high price and then works down. The salesman is trying to get the consumer anchored on the high price so that when he offers a lower price, the consumer will estimate that the lower price represents a good value. Anchoring can cause investors to under-react to new information.

Anchoring is a Farnam Street Mental Model.

Hindsight Bias


“Judgments about what is good and what is bad, what is worthwhile and what is a waste of talent, what is useful and what is less so, are judgments that seldom can be made in the present. They can safely be made only by posterity.” – Tulving

***

Hindsight bias occurs when we look backward in time and see events as more predictable than they were at the time a decision was made. This bias, also known as the “knew-it-all-along effect,” typically involves those annoying “I told you so” people who never really told you anything.

For instance, imagine driving with your partner and coming to a T in the road. Your partner decides to turn right. Four miles down the road, when you realize you are lost, you think, “I knew we should have taken that left.”

Hindsight bias can offer a number of benefits in the short run. For instance, it can be flattering to believe that our judgment is better than it actually is. And, of course, hindsight bias allows us to participate in one of our favorite pastimes — criticizing the decisions of others for their lack of foresight.

Hindsight bias also has practical implications beyond personal reflection. For example, consider a referee asked to review a paper who already knows the verdict of a previous reviewer, or a physician asked for a second opinion who knows the result of the first. These judgments will likely be biased to some degree. Once we know an outcome, it becomes easy to find some plausible explanation for it.

Hindsight bias makes us less accountable for our decisions, less critical of ourselves, and overconfident in our ability to make decisions.

One of the most interesting things I discovered when researching hindsight bias was the impact on our legal system and the perceptions of jurors.

* * *

Harvard Professor Max Bazerman offers:

The processes that give rise to anchoring and overconfidence are also at play with the hindsight bias. According to this explanation, knowledge of an event's outcome works as an anchor by which individuals interpret their prior judgments of the event's likelihood. Due to the selective accessibility of the confirmatory information during information retrieval, adjustments to anchors are inadequate. Consequently, hindsight knowledge biases our perceptions of what we remember knowing in foresight. Furthermore, to the extent that various pieces of data about the event vary in support of the actual outcome, evidence that is consistent with the known outcome may become cognitively more salient and thus more available in memory. This tendency will lead an individual to justify a claimed foresight in view of “the facts provided.” Finally, the relevance of a particular piece of data may later be judged important to the extent that it is representative of the final observed outcome.

In Cognitive Illusions, Rudiger Pohl offered the following explanations of hindsight bias:

Most prominent among the proposed explanations are cognitive accounts which assume that hindsight bias results from an inability to ignore the solution. Among the early approaches are the following three: (1) Fischhoff (1975) assumed an immediate and irreversible assimilation of the solution into one's knowledge base. As a consequence, the reconstructed estimate will be biased towards the solution. (2) Tversky and Kahneman (1974) proposed a cognitive heuristic for the anchoring effect, named anchoring and insufficient adjustment. The same mechanism may apply here, if the solution is assumed to serve as an “anchor” in the reconstruction process. The reconstruction starts from this anchor and is then adjusted in the direction of one's knowledge base. However, this adjustment process may stop too early, for example at the point where the first plausible value is reached, thus leading to a biased reconstruction. (3) Hell (1988) argued that the relative trace strengths of the original estimate and of the solution might predict the amount of hindsight bias. The stronger the trace strength of the solution relative to that of the original estimate, the larger hindsight bias should be.

Pohl also offers an evolutionary explanation of hindsight bias:

Finally, some authors argued that hindsight bias is not necessarily a bothersome consequence of a “faulty” information-processing system, but that it may rather represent an unavoidable by-product of an evolutionarily evolved function, namely adaptive learning. According to this view, hindsight bias is seen as the consequence of our most valuable ability to update previously held knowledge. This may be a necessary process in order to prevent memory overload and thus to maintain normal cognitive functioning. Besides, updating allows us to keep our knowledge more coherent and to draw better inferences.

Ziva Kunda, in Social Cognition, offers the following explanation of why hindsight bias occurs:

Preceding events take on new meaning and importance as they are made to cohere with the known outcome. Now that we know our friends have filed for divorce, any ambiguous behavior we have seen is reinterpreted as indicative of tension, any disagreement gains significance, and any signs of affection seem irrelevant. It now seems obvious that their marriage was doomed from the start… Moreover, having adjusted our interpretations in light of current knowledge, it is difficult to imagine how things could have happened differently.

When making likelihood judgments, we often rely on the availability heuristic: The more difficult it is for us to imagine an outcome, the more unlikely it seems. Therefore, the difficulty we experience imagining how things might have turned out differently makes us all the more convinced that the outcomes that did occur were bound to have occurred.

Hindsight bias has large implications for criminal trials. In Jury Selection Hale Starr and Mark McCormick offer the following:

The effects of hindsight bias – which result in being held to a higher standard – are most critical for both criminal and civil defendants. The defense is more susceptible to the hindsight bias since their actions are generally the ones being evaluated for reasonableness in foresight-foreseeability. When jurors perceive that the results of particular actions were “reasonably” more likely after the outcome is known, defendants are judged as having been capable of knowing more than they knew at the time the action was taken, and therefore as capable of preventing the “bad” outcome.

In post-verdict surveys, jurors unknowingly demonstrate some of the effects of hindsight bias:

“I can't understand why the managers didn't try to get more information or use the information they had available. They should have known there would be safety problems at the plant”.

“The defendants should have known people would remove the safety shield around the tire. There should have been warnings so people wouldn't do that.”

“Even though he was a kid, he should have known that once he showed the others who had been drinking that he had a gun, things would get out of hand. He should have known guns invited violence.”

Jurors influenced by the hindsight bias look at the evidence presented and determine that the defendants knew or should have known their actions were unsafe, unwise, or created a dangerous situation. Hindsight bias often results in the judgment that the event was “an accident or tragedy waiting to happen.”

* * *

Protection Against Hindsight Bias

In Principles of Forecasting, J. Scott Armstrong offers the following advice on how to protect yourself:

The surest protection against (hindsight bias) is disciplining ourselves to make explicit predictions, showing what we did in fact know (sounds like a decision journal). That record can also provide us with some protection against those individuals who are wont to second-guess us, producing exaggerated claims of what we should have known (and perhaps should have told them). If these observers look to this record, it may show them that we are generally less proficient as forecasters than they would like, while protecting us against charges of having blown a particular assignment. Having an explicit record can also protect us against overconfidence in our own forecasting ability: If we feel that we “knew all along” what was going to happen, then it is natural enough to think that we will have similar success in the future. Unfortunately, an exaggerated perception of a surprise-free past may portend a surprise-full future.

Documenting the reasons we made a forecast makes it possible for us to know not only how well the forecast did, but also where it went astray. For example, subsequent experiences may show that we used wrong (or misunderstood) inputs. In that case, we can, in principle, rerun the forecasting process with better inputs and assess the accuracy of our (retrospectively) revised forecasts. Perhaps we did have the right theory and procedures, but were applying them to a mistaken picture of then-current conditions…Of course inputs are also subject to hindsight bias, hence we need to record them explicitly as well. The essence of making sense out of outcome knowledge is reinterpreting the processes and conditions that produced the reported event.
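Armstrong's advice amounts to keeping a decision journal. Here is a minimal sketch of what such a record might capture (the fields and example values are my own invention, not from the book):

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical decision-journal entry: record the forecast, the
# reasoning, and the inputs before the outcome is known, so that
# hindsight cannot quietly rewrite them.
@dataclass
class ForecastRecord:
    made_on: date
    question: str
    prediction: str
    confidence: float  # stated in advance, 0.0 to 1.0
    reasoning: str     # why you expect this outcome
    inputs: list[str] = field(default_factory=list)  # data relied on
    outcome: str | None = None  # filled in later, never edited

entry = ForecastRecord(
    made_on=date(2024, 3, 1),
    question="Will the plant expansion finish on budget?",
    prediction="No; expect a 15-25% overrun",
    confidence=0.7,
    reasoning="Outside view: comparable projects overran by roughly 20%",
    inputs=["capex history 2019-2023", "contractor bid spread"],
)
```

Reviewing such entries after outcomes arrive shows not only how well the forecasts did but where the inputs or reasoning went astray, exactly the reinterpretation Armstrong describes.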

Hindsight Bias is part of the Farnam Street latticework of mental models.