Just In Time Information Gathering

We’re becoming more like factories.

Just-in-time is a production strategy aimed at, among other things, reducing the need for excess inventory. Parts are supplied only when needed in the amount needed. While it makes a business more capital efficient, it also makes it more fragile.

We’ve adopted a similar strategy for information gathering. We’re so consumed by noise and busy work that the only time we really seek out signal is when we need it the most: right before we make a decision.

This creates a host of problems.

The worst time to look for information is when we need it to make a decision. When we do, we’re more likely to see what’s unique and miss the historical context. We’re also more likely to be biased by whatever happens to be available. And searching for information only at the moment of need is a sign that you have no idea what you’re doing.

“Pattern recognition,” says Alice Schroeder, “creates an impulse always to connect new knowledge to old and to primarily be interested in new knowledge that genuinely builds on the old.” It helps knowledge snowball.

If we can’t connect the current situation to something we already understand, we might reason that it sits outside our circle of competence and that we shouldn’t be drawing conclusions. If we can connect it to something we previously understood, then we’re less likely to draw conclusions on the basis of “this time is different.”

There is merit to thinking about information gathering as a logistics problem.

This Time is Different

When we look at situations we’re always looking for what’s unique. We should, however, give more thought to similarities.

“This time is different” could be the four most costly words ever spoken. It’s not the words that are costly so much as the conclusions they encourage us to draw.

We incorrectly think that differences are more valuable than similarities.

After all, anyone can see what’s the same but it takes true insight to see what’s different, right? We’re all so busy trying to find differences that we forget to pay attention to what is the same.

Imagine sitting in a meeting where people are about to make the same mistake they made last year on the same decision. Let’s say, for example, Jack has a history of messing up the annual tax returns. He’s a good guy. He’s nice to everyone. In fact, he buys everyone awesome Christmas presents. But the last three years—the period he’s been in charge of tax returns—have been nothing short of a disaster causing more work for you and your department.

The assignment for the tax return comes up and Jack is once again nominated.

Before you have a chance to voice your concerns, one of your co-workers speaks up: “I know Jack has dropped the ball on this assignment in the past but I think this time is different. He’s been working hard to make sure he’s better organized.”

That’s all it takes. Conversation over — everyone is focused on what’s unique about this time and it’s unlikely, despite ample evidence, that you’ll be able to convince them otherwise.

In part, people want to believe in Jack because he’s a nice guy. In part, we’re all focused on why this time is different and we’ll ignore evidence to the contrary.

Focusing on what’s different makes it easy to forget historical context. We lose touch with the base rate. We only see the evidence that supports our view (confirmation bias). We become optimistic and overconfident.

History provides context. And what history shows us is that no matter how unique things are today there are a lot of similarities with the past.

Consider investors and the dotcom bubble. Collectively, people saw it as unprecedented: a massive transformation without parallel.

That reasoning, combined with blindness to what this situation shared with previous ones, encouraged us to draw conclusions that proved costly. We reasoned that everything would change: everyone who owned internet companies would prosper, and the old non-internet companies would quickly go bankrupt.

All of a sudden profits didn’t matter. Nor did revenue. They would come in time, we hoped. Market share mattered no matter how costly it was to acquire.

More than that, if you didn’t buy now you’d miss out. These companies would take over the world and you’d be left out.

We got so caught up in what was different that we forgot to see what was the same.

And there were historical parallels: automobiles, radio, television, and airplanes, to name a few. In their time these innovations completely transformed the world as well. You can consider them the dotcoms of yesteryear.

And how did these massively transformational industries end up for investors?

At one time there were allegedly over 70 different auto manufacturing operations in the United States. Only three of them survived (and a few of those even required government funds).

If you catch yourself reasoning based on “this time is different” remember that you are probably speculating. While you may be right, odds are, this time is not different. You just haven’t looked for the similarities.


Avoiding Ignorance

This is a continuation of “Two Types of Ignorance.”

You can’t deal with ignorance if you can’t recognize its presence. If you’re suffering from primary ignorance it means you probably failed to consider the possibility of being ignorant or you found ways not to see that you were ignorant.

You’re ignorant and unaware, which is worse than being ignorant and aware.

The best way to avoid this, suggest Joy and Zeckhauser, is to raise self-awareness.

Ask yourself regularly: “Might I be in a state of consequential ignorance here?”

They continue:

If the answer is yes, the next step should be to estimate base rates. That should also be the next step if the starting point is recognized ignorance.

Of all situations such as this, how often has a particular outcome happened? Of course, this assessment is often totally subjective and its underpinnings are elusive. It is hard to know what the sample of relevant past experiences has been, how to draw inferences from the experience of others, etc. Nevertheless, it is far better to proceed to an answer, however tenuous, than to simply miss (primary ignorance) or slight (recognized ignorance) the issue. Unfortunately, the assessment of base rates is challenging and substantial biases are likely to enter.

When we don’t recognize ignorance the base rate is extremely underestimated. When we do recognize ignorance, we face “duelling biases; some will lead to underestimates of base rates and others to overestimates.”

Three biases come into play while estimating base rates: overconfidence, salience, and selection biases.

So we are overconfident in our estimates. We estimate things that are salient – that is, “states with which (we) have some experience or that are otherwise easily brought to mind.” And “there is a strong selection bias to recall or retell events that were surprising or of great consequence.”

Our key lesson is that as individuals proceed through life, they should always be on the lookout for ignorance. When they do recognize it, they should try to assess how likely they are to be surprised—in other words, attempt to compute the base rate. In discussing this assessment, we might also employ the term “catchall” from statistics, to cover the outcomes not specifically addressed.

It’s incredibly interesting to view literature through the lens of human decision making.

Crime and Punishment is particularly interesting as a study of primary ignorance. Raskolnikov deploys his impressive intelligence to plan the murder, believing, in his ignorance, that he has left nothing to chance. In a series of descriptions not for the squeamish or the faint-hearted, the murderer’s thoughts are laid bare as he plans the deed. We read about his skills in strategic inference and his powers of prediction about where and how he will corner his victim; his tactics at developing complementary skills (what is the precise manner in which he will carry the axe? what strategies will help him avoid detection?) are revealed.

But since Raskolnikov is making decisions under primary ignorance, his determined rationality is tightly “bounded.” He “construct[s] a simplified model of the real situation in order to deal with it; … behaves rationally with respect to this model, [but] such behavior is not even approximately optimal with respect to the real world” (Simon 1957). The second-guessing, fear, and delirium at the heart of Raskolnikov’s thinking as he struggles to gain a foothold in his inner world show the impact of a cascade of Consequential Amazing Developments (CADs), none predicted, none even contemplated. Raskolnikov anticipated an outcome in which he would dispatch the pawnbroker and slip quietly out of her apartment. He could not possibly have predicted that her sister would show up, a characteristic CAD that challenges what Taleb (2012) calls our “illusion of predictability.”

Joy and Zeckhauser argue we can draw two conclusions.

First, we tend to downplay the role of unanticipated events, preferring instead to expect simple causal relationships and linear developments. Second, when we do encounter a CAD, we often counter with knee-jerk, impulsive decisions, the equivalent of Raskolnikov committing a second impetuous murder.


References: Ignorance: Lessons from the Laboratory of Literature (Joy and Zeckhauser).

A Decision-Making Magic Trick

Two important nuggets from an interview with Chip Heath, co-author of Decisive (more here), on improving our ability to make better decisions:

A decision-making magic trick

The closest thing to a decision-making magic trick that I’ve found is the question, “What would you advise your best friend to do if they were in your situation?” So often when I ask that question, people blurt out an answer and their eyes get wide. They’re shocked at how easy it is when you just imagine you’re advising someone else.

This time isn’t so different

[B]usinesses make decisions about mergers and acquisitions that are worth hundreds of millions of dollars, and to the senior leader it seems like, “Well, this is a different situation than the last acquisition we made.” And yet in that room making the decision is a set of people who have probably seen a dozen acquisitions, but they don’t take the time to do even the equivalent of the three-out-of-five-stars rating that we would get from Amazon.com.

The kind of decisions that senior people make always present themselves as though they are completely different than anything else. Big decisions are subtle in a way because they all seem to come one at a time. The advantage of smaller decisions is we realize we are in a repeated situation where we’re going to see the same thing a lot.

Yet lots of big decisions end up having that property as well. If we take the time to move our mental spotlight around, we can always find other decisions that are similar to this one. We think the chief financial officer we’re trying to hire is in a unique position in the company, and that this is a unique position in time with unique demands. But the fact is, we’ve made other senior hires and we know how that process goes. Stepping back and taking in the broader context is just as useful for senior leaders as it is for the frontline worker who’s making a decision on the 35th mortgage application or the 75th customer complaint.

The 12 cognitive biases that prevent you from being rational

io9 produced a great overview of some cognitive biases.

First, the difference between cognitive biases and logical fallacies:

A logical fallacy is an error in logical argumentation (e.g. ad hominem attacks, slippery slopes, circular arguments, appeal to force, etc.). A cognitive bias, on the other hand, is a genuine deficiency or limitation in our thinking — a flaw in judgment that arises from errors of memory, social attribution, and miscalculations (such as statistical errors or a false sense of probability).

Confirmation Bias

We love to agree with people who agree with us. It’s why we only visit websites that express our political opinions, and why we mostly hang around people who hold similar views and tastes.

Ingroup Bias

Ultimately, the ingroup bias causes us to overestimate the abilities and value of our immediate group at the expense of people we don’t really know.

Gambler’s Fallacy

We tend to put a tremendous amount of weight on previous events, believing that they’ll somehow influence future outcomes.

Post-Purchase Rationalization (aka commitment and consistency bias)

[A] kind of built-in mechanism that makes us feel better after we make crappy decisions, especially at the cash register. Also known as Buyer’s Stockholm Syndrome, it’s a way of subconsciously justifying our purchases — especially expensive ones.

Neglecting Probability (aka Insensitivity To Base Rates)

It’s the same phenomenon that makes us worry about getting killed in an act of terrorism as opposed to something far more probable, like falling down the stairs or accidental poisoning.

Observational Selection Bias (Availability Bias?)

This is that effect of suddenly noticing things we didn’t notice that much before — but we wrongly assume that the frequency has increased.

Status-Quo Bias

We humans tend to be apprehensive of change, which often leads us to make choices that guarantee that things remain the same, or change as little as possible.

Negativity Bias

People tend to pay more attention to bad news — and it’s not just because we’re morbid. Social scientists theorize that it’s on account of our selective attention and that, given the choice, we perceive negative news as being more important or profound.

Bandwagon Effect (aka social proof)

Though we’re often unconscious of it, we love to go with the flow of the crowd.

Projection Bias

We tend to assume that most people think just like us — though there may be no justification for it. This cognitive shortcoming often leads to a related effect known as the false consensus bias where we tend to believe that people not only think like us, but that they also agree with us.

The Current Moment Bias

We humans have a really hard time imagining ourselves in the future and altering our behaviors and expectations accordingly.

Anchoring Effect

Also known as the relativity trap, this is the tendency we have to compare and contrast only a limited set of items. It’s called the anchoring effect because we tend to fixate on a value or number that in turn gets compared to everything else.

Still curious? Check out the Farnam Street Latticework of Mental Models.

Three Things to Consider in Order To Make an Effective Prediction

Michael Mauboussin commenting on Daniel Kahneman:

When asked which was his favorite paper of all-time, Daniel Kahneman pointed to “On the Psychology of Prediction,” which he co-authored with Amos Tversky in 1973. Tversky and Kahneman basically said that there are three things to consider in order to make an effective prediction: the base rate, the individual case, and how to weight the two. In luck-skill language, if luck is dominant you should place most weight on the base rate, and if skill is dominant then you should place most weight on the individual case. And the activities in between get weightings that are a blend.

In fact, there is a concept called the “shrinkage factor” that tells you how much you should revert past outcomes to the mean in order to make a good prediction. A shrinkage factor of 1 means that the next outcome will be the same as the last outcome and indicates all skill, and a factor of 0 means the best guess for the next outcome is the average. Almost everything interesting in life is in between these extremes.

To make this more concrete, consider batting average and on-base percentage, two statistics from baseball. Luck plays a larger role in determining batting average than it does in determining on-base percentage. So if you want to predict a player’s performance (holding skill constant for a moment), you need a shrinkage factor closer to 0 for batting average than for on-base percentage.

I’d like to add one more point that is not analytical but rather psychological. There is a part of the left hemisphere of your brain that is dedicated to sorting out causality. It takes in information and creates a cohesive narrative. It is so good at this function that neuroscientists call it the “interpreter.”

Now no one has a problem with the suggestion that future outcomes combine skill and luck. But once something has occurred, our minds quickly and naturally create a narrative to explain the outcome. Since the interpreter is about finding causality, it doesn’t do a good job of recognizing luck. Once something has occurred, our minds start to believe it was inevitable. This leads to what psychologists call “creeping determinism” – the sense that we knew all along what was going to happen. So while the single most important concept is knowing where you are on the luck-skill continuum, a related point is that your mind will not do a good job of recognizing luck for what it is.

Mauboussin is the author of The Success Equation: Untangling Skill and Luck in Business, Sports, and Investing.
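As an aside, the shrinkage idea is easy to express in code. Below is a minimal sketch in Python; the formula is the standard regression-to-the-mean blend, and the player numbers are made up for illustration, not figures from Mauboussin.

```python
# Shrink an observed outcome toward the population mean.
# shrinkage = 1 -> all skill (predict a repeat of the last outcome)
# shrinkage = 0 -> all luck (predict the population average)
def predict_next(observed: float, population_mean: float, shrinkage: float) -> float:
    return population_mean + shrinkage * (observed - population_mean)

# Hypothetical numbers: batting average is luckier than on-base percentage,
# so it gets the smaller shrinkage factor and reverts further toward the mean.
print(f"{predict_next(0.320, 0.260, shrinkage=0.3):.3f}")  # batting average -> 0.278
print(f"{predict_next(0.380, 0.330, shrinkage=0.7):.3f}")  # on-base pct    -> 0.365
```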

Mental Model: Bias From Insensitivity To Base Rates

Our insensitivity to base rates emanates from the representativeness heuristic and is a common psychological bias.

From Smart Choices: A Practical Guide to Making Better Decisions:

Donald Jones is either a librarian or a salesman. His personality can best be described as retiring. What are the odds that he is a librarian?

When we use this little problem in seminars, the typical response goes something like this: “Oh, it’s pretty clear that he’s a librarian. It’s much more likely that a librarian will be retiring; salesmen usually have outgoing personalities. The odds that he’s a librarian must be at least 90 percent.” Sounds good, but it’s totally wrong.

The trouble with this logic is that it neglects to consider that there are far more salesmen than male librarians. In fact, in the United States, salesmen outnumber male librarians 100 to 1. Before you even considered the fact that Donald Jones is “retiring,” therefore, you should have assigned only a 1 percent chance that Jones is a librarian. That is the base rate.

Now, consider the characteristic “retiring.” Suppose half of all male librarians are retiring, whereas only 5 percent of salesmen are. That works out to 10 retiring salesmen for every retiring librarian — making the odds that Jones is a librarian closer to 10 percent than to 90 percent. Ignoring the base rate can lead you wildly astray.
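The arithmetic in that excerpt is just Bayes’ rule. Here is a minimal sketch in Python using the numbers given above:

```python
# Bayes' rule with the numbers from the excerpt.
p_librarian = 1 / 101                # male librarians are outnumbered 100 to 1
p_salesman = 100 / 101
p_retiring_given_librarian = 0.50    # half of male librarians are retiring
p_retiring_given_salesman = 0.05     # only 5 percent of salesmen are

posterior = (p_retiring_given_librarian * p_librarian) / (
    p_retiring_given_librarian * p_librarian
    + p_retiring_given_salesman * p_salesman
)
print(f"P(librarian | retiring) = {posterior:.1%}")  # ~9.1%, not 90%
```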

* * *

Charlie Munger instructs us how to think about base rates with the example of an employee caught stealing, who claims she has never done it before and will never do it again:

You find an isolated example of a little old lady in the See’s Candy Company, one of our subsidiaries, getting into the till. And what does she say? “I never did it before, I’ll never do it again. This is going to ruin my life. Please help me.” And you know her children and her friends, and she’d been around 30 years and standing behind the candy counter with swollen ankles. When you’re an old lady it isn’t that glorious a life. And you’re rich and powerful and there she is: “I never did it before, I’ll never do it again.” Well how likely is it that she never did it before? If you’re going to catch 10 embezzlements a year, what are the chances that any one of them — applying what Tversky and Kahneman called base rate information — will be somebody who only did it this once? And the people who have done it before and are going to do it again, what are they all going to say? Well in the history of the See’s Candy Company they always say, “I never did it before, and I’m never going to do it again.” And we cashier them. It would be evil not to, because terrible behavior spreads (Gresham’s law).

* * *

Max Bazerman, in Judgment in Managerial Decision Making, writes:

(Our tendency to ignore base rates) is even stronger when the specific information is vivid and compelling, as Kahneman and Tversky illustrated in one study from 1972. Participants were given a brief description of a person who enjoyed puzzles and was both mathematically inclined and introverted. Some participants were told that this description was selected from a set of seventy engineers and thirty lawyers. Others were told that the description came from a list of thirty engineers and seventy lawyers. Next, participants were asked to estimate the probability that the person described was an engineer. Even though people admitted that the brief description did not offer a foolproof means of distinguishing lawyers from engineers, most tended to believe the description was of an engineer. Their assessments were relatively impervious to differences in base rates of engineers (70 percent versus 30 percent of the sample group).

Participants do use base-rate data correctly when no other information is provided. In the absence of a personal description, people use the base rates sensibly and believe that a person picked at random from a group made up mostly of lawyers is most likely to be a lawyer. Thus, people understand the relevance of base-rate information, but tend to disregard such data when individuating data are also available.

Ignoring base rates has many unfortunate implications. … Similarly, unnecessary emotional distress is caused in the divorce process because of the failure of couples to create prenuptial agreements that facilitate the peaceful resolution of a marriage. The suggestion of a prenuptial agreement is often viewed as a sign of bad faith. However, in far too many cases, the failure to create prenuptial agreements occurs when individuals approach marriage with the false belief that the high base rate for divorce does not apply to them.
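To see how much the base rate should have moved the answer in the engineers-and-lawyers study, here is a sketch in Python. The likelihood ratio is an assumption for illustration (the study doesn’t report one): suppose the description is four times as likely to fit an engineer as a lawyer.

```python
# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
# The 4x likelihood ratio is a made-up assumption, not from the study.
def p_engineer(base_rate: float, likelihood_ratio: float = 4.0) -> float:
    prior_odds = base_rate / (1 - base_rate)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

print(f"{p_engineer(0.70):.0%}")  # 70 engineers, 30 lawyers -> ~90%
print(f"{p_engineer(0.30):.0%}")  # 30 engineers, 70 lawyers -> ~63%
```

Under these assumptions a rational answer would differ by almost thirty points between the two conditions; participants gave roughly the same answer in both.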

* * *

Of course, this applies to investing as well. This conversation with Sanjay Bakshi speaks to this:

One of the great lessons from studying history is to do with “base rates”. “Base rate” is a technical term for describing odds in terms of prior probabilities. The base rate of having a drunken-driving accident is higher than that of having an accident while sober.

So, what’s the base rate of investing in IPOs? When you buy a stock in an IPO and flip it, you make money if it’s a hot IPO. If it’s not a hot IPO, you lose money. But what’s the base rate – the averaged-out experience, the prior probability of subscribing for IPOs – in the long run?

If you do that calculation, you’ll find that the base rate of IPO investing (in fact, it’s not even investing … it’s speculating) sucks! [T]hat’s the case, not just in India, but in every market, in different time periods.

When you evaluate whether smoking is good for you or not, look at the average experience of 1,000 smokers compared with 1,000 non-smokers, and you’ll see what happens.

People don’t do that. They get influenced by individual stories like a smoker who lived till he was 95. Such a smoker will force many people to ignore base rates, and to focus on his story, to fool themselves into believing that smoking can’t be all that bad for them.

What is the base rate of investing in leveraged companies in bull markets?

This is what you learn by studying history. You know that the base rate of investing in an airline business sucks. There’s this famous joke about how to become a millionaire. You start with a billion, and then you buy an airline. That applies very well in this business. It applies in so many other businesses.

Take the paper industry as an example. Averaged-out returns on capital for the paper industry are bad for pretty good reasons. You are selling a commodity. It’s an extremely capital-intensive business. There’s a lot of over-capacity. And if you understand microeconomics, you really are a price taker. There’s no pricing power for you. Extreme competition in such an environment is going to cause your returns on capital to be below what you would want to have.

It’s not hard to figure this out (although I took a while to figure it out myself). Look at the track record of paper companies around the world, or the airline companies, or the IPOs, or the textile companies. Sure, there’ll be exceptions. But we need to focus on the average experience and not the exceptional ones. The metaphor I like to use here is that of a pond. You are the fisherman. If you want to catch a lot of fish, then you must go to a pond where there’s a lot of fish. You don’t want to fish in a pond where there are very few fish. You may be a great fisherman, but unless you go to a pond with a lot of fish, you are not going to find a lot of fish.

So one of the great lessons from studying history is to see what has really worked well and what has turned out to be a disaster – and to learn from both.

‘Insensitivity To Base Rates’ is part of the Farnam Street Latticework of Mental Models.

You are not special. You are not exceptional.

Wellesley High English teacher David McCullough Jr. tells graduates “You are not special. You are not exceptional.”

“You are not special. You are not exceptional. … You’ve been pampered, cosseted, doted upon, helmeted, bubble-wrapped … feted and fawned over and called sweetie pie. … Your galaxy is not the centre of the universe. In fact, astrophysicists assure us the universe has no centre. Therefore, you cannot be it.”

Still curious? If you’re interested in commencement speeches, these three are worth checking out: 1) David Foster Wallace — The Truth With A Whole Lot Of Rhetorical Bullshit Pared Away; 2) Neil Gaiman – One Of The Best Commencement Speeches Ever; and 3) Michael Lewis — Don’t Eat Fortune’s Cookie.