Farnam Street helps you make better decisions, innovate, and avoid stupidity.
With over 400,000 monthly readers and more than 93,000 subscribers to our popular weekly digest, we've become an online intellectual hub.
Human beings are, in large part, driven by the admiration of their peers.
We seek to satisfy a deep biological need by acting in such a way that we feel praise and adulation: for our wealth, our success, our skills, our looks. It could be anything. The trait we are admired for matters less than the admiration itself. The admiration is the token we dance for. We feel envy when others are getting more tokens than we are, and we pity ourselves when we’re not getting any.
There’s nothing inherently wrong with this. The pursuit of (deserved) admiration causes us to drive and accomplish. It’s a part of the explanation for why the human world has moved along so far from where it started — we’re willing to do extraordinary things that are extraordinarily difficult, like starting a company from scratch, inventing a new and better product, solving some ridiculously complicated theorem, or conquering unknown territory.
This is all well and good.
The problems come when we start compromising our own standards, those we have set for ourselves, in order to earn admiration. False, undeserved admiration.
A shovel is just a shovel. You shovel things with it. You can break up weeds and dirt. (You can also whack someone with it.) I’m not sure I’ve seen a shovel used for much else.
Modern technological tools aren’t really like that.
What is an iPhone, functionally? Sure, it’s got the phone thing down, but it’s also a GPS, a note-taker, an emailer, a text messager, a newspaper, a video-game device, a taxi-calling service, a flashlight, a web browser, a library, a book…you get the point. It does a lot.
This all seems pretty wonderful. To perform those functions 20 years ago, you needed a map and a sense of direction, a notepad, a personal computer, a cell phone, an actual newspaper, a PlayStation, a phone and the willingness to talk to a person, an actual flashlight, an actual library, an actual book…you get the point. Basically, as Marc Andreessen puts it, software is eating the world. One simple-looking device and a host of software can perform the functions served by a bunch of big clunky tools of the past.
So far, we’ve been convinced that usage of the New Tools is mostly “upside,” that our embrace of them should be wholehearted. Much of this is for good reason. Do you remember how awful using a map was? Yuck.
The problem is that our New Tools are winning the battle of attention. We’ve gotten to the point where the tools use us as much as we use them. This new reality means we need to re-examine our relationship with our New Tools.
The fact is, if you don’t find it reasonable that prices should reflect relative scarcity,
then fundamentally you don’t accept the market economy,
because this is about as close to the essence of the market as you can find.
— Joseph Heath
Inevitably, when the price of a good or service rises rapidly, there follows an accusation of price-gouging. The term carries a strong moral admonition against the price-gouger, in favor of the price-gougee. Gas shortages are a classic example. With a local shortage of gasoline, gas stations will tend to mark up the price to reflect the supply issue. This is usually rewarded with cries of unfairness. But does that really make sense?
In his excellent book Economics Without Illusions, Joseph Heath argues that it doesn’t.
In fact, this very scenario is market pricing reacting just as it should. With gasoline in short supply, the market price rises so that those who need gasoline have it available, and those who simply want it do not. The price system ensures that scarce supply flows toward its most valued uses. If you’re willing to pay up, you pay up. If you’re not, you make alternative arrangements – drive less, carpool, delay the trip. This is exactly what market pricing is for – to give us a reference as we make our choices. But it’s still hard for many well-intentioned people to accept. Let’s think it through a little, with Heath’s help.
The Narrative Fallacy
A typical biography starts by describing the subject’s young life, trying to show how the ultimate painting began as just a sketch. In Walter Isaacson’s biography of Steve Jobs, for example, Isaacson argues that Jobs’s success was shaped to a great degree by the childhood influence of his father, a careful, detail-oriented engineer and craftsman – Paul Jobs would carefully craft the backs of fences and cabinets even if no one would see them – who was not Steve’s biological father. The combination of his adoption and his craftsman father planted the seeds of Steve’s adult personality: his penchant for design detail, his need to prove himself, his messianic zeal. The recent movie starring Michael Fassbender especially plays up the latter cause: Jobs’s feeling of abandonment drove his success. Fassbender’s emotional portrayal earned him an Oscar nomination.
Nassim Taleb describes a memorable experience of a similar type in his book The Black Swan. He’s in Rome, having an animated discussion with a professor who has read Nassim’s first book, Fooled by Randomness, parts of which promote the idea that our minds create more cause-and-effect links than reality supports. The professor proceeds to congratulate Nassim on his great luck in being born in Lebanon:
…had you grown up in a Protestant society where people are told that efforts are linked to rewards and individual responsibility is emphasized, you would never have seen the world in such a manner. You were able to see luck and separate cause and effect because of your Eastern Orthodox Mediterranean heritage.
These types of stories strike a deep chord: They give us affecting reasons on which to hang our understanding of reality. They help us make sense of our own lives. And, most importantly, they frequently lead us to believe we can predict the future. The problem is, most of them are a sham.
At the Daily Journal Meeting (held March 25, 2015), Munger answered a question on Obamacare:
Of course the system of medical care, as evolved under the United States, has much wrong with it.
On the other hand, it has much that’s good about it. All the new drugs and devices, and new operations, medicine has taken more territory in my lifetime than it took in the whole previous history of mankind. It’s just amazing what’s been done.
A lot of it is obvious and simple, like inoculating the children against infantile paralysis, scraping the tartar off your teeth so you don’t wear plates when you’re 55 years old, and so on. People now take those benefits for granted, but I lived in a world where a lot of children died. Every city had a tuberculosis sanitarium, and half the people who got tuberculosis died. It’s amazing how well medicine has worked.
On the other hand, compared to the best it can possibly be, the American system is pretty peculiar. It’s very hard to fix. One kind of insanity is to say, “We’ll pay you so much a month for taking care of the people, and everything you save is yours.”
That is the system the government uses in dealing with convalescent homes. That’s a great name, a convalescent home. You convalesce in heaven. You don’t convalesce at home. [laughs] It’s attempting to have a euphemistic name.
That creates huge incentives to delay care and keep the money. The government has strict rules, compliance systems, and so forth. If we didn’t have that system, the cost of taking care of the old people in convalescent homes would be 10 times what it is. It was the only feasible solution.
The rest of the world is going in that direction, because the costs just keep rising and rising and rising.
If the government is going to pay A anything he wants for selling services to B, who doesn’t have to pay anything, of course the system is going to create a lot of unnecessary tests, unnecessary costs, unnecessary procedures, unnecessary interventions.
Psychiatrists that keep talking to a patient forever and ever with no improvement, of course that system is going to cause problems. The alternative system also causes problems.
Add the fact that you’ve got politicians, and add the fact that you’ve got existing players who are enormously rich and powerful, who lobby you like crazy. In a state legislature now, with 19 percent or whatever it is of GDP going to the medical system, imagine what the lobbying is like.
We get these Rube Goldberg systems. We get a lot of abuse of various kinds. There’s hardly an ethical drug company that hasn’t created multiple gross abuses, which are, in substance, close to the bribery of doctors, which, of course, is illegal.
You have all these ethical companies. Ethical meaning it’s the designation of a drug company that has patented drugs. They’ve all committed big follies. The device makers of anything have been worse. There’s been a lot of abuse and craziness, and the costs, of course, just keep rising and rising.
That’s in a system that everybody says has been the greatest achiever in the history of the world. It’s very complicated. I think it will get addressed more because…We probably will end up with systems that are more like what we do with the convalescent homes.
If you look at medicine, what’s happening is that more and more they’re going to a system where they pay somebody X dollars and everything they save, they keep. That system has some chance of controlling the cost. If you go into a great medical school hospital today, and you’re within a day of dying of some obvious thing like advanced cancer, the admitting physician is very likely to ask for a test of your cholesterol or any other damn thing. All the bills go to the government. As long as the incentives allow that, people will do it and they’ll rationalize their behavior. Something has to be done along that and more than is now being done.
I think the drift will be more in the direction of the block care. I don’t see any other system that would have controlled cost in the convalescent homes.
By the way, your doctor can’t just walk by every bed in the convalescent home and send the bill to the government. That’s not allowed by the law. But if you transfer the patient into a hospital, he can walk by the bed five times every day and send a $45 bill to the government.
If the incentives are wrong, the behavior will be wrong. I guarantee it. Not by everybody, but by enough of a percentage that you won’t like the system.
I think that’s enough on a subject that’s so difficult. I think we can see where it’s going. We may end up with a whole system that’s…In the Netherlands, they have a system where the same people are giving a free system to everybody and a concierge system to the others. It’s working pretty well.
“(History) offers a ridiculous spectacle of a fragment expounding the whole.”
— Will Durant in Our Oriental Heritage
“That’s another thing we’ve learned from your Nation,” said Mein Herr, “map-making. But we’ve carried it much further than you. What do you consider the largest map that would be really useful?”
“About six inches to the mile.”
“Only six inches!” exclaimed Mein Herr. “We very soon got to six yards to the mile. Then we tried a hundred yards to the mile. And then came the grandest idea of all! We actually made a map of the country, on the scale of a mile to the mile!”
“Have you used it much?” I enquired.
“It has never been spread out, yet,” said Mein Herr: “the farmers objected: they said it would cover the whole country, and shut out the sunlight! So we now use the country itself, as its own map, and I assure you it does nearly as well.”
— Sylvie and Bruno Concluded
In 1931, in New Orleans, Louisiana, mathematician Alfred Korzybski presented a paper on mathematical semantics. To the non-technical reader, most of the paper reads like an abstruse argument on the relationship of mathematics to human language, and of both to physical reality. Important stuff certainly, but not necessarily immediately useful for the layperson.
However, in his string of arguments on the structure of language, Korzybski introduced and popularized the idea that the map is not the territory. In other words, the description of the thing is not the thing itself. The model is not reality. The abstraction is not the abstracted. This has enormous practical consequences. In Korzybski’s words:
A.) A map may have a structure similar or dissimilar to the structure of the territory.
B.) Two similar structures have similar ‘logical’ characteristics. Thus, if in a correct map, Dresden is given as between Paris and Warsaw, a similar relation is found in the actual territory.
C.) A map is not the actual territory.
D.) An ideal map would contain the map of the map, the map of the map of the map, etc., endlessly…We may call this characteristic self-reflexiveness.
Maps are necessary, but flawed. (By maps, we mean any abstraction of reality, including descriptions, theories, models, etc.) The problem with a map is not simply that it is an abstraction; we need abstraction. Lewis Carroll made that clear by having Mein Herr describe a map with the scale of one mile to one mile. Such a map would not have the problems that maps have, nor would it be helpful in any way.
(See Borges for another take.)
The mind creates maps of reality in order to understand it, because the only way we can process the complexity of reality is through abstraction. But frequently, we don’t understand our maps or their limits. In fact, we are so reliant on abstraction that we will frequently use an incorrect model simply because we feel any model is preferable to no model. (Reminiscent of the drunk looking for his keys under the streetlight because “that’s where the light is!”)
Even the best and most useful maps suffer from limitations, and Korzybski gives us a few to explore: (A.) The map could be incorrect without us realizing it; (B.) The map is, by necessity, a reduction of the actual thing, a process in which you lose certain important information; and (C.) A map needs interpretation, a process that can cause major errors. (The only way to truly solve the last would be an endless chain of maps-of-maps, which he called self-reflexiveness.)
With the aid of modern psychology, we also see another issue: the human brain takes great leaps and shortcuts in order to make sense of its surroundings. As Charlie Munger has pointed out, a good idea and the human mind act something like the sperm and the egg — after the first good idea gets in, the door closes. This makes the map-territory problem a close cousin of man-with-a-hammer tendency.
This tendency is, obviously, problematic in our effort to simplify reality. When we see a powerful model work well, we tend to over-apply it, using it in non-analogous situations. We have trouble delimiting its usefulness, which causes errors.
Let’s check out an example.
By most accounts, Ron Johnson was one of the most successful and desirable retail executives by the summer of 2011. Not only had he been handpicked by Steve Jobs to build the Apple Stores – a venture that had itself come under major scrutiny (one retort printed in Bloomberg: “I give them two years before they’re turning out the lights on a very painful and expensive mistake.”) – but he had also been credited with playing a major role in turning Target from a Kmart look-alike into the trendy-but-cheap “Tar-zhey” of the late ’90s and early 2000s.
Johnson’s success at Apple was not immediate, but it was undeniable. By 2011, Apple stores were by far the most productive in the world on a per-square-foot basis, and had become the envy of the retail world. Their sales figures left Tiffany’s in the dust. The gleaming glass cube on Fifth Avenue became a more popular tourist attraction than the Statue of Liberty. It was a lollapalooza, something beyond ordinary success. And Johnson had led the charge.
With that success behind him, in 2011 Johnson was hired by Bill Ackman, Steven Roth, and other luminaries of the financial world to turn around the dowdy old department store chain JCPenney. The situation was dire: Between 1992 and 2011, the retail market share held by department stores had declined from 57% to 31%.
Their core position looked like a no-brainer, though. JCPenney had immensely valuable real estate, anchoring malls across the country. Johnson argued that its physical mall position was valuable if for no other reason than that people often parked next to the store and walked through it to get to the center of the mall. Foot traffic was a given. Because of contracts signed in the ’50s, ’60s, and ’70s – the heyday of the mall-building era – rent was also cheap, another major competitive advantage. And unlike some struggling retailers, JCPenney was making (some) money. There was cash in the register to help fund a transformation.
The idea was to take the best ideas from his experience at Apple – great customer service, consistent pricing with no markdowns and markups, immaculate displays, world-class products – and apply them to the department store. Johnson planned to turn the stores into little malls-within-malls. He went as far as comparing the ever-rotating stores-within-a-store to Apple’s “apps.” Such a model would keep the store constantly fresh and avoid the creeping staleness of retail.
Johnson pitched his idea to shareholders in a series of trendy New York City meetings reminiscent of Steve Jobs’ annual “But wait, there’s more!” product launches at Apple. He was persuasive: JCPenney’s stock price went from $26 in the summer of 2011 to $42 in early 2012 on the strength of the pitch.
The idea failed almost immediately. His new pricing model (eliminating discounting) was a flop. The coupon-hunters rebelled. Much of his new product was deemed too trendy. His new store model was wildly expensive for a middling department store chain – including operating losses purposefully endured, he’d spent several billion dollars trying to effect the physical transformation of the stores. JCPenney customers had no idea what was going on, and by 2013, Johnson was sacked. The stock price sank into the single digits, where it remains two years later.
What went wrong in the quest to build America’s Favorite Store? It turned out that Johnson was using a map of Tulsa to navigate Tuscaloosa. Apple’s products, customers, and history had far too little in common with JCPenney’s. Apple had a rabid, young, affluent fan base before it built stores; JCPenney was not associated with youth or affluence. Apple had shiny products and needed a shiny store; JCPenney was known for its affordable sweaters. Apple had never relied on discounting in the first place; JCPenney was taking away discounts its customers had come to expect, triggering a massive deprival super-reaction.
In other words, the old map was not very useful. Even his success at Target, which seems like a closer analogue, was misleading in the context of JCPenney. Target had made small, incremental changes over many years, to which Johnson had made a meaningful contribution. JCPenney was attempting to reinvent the concept of the department store in a year or two, leaving behind the core customer in an attempt to gain new ones. This was a much different proposition. (Another thing holding the company back was simply its base odds: Can you name a retailer of great significance that has lost its position in the world and come back?)
The main issue was not that Johnson was incompetent. He wasn’t. He wouldn’t have gotten the job if he was. He was extremely competent. But it was exactly his competence and past success that got him into trouble. He was like a great swimmer who tried to tackle a raging rapid: the model he had used successfully in the past, the map that had navigated a lot of difficult terrain, was not the map he needed anymore. He had an excellent theory about retailing that applied in some circumstances but not in others. The terrain had changed, but the old idea stuck.
One person who well understands this problem of the map and the territory is Nassim Taleb, author of the Incerto series – Antifragile, The Black Swan, Fooled by Randomness, and The Bed of Procrustes.
Taleb has been vocal about the misuse of models for many years, but the earliest and most vivid criticism I can recall is his firm attack on a financial model called Value-at-Risk, or VAR. The model, used in the banking community, is supposed to help manage risk by providing a maximum potential loss within a given confidence interval. In other words, it purports to allow risk managers to say that, with 95%, 99%, or 99.9% confidence, the firm will not lose more than $X million in a given day. The higher the interval, the less accurate the analysis becomes. It might be possible to say that the firm has $100 million at risk at any time at a 99% confidence interval, but given the statistical properties of markets, a move to 99.9% confidence might mean the risk manager has to state that the firm has $1 billion at risk. 99.99% might mean $10 billion. As rarer and rarer events are included in the distribution, the analysis gets less useful. So, by necessity, the “tails” are cut off somewhere and the analysis is deemed acceptable.
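To make the mechanics concrete, here is a minimal sketch of one common variant, VAR by historical simulation, with made-up return data. The function name and the synthetic normal return series are illustrative assumptions, not any bank’s actual model:

```python
import random

def historical_var(returns, confidence=0.99):
    """Historical-simulation VAR: the loss that daily returns stayed
    below on a `confidence` fraction of past days (a positive number)."""
    losses = sorted(-r for r in returns)       # losses expressed as positive numbers
    k = int(confidence * len(losses))          # index of the cutoff loss
    return losses[min(k, len(losses) - 1)]

# Illustrative history: ~10 years of normally distributed daily returns.
random.seed(42)
history = [random.gauss(0.0005, 0.01) for _ in range(2500)]

for c in (0.95, 0.99, 0.999):
    print(f"{c:.1%} one-day VAR: {historical_var(history, c):.2%} of portfolio")
```

Note how the estimate at 99.9% confidence rests on just the two or three worst days in the sample – exactly the region where the data is thinnest and the “tails” have been cut off.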
Elaborate statistical models are built to justify and use the VAR theory. On its face, it seems like a useful and powerful idea; if you know how much you can lose at any time, you can manage risk to the decimal. You can tell your board of directors and shareholders, with a straight face, that you’ve got your eye on the till.
The problem, in Nassim’s words, is that:
A model might show you some risks, but not the risks of using it. Moreover, models are built on a finite set of parameters, while reality affords us infinite sources of risks.
In order to come up with the VAR figure, the risk manager must take historical data and assume a statistical distribution in order to predict the future. For example, if we could take 100 million human beings and analyze their heights and weights, we could then predict the distribution for a different 100 million, and there would be a microscopically small probability that we’d be wrong. That’s because we have a huge sample size and we are analyzing something with very small and predictable deviations from the average.
But finance does not follow this kind of distribution. There’s no such predictability. As Nassim has argued, the “tails” are fat in this domain, and the rarest, most unpredictable events have the largest consequences. Let’s say you deem a highly threatening event (for example, a 90% crash in the S&P 500) to have a 1 in 10,000 chance of occurring in a given year, and your historical data set only has 300 years of data. How can you accurately state the probability of that event? You would need far more data.
Thus, financial events deemed to be 5, 6, or 7 standard deviations from the norm tend to happen with a regularity that comes nowhere near matching their supposed statistical probability. Financial markets have no biological reality to tie them down: We can say with a useful amount of confidence that an elephant will not wake up as a monkey, but we can’t say anything with absolute confidence in an Extremistan arena.
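A quick, self-contained simulation shows the difference between the two worlds. Here a Student-t distribution with 3 degrees of freedom stands in for “fat-tailed” returns; that choice is an illustrative assumption, not a claim about any particular market:

```python
import random

random.seed(7)
N = 10_000

def student_t(df):
    """One draw from a Student-t: a standard normal divided by the square
    root of an independent chi-square over its degrees of freedom."""
    num = random.gauss(0, 1)
    den = (sum(random.gauss(0, 1) ** 2 for _ in range(df)) / df) ** 0.5
    return num / den

thin = [random.gauss(0, 1) for _ in range(N)]   # Mediocristan: thin-tailed draws
fat = [student_t(3) for _ in range(N)]          # Extremistan stand-in: fat tails

for name, xs in (("normal", thin), ("fat-tailed", fat)):
    sd = (sum(x * x for x in xs) / len(xs)) ** 0.5
    print(f"{name:>10}: worst draw = {min(xs) / sd:5.1f} sample standard deviations")
```

In the normal sample, the worst draw sits a few standard deviations out, just as the textbook predicts; in the fat-tailed sample, the worst draw will typically be far more extreme relative to its own measured volatility, which is precisely what naive VAR misses.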
We see several issues with VAR as a “map,” then. The first is that the model is itself a severe abstraction of reality, relying on historical data to predict the future. (As all financial models must, to a certain extent.) VAR does not say “The risk of losing X dollars is Y, within a confidence of Z” (although risk managers treat it that way). What VAR actually says is “The risk of losing X dollars is Y, based on the given parameters.” The problem is obvious even to the non-technician: The future is a strange and foreign place that we do not understand. The deviations of the past may not be the deviations of the future. Just because municipal bonds have never traded at such-and-such a spread to U.S. Treasury bonds does not mean that they won’t in the future. They just haven’t yet. Frequently, the models are blind to this fact.
In fact, one of Nassim’s most trenchant points is that on the day before whatever “worst case” event happened in the past, you would have not been using the coming “worst case” as your worst case, because it wouldn’t have happened yet.
Here’s an easy illustration. On October 19, 1987, the stock market dropped 22.61%, or 508 points on the Dow Jones Industrial Average. In percentage terms, it was then, and remains, the worst one-day market drop in U.S. history. It was dubbed “Black Monday.” (Financial writers sometimes lack creativity – there are several other “Black Mondays” in history.) Here we see Nassim’s point: On October 18, 1987, what would the models have used as the worst possible case? We don’t know exactly, but we do know the previous worst case was a 12.82% drop, which happened on October 28, 1929. A 22.61% drop would have been considered so many standard deviations from the average as to be near impossible.
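A back-of-the-envelope calculation shows how absurd the normal-distribution assumption becomes here. The ~1% daily standard deviation is a rough, assumed figure for illustration:

```python
import math

def normal_tail_prob(z):
    """P(Z <= -z) for a standard normal, via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2))

daily_sd = 0.01          # assumed typical daily move of ~1%
crash = 0.2261           # Black Monday's one-day drop

z = crash / daily_sd
print(f"z-score: {z:.1f} standard deviations")
print(f"probability under a normal model: {normal_tail_prob(z):.2e}")
```

Under those assumptions the drop is a ~23-standard-deviation event, with a probability so vanishingly small that it should never have happened in the life of the universe – yet it happened, which says more about the model than about the market.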
But the tails are very fat in finance – improbable and consequential events seem to happen far more often than they should based on naive statistics. There is also a severe but often unrecognized recursiveness problem, which is that the models themselves influence the outcome they are trying to predict. (To understand this more fully, check out our post on Complex Adaptive Systems.)
A second problem with VAR is that even if we had a vastly more robust dataset, a statistical “confidence interval” does not do the job of financial risk management. Says Taleb:
There is an internal contradiction between measuring risk (i.e. standard deviation) and using a tool [VAR] with a higher standard error than that of the measure itself.
I find that those professional risk managers whom I have heard recommend a “guarded” use of the VAR on grounds that it “generally works” or “works on average” do not share my definition of risk management. The risk management objective function is survival, not profits and losses. A trader, according to Chicago legend, “made 8 million in eight years and lost 80 million in eight minutes.” According to the same standards, he would be, “in general” and “on average,” a good risk manager.
This is like a GPS system that shows you where you are at all times, but doesn’t include cliffs. You’d be perfectly happy with your GPS until you drove off a mountain.
It was this type of naive trust in models that got a lot of people into trouble in the recent mortgage crisis. Backward-looking, trend-fitting models, the most common maps of the financial territory, failed by describing a territory that was only a mirage: a world where home prices only went up. (Lewis Carroll would have approved.)
This was navigating Tulsa with a map of Tatooine.
The logical response to all this is, “So what?” If our maps fail us, how do we operate in an uncertain world? This is its own discussion for another time, and Taleb has gone to great pains to try and address the concern. Smart minds disagree on the solution. But one obvious key must be building systems that are robust to model error.
The practical problem with a model like VAR is that the banks use it to optimize. In other words, they take on as much exposure as the model deems OK. And when banks veer into managing to a highly detailed, highly confident model rather than to informed common sense, which happens frequently, they tend to build up hidden risks that will un-hide themselves in time.
If one were instead to assume that there are no precisely accurate maps of the financial territory, one would have to fall back on much simpler heuristics. (If you assume detailed statistical models of the future will fail you, you don’t use them.)
In short, you would do what Warren Buffett has done with Berkshire Hathaway. Mr. Buffett, to our knowledge, has never used a computer model in his life, yet manages an institution half a trillion dollars in size by assets, a large portion of which are financial assets. How?
The approach requires not only assuming a future worst case far more severe than the past, but also dictates building an institution with a robust set of backup systems, and margins-of-safety operating at multiple levels. Extra cash, rather than extra leverage. Taking great pains to make sure the tails can’t kill you. Instead of optimizing to a model, accepting the limits of your clairvoyance.
The trade-off, of course, is short-run rewards much less great than those available under more optimized models. Speaking of this, Charlie Munger has noted:
Berkshire’s past record has been almost ridiculous. If Berkshire had used even half the leverage of, say, Rupert Murdoch, it would be five times its current size.
For Berkshire at least, the trade-off seems to have been worth it.
The salient point, then, is that in our march to simplify reality with useful models – of which Farnam Street is an advocate – we often confuse the models with reality. For many people, the model creates its own reality; it is as if the spreadsheet comes to life. We forget that reality is a lot messier. The map isn’t the territory. The theory isn’t what it describes; it’s simply a way we choose to interpret a certain set of information. Maps can also be wrong, but even when they are essentially correct, they are an abstraction, and abstraction means that information is lost to save space. (Recall the mile-to-mile scale map.)
How do we do better? That’s fodder for another post, but the first step is to realize that you do not understand a model, map, or reduction unless you understand and respect its limitations. We must always be vigilant, stepping back to understand the context in which a map is useful and where the cliffs might lie. Until we do that, we are the turkey.