Thought Experiment: How Einstein Solved Difficult Problems

“We live not only in a world of thoughts, but also in a world of things.
Words without experience are meaningless.”
— Vladimir Nabokov

***

The Basics

“All truly wise thoughts have been thought already thousands of times; but to make them truly ours, we must think them over again honestly, until they take root in our personal experience.”
— Johann Wolfgang von Goethe

***

Imagine a small town with a hard working barber. The barber shaves everyone in the town who does not shave themselves. He does not shave anyone who shaves themselves. So, who shaves the barber?

The ‘impossible barber’ is one classic example of a thought experiment – a means of exploring a concept, hypothesis or idea through extensive thought. When finding empirical evidence is impossible, we turn to thought experiments to unspool complex concepts.

In the case of the impossible barber, setting up an experiment to figure out who shaves him would not be feasible or even desirable. After all, the barber cannot exist. Thought experiments are usually rhetorical. No particular answer can or should be found.

The purpose is to encourage speculation, logical thinking and to change paradigms. Thought experiments push us outside our comfort zone by forcing us to confront questions we cannot answer with ease. They reveal that we do not know everything and some things cannot be known.

In a paper entitled Thought Experimentation in Presocratic Philosophy, Nicholas Rescher writes:

Homo sapiens is an amphibian who can live and function in two very different realms: the domain of actual facts, which we can investigate in observational inquiry, and the domain of imaginative projection, which we can explore in thought through reasoning… A thought experiment is an attempt to draw instruction from a process of hypothetical reasoning that proceeds by eliciting the consequences of a hypothesis which, for anything that one actually knows to the contrary, may be false. It consists in reasoning from a supposition that is not accepted as true, perhaps even known to be false, but is assumed provisionally in the interests of making a point or resolving a conclusion.

As we know from the narrative fallacy, complex information is best digested in the form of narratives and analogies. Many thought experiments make use of this fact to make them more accessible. Even those who are not knowledgeable about a particular field can build an understanding through thought experiments. The aim is to condense first principles into a form which can be understood through analysis and reflection. Some incorporate empirical evidence, looking at it from an alternative perspective.

The benefit of thought experiments (as opposed to aimless rumination) is their structure. In an organized manner, thought experiments allow us to challenge intellectual norms, move beyond the boundaries of ingrained facts, comprehend history, make logical decisions, foster innovative ideas, and widen our sphere of reference.

Despite being improbable or impractical, thought experiments should be possible, in theory.

The History of Thought Experiments

Thought experiments have a rich and complex history, stretching back to the ancient Greeks and Romans. As a mental model, they have enriched many of our greatest intellectual advances, from philosophy to quantum mechanics.

An early example of a thought experiment is Zeno’s narrative of Achilles and the tortoise, dating to around 430 BC. Zeno’s thought experiments aimed to deduce first principles through the elimination of untrue concepts.

In one instance, the Greek philosopher used a thought experiment to ‘prove’ motion is an illusion. Known as the Achilles paradox, it involves Achilles racing a tortoise. Out of generosity, Achilles gives the tortoise a 100m head start. Once Achilles begins running, he soon covers the head start. However, by that point, the tortoise has moved another 10m. By the time he covers that distance, the tortoise has moved further still. Zeno claimed Achilles could never overtake the tortoise: each time he reaches the tortoise’s previous position, a new, ever smaller gap has opened ahead of him, and there are infinitely many such gaps.
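Zeno’s infinitely many gaps can be tallied with a little arithmetic: they form a geometric series with a finite sum. A minimal sketch, assuming (consistent with the 100m and 10m figures) that Achilles runs ten times as fast as the tortoise:

```python
# Each time Achilles closes the current gap, the tortoise
# (moving at 1/10th his speed) opens a new gap 1/10th as large.
gap = 100.0          # the 100m head start
total = 0.0          # distance Achilles runs closing successive gaps
for _ in range(60):  # 60 terms is plenty for the sum to converge
    total += gap
    gap /= 10

print(total)  # converges to 100 / (1 - 1/10) = 1000/9, about 111.11m
```

Infinitely many steps, but a finite total distance: Achilles draws level about 111.1m from his starting line and overtakes the tortoise there, so the paradox dissolves.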

In the 17th century, Galileo further developed the concept by using thought experiments to affirm his theories. One example is his thought experiment involving two balls (one heavy, one light) which are dropped from the Leaning Tower of Pisa. Prior philosophers had theorized the heavy ball would land first. Galileo claimed this was untrue, as mass does not influence acceleration. We will look at Galileo’s thought experiments in more detail later on in this post.

In 1814, Pierre Laplace explored determinism through ‘Laplace’s demon.’ This is a theoretical ‘demon’ which has an acute awareness of the location and movement of every single particle in existence. Would Laplace’s demon know the future? If the answer is yes, the universe is deterministic: the present fixes everything that follows. If no, the universe is indeterministic, leaving room for chance or free will.

In 1897, the German term ‘Gedankenexperiment’ passed into English and a cohesive picture of how thought experiments are used worldwide began to form.

Albert Einstein used thought experiments for some of his most important discoveries. The most famous of these thought experiments, which was later made into a brilliant children’s book, concerned a beam of light. What would happen, he asked himself, if you could catch up to a beam of light as it moved? The answers led him down a different path of thinking about time, which in turn led to the special theory of relativity.

In On Thought Experiments, 19th-century philosopher and physicist Ernst Mach writes that curiosity is an inherent human quality. We see this in babies as they test the world around them and learn the principle of cause and effect. With time, our exploration of the world deepens, until we reach a point where we can no longer experiment with our hands alone. At that point, we move into the realm of thought experiments.

Thought experiments are a structured manifestation of our natural curiosity about the world.

Mach writes:

Our own ideas are more easily and readily at our disposal than physical facts. We experiment with thought, so as to say, at little expense. Thus it shouldn’t surprise us that, oftentimes, the thought experiment precedes the physical experiment and prepares the way for it… A thought experiment is also a necessary precondition for a physical experiment. Every inventor and every experimenter must have in his mind the detailed order before he actualizes it. Even if Stephenson knew the train, the rails and the steam engine from experience, he must, nonetheless, have preconceived in his thoughts the combination of a train on wheels, driven by a steam engine, before he could have proceeded to its realization. No less did Galileo have to envisage, in his imagination, the arrangements for the investigation of gravity, before these were actualized. Even the beginner learns in experimenting that an insufficient preliminary estimate, or nonobservance of sources of error, has for him no less tragicomic results than the proverbial failure to ‘look before you leap’ does in practical life.

Mach compares thought experiments to the plans and images we form in our minds before commencing an endeavor. We all do this — rehearsing a conversation before having it, planning a piece of work before starting it, figuring out every detail of a meal before cooking it. Mach views this as an integral part of our ability to engage in complex tasks and to innovate creatively.

According to Mach, the results of some thought experiments can be so certain that performing them physically is unnecessary. Whether or not the experiment is ever carried out, the desired purpose has been achieved.

We will look at some key examples of thought experiments throughout this post, which will show why Mach’s words are so important. He adds:

It can be seen that the basic method of the thought experiment is just like that of a physical experiment, namely, the method of variation. By varying the circumstances (continuously, if possible) the range of validity of an idea (expectation) related to these circumstances is increased.

Although some people view thought experiments as pseudo-science, Mach saw them as valid and important for experimentation.

Types of Thought Experiment

“Can't you give me brains?” asked the Scarecrow.

“You do not need them. You are learning something every day. A baby has brains, but it does not know much. Experience is the only thing that brings knowledge, and the longer you are on earth the more experience you are sure to get.”
― L. Frank Baum, The Wonderful Wizard of Oz

***

Several key types of thought experiment have been identified:

  • Prefactual – Involving potential future outcomes. E.g. ‘What will X cause to happen?’
  • Counterfactual – Contradicting known facts. E.g. ‘If Y happened instead of X, what would be the outcome?’
  • Semi-factual – Contemplating how a different past could have led to the same present. E.g. ‘If Y had happened instead of X, would the outcome be the same?’
  • Prediction – Theorising future outcomes based on existing data. Predictions may involve mental or computational models. E.g. ‘If X continues to happen, what will the outcome be in one year?’
  • Hindcasting – Running a prediction in reverse to see if it forecasts an event which has already happened. E.g. ‘X happened; could Y have predicted it?’
  • Retrodiction – Moving backwards from an event to discover the root cause. Retrodiction is often used for problem solving and prevention purposes. E.g. ‘What caused X? How can we prevent it from happening again?’
  • Backcasting – Considering a specific future outcome, then working forwards from the present to deduce its causes. E.g. ‘If X happens in one year, what would have caused it?’

Thought Experiments in Philosophy

“With our limited senses and consciousness, we only glimpse a small portion of reality. Furthermore, everything in the universe is in a state of constant flux. Simple words and thoughts cannot capture this flux or complexity. The only solution for an enlightened person is to let the mind absorb itself in what it experiences, without having to form a judgment on what it all means. The mind must be able to feel doubt and uncertainty for as long as possible. As it remains in this state and probes deeply into the mysteries of the universe, ideas will come that are more dimensional and real than if we had jumped to conclusions and formed judgments early on.”

― Robert Greene, Mastery

***

Thought experiments have been an integral part of philosophy since ancient times. This is in part because philosophical hypotheses are often subjective and impossible to prove through empirical evidence.

Philosophers use thought experiments to convey theories in an accessible manner. With the aim of illustrating a particular concept (such as free will or mortality), philosophers explore imagined scenarios. The goal is not to uncover a ‘correct’ answer, but to spark new ideas.

An early example of a philosophical thought experiment is Plato’s Allegory of the Cave, which centers around a dialogue between Socrates and Glaucon (Plato’s brother).

A group of people are born and live within a dark cave. Having spent their entire lives seeing nothing but shadows on the wall, they lack a conception of the world outside. Knowing nothing different, they do not even wish to leave the cave. At some point, they are led outside and see a world consisting of much more than shadows.

“The frog in the well knows nothing of the mighty ocean.”

— Japanese Proverb

Plato used this to illustrate the incomplete view of reality most of us have. Only by learning philosophy, Plato claimed, can we see more than shadows.

Upon leaving the cave, the people realize the outside world is far more interesting and fulfilling. If a solitary person left, they would want others to do the same. However, upon returning to the cave, their old life would seem unsatisfactory. This discomfort could become misplaced, leading them to resent the outside world. Plato used this to convey his deep, almost compulsive appreciation for the power of educating ourselves. To take up the mantle of your own education and begin seeking to understand the world is the first step on the way out of the cave.

Moving from caves to insects, let’s take a look at a fascinating thought experiment from 20th-century philosopher Ludwig Wittgenstein.

Imagine a world where each person has a beetle in a box. In this world, the only time anyone can see a beetle is when they look in their own box. As a consequence, the conception of a beetle each individual has is based on their own. It could be that everyone has something different, or that the boxes are empty, or even that the contents are amorphous.

Wittgenstein uses the ‘Beetle in a Box’ thought experiment to convey his work on the subjective nature of pain. We can each only know what pain is to us, and we cannot feel another person’s agony. If people in the hypothetical world were to have a discussion on the topic of beetles, each would only be able to share their individual perspective. The conversation would have little purpose because each person can only convey what they see as a beetle. In the same way, it is useless for us to describe our pain using analogies (‘it feels like a red hot poker is stabbing me in the back’) or scales (‘the pain is 7/10.’)

Thought Experiments in Science

Although empirical evidence is usually necessary for science, thought experiments may be used to develop a hypothesis or to prepare for experimentation. Some hypotheses cannot be tested (e.g. string theory) – at least, not given our current capabilities.

Theoretical scientists may turn to thought experiments to develop a provisional answer, often informed by Occam’s razor.

Nicholas Rescher writes:

In natural science, thought experiments are common. Think, for example, of Einstein’s pondering the question of what the world would look like if one were to travel along a ray of light. Think too of physicists’ assumption of a frictionlessly rolling body or the economists’ assumption of a perfectly efficient market in the interests of establishing the laws of descent or the principles of exchange, respectively…Ernst Mach [mentioned in the introduction] made the sound point that any sensibly designed real experiment should be preceded by a thought experiment that anticipates at any rate the possibility of its outcome.

In a paper entitled Thought Experiments in Scientific Reasoning, Andrew D. Irvine explains that thought experiments are a key part of science. They are in the same realm as physical experiments. Thought experiments require all assumptions to be supported by empirical evidence. The context must be believable, and it must provide useful answers to complex questions. A thought experiment must have the potential to be falsified.

Irvine writes:

Just as a physical experiment often has repercussions for its background theory in terms of confirmation, falsification or the like, so too will a thought experiment. Of course, the parallel is not exact; thought experiments… do not include actual interventions within the physical environment.

In Do All Rational Folks Think As We Do?, Barbara D. Massey writes:

Often critique of thought experiments demands the fleshing out or concretizing of descriptions so that what would happen in a given situation becomes less a matter of guesswork or pontification. In thought experiments we tend to elaborate descriptions with the latest scientific models in mind…The thought experiment seems to be a close relative of the scientist’s laboratory experiment with the vital difference that observations may be made from perspectives which are in reality impossible, for example, from the perspective of moving at the speed of light…The thought experiment seems to discover facts about how things work within the laboratory of the mind.

One key example of a scientific thought experiment is Schrodinger’s cat.

Developed in 1935 by Erwin Schrodinger, Schrodinger’s cat seeks to illustrate the counterintuitive nature of quantum mechanics in a more understandable manner.

Although difficult to present in a simplified manner, the idea is that of a cat, neither alive nor dead, sealed within a box. Inside the box is a Geiger counter and a small quantity of decaying radioactive material. The amount of radioactive material is so small that, over a given period of time, it is equally probable an atom will decay or not. If it does decay, a vial of poison is smashed and the cat dies. Without opening the box, it is impossible to know if the cat is alive or dead.

Let's ignore the ethical implications and the fact that, if this were performed, the angry meowing of the cat would be a clue. Like most thought experiments, the details are arbitrary – it is irrelevant what animal it is, what kills it, or the time frame.

Schrodinger’s point was that quantum mechanics is indeterminate. When does a quantum system switch from one state to another? Can the cat be both alive and dead, and is that conditional on its being observed? What about the cat’s own observation of itself?

In Search of Schrodinger’s Cat, John Gribbin writes:

Nothing is real unless it is observed…there is no underlying reality to the world. “Reality,” in the everyday sense, is not a good way to think about the behavior of the fundamental particles that make up the universe; yet at the same time those particles seem to be inseparably connected into some invisible whole, each aware of what happens to the others.

Schrodinger himself wrote in Nature and The Greeks:

We do not belong to this material world that science constructs for us. We are not in it; we are outside. We are only spectators. The reason why we believe that we are in it, that we belong to the picture, is that our bodies are in the picture. Our bodies belong to it. Not only my own body, but those of my friends, also of my dog and cat and horse, and of all the other people and animals. And this is my only means of communicating with them.

Another important early example of a scientific thought experiment is Galileo’s Leaning Tower of Pisa Experiment.

Galileo sought to disprove the prevailing belief that the speed of a falling object is determined by its mass. Since the time of Aristotle, people had assumed that a 10g object would fall at 1/10th the speed of a 100g object. Oddly, no one is recorded as having tested this.

According to Galileo’s early biography (written in 1654), he dropped two objects from the Leaning Tower of Pisa to disprove the gravitational mass relation hypothesis. Both landed at the same time, ushering in a new understanding of gravity. It is unknown whether Galileo performed the experiment himself, so it is regarded as a thought experiment, not a physical one. Galileo reached his conclusion through the use of other thought experiments.

Biologists use thought experiments, often of the counterfactual variety. In particular, evolutionary biologists question why organisms exist as they do today. For example, why are sheep not green? As surreal as the question is, it is a valid one. A green sheep would be better camouflaged from predators. Another thought experiment involves asking: why don’t organisms (aside from certain bacteria) have wheels? Again, the question is surreal but is still a serious one. We know from our vehicles that wheels are more efficient for moving at speed than legs, so why do they not naturally exist beyond the microscopic level?

Psychology and Ethics — The Trolley Problem

Picture the scene. You are a lone passerby in a street where a tram is running along a track. The driver has lost control of it. If the tram continues along its current path, the five passengers aboard will die in the ensuing crash. You notice a switch which would allow the tram to move to a different track, where a man is standing. The collision would kill him but would save the five passengers. Do you press the switch?

This thought experiment has been discussed in various forms since the early 1900s. Psychologists and ethicists have discussed the trolley problem at length, often using it in research. It raises many questions, such as:

  • Is a casual observer required to intervene?
  • Is there a measurable value to human life? I.e. is one life less valuable than five?
  • How would the situation differ if the observer were required to actively push a man onto the tracks rather than pressing the switch?
  • What if the man being pushed were a ‘villain’? Or a loved one of the observer? How would this change the ethical implications?
  • Can an observer make this choice without the consent of the people involved?

Research has shown most people are far more willing to press a switch than to push someone onto the tracks. This changes if the man is a ‘villain’ – people are then far more willing to push him. Likewise, they are reluctant if the person being pushed is a loved one.

In Incognito: The Secret Lives of The Brain, David Eagleman writes that our brains respond very differently to the idea of pushing someone and the idea of pressing a switch. When confronted with a switch, brain scans show that our rational thinking areas are activated. Change pressing a switch to pushing a person, and our emotional areas activate. Eagleman summarizes:

People register emotionally when they have to push someone; when they only have to tip a lever, their brain behaves like Star Trek’s Mr. Spock.

The trolley problem is theoretical, but it does have real-world implications. For example, the majority of people who eat meat would not be content to kill the animal themselves: they are happy to press the switch but not to push the man. Even those who do not consume meat tend to ignore the fact that they indirectly contribute to the deaths of animals through production quotas, which mean the meat they would have eaten ends up wasted. They feel morally superior because they are not actively pushing anyone onto the tracks, yet they are still like an observer who does not intervene in any way. As we move towards autonomous vehicles, there may be real-life instances of similar situations. Vehicles may be required to make utilitarian choices, such as swerving into a ditch and killing the driver to avoid a group of children.

Although psychology and ethics are separate fields, they often make use of the same thought experiments.

The Infinite Monkey Theorem and Mathematics

“Ford!” he said, “there's an infinite number of monkeys outside who want to talk to us about this script for Hamlet they've worked out.”

― Douglas Adams, The Hitchhiker's Guide to the Galaxy

***

The infinite monkey theorem is a mathematical thought experiment. The premise is that an infinite number of monkeys with typewriters will, eventually, type the complete works of Shakespeare. Some versions involve a single monkey typing forever, or a single work rather than the complete canon. Mathematicians use the monkey(s) as a representation of a device which produces letters at random.

In Fooled By Randomness, Nassim Taleb writes:

If one puts an infinite number of monkeys in front of (strongly built) typewriters, and lets them clap away, there is a certainty that one of them will come out with an exact version of the ‘Iliad.' Upon examination, this may be less interesting a concept than it appears at first: Such probability is ridiculously low. But let us carry the reasoning one step beyond. Now that we have found that hero among monkeys, would any reader invest his life's savings on a bet that the monkey would write the ‘Odyssey' next?

The infinite monkey theorem is intended to illustrate the idea that any output can be produced through enough random input, in the manner a drunk person arriving home will eventually manage to fit their key in the lock even without much finesse. It also represents the nature of probability: given enough time and trials, any scenario with a nonzero probability will eventually occur.
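That ‘eventually’ is doing heavy lifting, which is Taleb’s point about not betting on the Odyssey. A rough sketch of the arithmetic, assuming a simplified 26-key typewriter and independent random keystrokes:

```python
# Chance that a random sequence of keystrokes exactly matches a
# target text, on an alphabet of 26 equally likely letters.
def chance_of_typing(text, alphabet_size=26):
    return (1 / alphabet_size) ** len(text)

print(chance_of_typing("hamlet"))  # roughly 3.2e-09 for just six letters
```

Each added letter divides the odds by 26, so matching even a single sonnet by chance would require an astronomical number of attempts; the theorem only holds in the limit of infinite monkeys or infinite time.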

To learn more about thought experiments, consider reading The Pig That Wants to Be Eaten, The Infinite Tortoise or The Laboratory of the Mind.

The Probability Distribution of the Future

The best colloquial definition of risk may be the following:

“Risk means more things can happen than will happen.”

We found it through the inimitable Howard Marks, but it's a quote from Elroy Dimson of the London Business School. Doesn't that capture it pretty well?

Another way to state it is: If there were only one thing that could happen, how much risk would there be, except in an extremely banal sense? You'd know the exact probability distribution of the future. If I told you there was a 100% probability that you'd get hit by a car today if you walked down the street, you simply wouldn't do it. You wouldn't call walking down the street a “risky gamble” right? There's no gamble at all.

But the truth is that in practical reality, there aren't many 100% situations to bank on. Way more things can happen than will happen. That introduces great uncertainty into the future, no matter what type of future you're looking at: An investment, your career, your relationships, anything.

How do we deal with this in a pragmatic way? The investor Howard Marks starts it this way:

Key point number one in this memo is that the future should be viewed not as a fixed outcome that’s destined to happen and capable of being predicted, but as a range of possibilities and, hopefully on the basis of insight into their respective likelihoods, as a probability distribution.

This is the most sensible way to think about the future: A probability distribution where more things can happen than will happen. Knowing that we live in a world of great non-linearity and with the potential for unknowable and barely understandable Black Swan events, we should never become too confident that we know what's in store, but we can also appreciate that some things are a lot more likely than others. Learning to adjust probabilities on the fly as we get new information is called Bayesian updating.
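Bayesian updating has a simple mechanical core: combine a prior probability with the likelihood of the new evidence. A minimal sketch, where the investment framing and all the numbers are illustrative assumptions, not from Marks:

```python
# Bayes' rule: revise P(H) after observing evidence E.
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H|E) given P(H), P(E|H) and P(E|not H)."""
    numerator = p_e_given_h * prior
    evidence = numerator + p_e_given_not_h * (1 - prior)
    return numerator / evidence

# Start 30% confident a thesis is right; observe evidence that is
# twice as likely if the thesis holds (0.8 vs 0.4).
posterior = bayes_update(prior=0.30, p_e_given_h=0.8, p_e_given_not_h=0.4)
print(round(posterior, 3))  # 0.462: one observation shifts 30% to about 46%
```

Each new piece of information nudges the probability distribution rather than settling the question outright, which is exactly the habit of mind Marks describes.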

But.

Although the future is certainly a probability distribution, Marks makes another excellent point in the wonderful memo above: In reality, only one thing will happen. So you must make the decision: Are you comfortable if that one thing happens, whatever it might be? Even if it only has a 1% probability of occurring? Echoing the first lesson of biology, Warren Buffett stated that “In order to win, you must first survive.” You have to live long enough to play out your hand.

Which leads to an important second point: Uncertainty about the future does not necessarily equate with risk, because risk has another component: Consequences. The world is a place where “bad outcomes” are only “bad” if you know their (rough) magnitude. So in order to think about the future and about risk, we must learn to quantify.

It's like the old saying (usually before something terrible happens): What's the worst that could happen? Let's say you propose to undertake a six month project that will cost your company $10 million, and you know there's a reasonable probability that it won't work. Is that risky?

It depends on the consequences of losing $10 million, and the probability of that outcome. It's that simple! (Simple, of course, does not mean easy.) A company with $10 billion in the bank might consider that a very low-risk bet even if it only had a 10% chance of succeeding.

In contrast, a company with only $10 million in the bank might consider it a high-risk bet even if it had only a 10% chance of failing. Maybe five $2 million projects with uncorrelated outcomes would make more sense for the latter company.

In the real world, risk = probability of failure x consequences. That concept, however, can be looked at through many lenses. Risk of what? Losing money? Losing my job? Losing face? Those things need to be thought through. When we observe others being “too risk averse,” we might want to think about which risks they're truly avoiding. Sometimes risk is not only financial. 
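The formula can be made concrete with the figures from the example above (the probabilities are illustrative, and the five-project comparison assumes independent outcomes):

```python
# Risk as probability of failure times consequences.
def expected_loss(p_failure, loss):
    return p_failure * loss

# One $10M project with a 10% chance of succeeding (90% of failing):
single = expected_loss(0.9, 10_000_000)   # $9M expected loss

# Five uncorrelated $2M projects with the same odds each:
five = 5 * expected_loss(0.9, 2_000_000)  # also $9M expected loss

# Same expected loss, but the chance of losing everything drops:
p_ruin = 0.9 ** 5                         # about 0.59, versus 0.90
print(single, five, round(p_ruin, 2))
```

The expected loss is identical, but splitting the bet sharply reduces the probability of total ruin, which is why the smaller company might prefer the five smaller projects: it must first survive.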

***

Let's cover one more under-appreciated but seemingly obvious aspect of risk, also pointed out by Marks: Knowing the outcome does not teach you about the risk of the decision.

This is an incredibly important concept:

If you make an investment in 2012, you’ll know in 2014 whether you lost money (and how much), but you won’t know whether it was a risky investment – that is, what the probability of loss was at the time you made it.

To continue the analogy, it may rain tomorrow, or it may not, but nothing that happens tomorrow will tell you what the probability of rain was as of today. And the risk of rain is a very good analogue (although I’m sure not perfect) for the risk of loss.

How many times do we see this simple dictum violated? Knowing that something worked out, we argue that it wasn't that risky after all. But what if, in reality, we were simply fortunate? This is the Fooled by Randomness effect.

The way to think about it is the following: The worst thing that can happen to a young gambler is that he wins the first time he goes to the casino. He might convince himself he can beat the system.

The truth is that most times we don't know the probability distribution at all. Because the world is not a predictable casino game — an error Nassim Taleb calls the Ludic Fallacy — the best we can do is guess.

With intelligent estimations, we can work to get the rough order of magnitude right, understand the consequences if we're wrong, and always be sure to never fool ourselves after the fact.

If you're into this stuff, check out Howard Marks' memos to his clients, or check out his excellent book, The Most Important Thing. Nate Silver also has an interesting similar idea about the difference between risk and uncertainty. And lastly, another guy that understands risk pretty well is Jason Zweig, who we've interviewed on our podcast before.

***

The Green Lumber Fallacy: The Difference between Talking and Doing

“Clearly, it is unrigorous to equate skills at doing with skills at talking.”
— Nassim Taleb

***

Before we get to the meat, let's review an elementary idea in biology that will be relevant to our discussion.

If you're familiar with evolutionary theory, you know that populations of organisms are constantly subjected to “selection pressures” — the rigors of their environment which lead to certain traits being favored and passed down to their offspring and others being thrown into the evolutionary dustbin.

Biologists dub these advantages in reproduction “fitness” — as in, the famous lengthening of giraffe necks gave them greater “fitness” in their environment because it helped them reach untouched leaves high up.

Fitness is generally a relative concept: since organisms must compete for scarce resources, fitness is measured in terms of the reproductive advantage one organism has over another.

Likewise, a trait that provides great fitness in one environment may be useless or even disadvantageous in another. (Imagine draining a pond: any fitness advantages held by a really incredible fish become instantly worthless without water.) Traits also relate to circumstance: an advantage at one time could be a disadvantage at another, and vice versa.

This makes fitness an all-important concept in biology: Traits are selected for if they provide fitness to the organism within a given environment.

Got it? OK, let's get back to the practical world.

***

The Black Swan thinker Nassim Taleb has an interesting take on fitness and selection in the real world:  People who are good “doers” and people who are good “talkers” are often selected for different traits. Be careful not to mix them up.

In his book Antifragile, Taleb uses this idea to explain a heuristic he once used when hiring traders on Wall Street:

The more interesting their conversation, the more cultured they are, the more they will be trapped into thinking that they are effective at what they are doing in real business (something psychologists call the halo effect, the mistake of thinking that skills in, say, skiing translate unfailingly into skills in managing a pottery workshop or a bank department, or that a good chess player would be a good strategist in real life).

Clearly, it is unrigorous to equate skills at doing with skills at talking. My experience of good practitioners is that they can be totally incomprehensible–they do not have to put much energy into turning their insights and internal coherence into elegant style and narratives. Entrepreneurs are selected to be doers, not thinkers, and doers do, they don't talk, and it would be unfair, wrong, and downright insulting to measure them in the talk department.

In other words, the selection pressures on an entrepreneur are very different from those on a corporate manager or bureaucrat: Entrepreneurs and risk takers succeed or fail not so much on their ability to talk, explain, and rationalize as on their ability to get things done.

While the two can often go together, Nassim figured out that they frequently don't. We judge people as ignorant when it's really us who are ignorant.

When you think about it, there's no a priori reason great intellectualizing and great doing must go together: Being able to hack together an incredible piece of code gives you great fitness in the world of software development, while doing great theoretical computer science probably gives you better fitness in academia. The two skills don't have to be connected. Great economists don't usually make great investors.

But we often confuse the two realms. We're tempted to think that a great investor must be fluent in behavioral economics, or a great CEO fluent in McKinsey-esque management narratives, but in the real world we see this intuition violated constantly.

The investor Walter Schloss worked 9-to-5, barely left his office, and was never considered a towering intellect, but he compiled one of the great investment records of all time. A young Mark Zuckerberg could hardly be described as a prototypical manager or businessperson, yet he built one of the most profitable companies in the world by finding others who complemented his weaknesses.

There are a thousand examples: Our narratives about the type of knowledge or experience we must have or the type of people we must be in order to become successful are often quite wrong; in fact, they border on naive. We think people who talk well can do well, and vice versa. This is simply not always so.

We won't claim that great doers cannot be great talkers, rationalizers, or intellectuals. Sometimes they are. But if you're seeking to understand the world properly, it's good to understand that the two traits are not always co-located. Success, especially in some “narrow” area like plumbing, programming, trading, or marketing, is often achieved by rather non-intellectual folks. Their evolutionary fitness comes not from the ability to talk, but to do. This is part of reality.

***

Taleb calls this idea the Green Lumber Fallacy, after a story in the book What I Learned Losing a Million Dollars. Taleb describes it in Antifragile:

In one of the rare noncharlatanic books in finance, descriptively called What I Learned Losing a Million Dollars, the protagonist makes a big discovery. He remarks that a fellow named Joe Siegel, one of the most successful traders in a commodity called “green lumber,” actually thought it was lumber painted green (rather than freshly cut lumber, called green because it had not been dried). And he made it his profession to trade the stuff! Meanwhile the narrator was into grand intellectual theories and narratives of what caused the price of commodities to move and went bust.

It is not just that the successful expert on lumber was ignorant of central matters like the designation “green.” He also knew things about lumber that nonexperts think are unimportant. People we call ignorant might not be ignorant.

The fact that predicting the order flow in lumber and the usual narrative had little to do with the details one would assume from the outside is important. People who do things in the field are not subjected to a set exam; they are selected in the most non-narrative manner — nice arguments don't make much difference. Evolution does not rely on narratives, humans do. Evolution does not need a word for the color blue.

So let us call the green lumber fallacy the situation in which one mistakes a source of visible knowledge — the greenness of lumber — for another, less visible from the outside, less tractable, less narratable.

The main takeaway is that the real causative factors of success are often hidden from us. We think that knowing the intricacies of green lumber is more important than keeping a close eye on the order flow. We seduce ourselves into overestimating the impact of our intellectualism and then wonder why “idiots” are getting ahead. (Probably hustle and competence.)

But for “skin in the game” operations, selection and evolution don't care about great talk and ideas unless they translate into results. They care what you do with the thing more than that you know the thing. They care about actually avoiding risk rather than your extensive knowledge of risk management theories. (Of course, in many areas of modernity there is no skin in the game, so talking and rationalizing can be and frequently are selected for.)

As Taleb did with his hiring heuristic, this should teach us to be a little skeptical of taking good talkers at face value, and to be a little skeptical when we see “unexplainable” success in someone we consider “not as smart.” There might be a disconnect we're not seeing because we're seduced by narrative. (A problem someone like Lee Kuan Yew avoided by focusing exclusively on what worked.)

And we don't have to give up our intellectual pursuits in order to appreciate this nugget of wisdom; Taleb is right, but it's also true that combining the rigorous, skeptical knowledge of “what actually works” with an ever-improving theory structure of the world might be the best combination of all — selected for in many more environments than simple git-er-done ability, which can be extremely domain and environment dependent. (The green lumber guy might not have been much good outside the trading room.)

After all, Taleb himself was both a successful trader and the highest level of intellectual. Even he can't resist a little theorizing.

Frozen Accidents: Why the Future Is So Unpredictable

“Each of us human beings, for example, is the product of an enormously long
sequence of accidents,
any of which could have turned out differently.”
— Murray Gell-Mann

***

What parts of reality are the product of an accident? The physicist Murray Gell-Mann thought the answer was “just about everything.” And to Gell-Mann, understanding this idea was the key to understanding how complex systems work.

Gell-Mann believed two things caused what we see in the world:

  1. A set of fundamental laws
  2. Random “accidents” — the little blips that could have gone either way, and had they, would have produced a very different kind of world.

Gell-Mann pulled the second part from Francis Crick, co-discoverer of the structure of DNA, who argued that the genetic code itself may well have been an “accident” of physical history rather than a uniquely necessary arrangement.

These accidents become “frozen” in time, and have a great effect on all subsequent developments; complex life itself is an example of something that did happen a certain way but probably could have happened other ways — we know this from looking at the physics.

This idea of fundamental laws plus accidents, and the non-linear second order effects they produce, became the science of complexity and chaos theory. Gell-Mann discussed the fascinating idea further in a 1996 essay on Edge:

Each of us human beings, for example, is the product of an enormously long sequence of accidents, any of which could have turned out differently. Think of the fluctuations that produced our galaxy, the accidents that led to the formation of the solar system, including the condensation of dust and gas that produced Earth, the accidents that helped to determine the particular way that life began to evolve on Earth, and the accidents that contributed to the evolution of particular species with particular characteristics, including the special features of the human species. Each of us individuals has genes that result from a long sequence of accidental mutations and chance matings, as well as natural selection.

Now, most single accidents make very little difference to the future, but others may have widespread ramifications, many diverse consequences all traceable to one chance event that could have turned out differently. Those we call frozen accidents.

These “frozen accidents” occur at every nested level of the world: As Gell-Mann points out, they are an outcome in physics (the physical laws we observe may be accidents of history); in biology (our genetic code is largely a byproduct of “advantageous accidents” as discussed by Crick); and in human history, as we'll discuss. In other words, the phenomenon hits all three buckets of knowledge.

Gell-Mann gives a great example of how this plays out on the human scale:

For instance, Henry VIII became king of England because his older brother Arthur died. From the accident of that death flowed all the coins, all the charters, all the other records, all the history books mentioning Henry VIII; all the different events of his reign, including the manner of separation of the Church of England from the Roman Catholic Church; and of course the whole succession of subsequent monarchs of England and of Great Britain, to say nothing of the antics of Charles and Diana. The accumulation of frozen accidents is what gives the world its effective complexity.

The most important idea here is that the frozen accidents of history have a nonlinear effect on everything that comes after. The complexity we see comes from simple rules and many, many “bounces” that could have gone in any direction. Once they go a certain way, there is no return.

This principle is illustrated wonderfully in the book The Origin of Wealth by Eric Beinhocker. The first example comes from 19th century history:

In the late 1800s, “Buffalo Bill” Cody created a show called Buffalo Bill's Wild West Show, which toured the United States, putting on exhibitions of gun fighting, horsemanship, and other cowboy skills. One of the show's most popular acts was a woman named Phoebe Moses, nicknamed Annie Oakley. Annie was reputed to have been able to shoot the head off of a running quail by age twelve, and in Buffalo Bill's show, she put on a demonstration of marksmanship that included shooting flames off candles, and corks out of bottles. For her grand finale, Annie would announce that she would shoot the end off a lit cigarette held in a man's mouth, and ask for a brave volunteer from the audience. Since no one was ever courageous enough to come forward, Annie hid her husband, Frank, in the audience. He would “volunteer,” and they would complete the trick together. In 1880, when the Wild West Show was touring Europe, a young crown prince (and later, kaiser), Wilhelm, was in the audience. When the grand finale came, much to Annie's surprise, the macho crown prince stood up and volunteered. The future German kaiser strode into the ring, placed the cigarette in his mouth, and stood ready. Annie, who had been up late the night before in the local beer garden, was unnerved by this unexpected development. She lined the cigarette up in her sights, squeezed…and hit it right on the target.

Many people have speculated that if at that moment, there had been a slight tremor in Annie's hand, then World War I might never have happened. If World War I had not happened, 8.5 million soldiers and 13 million civilian lives would have been saved. Furthermore, if Annie's hand had trembled and World War I had not happened, Hitler would not have risen from the ashes of a defeated Germany, and Lenin would not have overthrown a demoralized Russian government. The entire course of twentieth-century history might have been changed by the merest quiver of a hand at a critical moment. Yet, at the time, there was no way anyone could have known the momentous nature of the event.

This isn't to say that other big events, many of them bad, would not have occurred in the 20th century. Almost certainly there would have been wars and upheavals.

But the actual course of history was in some part determined by a small chance event that had no seeming importance when it happened. The impact of Wilhelm living rather than dying was totally non-linear. (A small non-event had a massively disproportionate effect on what happened later.)

This is why predicting the future, even with immense computing power, is an impossible task. The chaotic effects of randomness, with small inputs having disproportionate and massive effects, make prediction extremely difficult. That's why we must appreciate the role of randomness in the world and seek to protect against it.
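A standard way to see this sensitivity numerically is the logistic map, a textbook chaotic system (not from the book; it is used here purely as an illustration). Two starting points that differ by one part in a million quickly end up on unrelated trajectories:

```python
def logistic(x, r=4.0):
    """One step of the logistic map x -> r*x*(1-x), chaotic at r=4."""
    return r * x * (1 - x)

a, b = 0.400000, 0.400001   # initial conditions differing by only 1e-6
max_gap = 0.0
for step in range(60):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))

# The microscopic initial difference grows roughly exponentially until it
# reaches the full scale of the system, so no finite measurement precision
# is enough to predict the trajectory far ahead.
print(max_gap)
```

This is the frozen-accident dynamic in miniature: a difference in the sixth decimal place, invisible at the start, ends up determining the entire subsequent path.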

Another great illustration from The Origin of Wealth is a famous story in the world of technology:

[In 1980] IBM approached a small company with forty employees in Bellevue, Washington. The company, called Microsoft, was run by a Harvard dropout named Bill Gates and his friend Paul Allen. IBM wanted to talk to the small company about creating a version of the programming language BASIC for the new PC. At their meeting, IBM asked Gates for his advice on what operating systems (OS) the new machine should run. Gates suggested that IBM talk to Gary Kildall of Digital Research, whose CP/M operating system had become the standard in the hobbyist world of microcomputers. But Kildall was suspicious of the blue suits from IBM and when IBM tried to meet him, he went hot-air ballooning, leaving his wife and lawyer to talk to the bewildered executives, along with instructions not to sign even a confidentiality agreement. The frustrated IBM executives returned to Gates and asked if he would be interested in the OS project. Despite never having written an OS, Gates said yes. He then turned around and licensed a product appropriately named Quick and Dirty Operating System, or Q-DOS, from a small company called Seattle Computer Products for $50,000, modified it, and then relicensed it to IBM as PC-DOS. As IBM and Microsoft were going through the final language for the agreement, Gates asked for a small change. He wanted to retain the rights to sell his DOS on non-IBM machines in a version called MS-DOS. Gates was giving the company a good price, and IBM was more interested in PC hardware than software sales, so it agreed. The contract was signed on August 12, 1981. The rest, as they say, is history. Today, Microsoft is a company worth $270 billion while IBM is worth $140 billion.

At any point in that story, business history could have gone a much different way: Kildall could have skipped the hot-air ballooning, IBM could have refused Gates' offer, Microsoft could have failed to get the license for QDOS. Yet this little episode resulted in massive wealth for Gates and a long period of trouble for IBM.

Predicting the outcomes of a complex system must clear a pretty major hurdle: The prediction must be robust to non-linear “accidents” with a chain of unforeseen causation. In some situations this is doable: We can confidently predict that Microsoft will not go broke in the next 12 months; the probability of a chain of events taking it under that quickly is so low as to be negligible, no matter how you compute it. (Even IBM made it through the above scenario, although not unscathed.)

But as history rolls on and more “accidents” accumulate year by year, a “Fog of the Future” rolls in to obscure our view. To operate in such a world, we must learn that predicting is inferior to building systems that don't require prediction, as Mother Nature does. And if we must predict, we must confine our predictions to areas with few variables that lie within our circle of competence, and understand the consequences if we're wrong.

If this topic interests you, explore the rest of The Origin of Wealth, which discusses complexity in the economic realm in great (but readable) detail, and the rest of Murray Gell-Mann's essay on Edge. Gell-Mann also wrote a book on the topic, The Quark and the Jaguar, which is worth checking out. The best writer on randomness and robustness in the face of an uncertain future is, of course, Nassim Taleb, whom we have written about many times.

Nassim Taleb’s Life Advice: Be Careful of Life Advice

Nassim Taleb, the modern philosopher best known for his ideas on The Black Swan and Antifragility, gave his first commencement address this year, at American University in Beirut. (I suspect he's been asked in the past but declined.)

Like him or not, Taleb is a unique and uncompromising mind. He doesn't suffer any fools and doesn't sacrifice his principles for money or fame, so far as one can tell. He's willing to take tremendous personal heat if he thinks he's right. (Again, agree with him or not.) There's a certain honor in his approach that must be admired.

The most interesting part of his address is its take on the idea of life advice itself. Commencement speeches are, obviously, meant to pass advice from a wise (and famous) person to a younger generation. But Nassim goes in a different direction: He advises the students to be careful of common life advice, for if he had followed it, he'd never have become the unique and interesting person he is.

I hesitate to give advice because every major single piece of advice I was given turned out to be wrong and I am glad I didn’t follow them. I was told to focus and I never did. I was told to never procrastinate and I waited 20 years for The Black Swan and it sold 3 million copies. I was told to avoid putting fictional characters in my books and I did put in Nero Tulip and Fat Tony because I got bored otherwise. I was told to not insult the New York Times and the Wall Street Journal; the more I insulted them the nicer they were to me and the more they solicited Op-Eds. I was told to avoid lifting weights for a back pain and became a weightlifter: never had a back problem since.

If I had to relive my life I would be even more stubborn and uncompromising than I have been.

The truth is, much of the advice you receive as a young person will be pretty good. Saving money works. Marrying the right person works. Avoiding drugs works. Etc. The obvious stuff is worth following. (You don't always have to walk on your hands because everyone else walks on their feet.)

But there's a host of more subjective wisdom that, generally speaking, leads you to become a lot more like other people. “Common wisdom,” insofar as it's actually common, tends to reinforce cultural norms and values. If you want to lead a comfortable existence, that may work fine. But it won't create another Nassim Taleb, or another Steve Jobs, or another Richard Feynman. They, and many others, embraced what made them different.

Of course, many less successful people embraced their oddities, too. The silent grave is chock full of candidates. This isn't a “recipe for success” or some other nonsense — it's more complicated than simply being different. (The narrative fallacy is always right around the corner.)

But one has to suspect that a more interesting and honorable life is led by those who are a bit uncompromising on the important values like integrity, self-education, and moral courage. If you can offset that by being extremely compromising on the unimportant stuff, you may have a shot at living an interesting and different life with a heaping scoop of integrity.

You can read the rest of the commencement here. If you're still interested, check out a few other great commencement speeches.

Life Changing Books (New Guy Edition)

Back in 2013, I posted the Books that Changed my Life. In doing so, I was responding to a reader request to post up the books that “literally changed my life.”

Now that we have Jeff on board, I've asked him to do the same. Here are his choices, presented in a somewhat chronological order. As always, these lists leave off a lot of important books in the name of brevity.

Rich Dad, Poor Dad – Robert Kiyosaki

Before I get hanged for apostasy, let me explain. The list is about books that changed my life and this one absolutely did. I pulled this off my father's shelf and read it in high school, and it kicked off a lifelong interest in investments, business, and the magic of compound interest. That eventually led me to find Warren Buffett and Charlie Munger, affecting the path of my life considerably. With that said, I would probably not recommend you start here. I haven't re-read the book since high school and what I've learned about Kiyosaki doesn't make me want to recommend anything to you from him. But for better or worse, this book had an impact. Another one that probably holds up better is The Millionaire Next Door, which my father recommended when I was in high school and stuck with me for a long time too.

Buffett: Making of an American Capitalist/Buffett's Letters to Shareholders – Roger Lowenstein, Warren Buffett

These two and the next book are duplicates off Shane's list, but they are also probably the reason we know each other. Learning about Warren Buffett took the kid who liked “Rich Dad, Poor Dad” and watched The Apprentice, a kid who might have been headed toward highly leveraged real estate speculation and who knows what else, and put him on a sounder path. I read this biography many times in college, and decided I wanted to emulate some of Buffett's qualities. (I actually now prefer The Snowball, by Alice Schroeder, but Lowenstein's came first and changed my life more.) Although I have a business degree, I learned a lot more from reading and applying the collected Letters to Shareholders.

Poor Charlie's Almanack – Peter Kaufman, Charlie Munger et al.

The Almanack is the greatest book I have ever read, and I knew it from the first time I read it. As Charlie says in the book, there is no going back from the multi-disciplinary approach. It would feel like cutting off your hands. I re-read this book every year in whole or in part, and so far, 8 years on, I haven't failed to pick up a meaningful new insight. Like any great book, it grows as you grow. I like to think I understand about 40% of it on a deep level now, and I hope to add a few percent every year. I literally cannot conceive of a world in which I didn't read this.

The Nurture Assumption – Judith Rich Harris

This book affected my thinking considerably. I noticed in the Almanack that Munger recommended this book and another, No Two Alike, towards the end. Once I read it, I could see why. It is a monument to clear and careful thinking. Munger calls the author Judith Rich Harris a combination of Darwin and Sherlock Holmes, and he's right. If this book doesn't change how you think about parenting, social development, peer pressure, education, and a number of other topics, then re-read it.

Filters Against Folly/Living within Limits – Garrett Hardin

Like The Nurture Assumption, these two books are brilliantly well thought-through. Pillars of careful thought. It wasn't until years after I read them that I realized Garrett Hardin was friends with, and in fact funded by, Charlie Munger. The ideas about overpopulation in Living within Limits made a deep impression on me, but the quality of thought in general hit me the hardest. Like the Almanack, it made me want to become a better and more careful thinker.

The Black Swan – Nassim Taleb

Who has read this and not been affected by it? Like many, Nassim's books changed how I think about the world. The ideas from The Black Swan and Fooled by Randomness about the narrative fallacy and the ludic fallacy cannot be forgotten, as well as the central idea of the book itself that rare events are not predictable and yet dominate our landscape. Also, Nassim's writing style made me realize deep, practical writing didn't have to be dry and sanitized. Like him or not, he wears his soul on his sleeve.

Good Calories, Bad Calories / Why We Get Fat: And What to do About it – Gary Taubes

I've been interested in nutrition since I was young, and these books made me realize most of what I knew was not very accurate. Gary Taubes is a science journalist of the highest order. Like Hardin, Munger, and Harris, he thinks much more carefully than most of his peers. Nutrition is a field that is still sort of growing up, and the quality of the research and thought shows it. Taubes made me recognize that nutrition can be a real science if it's done more carefully, more Feynman-like. Hopefully his NuSi initiative will help nudge the field in the right direction.

The (Honest) Truth about Dishonesty – Dan Ariely

This book by Ariely was a game-changer in that it helped me realize the extent to which we rationalize our behavior in a million little ways. I had a lot of nights thinking about my own propensity for dishonesty and cheating after I read this one, and I like to think I'm a pretty moral person to start with. I had never considered how situational dishonesty was, but now that I do, I see it constantly in myself and others. There are also good sections on incentive-caused bias and social pressure that made an impact.

Sapiens – Yuval Noah Harari

This is fairly new so I'm still digesting this book, and I have a feeling it will take many years. But Sapiens has a lot of (for me) deep insights about humanity and how we got here. I think Yuval is a very good thinker and an excellent writer. A lot of the ideas in this book will set some people off, and not in a good way. But that doesn't mean they're not correct. Highly recommended if you're open-minded and want to learn.

***

At the end of the day, what gets me excited is my Antilibrary, all the books I have on my shelf or on my Amazon wish list that I haven't read yet. The prospect of reading another great book that changes my life like these books did is an exciting quest.