Tag: Mental Models

Scientific Concepts We All Ought To Know

John Brockman's online scientific roundtable Edge.org does something fantastic every year: It asks all of its contributors (hundreds of them) to answer one meaningful question. Questions like What Have You Changed Your Mind About? and What is Your Dangerous Idea?

This year's was particularly awesome for our purposes: What Scientific Term or Concept Ought To Be More Widely Known?

The answers give us a window into over 200 brilliant minds, with the simple filtering mechanism that there's something they know that we should probably know, too. We wanted to highlight a few of our favorites for you.

***

From Steven Pinker, a very interesting thought on The Second Law of Thermodynamics (Entropy). This reminded me of the central thesis of The Origin of Wealth by Eric Beinhocker, whose work we've referenced in the past and will cover in more depth in the future.


The Second Law of Thermodynamics states that in an isolated system (one that is not taking in energy), entropy never decreases. (The First Law is that energy is conserved; the Third, that a temperature of absolute zero is unreachable.) Closed systems inexorably become less structured, less organized, less able to accomplish interesting and useful outcomes, until they slide into an equilibrium of gray, tepid, homogeneous monotony and stay there.

In its original formulation the Second Law referred to the process in which usable energy in the form of a difference in temperature between two bodies is dissipated as heat flows from the warmer to the cooler body. Once it was appreciated that heat is not an invisible fluid but the motion of molecules, a more general, statistical version of the Second Law took shape. Now order could be characterized in terms of the set of all microscopically distinct states of a system: Of all these states, the ones that we find useful make up a tiny sliver of the possibilities, while the disorderly or useless states make up the vast majority. It follows that any perturbation of the system, whether it is a random jiggling of its parts or a whack from the outside, will, by the laws of probability, nudge the system toward disorder or uselessness. If you walk away from a sand castle, it won’t be there tomorrow, because as the wind, waves, seagulls, and small children push the grains of sand around, they’re more likely to arrange them into one of the vast number of configurations that don’t look like a castle than into the tiny few that do.

The Second Law of Thermodynamics is acknowledged in everyday life, in sayings such as “Ashes to ashes,” “Things fall apart,” “Rust never sleeps,” “Shit happens,” “You can’t unscramble an egg,” “What can go wrong will go wrong,” and (from the Texas lawmaker Sam Rayburn), “Any jackass can kick down a barn, but it takes a carpenter to build one.”

Scientists appreciate that the Second Law is far more than an explanation for everyday nuisances; it is a foundation of our understanding of the universe and our place in it. In 1928 the physicist Arthur Eddington wrote:

[…]

Why the awe for the Second Law? The Second Law defines the ultimate purpose of life, mind, and human striving: to deploy energy and information to fight back the tide of entropy and carve out refuges of beneficial order. An underappreciation of the inherent tendency toward disorder, and a failure to appreciate the precious niches of order we carve out, are a major source of human folly.

To start with, the Second Law implies that misfortune may be no one’s fault. The biggest breakthrough of the scientific revolution was to nullify the intuition that the universe is saturated with purpose: that everything happens for a reason. In this primitive understanding, when bad things happen—accidents, disease, famine—someone or something must have wanted them to happen. This in turn impels people to find a defendant, demon, scapegoat, or witch to punish. Galileo and Newton replaced this cosmic morality play with a clockwork universe in which events are caused by conditions in the present, not goals for the future. The Second Law deepens that discovery: Not only does the universe not care about our desires, but in the natural course of events it will appear to thwart them, because there are so many more ways for things to go wrong than to go right. Houses burn down, ships sink, battles are lost for the want of a horseshoe nail.

Poverty, too, needs no explanation. In a world governed by entropy and evolution, it is the default state of humankind. Matter does not just arrange itself into shelter or clothing, and living things do everything they can not to become our food. What needs to be explained is wealth. Yet most discussions of poverty consist of arguments about whom to blame for it.

More generally, an underappreciation of the Second Law lures people into seeing every unsolved social problem as a sign that their country is being driven off a cliff. It’s in the very nature of the universe that life has problems. But it’s better to figure out how to solve them—to apply information and energy to expand our refuge of beneficial order—than to start a conflagration and hope for the best.
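Pinker's sand-castle point is ultimately a counting argument, and you can make it concrete with a toy calculation. In the sketch below (our illustration, not Pinker's), n coin flips stand in for a system's microstates, and we count what fraction of all 2^n states fall within 10 percent of one chosen “ordered” arrangement; that fraction collapses toward zero as the system grows.

```python
from math import comb

# Toy model of the statistical Second Law: n coin flips stand in for a
# system's microstates. Call a state "ordered" if at most k flips deviate
# from one chosen arrangement (say, all heads).
def ordered_fraction(n, k):
    ordered = sum(comb(n, j) for j in range(k + 1))
    return ordered / 2 ** n

for n in (10, 100, 1000):
    k = n // 10  # tolerate 10% deviation and still call the state "ordered"
    print(f"n = {n:4d}: ordered states are {ordered_fraction(n, k):.1e} of all states")
```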

Richard Nisbett (a social psychologist) has a great one — a concept we've hit on before but one that is totally underappreciated by most people: the Fundamental Attribution Error.

Modern scientific psychology insists that explanation of the behavior of humans always requires reference to the situation the person is in. The failure to do so sufficiently is known as the Fundamental Attribution Error. In Milgram’s famous obedience experiment, two-thirds of his subjects proved willing to deliver a great deal of electric shock to a pleasant-faced middle-aged man, well beyond the point where he became silent after begging them to stop on account of his heart condition. When I teach about this experiment to undergraduates, I’m quite sure I’ve never convinced a single one that their best friend might have delivered that amount of shock to the kindly gentleman, let alone that they themselves might have done so. They are protected by their armor of virtue from such wicked behavior. No amount of explanation about the power of the unique situation into which Milgram’s subject was placed is sufficient to convince them that their armor could have been breached.

My students, and everyone else in Western society, are confident that people behave honestly because they have the virtue of honesty, conscientiously because they have the virtue of conscientiousness. (In general, non-Westerners are less susceptible to the fundamental attribution error, lacking as they do sufficient knowledge of Aristotle!) People are believed to behave in an open and friendly way because they have the trait of extroversion, in an aggressive way because they have the trait of hostility. When they observe a single instance of honest or extroverted behavior they are confident that, in a different situation, the person would behave in a similarly honest or extroverted way.

In actual fact, when large numbers of people are observed in a wide range of situations, the correlation for trait-related behavior runs about .20 or less. People think the correlation is around .80. In reality, seeing Carlos behave more honestly than Bill in a given situation increases the likelihood that he will behave more honestly in another situation from the chance level of 50 percent only to the vicinity of 55 to 57 percent. People think that if Carlos behaves more honestly than Bill in one situation, the likelihood that he will behave more honestly than Bill in another situation is 80 percent!

How could we be so hopelessly miscalibrated? There are many reasons, but one of the most important is that we don’t normally get trait-related information in a form that facilitates comparison and calculation. I observe Carlos in one situation when he might display honesty or the lack of it, and then not in another for perhaps a few weeks or months. I observe Bill in a different situation tapping honesty and then not in another for many months.

This implies that if people received behavioral data in such a form that many people are observed over the same time course in a given fixed situation, our calibration might be better. And indeed it is. People are quite well calibrated for abilities of various kinds, especially sports. The likelihood that Bill will score more points than Carlos in one basketball game given that he did in another is about 67 percent—and people think it’s about 67 percent.

Our susceptibility to the fundamental attribution error—overestimating the role of traits and underestimating the importance of situations—has implications for everything from how to select employees to how to teach moral behavior.
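As an aside, Nisbett's numbers hang together mathematically. If you model a person's behavior in two situations as bivariate normal with cross-situation correlation r (a modeling assumption of ours, not something Nisbett specifies), the probability that whoever ranks higher in one situation also ranks higher in the other works out to 1/2 + arcsin(r)/π. A minimal sketch:

```python
from math import asin, pi, sin

def consistency_prob(r):
    """P(the person who ranks higher in one situation also ranks higher in
    another), under a bivariate-normal model with cross-situation correlation r."""
    return 0.5 + asin(r) / pi

print(f"{consistency_prob(0.20):.0%}")  # ~56% -- Nisbett's "vicinity of 55 to 57"
print(f"{consistency_prob(0.80):.0%}")  # ~80% -- what people implicitly assume
# Inverting the basketball figure: a 67% repeat rate implies r of about .5
print(f"{sin(pi * (0.67 - 0.5)):.2f}")
```

On that model, r = .20 gives about 56 percent and r = .80 gives about 80 percent, exactly the spread Nisbett describes; his 67 percent figure for basketball corresponds to a correlation near .50.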

Cesar Hidalgo, author of what looks like an awesome book, Why Information Grows, wrote about Criticality, a central concept in understanding complex systems:

In physics we say a system is in a critical state when it is ripe for a phase transition. Consider water turning into ice, or a cloud that is pregnant with rain. Both of these are examples of physical systems in a critical state.

The dynamics of criticality, however, are not very intuitive. Consider the abruptness of freezing water. For an outside observer, there is no difference between cold water and water that is just about to freeze. This is because water that is just about to freeze is still liquid. Yet, microscopically, cold water and water that is about to freeze are not the same.

When close to freezing, water is populated by gazillions of tiny ice crystals, crystals that are so small that water remains liquid. But this is water in a critical state, a state in which any additional freezing will result in these crystals touching each other, generating the solid mesh we know as ice. Yet, the ice crystals that formed during the transition are infinitesimal. They are just the last straw. So, freezing cannot be considered the result of these last crystals. They only represent the instability needed to trigger the transition; the real cause of the transition is the criticality of the state.

But why should anyone outside statistical physics care about criticality?

The reason is that history is full of individual narratives that maybe should be interpreted in terms of critical phenomena.

Did Rosa Parks start the civil rights movement? Or was the movement already running in the minds of those who had been promised equality and were instead handed discrimination? Was the collapse of Lehman Brothers an essential trigger for the Great Recession? Or was the financial system so critical that any disturbance could have done the trick?

As humans, we love individual narratives. We evolved to learn from stories and communicate almost exclusively in terms of them. But as Richard Feynman said repeatedly: The imagination of nature is often larger than that of man. So, maybe our obsession with individual narratives is nothing but a reflection of our limited imagination. Going forward we need to remember that systems often make individuals irrelevant. Just like none of your cells can claim to control your body, society also works in systemic ways.

So, the next time the house of cards collapses, remember to focus on why we were building a house of cards in the first place, instead of focusing on whether the last card was the queen of diamonds or a two of clubs.
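A standard toy model of criticality, which Hidalgo doesn't mention but which illustrates his point, is percolation: open each site of a grid with probability p and ask whether an open path crosses from top to bottom. Near the critical density (about 0.593 for a square grid), a tiny nudge in p flips the system from almost never crossing to almost always crossing; whichever site happens to complete the spanning path is as incidental as the last ice crystal. A minimal sketch:

```python
import random

def percolates(n, p):
    """True if open sites connect the top row to the bottom row of an n x n
    grid in which each site is independently open with probability p."""
    grid = [[random.random() < p for _ in range(n)] for _ in range(n)]
    stack = [(0, c) for c in range(n) if grid[0][c]]  # open sites in the top row
    seen = set(stack)
    while stack:
        r, c = stack.pop()
        if r == n - 1:
            return True  # reached the bottom row
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < n and 0 <= nc < n and grid[nr][nc] and (nr, nc) not in seen:
                seen.add((nr, nc))
                stack.append((nr, nc))
    return False

# The crossing probability changes abruptly around p ~ 0.593.
for p in (0.50, 0.55, 0.59, 0.63, 0.68):
    hits = sum(percolates(50, p) for _ in range(100))
    print(f"p = {p:.2f}: crosses in {hits}% of 100 trials")
```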

The psychologist Adam Alter has another good one, on a concept we all naturally miss from time to time due to the structure of our minds: the Law of Small Numbers.

In 1832, a Prussian military analyst named Carl von Clausewitz explained that “three quarters of the factors on which action in war is based are wrapped in a fog of . . . uncertainty.” The best military commanders seemed to see through this “fog of war,” predicting how their opponents would behave on the basis of limited information. Sometimes, though, even the wisest generals made mistakes, divining a signal through the fog when no such signal existed. Often, their mistake was endorsing the law of small numbers—too readily concluding that the patterns they saw in a small sample of information would also hold for a much larger sample.

Both the Allies and Axis powers fell prey to the law of small numbers during World War II. In June 1944, Germany flew several raids on London. War experts plotted the position of each bomb as it fell, and noticed one cluster near Regent’s Park, and another along the banks of the Thames. This clustering concerned them, because it implied that the German military had designed a new bomb that was more accurate than any existing bomb. In fact, the Luftwaffe was dropping bombs randomly, aiming generally at the heart of London but not at any particular location over others. What the experts had seen were clusters that occur naturally through random processes—misleading noise masquerading as a useful signal.

That same month, German commanders made a similar mistake. Anticipating the raid later known as D-Day, they assumed the Allies would attack—but they weren’t sure precisely when. Combing old military records, a weather expert named Karl Sonntag noticed that the Allies had never launched a major attack when there was even a small chance of bad weather. Late May and much of June were forecast to be cloudy and rainy, which “acted like a tranquilizer all along the chain of German command,” according to Irish journalist Cornelius Ryan. “The various headquarters were quite confident that there would be no attack in the immediate future. . . . In each case conditions had varied, but meteorologists had noted that the Allies had never attempted a landing unless the prospects of favorable weather were almost certain.” The German command was mistaken, and on Tuesday, June 6, the Allied forces launched a devastating attack amidst strong winds and rain.

The British and German forces erred because they had taken a small sample of data too seriously: The British forces had mistaken the natural clustering that comes from relatively small samples of random data for a useful signal, while the German forces had mistaken an illusory pattern from a limited set of data for evidence of an ongoing, stable military policy. To illustrate their error, imagine a fair coin tossed three times. You’ll have a one-in-four chance of turning up a string of three heads or tails, which, if you make too much of that small sample, might lead you to conclude that the coin is biased to reveal one particular outcome all or almost all of the time. If you continue to toss the fair coin, say, a thousand times, you’re far more likely to turn up a distribution that approaches five hundred heads and five hundred tails. As the sample grows, your chance of turning up an unbroken string shrinks rapidly (to roughly one-in-sixteen after five tosses; one-in-five-hundred after ten tosses; and one-in-five-hundred-thousand after twenty tosses). A string is far better evidence of bias after twenty tosses than it is after three tosses—but if you succumb to the law of small numbers, you might draw sweeping conclusions from even tiny samples of data, just as the British and Germans did about their opponents’ tactics in World War II.

Of course, the law of small numbers applies to more than military tactics. It explains the rise of stereotypes (concluding that all people with a particular trait behave the same way); the dangers of relying on a single interview when deciding among job or college applicants (concluding that interview performance is a reliable guide to job or college performance at large); and the tendency to see short-term patterns in financial stock charts when in fact short-term stock movements almost never follow predictable patterns. The solution is to pay attention not just to the pattern of data, but also to how much data you have. Small samples aren’t just limited in value; they can be counterproductive because the stories they tell are often misleading.
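Alter's arithmetic is easy to verify: the chance that n fair tosses form an unbroken string, all heads or all tails, is 2/2^n, or 1 in 2^(n-1). A quick check:

```python
# Chance that n fair coin tosses come up all heads or all tails:
# 2 / 2**n, i.e., 1 in 2**(n - 1).
for n in (3, 5, 10, 20):
    print(f"{n:2d} tosses: 1 in {2 ** (n - 1):,}")
# 3 tosses: 1 in 4; 5: 1 in 16; 10: 1 in 512; 20: 1 in 524,288 --
# matching the "one-in-four" through "one-in-five-hundred-thousand" figures.
```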

There are many, many more worth reading. Here's a great chance to build your multidisciplinary skill-set.

Friedrich Nietzsche on Making Something Worthwhile of Ourselves

Friedrich Nietzsche (1844-1900) explored many subjects; perhaps the most important was himself.

A member of our learning community directed me to the passage below, written by Richard Schacht in the introduction to Nietzsche: Human, All Too Human: A Book for Free Spirits.

​If we are to make something worthwhile of ourselves, we have to take a good hard look at ourselves. And this, for Nietzsche, means many things. It means looking at ourselves in the light of everything we can learn about the world and ourselves from the natural sciences — most emphatically including evolutionary biology, physiology and even medical science. It also means looking at ourselves in the light of everything we can learn about human life from history, from the social sciences, from the study of arts, religions, literatures, mores and other features of various cultures. It further means attending to human conduct on different levels of human interaction, to the relation between what people say and seem to think about themselves and what they do, to their reactions in different sorts of situations, and to everything about them that affords clues to what makes them tick. All of this, and more, is what Nietzsche is up to in Human, All Too Human. He is at once developing and employing the various perspectival techniques that seem to him to be relevant to the understanding of what we have come to be and what we have it in us to become. This involves gathering materials for a reinterpretation and reassessment of human life, making tentative efforts along those lines and then trying them out on other human phenomena both to put them to the test and to see what further light can be shed by doing so.

Nietzsche realized that mental models were the key not only to understanding the world but to understanding ourselves. Understanding how the world works is the key to making more effective decisions and gaining insights. However, it's through the journey of discovering these ideas that we learn about ourselves. Most of us want to skip the work, so we skim the surface of not only knowledge but ourselves.

Naval Ravikant on Reading, Happiness, Systems for Decision Making, Habits, Honesty and More

Naval Ravikant (@naval) is the CEO and co-founder of AngelList. He’s invested in more than 100 companies, including Uber, Twitter, Yammer, and many others.

Don’t worry, we’re not going to talk about early stage investing. Naval’s an incredibly deep thinker who challenges the status quo on so many things.

In this wide-ranging interview, we talk about reading, habits, decision-making, mental models, and life.

Just a heads up, this is the longest podcast I’ve ever done. While it felt like only thirty minutes, our conversation lasted over two hours!

If you’re like me, you’re going to take a lot of notes, so grab a pen and paper. I left some white space on the transcript below in case you want to take notes in the margin.

Enjoy this amazing conversation.

***

Listen

***

Books mentioned

Transcript

Normally only members of our learning community have access to transcripts; however, we wanted to make this one open to everyone. Here's the complete transcript of the interview with Naval.

Blog Posts, Book Reviews, and Abstracts: On Shallowness

We’re quite glad that you read Farnam Street, and we hope we’re always offering you a massive amount of value. (If not, email us and tell us what we can do more effectively.)

But there’s a message all of our readers should appreciate: Blog posts are not enough to generate the deep fluency you need to truly understand or get better at something. We offer a starting point, not an end point.

This goes just as well for book reviews, abstracts, CliffsNotes, and a good deal of short-form journalism.

This is a hard message for some who want a shortcut. They want the “gist” and the “high-level takeaways,” without doing the work or eating any of the broccoli. They think that’s all it takes: check out a 5-minute read, and instantly their decision making and understanding of the world will improve right-quick. Most blogs, of course, encourage this kind of shallowness, because it makes you feel that the whole thing is pretty easy.

Here’s the problem: The world is more complex than that. It doesn’t actually work this way. The nuanced detail behind every “high-level takeaway” gives you the context needed to use it in the real world: the exceptions, the edge cases, and the contradictions.

Let me give you an example.

A high-level takeaway from reading Kahneman’s Thinking, Fast and Slow would be that we are subject to something he and Amos Tversky call the Representativeness Heuristic. We create models of things in our head, and then fit our real-world experiences to the model, often over-fitting drastically. A very useful idea.

However, that’s not enough. There are so many follow-up questions: Where do we make the most mistakes? Why does our mind create these models? Where is this generally useful? What are the nuanced examples of where this tendency fails us? And so on. Just knowing about the heuristic, knowing that it exists, won't perform any work for you.

Or take the rise of the human species as laid out by Yuval Harari. It’s great to post on his theory: how myths laid the foundation for our success, how “natural” is probably a useless concept the way it’s typically used, and how biology is the great enabler.

But Harari’s book itself contains the relevant detail that fleshes all of this out. And further, his bibliography is full of resources that demand your attention to get even more backup. How did he develop that idea? You have to look to find out.

Why do all this? Because without the massive, relevant detail, your mind is built on a house of cards.

What Farnam Street and a lot of other great resources give you is something like a brief map of the territory.

Welcome to Colonial Williamsburg! Check out the re-enactors, the museum, and the theatre. Over there is the Revolutionary City. Gettysburg is 4 hours north. Washington D.C. is closer to 2.5 hours.

Great – now you have a lay of the land. Time to dig in and actually learn about the American Revolution. (This book is awesome, if you actually want to do that.)

Going back to Kahneman, one of his and Tversky’s great findings was the concept of the Availability Heuristic. Basically, the mind operates on what it has close at hand.

As Kahneman puts it, “An essential design feature of the associative machine is that it represents only activated ideas. Information that is not retrieved (even unconsciously) from memory might as well not exist. System 1 excels at constructing the best possible story that incorporates ideas currently activated, but it does not (cannot) allow for information it does not have.”

That means that in the moment of decision making, when you’re thinking hard on some complex problem you face, it’s unlikely that your mind is working all that successfully without the details. It doesn't have anything to draw on. It’d be like a chess player who read a book about great chess players, but who hadn’t actually studied all of their moves. Not very effective.

The great difficulty, of course, is that we lack the time to dig deep into everything. Opportunity costs and trade-offs are quite real.

That’s why you must develop excellent filters. What’s worth learning this deeply? We think it’s the first-principles-style mental models: the great ideas from physical systems, biological systems, and human systems. The new-new thing you’re studying is probably either A. Wrong or B. Built on one of those great ideas anyway. Farnam Street, in a way, is just a giant filtering mechanism to get you started down the hill.

But don't stop there. Don't stop at the starting line. Resolve to increase your depth and stop thinking you can have it all in 5 minutes or less. Use our stuff, and whoever else's stuff you like, as an entrée to the real thing.

(P.S. If you need to learn how to focus, check this out; if you need to learn how to read more effectively, go with this.)

Charlie Munger on Getting Rich, Wisdom, Focus, Fake Knowledge and More

“In the chronicles of American financial history,” writes David Clark in Tao of Charlie Munger: A Compilation of Quotes from Berkshire Hathaway's Vice Chairman on Life, Business, and the Pursuit of Wealth, “Charlie Munger will be seen as the proverbial enigma wrapped in a paradox—he is both a mystery and a contradiction at the same time.”

On one hand, Munger received an elite education and it shows: He went to Caltech to train as a meteorologist for the Second World War and then attended Harvard Law School, eventually opening his own law firm. That part of his success makes sense.

Yet here's a man who never took a single course in economics, business, marketing, finance, psychology or accounting, and managed to become one of the greatest, most admired, and most honorable businessmen of our age, noted by essentially all observers for the originality of his thoughts, especially about business and human behavior. You don't learn that in law school, at Harvard or anywhere else.

Bill Gates said of him: “He is truly the broadest thinker I have ever encountered.” His business partner Warren Buffett put it another way: “He comes equipped for rationality…I would say that to try and typecast Charlie in terms of any other human that I can think of, no one would fit. He's got his own mold.”

How does such an extreme result happen? How is such an original and uncommonly capable mind formed? In the case of Munger, it's clearly a combination of unusual genetics and an unusual approach to learning and life.

While we can't have his genetics, we can try to steal his approach to rationality. There's almost no limit to the amount one could learn from studying the Munger mind, so let's at least get started by running down some of his best ideas.

***

Wisdom and Circle of Competence

“Knowing what you don’t know is more useful than being brilliant.”
“Acknowledging what you don’t know is the dawning of wisdom.”

Identify your circle of competence and use your knowledge, when possible, to stay away from things you don't understand. There are no points for difficulty at work or in life.  Avoiding stupidity is easier than seeking brilliance.

Of course this relates to another of Munger's sayings, “People are trying to be smart—all I am trying to do is not to be idiotic, but it’s harder than most people think.”

And this reminds me of perhaps my favorite Mungerism of all time, the very quote that sits right beside my desk:

“It is remarkable how much long-term advantage people like us have gotten by trying to be consistently not stupid, instead of trying to be very intelligent.”

***

Divergence

“Mimicking the herd invites regression to the mean.”

Here's a simple axiom to live by: If you do what everyone else does, you're going to get the same result that everyone else gets. This means, taking out luck (good or bad), if you act average, you're going to be average. If you want to move away from average, you must diverge. You must be different. And if you want to outperform, you must be different and correct. As Munger would say, “How could it be otherwise?”

***

Know When to Fold 'Em

“Life, in part, is like a poker game, wherein you have to learn to quit sometimes when holding a much-loved hand— you must learn to handle mistakes and new facts that change the odds.”

Mistakes are an opportunity to grow. How we handle adversity is up to us. This is how we become personally antifragile.

***

False Models

Echoing Einstein, who said that “Not everything that counts can be counted, and not everything that can be counted counts,” Munger said about his and Buffett's shift to acquiring high quality businesses for Berkshire Hathaway:

“Once we’d gotten over the hurdle of recognizing that a thing could be a bargain based on quantitative measures that would have horrified Graham, we started thinking about better businesses.”

***

Being Lazy

“Sit on your ass. You’re paying less to brokers, you’re listening to less nonsense, and if it works, the tax system gives you an extra one, two, or three percentage points per annum.”

Time is the friend of the good business and the enemy of the poor one. It's also the friend of knowledge and the enemy of the new and novel. As Seneca said, “Time discovers truth.”

***

Investing is a Pari-mutuel System

“You’re looking for a mispriced gamble,” says Munger. “That’s what investing is. And you have to know enough to know whether the gamble is mispriced. That’s value investing.” At another time he added: “You should remember that good ideas are rare—when the odds are greatly in your favor, bet heavily.”

May the odds forever be in your favor. Actually, learning properly is one way you can tilt the odds in your favor.

***

Focus

When asked about his success, Munger says, “I succeeded because I have a long attention span.”

Long attention spans allow for a deep understanding of subjects. When combined with deliberate practice, focus allows you to increase your skills and get out of your rut. The Art of Focus is a divergent and correct strategy that can help you identify where the leverage points are and apply your effort toward them.

***

Fake Knowledge

“Smart people aren’t exempt from professional disasters from overconfidence.”

We're so used to outsourcing our thinking to others that we've forgotten what it's like to really understand something from all perspectives. We've forgotten just how much work that takes. The path of least resistance, however, is just a click away. Fake knowledge, which comes from reading headlines and skimming the news, seems harmless, but it's not, because it makes us overconfident. It's better to remember a simple trick: anything you're getting easily through Google or Twitter is likely to be widely known and should not be given undue weight.

However, Munger adds, “If people weren’t wrong so often, we wouldn’t be so rich.”

***

Sit Quietly

Echoing Pascal, who said some version of “All of humanity's problems stem from man's inability to sit quietly in a room alone,” Munger adds an investing twist: “It’s waiting that helps you as an investor, and a lot of people just can’t stand to wait.”

The ability to be alone with your thoughts and turn ideas over and over is rare; the “do something” syndrome affects so many of us. A perfectly reasonable deviation is to hold your ground and await more information.

***

Deal With Reality

“I think that one should recognize reality even when one doesn’t like it; indeed, especially when one doesn’t like it.”

Munger clearly learned from Joseph Tussman's wisdom. This means facing harsh truths that you have forced yourself to ignore. It means meeting the world on the world's terms, not how you wish it would be. If this causes temporary pain, so be it. “Your pain,” writes Kahlil Gibran in The Prophet, “is the breaking of the shell that encloses your understanding.”

***

There is No Free Lunch

We like quick solutions that don't require a lot of effort. We're drawn to the modern equivalent of an old hustler selling an all-curing tonic. Only the world does not work that way. Munger expands:

“There isn’t a single formula. You need to know a lot about business and human nature and the numbers…It is unreasonable to expect that there is a magic system that will do it for you.”

Acquiring knowledge is hard work. It's reading and adding to your knowledge so it compounds. It's going deep and developing fluency, something Darwin knew well.

***

Maximization/Minimization

“In business we often find that the winning system goes almost ridiculously far in maximizing and/or minimizing one or a few variables—like the discount warehouses of Costco.”

When everything is a priority, nothing is a priority. Attempting to maximize competing variables is a recipe for disaster. Picking one variable and relentlessly focusing on it is an effective strategy that diverges from the norm. It's hard to compete with businesses that have correctly identified the right variables to maximize or minimize. When you focus on one variable, you'll increase the odds that you're quick and nimble — and can respond to changes in the terrain.

***

Map and Terrain

“At Berkshire there has never been a master plan. Anyone who wanted to do it, we fired because it takes on a life of its own and doesn’t cover new reality. We want people taking into account new information.”

Plans are maps that we become attached to. Once we've told everyone there is a plan and what that plan is, especially a multi-year plan, we're psychologically more likely to hold to it, because coming out and changing it would mean admitting we were wrong. This stacks the odds against us when things change. Detailed five-year plans (which will clearly be wrong) are as disastrous as overly general five-year plans (which can never be wrong). Scrap the master plan, isolate the key variables that you need to maximize and minimize, and follow the agile path blazed by Henry Singleton and followed by Buffett and Munger.

***

The Keys to Good Government

There are three keys: honesty, effectiveness, and efficiency.

Munger says:

“In a democracy, everyone takes turns. But if you really want a lot of wisdom, it’s better to concentrate decisions and process in one person. It’s no accident that Singapore has a much better record, given where it started, than the United States. There, power was concentrated in an enormously talented person, Lee Kuan Yew, who was the Warren Buffett of Singapore.”

Lee Kuan Yew put it this way himself: “With few exceptions, democracy has not brought good government to new developing countries. . . . What Asians value may not necessarily be what Americans or Europeans value. Westerners value the freedoms and liberties of the individual. As an Asian of Chinese cultural background, my values are for a government which is honest, effective, and efficient.”

***

One Step At a Time

“Spend each day trying to be a little wiser than you were when you woke up. Discharge your duties faithfully and well. Slug it out one inch at a time, day by day. At the end of the day— if you live long enough— most people get what they deserve.”

An incremental approach to life that reminds one of the nature of compounding. There will always be someone going faster than you, but we can all learn from the Darwinian guide to overachieving your natural IQ. For this approach to be effective, you need a long axis of time as well as continuous incremental progress.
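The arithmetic behind that incrementalism is ordinary compounding (a back-of-the-envelope of ours, not Munger's):

```python
# Getting 1% wiser each day compounds to roughly 38x over a year;
# slipping 1% a day decays to almost nothing.
print(f"{1.01 ** 365:.1f}")   # ~37.8
print(f"{0.99 ** 365:.3f}")   # ~0.026
```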

***

Getting Rich

“The desire to get rich fast is pretty dangerous.” 

Getting rich is a function of being happy with what you have, spending less than you make, and time.

***

Mental Models

“Know the big ideas in the big disciplines and use them routinely— all of them, not just a few.”

Mental models are the big ideas from multiple disciplines. While most people agree these are worth knowing, they often think they can identify which models will add the most value, and in so doing they miss something important. There is a reason the “know nothing” index fund almost always beats the investors who think they “know.” Understanding this idea in greater detail will change a lot of things, including how you read. Acquiring the big ideas — without selectivity — is the way to mimic a “know nothing” index fund.

***

Know-it-alls

“I try to get rid of people who always confidently answer questions about which they don’t have any real knowledge.”

Few things have made as much of a difference in my life as systematically eliminating (and, when that's not possible, reducing the influence of) people who think they know the answer to everything.

***

Stoic Resolve

“There’s no way that you can live an adequate life without many mistakes. In fact, one trick in life is to get so you can handle mistakes. Failure to handle psychological denial is a common way for people to go broke.”

While we all make mistakes, it's how we respond to failure that defines us.

***

Thinking

“We all are learning, modifying, or destroying ideas all the time. Rapid destruction of your ideas when the time is right is one of the most valuable qualities you can acquire. You must force yourself to consider arguments on the other side.”

“It’s bad to have an opinion you’re proud of if you can’t state the arguments for the other side better than your opponents. This is a great mental discipline.”

Thinking is a lot of work. “My first thought,” William Deresiewicz said in one of my favorite speeches, “is never my best thought. My first thought is always someone else’s; it’s always what I’ve already heard about the subject, always the conventional wisdom.”

***

Choose Your Associates Wisely

“Oh, it’s just so useful dealing with people you can trust and getting all the others the hell out of your life. It ought to be taught as a catechism. . . . But wise people want to avoid other people who are just total rat poison, and there are a lot of them.”

No comment needed there.

***

Complement Tao of Charlie Munger with this excellent Peter Bevelin Interview.

The Green Lumber Fallacy: The Difference between Talking and Doing

“Clearly, it is unrigorous to equate skills at doing with skills at talking.”
— Nassim Taleb

***

Before we get to the meat, let's review an elementary idea in biology that will be relevant to our discussion.

If you're familiar with evolutionary theory, you know that populations of organisms are constantly subjected to “selection pressures” — the rigors of their environment which lead to certain traits being favored and passed down to their offspring and others being thrown into the evolutionary dustbin.

Biologists dub these advantages in reproduction “fitness” — as in, the famous lengthening of giraffe necks gave them greater “fitness” in their environment because it helped them reach high, untouched leaves.

Fitness is generally a relative concept: Since organisms must compete for scarce resources, fitness is measured as the reproductive advantage one organism has over another.

Just as well, a trait that might provide great fitness in one environment may be useless or even disadvantageous in another. (Imagine draining a pond: Any fitness advantage held by a really incredible fish becomes instantly worthless without water.) Traits also relate to circumstance: An advantage at one time could be a disadvantage at another, and vice versa.

This makes fitness an all-important concept in biology: Traits are selected for if they provide fitness to the organism within a given environment.

Got it? OK, let's get back to the practical world.

***

The Black Swan thinker Nassim Taleb has an interesting take on fitness and selection in the real world:  People who are good “doers” and people who are good “talkers” are often selected for different traits. Be careful not to mix them up.

In his book Antifragile, Taleb uses this idea to invoke a heuristic he'd once used when hiring traders on Wall Street:

The more interesting their conversation, the more cultured they are, the more they will be trapped into thinking that they are effective at what they are doing in real business (something psychologists call the halo effect, the mistake of thinking that skills in, say, skiing translate unfailingly into skills in managing a pottery workshop or a bank department, or that a good chess player would be a good strategist in real life).

Clearly, it is unrigorous to equate skills at doing with skills at talking. My experience of good practitioners is that they can be totally incomprehensible–they do not have to put much energy into turning their insights and internal coherence into elegant style and narratives. Entrepreneurs are selected to be doers, not thinkers, and doers do, they don't talk, and it would be unfair, wrong, and downright insulting to measure them in the talk department.

In other words, the selection pressures on an entrepreneur are very different from those on a corporate manager or bureaucrat: Entrepreneurs and risk takers succeed or fail not so much on their ability to talk, explain, and rationalize as their ability to get things done.

While the two can often go together, Nassim figured out that they frequently don't. We judge people as ignorant when it's really us who are ignorant.

When you think about it, there's no a priori reason great intellectualizing and great doing must go together: Being able to hack together an incredible piece of code gives you great fitness in the world of software development, while doing great theoretical computer science probably gives you better fitness in academia. The two skills don't have to be connected. Great economists don't usually make great investors.

But we often confuse the two realms. We're tempted to think that a great investor must be fluent in behavioral economics or a great CEO fluent in McKinsey-esque management narratives, but in the real world, we see this intuition constantly violated.

The investor Walter Schloss worked from 9 to 5, barely left his office, and wasn't considered an exceptionally high-IQ man, but he compiled one of the great investment records of all time. A young Mark Zuckerberg could hardly be described as a prototypical manager or businessperson, yet he somehow built one of the most profitable companies in the world by finding others who complemented his weaknesses.

There are a thousand examples: Our narratives about the type of knowledge or experience we must have or the type of people we must be in order to become successful are often quite wrong; in fact, they border on naive. We think people who talk well can do well, and vice versa. This is simply not always so.

We won't claim that great doers cannot be great talkers, rationalizers, or intellectuals. Sometimes they are. But if you're seeking to understand the world properly, it's good to understand that the two traits are not always co-located. Success, especially in some “narrow” area like plumbing, programming, trading, or marketing, is often achieved by rather non-intellectual folks. Their evolutionary fitness doesn't come from the ability to talk, but do. This is part of reality.

***

Taleb calls this idea the Green Lumber Fallacy, after a story in the book What I Learned Losing a Million Dollars. Taleb describes it in Antifragile:

In one of the rare noncharlatanic books in finance, descriptively called What I Learned Losing a Million Dollars, the protagonist makes a big discovery. He remarks that a fellow named Joe Siegel, one of the most successful traders in a commodity called “green lumber,” actually thought it was lumber painted green (rather than freshly cut lumber, called green because it had not been dried). And he made it his profession to trade the stuff! Meanwhile the narrator was into grand intellectual theories and narratives of what caused the price of commodities to move and went bust.

It is not just that the successful expert on lumber was ignorant of central matters like the designation “green.” He also knew things about lumber that nonexperts think are unimportant. People we call ignorant might not be ignorant.

The fact that predicting the order flow in lumber and the usual narrative had little to do with the details one would assume from the outside are important. People who do things in the field are not subjected to a set exam; they are selected in the most non-narrative manner — nice arguments don't make much difference. Evolution does not rely on narratives, humans do. Evolution does not need a word for the color blue.

So let us call the green lumber fallacy the situation in which one mistakes a source of visible knowledge — the greenness of lumber — for another, less visible from the outside, less tractable, less narratable.

The main takeaway is that the real causative factors of success are often hidden from us. We think that knowing the intricacies of green lumber is more important than keeping a close eye on the order flow. We seduce ourselves into overestimating the impact of our intellectualism and then wonder why “idiots” are getting ahead. (Probably hustle and competence.)

But for “skin in the game” operations, selection and evolution don't care about great talk and ideas unless they translate into results. They care what you do with the thing more than that you know the thing. They care about actually avoiding risk rather than your extensive knowledge of risk management theories. (Of course, in many areas of modernity there is no skin in the game, so talking and rationalizing can be and frequently are selected for.)

As Taleb did with his hiring heuristic, this should teach us to be a little skeptical of taking good talkers at face value, and to be a little skeptical when we see “unexplainable” success in someone we consider “not as smart.” There might be a disconnect we're not seeing because we're seduced by narrative. (A problem someone like Lee Kuan Yew avoided by focusing exclusively on what worked.)

And we don't have to give up our intellectual pursuits in order to appreciate this nugget of wisdom; Taleb is right, but it's also true that combining the rigorous, skeptical knowledge of “what actually works” with an ever-improving theory structure of the world might be the best combination of all — selected for in many more environments than simple git-er-done ability, which can be extremely domain and environment dependent. (The green lumber guy might not have been much good outside the trading room.)

After all, Taleb himself was both a successful trader and the highest level of intellectual. Even he can't resist a little theorizing.