All Models Are Wrong

How is your journey towards understanding Farnam Street’s latticework of mental models going? Is it proving useful? Changing your view of the world? If the answer is that it’s going well, that’s good. There’s just one tiny hitch.

All models are wrong.

Yep. It's the truth. However, there is another part to that statement:

All models are wrong, some are useful.

Those words come from the British statistician George Box. In a groundbreaking 1976 paper, Box revealed the fallacy of our desire to categorize and organize the world. We create models (a term with many applications), only to confuse them with reality.

Box also stated:

Remember that all models are wrong; the practical question is how wrong do they have to be to not be useful.

What Exactly Is A Model?

First, we should understand precisely what a model is.

The dictionary definition states a model is ‘a representation, generally in miniature, to show the construction or appearance of something’ or ‘a simplified description, especially a mathematical one, of a system or process, to assist calculations and predictions.’

For our purposes here, we are better served by the second definition. A model is a simplification which fosters understanding.

Think of an architectural model. These are typically small-scale models of buildings, made before construction begins. Their purpose is to show what the building will look like and to help people working on the project develop a clear picture of the overall feel. In the iconic scene from Zoolander, Derek (played by Ben Stiller) looks at the architectural model of his proposed ‘school for kids who can’t read good’ and shouts “What is this? A center for ants??”

That scene illustrates the wrong way to understand models: too literally.

Why We Use Models, and Why They Work

At Farnam Street, we believe in using models to build a massive but finite amount of fundamental, invariant knowledge about how the world really works. Applying this knowledge is the key to making good decisions and avoiding stupidity.

“Scientists generally agree that no theory is 100 percent correct. Thus, the real test of knowledge is not truth, but utility. Science gives us power. The more useful that power, the better the science.”

— Yuval Noah Harari

Time-tested models allow us to understand how things work in the real world. And understanding how things work prepares us to make better decisions without expending too much mental energy in the process.

Instead of relying on fickle and specialized facts, we can learn versatile concepts. The mental models we cover are intended to be widely applicable.

It's crucial for us to understand as many mental models as possible. As the adage goes, a little knowledge can be dangerous, sometimes creating more problems than total ignorance. No single model is universally applicable – we find exceptions for nearly everything. Even hardcore physics has not been totally solved.

“The basic trouble, you see, is that people think that “right” and “wrong” are absolute; that everything that isn't perfectly and completely right is totally and equally wrong.”

— Isaac Asimov

Take a look at almost any comment section on the internet and you are guaranteed to find at least one pedant raging about a minor perceived inaccuracy, throwing out the good with the bad. While ignorance and misinformation are certainly not laudable, neither is an obsession with perfection.

Like heuristics, models work because they are usually helpful in most situations, not because they are always helpful in a small number of situations.

Models can assist us in making predictions and forecasting the future. Forecasts are never guaranteed, yet they give us a degree of preparedness and comprehension of the future. For example, a weather forecast that predicts rain today may be wrong. Still, it's correct often enough to let us plan appropriately and bring an umbrella.
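The umbrella example can be made concrete with a toy expected-cost calculation. Every number below (the cost of carrying an umbrella, the misery of getting soaked, the rain rate, the forecast's accuracy) is invented purely for illustration; the point is only that a model which is wrong some of the time can still beat having no model at all.

```python
# Toy illustration: an imperfect model can still be useful.
# All numbers here are made up for the sake of the example.

CARRY_COST = 1      # minor hassle of carrying an umbrella all day
SOAKED_COST = 10    # misery of being caught in the rain without one
RAIN_RATE = 0.3     # base rate of rainy days
ACCURACY = 0.8      # the forecast is right 80% of the time

def expected_cost(follow_forecast: bool) -> float:
    """Average daily cost over rainy/dry days and correct/wrong forecasts."""
    total = 0.0
    for rains, p_rain in [(True, RAIN_RATE), (False, 1 - RAIN_RATE)]:
        for correct, p_correct in [(True, ACCURACY), (False, 1 - ACCURACY)]:
            predicted_rain = rains if correct else not rains
            carry = predicted_rain if follow_forecast else False
            cost = (CARRY_COST if carry else 0) + \
                   (SOAKED_COST if rains and not carry else 0)
            total += p_rain * p_correct * cost
    return total

print(expected_cost(follow_forecast=True))   # lower expected cost
print(expected_cost(follow_forecast=False))  # higher: rain catches you unprepared
```

With these arbitrary numbers, heeding the 80%-accurate forecast cuts the expected daily cost to roughly a third of ignoring it, which is exactly Box's point: wrong, but useful.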

Mental Models and Minimum Viable Products

Think of mental models as minimum viable products.

Sure, all of them can be improved. But the only way that can happen is if we try them out, educate ourselves and collectively refine them.

We can apply one of our mental models, Occam’s razor, to this. Occam’s razor states that the simplest solution is usually correct. In the same way, our simplest mental models tend to be the most useful. This is because there is minimal room for errors and misapplication.

“The world doesn’t have the luxury of waiting for complete answers before it takes action.”

— Daniel Gilbert

Your kitchen knives are not as sharp as they could be. Does that matter as long as they still cut vegetables? Your bed is not as comfortable as it could be. Does that matter if you can still get a good night’s sleep in it? Your internet is not as fast as it could be. Does that matter as long as you can load this article? Arguably not. Our world runs on the functional, not the perfect. This is what a mental model is – a functional tool. A tool which maybe could be a bit sharper or easier to use, but still does the job.

The statistician David Hand made the following statement in 2014:

In general, when building statistical models, we must not forget that the aim is to understand something about the real world. Or predict, choose an action, make a decision, summarize evidence, and so on, but always about the real world, not an abstract mathematical world: our models are not the reality.

For example, in 1960, Georg Rasch said the following:

When you construct a model you leave out all the details which you, with the knowledge at your disposal, consider inessential…. Models should not be true, but it is important that they are applicable, and whether they are applicable for any given purpose must, of course, be investigated. This also means that a model is never accepted finally, only on trial.

Imagine a world where physics-like precision is prized over usefulness.

We would lack medical care because a medicine or procedure can never be perfect. In a world like this, we would possess little scientific knowledge, because research can never be 100% accurate. We would have no art because a work can never be completed. We would have no technology because there are always little flaws which can be ironed out.

“A model is a simplification or approximation of reality and hence will not reflect all of reality … While a model can never be “truth,” a model might be ranked from very useful, to useful, to somewhat useful to, finally, essentially useless.”

— Ken Burnham and David Anderson

In short, we would have nothing. Everything around us is imperfect and uncertain. Some things are more imperfect than others, but issues are always there. Over time, incremental improvements happen through unending experimentation and research.

The Map is Not the Territory

As we know, the map is not the territory. A map can be seen as a symbol or index of a place, not an icon.

When we look at a map of Paris, we know it is a representation of the actual city. There are bound to be flaws: streets which have been renamed, demolished buildings, perhaps a new Metro line. Even so, the map will help us find our way. It is far more useful to have a map showing the way from Notre Dame to Gare du Nord (a tool) than to know how many meters they are apart (a piece of trivia).

Someone who has spent a lot of time studying a map will be able to use it with greater ease, just as with a mental model. Someone who lives in Paris will find the map easier to understand than a tourist, just as someone who uses a mental model in their day-to-day life will apply it better than a novice. As long as there are no major errors, we can consider the map useful, even if it is by no means a reflection of reality. Gregory Bateson writes in Steps to an Ecology of Mind that the purpose of a map is not to be true, but to have a structure which represents truth within the current context.

“A map is not the territory it represents, but, if correct, it has a similar structure to the territory, which accounts for its usefulness.”

— Alfred Korzybski

Physical maps generally become more accurate as time passes. Not long ago, they often included countries which didn’t exist, omitted some which did, portrayed the world as flat or fudged distances. Nowadays, our maps have come a long way.

The same goes for mental models – they are always evolving, being revised – never really achieving perfection. Certainly, over time, the best models are revised only slightly, but we must never consider our knowledge “set”.

Another factor to consider when using models is what they're used for.

Many mental models (e.g. entropy, critical mass, and activation energy) are based upon scientific and mathematical concepts. A person who works in those areas will obviously need a deeper understanding of them than someone who wants to learn to think better when making investment decisions. They will need a different, more detailed map, one showing elements which the rest of us have no need for.

“A model which took account of all the variation of reality would be of no more use than a map at the scale of one to one.”

— Joan Robinson

In Partial Enchantments of the Quixote, Jorge Luis Borges provides an even more interesting analysis of the confusion between models and reality:

Let us imagine that a portion of the soil of England has been leveled off perfectly and that on it a cartographer traces a map of England. The job is perfect; there is no detail of the soil of England, no matter how minute, that is not registered on the map; everything has there its correspondence. This map, in such a case, should contain a map of the map, which should contain a map of the map of the map, and so on to infinity. Why does it disturb us that the map be included in the map and the thousand and one nights in the book of the Thousand and One Nights? Why does it disturb us that Don Quixote be a reader of the Quixote and Hamlet a spectator of Hamlet? I believe I have found the reason: these inversions suggest that if the characters of a fictional work can be readers or spectators, we, its readers or spectators, can be fictions.

How Do We Know If A Model Is Useful?

This is a tricky question to answer. When looking at any model, it is helpful to ask some of the following questions:

  • How long has this model been around? As a general rule, mental models which have been around for a long time (such as Occam’s razor) will have been subjected to a great deal of scrutiny. Time is an excellent curator, trimming away inefficient ideas. A mental model which is new may not be particularly refined or versatile. Many of our mental models originate from Ancient Greece and Rome, meaning they must have been functional to survive this long.
  • Is it a representation of reality? In other words, does it reflect the real world? Or is it based on abstractions?
  • Does this model apply to multiple areas? The more elastic a model is, the more valuable it is to learn about. (Of course, be careful not to apply the model where it doesn't belong. Mind Feynman: “You must not fool yourself, and you're the easiest person to fool.”)
  • How did this model originate? Many mental models arise from scientific or mathematical concepts. The more fundamental the domain, the more likely the model is to be true and lasting.
  • Is it based on first principles? A first principle is a foundational concept which cannot be deduced from any other concept and must be known.
  • Does it require infinite regress? Infinite regress refers to something which is justified by principles, which themselves require justification by other principles. A model based on infinite regress is likely to require extensive knowledge of a particular topic, and have minimal real-world application.

When using any mental model, we must avoid becoming too rigid. There are exceptions to all of them, and situations in which they are not applicable.

Think of the latticework as a toolkit. That's why it pays to do the work up front to put so many of them in your toolbox at a deep, deep level. If you only have one or two, you're likely to attempt to use them in places that don't make sense. If you've absorbed them only lightly, you will not be able to use them when the time is at hand.

If on the other hand, you have a toolbox full of them and they're sunk in deep, you're more likely to pull out the best ones for the job exactly when they are needed.

Too many people are caught up wasting time on physics-like precision in areas of practical life that do not have such precision available. A better approach is to ask “Is it useful?” and, if yes, “To what extent?”

Mental models are a way of thinking about the world that prepares us to make good decisions in the first place.

Mutually Assured Destruction — What Have We Done?

“They’ll take an eye for an eye until the whole world can’t see
We must stumble forward blind, repeating history.”
— Conor Oberst


The History of Mutually Assured Destruction

On the day in 1945 that the B-29 Superfortress he co-piloted dropped the first atomic bomb on Hiroshima, Robert A. Lewis wrote six agonizingly poignant words in his log book: “My God, what have we done?”

What exactly had he done? His question is more complex than it seems.

If we look at that act in the literal sense, Lewis had just dropped ‘Little Boy’, the first of two atomic bombs which together killed an estimated 129,000+ people. An accurate death toll is impossible to establish. The act was ordered by President Truman at the end of World War 2, to force Japan’s surrender without a land invasion and create ‘peace.’

Less than a week after the bombing of Nagasaki, Japan surrendered and World War 2 ended a short time later.

Nuclear fission was first discovered in 1938, and scientists soon theorized that the development of atomic bombs was plausible. After hearing of Nazi plans to develop nuclear weapons, the US began its own research projects.

The Manhattan Project was set up and researchers developed two types of atomic bomb. When Japan refused to surrender, the decision was made to use the new weapon on two major cities. It achieved the desired effect and the war was finally over.

In his declaration to the Japanese people upon the topic of surrender, Emperor Hirohito stated:

The enemy now possesses a new and terrible weapon with the power to destroy many innocent lives and do incalculable damage. Should we continue to fight, not only would it result in an ultimate collapse and obliteration of the Japanese nation, but also it would lead to the total extinction of human civilization.

Controversy is still prevalent (and doubtless always will be) as to the justification of the bombing. Many scholars debate whether it caused or prevented more deaths. However, the big question is not ‘what have we done?’ but ‘what will we do?’ Dropping the first atomic bomb did not just open a macabre Pandora’s box; it also forced humanity to confront the possibility that we will destroy ourselves, and this entire planet, in the process of settling disputes between nations.

As of 2016, 174,000 survivors of the bombings are still alive, living with the physical, psychological, and social consequences. The after-effects spread far beyond Japan; indeed, the ripples of the first use of nuclear weapons affect us all, even if not in an obvious way.

We have a strange tendency to forget that we are all – every human on earth – in this together. As it stands — for now, Elon — Earth is the only planet we have to live on. We can segregate it by national borders, but these are man-made ideas, not physical separations. This is where the concept of mutually assured destruction comes in. It happens to be one of the few instances where we can all agree on something: we must not wipe ourselves out and destroy our planet.

Why The Concept of Mutually Assured Destruction Matters

The concept of mutually assured destruction was first described by Wilkie Collins, a 19th-century English author. In a letter written at the time of the Franco-Prussian War, over 70 years before the first atomic bomb was dropped, Collins wrote:

I am, like the rest of my countrymen, heartily on the German side in the War. But what is to be said of the progress of humanity? Here are the nations still ready to slaughter each other, at the command of one miserable wretch whose interest is to set them fighting! Is this the nineteenth century? or the ninth? Are we before the time of Christ or after? I begin to believe in only one civilizing influence – the discovery one of these days, of a destructive agent so terrible that War shall mean annihilation, and men’s fears shall force them to keep the peace.

It seems that Collins was very much ahead of his time.

Alfred Nobel (founder of the Nobel Prize and the inventor of dynamite) recognized this too, saying:

The day when two army corps can annihilate each other in one second, all civilized nations, it is to be hoped, will recoil from war and discharge their troops.

For most of human history, combat was hand to hand. People fought face to face, using swords, knives, bayonets, clubs, and other handheld weapons.

Humans have always fought each other with a viciousness largely unique to our species. Archaeological digs have uncovered evidence of genocides as far back as 5000 BC. As our intelligence and technology have advanced, so has our capacity to kill each other in large numbers.

Warren Buffett pointed this out in a CNN interview.

You know, thousands of years ago we had psychotics and we had religious fanatics and we had megalomaniacs. But about the most they could do was throw a stone at somebody if they wished evil on them. Today, since 1945, the ability to inflict evil, or harm, on other people in huge numbers has grown exponentially.

Nuclear weapons are the culmination of this progress towards methods of wiping out huge numbers of people with minimal effort.

After the US dropped the first atomic bombs on Japan, other countries raced to develop their own. The USSR had hydrogen bombs within eight years. Both developed their technology to the point where either had the ability to destroy virtually the entire world if its leaders chose to. No nation had ever held that kind of power before.

By the 1960s, the concept of mutually assured destruction (hereafter referred to as MAD) had crystallized. Both the US and the USSR could bring about the end of humanity (including themselves), but neither wanted to. This led to a stalemate, essentially stating ‘I won’t if you don’t.’ For either to attack would mean their own destruction, defeating the purpose of war. Ironically enough, the concept of MAD has led to relative peace between countries with nuclear capabilities. Tension is still prevalent, as each must keep up with the developments of the other to maintain parity.
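The stalemate can be sketched as a tiny game-theory payoff table. The utilities below are invented for illustration; the only structural assumption is the one MAD rests on, that retaliation is assured, so any first strike ends in mutual destruction.

```python
# Toy payoff model of the MAD stalemate (utilities are made up).
STRIKE, HOLD = "strike", "hold"
MUTUAL_DESTRUCTION = -100   # both sides annihilated
STATUS_QUO = 0              # tense, but intact

def payoff(us: str, them: str) -> int:
    """Payoff to 'us', assuming any strike triggers assured retaliation."""
    if STRIKE in (us, them):
        return MUTUAL_DESTRUCTION
    return STATUS_QUO

# Whatever the other side does, holding is never worse than striking,
# so 'I won't if you don't' is a stable resting point.
for them in (STRIKE, HOLD):
    assert payoff(HOLD, them) >= payoff(STRIKE, them)
```

The fragility of the doctrine follows from the same sketch: the moment retaliation is not assured, or one actor stops caring about its own payoff, the table no longer describes the real game.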

During the Cold War, MAD was probably responsible for the lack of serious conflict between the US and USSR. The US kept a fleet of airplanes airborne non-stop, ready to drop nuclear bombs on the USSR at a moment’s notice, should they strike first. Even if the USSR managed to destroy the entire US, the US would still be able to retaliate using those airplanes. However, airplanes were logistically and financially difficult to sustain, and the US began to look for alternatives. Ballistic missile submarines were adopted as a solution; such submarines are also operated by the UK, France, China, India, and Russia. While world peace is certainly a long way off, this nuclear fleet provides a semblance of global stability.

The Key Components of Mutually Assured Destruction

There are several key components of the doctrine of MAD:

  • Both sides in a conflict must have the capacity to completely destroy the other. Any inequality in their power has the potential to tip the balance. The US and USSR have since developed more nuclear technology – guided missile systems and weapons stationed around the globe in submarines. Neither side can have enough nuclear shelters to protect substantial numbers of people in the event of an attack. If one side can inflict a degree of destruction which would prevent a counterattack, the concept of MAD is not applicable.
  • Both sides must have a genuine reason and motivation to believe that the other would be willing to destroy them. Any doubt in this area is dangerous.
  • Both sides must be able to detect attacks with perfect accuracy. This necessitates the ability to know when a nuclear attack has occurred, without any errors. If one side uses stealth detonation (such as bombs smuggled into a country), MAD is not assured.
  • Both sides must know exactly where a threat originates from. One serious problem is the border between China and Russia, both of which have nuclear weapons. Parts of China actually protrude into Russia, which could lead to complications as one could make it appear as if an attack originated from the other.
  • Both sides must act rationally (in short, all those with power must be able to act like adults and take the concept of MAD seriously.) A rogue leader with a great deal of power and a disregard for human life beyond their own would have the potential to start a nuclear war. A chilling fact is that this came close to happening during the Cuban Missile Crisis when a lone submarine commander attempted to detonate a nuclear missile. That single act of insanity might have easily meant that you could not be reading this right now. Carl Sagan sums this up: “The nuclear arms race is like two sworn enemies standing waist-deep in gasoline, one with three matches and the other with five…every thinking person fears nuclear war and every technological state plans for it. Everyone knows it is madness and every nation has an excuse.”

As that list shows, the concept is somewhat fragile and requires constant vigilance and innovation to maintain. There is also the ever-present risk of accidental or terrorist detonation.

The Potential Consequences of Mutually Assured Destruction

Thankfully, we do not have a clear picture of the potential consequences of MAD playing out. We do have an idea of what they would be – a nuclear apocalypse. Although the concept of a nuclear apocalypse has become almost comical due to its prevalence in science fiction, it is important to understand that it remains a very real possibility.

Should a nuclear war break out, it is believed that the result would be the collapse of human civilization, as much or all of the planet becomes unsuitable for life, cities are erased and technology becomes unusable.

Any people who managed to survive the initial blasts would have a poor chance of long-term survival due to firestorms, a possible nuclear winter, and the effects of radiation. Combine that with famine and a lack of law enforcement or medical care for survivors, and the outlook would be bleak, to say the least. Substantial numbers of people could survive a global nuclear war, but it is unlikely that we would be able to overcome the secondary impact. In any case, what would human life be without any economic or political structure, or any of the other important institutions we take for granted?

Once again, it is difficult, yet important, to take this concept seriously precisely because it seems so unrealistic. When we take into account the fact that at least 15,000 nuclear weapons are held worldwide, the chances are higher than we imagine.

In Bomb, Steve Sheinkin writes:

“Consider this. A study published in Scientific American in 2010 looked at the probable impact of a “small” nuclear war, one where India and Pakistan each dropped fifty atomic bombs. The scientists concluded that the explosions would ignite massive firestorms, sending enormous amounts of dust and smoke into the atmosphere. This would block some of the sun's light from reaching the earth, making the planet colder and darker – for about ten years. Farming would collapse, and people all over the globe would starve to death. And that's if only half of one percent of all the atomic bombs on earth were used.

In the end, this is a difficult story to sum up. The making of the atomic bomb is one of history's most amazing examples of teamwork and genius and poise under pressure. But it's also the story of how humans created a weapon capable of wiping our species off the planet. It's a story with no end in sight. And, like it or not, you are in it.”


The mental model of mutually assured destruction can be particularly useful outside of warfare.


Ask yourself, “Where are we competing where everyone is effectively the same?” and “What would happen if we got into a heated price war … could we destroy our competitor? Would they destroy us?” Ideally, you can avoid these situations but if you can't you should, at least, be aware of them.

You can learn a lot just from identifying the situations where you're in a version of mutually assured destruction. For the model to work as a thinking tool you don't have to go as far as destruction. You can think in terms of ability for competitors to cause you pain or for you to cause them pain.


One of the ways we understand trust is through mutually assured destruction.

Consider two people contemplating a business deal who go out for a night of heavy drinking. After a lot of drinks, they end up cheating on their partners with each other. Each knows they can cause harm to the other, and thus trust between them may be increased.

Another example is that of two businesses engaged in tax fraud together. Either could rat the other out, knowing full well that the other would then do the same.

The list goes on and on.

In the end, while you should try to avoid situations of mutually assured destruction, they can promote good behavior between parties. However, it takes only one party to start a chain reaction, usually with catastrophic outcomes.

Mutually assured destruction is part of the Farnam Street latticework of mental models.

People Don’t Follow Titles: Necessity and Sufficiency in Leadership

“Colonel Graff: You have a habit of upsetting your commander.
Ender Wiggin: I find it hard to respect someone just because they outrank me, sir.”
— Orson Scott Card


Many leaders confuse necessary conditions for leadership with sufficient ones.

Titles come with the assumption that people will follow you simply because you hold one. Whether by election, appointment, or divine right, at some point you were officially put in the position. But leadership is based on more than titles.

Title-based leaders not only assume that once they get the title everyone will fall in line; they also feel they are leading merely because they are in charge, a violation of the golden rules of leadership. This makes them toxic to organizational culture.

A necessary condition for leadership is trust, which doesn't come from titles. You have to earn it.


Necessary conditions are those that must be present, but are not, on their own, enough for achievement.

Perhaps an easy example will help illuminate the distinction. Swinging at a pitch in baseball is necessary to hit the ball, but not sufficient to do so.
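The swing-and-hit example can be expressed as a small logical check. The at-bats below are hypothetical, and the definitions are the standard ones: a condition is necessary if the outcome never occurs without it, and sufficient if the condition always produces the outcome.

```python
# Necessary vs. sufficient, checked over a set of hypothetical observations.

def necessary(cases, condition, outcome):
    """True if the outcome never occurs without the condition."""
    return all(condition(c) for c in cases if outcome(c))

def sufficient(cases, condition, outcome):
    """True if the condition always produces the outcome."""
    return all(outcome(c) for c in cases if condition(c))

# Hypothetical at-bats, recorded as (swung, hit):
at_bats = [(True, True), (True, False), (False, False)]
swung = lambda ab: ab[0]
hit = lambda ab: ab[1]

print(necessary(at_bats, swung, hit))   # True: no hit without a swing
print(sufficient(at_bats, swung, hit))  # False: swinging doesn't guarantee a hit
```

The same check applies to leadership: a title may pass the necessity test while failing the sufficiency test badly.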

War offers another example. It's necessary to know the capabilities of your enemy and their positions, but that is not sufficient to win a battle.

Leadership can be very similar. Being in a position of leadership is necessary to lead an organization, but that is not sufficient to get people moving towards a common goal. Titles, on their own, do not confer legitimacy. And legitimacy is one of the sufficient conditions of leadership.

If your team, organization, or country doesn't view you as legitimate, you will have a hard time getting anything done, because they won’t work for you and you can’t do it all yourself. Leadership without legitimacy is a case of multiplying by zero.

There is a wonderful example of this, from the interesting history of the Mongolians. In his book The Secret History of the Mongol Queens, Jack Weatherford tells an amazing story of the unlikely, but immensely successful, leadership of Manduhai the Wise.

250 years after Genghis Khan, the empire was in fragments. The Mongols had retreated into their various tribes, often fighting each other and nominally ruled by outsiders from China and the Middle East. There was still a Khan, but he exercised no real power. The Mongol tribes were very much at the mercy of their neighbors.

In 1470 the sitting Khan died, survived only by a junior wife. There were immediate suitors vying for her affection because by marrying her the title of Khan could be claimed. Her name was Manduhai. Instead of choosing the easy path of remarriage and an alliance, she decided to pursue her dream of uniting the Mongol nation.

First, she had to choose a consort that would allow her to keep the title of Queen. There was one remaining legitimate survivor of Genghis Khan’s bloodline – a sickly 7-year-old boy. Orphaned as a baby and neglected by his first caregiver, he had been under Manduhai’s protection for a few years. Because of his lineage, she took him to the Shrine of the First Queen and asked for divine blessings in installing him as the Great Khan. They would rule together, but clearly, due to his age and condition, she would be in charge.

Although her words would be addressed to the shrine, and she would face away from the crowd, there could be no question that, in addition to being the spiritual outcry of a pilgrim, these words constituted a desperate plea of a queen to her people. This would be the most important political speech of her life.

She was successful in securing the appointment. But Manduhai understood that the title of Great Khan for the little boy and Khatun (Queen) for her would not be enough. She needed the support of all the Mongol tribes to give the titles legitimacy, and here there were a significant number of obstacles to overcome.

Twice before in the previous generations, boys of his age had been proclaimed Great Khan, only to be murdered by their rivals before they could reach full maturity. Other fully grown men who bore the title were also ignominiously struck down and killed by the Muslim warlords who tried to control them.

First Manduhai had to keep herself and the boy, Dayan Khan, alive. Then she had to demonstrate that they were the right people to unite the Mongol tribes and ensure prosperity for all. This would take both physical battles and a strategic understanding of how to employ little power for great effect. Her success was by no means guaranteed.

Throughout their reign, as on this awkward inaugural day, they frequently benefited from the underestimation of their abilities by those who struggled against them. In the world where physical strength and mastery of the horse and bow seemed to be all that really mattered, no one seemed to anticipate the advantages of patient intelligence, careful planning, and consistency of action.

It was these traits that led Manduhai to carefully craft her plan of action. She needed to position herself as a true leader that could unite the Mongol tribes.

Vows, prayers, and rituals before a shrine added much-needed sacred legitimacy to Dayan Khan’s rule, but without force of arms, they amounted to empty gestures and wasted breath. Only after demonstrating that she had the skill to win, as well as the supernatural blessing to do so, could Manduhai hope to rule the Mongols. She had enemies on every side, and she needed to choose her first battle carefully. She had to confront each enemy, but she had to confront each in its own due time. Manduhai needed to manage the flow of conflicts by deciding when and where to fight and not allowing others to force her into a war for which she was not prepared or stood little chance of winning.

She made an important strategic alliance with one of the failed suitors, a popular and intelligent general who controlled the area immediately east of her power base. Then she went to battle to secure her western front. Some tribes supported her from the outset, due to the spiritual power of her partnership with the boy, the ‘true Khan’. The rest she conquered, support snowballing behind her.

In addition to its strategic importance, the western campaign against the Oirat was a notable propaganda victory, demonstrating that Manduhai had the blessing of the Shrine of the First Queen and the Eternal Blue Sky. Manduhai showed that she was in control of her country.

Grinding it out in the trenches inspired support. Manduhai demonstrated the courage and intelligence to lead and to provide what her people needed. She was not an empire builder, seeking to conquer the world. Rather, she was pragmatic, desiring to unify the Mongol nation to ensure it had the means to thwart any future attempt at takeover by a foreign power.

In contrast to the expansive territorial acquisition favored by prior generations of steppe conquerors, Manduhai pursued a strategy of geographic precision. Better to control the right spot rather than be responsible for conquering, organizing, and running a massive empire of reluctant subjects. … Rather than trying to conquer and occupy the extensive links of the Silk Route or the vast expanse of China, she sought to conquer just the strategic spot from which to control them.

Her story teaches us the difference between necessity and sufficiency when it comes to leadership.

Manduhai ticked all the necessary boxes: she was a queen, she chose a descendant of Genghis Khan to rule by her side, and she sought meaningful spiritual blessings. While necessary, these were not sufficient to rule. To actually be accepted as a leader, she had to prove herself both on the battlefield and in strategic negotiations. She understood that people would only follow her if they believed in her and saw that she was working for them. And finally, she also considered how to use her leadership to create something that would continue long after she had gone.

Manduhai spent the remainder of her life protecting what she had accomplished and making certain that the nation could sustain itself after her departure. With the same assiduous devotion she had applied to the battlefield and to the unification of the Mongol nation, Manduhai and Dayan Khan now set about reorganizing the Mongol government and securing its future.

In this, she succeeded. She cemented her power as Queen by ultimately working for the peace and prosperity of the entire Mongol nation. Perhaps this is why she is remembered by them as Mandukhai the Wise.

Thought Experiment: How Einstein Solved Difficult Problems

“We live not only in a world of thoughts, but also in a world of things.
Words without experience are meaningless.”
— Vladimir Nabokov


The Basics

“All truly wise thoughts have been thought already thousands of times; but to make them truly ours, we must think them over again honestly, until they take root in our personal experience.”
— Johann Wolfgang von Goethe


Imagine a small town with a hard working barber. The barber shaves everyone in the town who does not shave themselves. He does not shave anyone who shaves themselves. So, who shaves the barber?

The ‘impossible barber’ is one classic example of a thought experiment – a means of exploring a concept, hypothesis or idea through extensive thought. When finding empirical evidence is impossible, we turn to thought experiments to unspool complex concepts.

In the case of the impossible barber, setting up an experiment to figure out who shaves him would not be feasible or even desirable. After all, the barber cannot exist. Thought experiments are usually rhetorical. No particular answer can or should be found.

The purpose is to encourage speculation, logical thinking and to change paradigms. Thought experiments push us outside our comfort zone by forcing us to confront questions we cannot answer with ease. They reveal that we do not know everything and some things cannot be known.

In a paper entitled Thought Experimentation in Presocratic Philosophy, Nicholas Rescher writes:

Homo sapiens is an amphibian who can live and function in two very different realms – the domain of actual facts, which we can investigate in observational inquiry, and the domain of imaginative projection, which we can explore in thought through reasoning… A thought experiment is an attempt to draw instruction from a process of hypothetical reasoning that proceeds by eliciting the consequences of a hypothesis which, for anything that one actually knows to the contrary, may be false. It consists in reasoning from a supposition that is not accepted as true – perhaps even known to be false – but is assumed provisionally in the interests of making a point or resolving a conclusion.

As we know from the narrative fallacy, complex information is best digested in the form of narratives and analogies. Many thought experiments make use of this fact to make them more accessible. Even those who are not knowledgeable about a particular field can build an understanding through thought experiments. The aim is to condense first principles into a form which can be understood through analysis and reflection. Some incorporate empirical evidence, looking at it from an alternative perspective.

The benefit of thought experiments (as opposed to aimless rumination) is their structure. In an organized manner, thought experiments allow us to challenge intellectual norms, move beyond the boundaries of ingrained facts, comprehend history, make logical decisions, foster innovative ideas, and widen our sphere of reference.

Despite being improbable or impractical, thought experiments should be possible, in theory.

The History of Thought Experiments

Thought experiments have a rich and complex history, stretching back to the ancient Greeks and Romans. As a mental model, they have enriched many of our greatest intellectual advances, from philosophy to quantum mechanics.

An early example of a thought experiment is Zeno’s narrative of Achilles and the tortoise, dating to around 430 BC. Zeno’s thought experiments aimed to deduce first principles through the elimination of untrue concepts.

In one instance, the Greek philosopher used it to ‘prove’ motion is an illusion. Known as the paradox of Achilles and the tortoise, it involves Achilles racing a tortoise. Out of generosity, Achilles gives the tortoise a 100m head start. Once Achilles begins running, he soon covers that 100m. However, by that point, the tortoise has moved another 10m. By the time he covers those 10m, the tortoise has moved further still. Zeno claimed Achilles could never win the race, as some distance, however small, would always remain between the pair.
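The modern resolution is worth sketching: the infinitely many catch-up segments sum to a finite distance, because they form a convergent geometric series. A minimal sketch in Python, assuming (as in the setup above) that Achilles runs ten times as fast as the tortoise:

```python
# Zeno's catch-up distances: 100m, then 10m, then 1m, and so on.
# Each segment is a tenth of the previous one, because Achilles
# runs ten times as fast as the tortoise.
total, segment = 0.0, 100.0
for _ in range(30):
    total += segment
    segment /= 10

# The partial sums converge to 100 / (1 - 0.1) = 1000/9, about 111.11m,
# so Achilles draws level at a definite, finite point on the track.
print(total)
```

Infinitely many steps, but a finite total: the paradox rests on the hidden assumption that infinitely many steps must cover an infinite distance (or take an infinite time).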

In the 17th century, Galileo further developed the concept by using thought experiments to affirm his theories. One example is his thought experiment involving two balls (one heavy, one light) which are dropped from the Leaning Tower of Pisa. Prior philosophers had theorized the heavy ball would land first. Galileo claimed this was untrue, as mass does not influence acceleration. We will look at Galileo’s thought experiments in more detail later on in this post.

In 1814, Pierre Laplace explored determinism through ‘Laplace’s demon.’ This is a theoretical ‘demon’ that knows the precise location and movement of every single particle in existence. Would Laplace’s demon know the future? If the answer is yes, the universe must be fully deterministic. If no, chance plays a real role and free will may exist.

In 1897, the German term ‘Gedankenexperiment’ passed into English and a cohesive picture of how thought experiments are used worldwide began to form.

Albert Einstein used thought experiments for some of his most important discoveries. The most famous of these concerned a beam of light, and was later made into a brilliant children's book. What would happen, he asked himself, if you could catch up to a beam of light as it moved? The answers led him down a different path toward thinking about time, which led to the special theory of relativity.

In On Thought Experiments, the 19th-century philosopher and physicist Ernst Mach writes that curiosity is an inherent human quality. We see this in babies, as they test the world around them and learn the principle of cause and effect. With time, our exploration of the world becomes more and more in-depth. We reach a point where we can no longer experiment through our hands alone. At that point, we move into the realm of thought experiments.

Thought experiments are a structured manifestation of our natural curiosity about the world.

Mach writes:

Our own ideas are more easily and readily at our disposal than physical facts. We experiment with thought, so as to say, at little expense. Thus it shouldn’t surprise us that, oftentimes, the thought experiment precedes the physical experiment and prepares the way for it… A thought experiment is also a necessary precondition for a physical experiment. Every inventor and every experimenter must have in his mind the detailed order before he actualizes it. Even if Stephenson knew the train, the rails and the steam engine from experience, he must, nonetheless, have preconceived in his thoughts the combination of a train on wheels, driven by a steam engine, before he could have proceeded to the realization. No less did Galileo have to envisage, in his imagination, the arrangements for the investigation of gravity, before these were actualized. Even the beginner learns in experimenting that an insufficient preliminary estimate, or nonobservance of sources of error, has for him no less tragicomic results than the proverbial ‘look before you leap’ does in practical life.

Mach compares thought experiments to the plans and images we form in our minds before commencing an endeavor. We all do this — rehearsing a conversation before having it, planning a piece of work before starting it, figuring out every detail of a meal before cooking it. Mach views this as an integral part of our ability to engage in complex tasks and to innovate creatively.

According to Mach, the results of some thought experiments can be so certain that physically performing them is unnecessary. Regardless of the accuracy of the result, the desired purpose has been achieved.

We will look at some key examples of thought experiments throughout this post, which will show why Mach’s words are so important. He adds:

It can be seen that the basic method of the thought experiment is just like that of a physical experiment, namely, the method of variation. By varying the circumstances (continuously, if possible) the range of validity of an idea (expectation) related to these circumstances is increased.

Although some people view thought experiments as pseudo-science, Mach saw them as valid and important for experimentation.

Types of Thought Experiment

“Can't you give me brains?” asked the Scarecrow.

“You do not need them. You are learning something every day. A baby has brains, but it does not know much. Experience is the only thing that brings knowledge, and the longer you are on earth the more experience you are sure to get.”
― L. Frank Baum, The Wonderful Wizard of Oz


Several key types of thought experiment have been identified:

  • Prefactual – Involving potential future outcomes. E.g. ‘What will X cause to happen?’
  • Counterfactual – Contradicting known facts. E.g. ‘If Y happened instead of X, what would be the outcome?’
  • Semi-factual – Contemplating how a different past could have led to the same present. E.g. ‘If Y had happened instead of X, would the outcome be the same?’
  • Prediction – Theorising future outcomes based on existing data. Predictions may involve mental or computational models. E.g. ‘If X continues to happen, what will the outcome be in one year?’
  • Hindcasting – Running a prediction in reverse to see if it forecasts an event which has already happened. E.g. ‘X happened; could Y have predicted it?’
  • Retrodiction – Moving backwards from an event to discover the root cause. Retrodiction is often used for problem solving and prevention purposes. E.g. ‘What caused X? How can we prevent it from happening again?’
  • Backcasting – Considering a specific future outcome, then working forwards from the present to deduce its causes. E.g. ‘If X happens in one year, what would have caused it?’

Thought Experiments in Philosophy

“With our limited senses and consciousness, we only glimpse a small portion of reality. Furthermore, everything in the universe is in a state of constant flux. Simple words and thoughts cannot capture this flux or complexity. The only solution for an enlightened person is to let the mind absorb itself in what it experiences, without having to form a judgment on what it all means. The mind must be able to feel doubt and uncertainty for as long as possible. As it remains in this state and probes deeply into the mysteries of the universe, ideas will come that are more dimensional and real than if we had jumped to conclusions and formed judgments early on.”

― Robert Greene, Mastery


Thought experiments have been an integral part of philosophy since ancient times. This is partly because philosophical hypotheses are often subjective and impossible to prove through empirical evidence.

Philosophers use thought experiments to convey theories in an accessible manner. With the aim of illustrating a particular concept (such as free will or mortality), philosophers explore imagined scenarios. The goal is not to uncover a ‘correct’ answer, but to spark new ideas.

An early example of a philosophical thought experiment is Plato’s Allegory of the Cave, which centers around a dialogue between Socrates and Glaucon (Plato’s brother).

A group of people are born and live within a dark cave. Having spent their entire lives seeing nothing but shadows on the wall, they lack a conception of the world outside. Knowing nothing different, they do not even wish to leave the cave. At some point, they are led outside and see a world consisting of much more than shadows.

“The frog in the well knows nothing of the mighty ocean.”

— Japanese Proverb

Plato used this to illustrate the incomplete view of reality most of us have. Only by learning philosophy, Plato claimed, can we see more than shadows.

Upon leaving the cave, the people realize the outside world is far more interesting and fulfilling. If a solitary person left, they would want others to do the same. However, if they return to the cave, their old life will seem unsatisfactory. This discomfort would become misplaced, leading them to resent the outside world. Plato used this to convey his (almost compulsively) deep appreciation for the power of educating ourselves. To take up the mantle of your own education and begin seeking to understand the world is the first step on the way out of the cave.

Moving from caves to insects, let’s take a look at a fascinating thought experiment from 20th-century philosopher Ludwig Wittgenstein.

Imagine a world where each person has a beetle in a box. In this world, the only time anyone can see a beetle is when they look in their own box. As a consequence, the conception of a beetle each individual has is based on their own. It could be that everyone has something different, or that the boxes are empty, or even that the contents are amorphous.

Wittgenstein uses the ‘Beetle in a Box’ thought experiment to convey his work on the subjective nature of pain. We can each only know what pain is to us, and we cannot feel another person’s agony. If people in the hypothetical world were to have a discussion on the topic of beetles, each would only be able to share their individual perspective. The conversation would have little purpose because each person can only convey what they see as a beetle. In the same way, it is useless for us to describe our pain using analogies (‘it feels like a red hot poker is stabbing me in the back’) or scales (‘the pain is 7/10.’)

Thought Experiments in Science

Although empirical evidence is usually necessary for science, thought experiments may be used to develop a hypothesis or to prepare for experimentation. Some hypotheses cannot be tested (e.g. string theory) – at least, not given our current capabilities.

Theoretical scientists may turn to thought experiments to develop a provisional answer, often informed by Occam’s razor.

Nicholas Rescher writes:

In natural science, thought experiments are common. Think, for example, of Einstein’s pondering the question of what the world would look like if one were to travel along a ray of light. Think too of physicists’ assumption of a frictionlessly rolling body or the economists’ assumption of a perfectly efficient market in the interests of establishing the laws of descent or the principles of exchange, respectively…Ernst Mach [mentioned in the introduction] made the sound point that any sensibly designed real experiment should be preceded by a thought experiment that anticipates at any rate the possibility of its outcome.

In a paper entitled Thought Experiments in Scientific Reasoning, Andrew D. Irvine explains that thought experiments are a key part of science. They are in the same realm as physical experiments. Thought experiments require all assumptions to be supported by empirical evidence. The context must be believable, and it must provide useful answers to complex questions. A thought experiment must have the potential to be falsified.

Irvine writes:

Just as a physical experiment often has repercussions for its background theory in terms of confirmation, falsification or the like, so too will a thought experiment. Of course, the parallel is not exact; thought experiments… do not include actual interventions within the physical environment.

In Do All Rational Folks Think As We Do?, Barbara D. Massey writes:

Often critique of thought experiments demands the fleshing out or concretizing of descriptions so that what would happen in a given situation becomes less a matter of guesswork or pontification. In thought experiments we tend to elaborate descriptions with the latest scientific models in mind…The thought experiment seems to be a close relative of the scientist’s laboratory experiment with the vital difference that observations may be made from perspectives which are in reality impossible, for example, from the perspective of moving at the speed of light…The thought experiment seems to discover facts about how things work within the laboratory of the mind.

One key example of a scientific thought experiment is Schrodinger’s cat.

Developed in 1935 by Erwin Schrodinger, Schrodinger's cat seeks to illustrate the counterintuitive nature of quantum mechanics in a more understandable manner.

Although difficult to present in a simplified manner, the idea is that of a cat, neither alive nor dead, sealed within a box. Inside the box are a Geiger counter and a small quantity of radioactive material. The amount is small enough that, over a given period of time, it is equally probable to decay or not. If it does decay, a tube of acid smashes and poisons the cat. Without opening the box, it is impossible to know whether the cat is alive or dead.

Let's ignore the ethical implications and the fact that, if this were performed, the angry meowing of the cat would be a clue. Like most thought experiments, the details are arbitrary – it is irrelevant what animal it is, what kills it, or the time frame.

Schrodinger’s point was that quantum mechanics is indeterminate. When does a quantum system switch from one state to a different one? Can the cat be both alive and dead, and is that conditional on it being observed? What about the cat’s own observation of itself?

In In Search of Schrodinger’s Cat, John Gribbin writes:

Nothing is real unless it is observed…there is no underlying reality to the world. “Reality,” in the everyday sense, is not a good way to think about the behavior of the fundamental particles that make up the universe; yet at the same time those particles seem to be inseparably connected into some invisible whole, each aware of what happens to the others.

Schrodinger himself wrote in Nature and The Greeks:

We do not belong to this material world that science constructs for us. We are not in it; we are outside. We are only spectators. The reason why we believe that we are in it, that we belong to the picture, is that our bodies are in the picture. Our bodies belong to it. Not only my own body, but those of my friends, also of my dog and cat and horse, and of all the other people and animals. And this is my only means of communicating with them.

Another important early example of a scientific thought experiment is Galileo’s Leaning Tower of Pisa Experiment.

Galileo sought to disprove the prevailing belief that gravity is influenced by the mass of an object. Since the time of Aristotle, people had assumed that a 10g object would fall at 1/10th the speed of a 100g object. Oddly, no one is recorded as having tested this.

According to Galileo’s early biography (written in 1654), he dropped two objects from the Leaning Tower of Pisa to disprove the gravitational mass relation hypothesis. Both landed at the same time, ushering in a new understanding of gravity. It is unknown whether Galileo actually performed the experiment himself, so it is regarded as a thought experiment, not a physical one. Galileo reached his conclusion through the use of other thought experiments.

Biologists use thought experiments, often of the counterfactual variety. In particular, evolutionary biologists question why organisms exist as they do today. For example, why are sheep not green? As surreal as the question is, it is a valid one. A green sheep would be better camouflaged from predators. Another thought experiment involves asking: why don’t organisms (aside from certain bacteria) have wheels? Again, the question is surreal but is still a serious one. We know from our vehicles that wheels are more efficient for moving at speed than legs, so why do they not naturally exist beyond the microscopic level?

Psychology and Ethics — The Trolley Problem

Picture the scene. You are a lone passerby in a street where a tram is running along a track. The driver has lost control of it. If the tram continues along its current path, its five passengers will die in the ensuing crash. You notice a switch which would allow the tram to move to a different track, where a man is standing. The collision would kill him but would save the five passengers. Do you press the switch?

This thought experiment has been discussed in various forms since the early 1900s. Psychologists and ethicists have discussed the trolley problem at length, often using it in research. It raises many questions, such as:

  • Is a casual observer required to intervene?
  • Is there a measurable value to human life? I.e. is one life less valuable than five?
  • How would the situation differ if the observer were required to actively push a man onto the tracks rather than pressing the switch?
  • What if the man being pushed were a ‘villain’? Or a loved one of the observer? How would this change the ethical implications?
  • Can an observer make this choice without the consent of the people involved?

Research has shown most people are far more willing to press a switch than to push someone onto the tracks. This changes if the man is a ‘villain’ – people are then far more willing to push him. Likewise, they are reluctant if the person being pushed is a loved one.

In Incognito: The Secret Lives of The Brain, David Eagleman writes that our brains have a distinctly different response to the idea of pushing someone and the idea of pushing a switch. When confronted with a switch, brain scans show that our rational thinking areas are activated. Change pushing a switch to pushing a person, and our emotional areas activate instead. Eagleman summarizes:

People register emotionally when they have to push someone; when they only have to tip a lever, their brain behaves like Star Trek’s Mr. Spock.

The trolley problem is theoretical, but it does have real-world implications. For example, the majority of people who eat meat would not be content to kill the animal themselves – they are happy to press the switch but not to push the man. Even those who do not consume meat tend to ignore the fact that they are indirectly contributing to the deaths of animals due to production quotas, which mean the meat they would have eaten ends up getting wasted. They feel morally superior as they are not actively pushing anyone onto the tracks, yet they are still like an observer who does not intervene in any way. As we move towards autonomous vehicles, there may be real-life instances of similar situations. Vehicles may be required to make utilitarian choices – such as swerving into a ditch and killing the driver to avoid a group of children.

Although psychology and ethics are separate fields, they often make use of the same thought experiments.

The Infinite Monkey Theorem and Mathematics

“Ford!” he said, “there's an infinite number of monkeys outside who want to talk to us about this script for Hamlet they've worked out.”

― Douglas Adams, The Hitchhiker's Guide to the Galaxy


The infinite monkey theorem is a mathematical thought experiment. The premise is that infinite monkeys with typewriters will, eventually, type the complete works of Shakespeare. Some versions involve a single monkey, or a single work. Mathematicians use the monkey(s) as a representation of a device which produces letters at random.

In Fooled By Randomness, Nassim Taleb writes:

If one puts an infinite number of monkeys in front of (strongly built) typewriters, and lets them clap away, there is a certainty that one of them will come out with an exact version of the ‘Iliad.' Upon examination, this may be less interesting a concept than it appears at first: Such probability is ridiculously low. But let us carry the reasoning one step beyond. Now that we have found that hero among monkeys, would any reader invest his life's savings on a bet that the monkey would write the ‘Odyssey' next?

The infinite monkey theorem is intended to illustrate the idea that any issue can be solved through enough random input, in the same way that a drunk person arriving home will eventually manage to fit their key in the lock, even without much finesse. It also represents the nature of probability and the idea that any scenario is achievable, given enough time and resources.
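The underlying arithmetic makes Taleb's point concrete: with an alphabet of 26 letters, each additional character in the target multiplies the expected number of random attempts by 26. A rough sketch in Python (the function names are my own, for illustration):

```python
import random
import string

def match_probability(target: str, alphabet_size: int = 26) -> float:
    """Probability that one attempt of len(target) random
    keystrokes reproduces the target exactly."""
    return (1 / alphabet_size) ** len(target)

def expected_attempts(target: str, alphabet_size: int = 26) -> int:
    """Expected number of independent attempts before a match
    (geometric distribution: 1 / p)."""
    return alphabet_size ** len(target)

def monkey_attempt(length: int) -> str:
    """One 'monkey' attempt: uniformly random lowercase letters."""
    return "".join(random.choices(string.ascii_lowercase, k=length))

# Even a six-letter word takes about 309 million attempts on average;
# the complete works of Shakespeare are astronomically out of reach.
print(expected_attempts("hamlet"))  # 26**6 = 308915776
```

Waiting for one short word is already a long game; scaling the target up to a full play shrinks the probability so far that ‘eventually’ dwarfs the age of the universe – which is precisely Taleb's warning about reading skill into lucky outcomes.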

To learn more about thought experiments, consider reading The Pig That Wants to Be Eaten, The Infinite Tortoise or The Laboratory of the Mind.

Rory Sutherland on The Psychology of Advertising, Complex Evolved Systems, Reading, Decision Making

“There is a huge danger in looking at life as an optimization problem.”


Rory Sutherland (@rorysutherland) is the Vice Chairman of Ogilvy & Mather Group, which is one of the largest advertising companies in the world.

Rory started the behavioral insights team and spends his days applying behavioral economics and evolutionary psychology to solve problems that conventional advertising agencies haven't been able to solve.

In this wide-ranging interview we talk about: how advertising agencies are solving airport security problems, what Silicon Valley misses, how to mess with self-driving cars, reading habits, decision making, the intersection of advertising and psychology, and so much more.

This interview was recorded live in London, England.

Enjoy this amazing conversation.

“The problem with economics is not only that it is wrong but that it's incredibly creatively limiting.”


A lot of people like to take notes while listening. A transcription of this conversation is available to members of our learning community, or you can purchase one separately.


If you liked this, check out all the episodes of The Knowledge Project.

Habits vs Goals: A Look at the Benefits of a Systematic Approach to Life

“First forget inspiration.
Habit is more dependable.
Habit will sustain you whether you're inspired or not.
Habit is persistence in practice.”

— Octavia Butler


Nothing will change your future trajectory like habits.

We all have goals, big or small: things which we want to achieve within a certain time frame. Some people want to make a million dollars by the time they turn 30. Some want to lose 20lb before summer. Some want to write a book in the next 6 months. When we begin to chase an intangible or vague concept (success, wealth, health, happiness), making a tangible goal is often the first step.

Habits are processes operating in the background that power our lives. Good habits help us reach our goals. Bad ones hinder us. Either way, habits powerfully influence our automatic behavior.

The difference between habits and goals is not semantic. Each requires different forms of action. For example:

  • We want to learn a new language. We could decide we want to be fluent in 6 months (goal), or we could commit to 30 minutes of practice each day (habit).
  • We want to read more books. We could set the goal of reading 50 books by the end of the year, or we could decide to always carry a book (habit).
  • We want to spend more time with family. We could plan to spend 7 hours a week with family (goal), or we could choose to eat dinner with them each night (habit).

The Problems With Goals

When we want to change an aspect of our lives, setting a goal is often the logical first step. Despite being touted by many a self-help guru, this approach has some problematic facets.

Goals have an endpoint. This is why many people revert to their previous state after achieving a certain goal. People run marathons, then stop exercising altogether afterward. Or they make a certain amount of money, then fall into debt soon after. Others reach a goal weight, only to spoil their progress by overeating to celebrate.

Goals rely on factors which we do not always have control over. It’s an unavoidable fact that reaching a goal is not always possible, regardless of effort. An injury might derail a fitness goal. An unexpected expense might sabotage a financial goal. A family tragedy might impede a creative output goal. When we set a goal, we are attempting to transform what is usually a heuristic process into an algorithmic one.

Goals rely on willpower and self-discipline. As Charles Duhigg wrote in The Power of Habit:

Willpower isn’t just a skill. It’s a muscle, like the muscles in your arms or legs, and it gets tired as it works harder, so there’s less power left over for other things.

Keeping a goal in mind and using it to direct our actions requires constant willpower. During times when other parts of our lives deplete our supply, it can be easy to forget it. For example, the goal of saving money requires self-discipline each time we make a purchase. Meanwhile, the habit of putting $50 in a savings account weekly requires little effort. Habits, not goals, make otherwise difficult things easy.

Goals can make us complacent or reckless. Studies have shown that people’s brains can confuse goal setting with achievement. This effect is pronounced when they inform others. Furthermore, unrealistic goals can lead to dangerous or unethical behavior.

The Benefits of Habits

“Habit is the intersection of knowledge (what to do), skill (how to do), and desire (want to do).”
— Stephen Covey


Once formed, habits operate automatically. Habits take otherwise difficult tasks—like saving money—and make them easy.

The purpose of a well-crafted set of habits is to ensure we reach our goals with incremental steps. The benefits of a systematic approach to achievement include:

Habits can mean we overshoot our goals. Let’s say a person’s goal is to write a novel. They decide to write 200 words a day, meaning it should take 250 days. Writing 200 words takes little effort, and even on the busiest, most stressful days they get it done. However, on some days that small step leads to them writing 1,000 words or more. As a result, they finish the book in much less time. Yet setting ‘write a book in 4 months’ as a goal would have been intimidating.

Habits are easy to complete. As Duhigg wrote:

Habits are powerful, but delicate. They can emerge outside our consciousness or can be deliberately designed. They often occur without our permission but can be reshaped by fiddling with their parts. They shape our lives far more than we realize—they are so strong, in fact, that they cause our brains to cling to them at the exclusion of all else, including common sense.

Once we develop a habit, our brains actually change to make the behavior easier to complete. After about 30 days of practice, enacting a habit becomes easier than not doing so.

Habits are for life. Our lives are structured around habits, many of them barely noticeable. According to the research Duhigg cites, habits shape around 40% of our daily actions. These often-minuscule actions add up to make us who we are. William James (a man who knew the problems caused by bad habits) summarized their importance as follows:

All our life, so far as it has definite form, is but a mass of habits – practical, emotional, and intellectual – systematically organized for our weal or woe, and bearing us irresistibly toward our destiny, whatever the latter may be.

Once a habit becomes ingrained, it can last for life (unless we deliberately break it).

Habits can compound. Stephen Covey paraphrased Gandhi when he explained:

Sow a thought, reap an action; sow an action, reap a habit; sow a habit, reap a character; sow a character, reap a destiny.

In other words, building a single habit can have a wider impact on our lives. Duhigg calls these ‘keystone habits’: behaviors which cause people to change related areas of their lives. For example, people who start exercising daily may end up eating better and drinking less. Likewise, those who quit a bad habit may end up replacing it with a positive alternative. (Naval and I talked about habit replacement a lot on this podcast episode.)

Habits can be as small as necessary. A common piece of advice for those seeking to build a habit is to start small. Stanford psychologist BJ Fogg recommends ‘tiny habits’, such as flossing one tooth. Once these become ingrained, the degree of complexity can increase. If you want to read more, you can start with 25 pages a day. After this becomes part of your routine, you can increase the page count until you reach your goal.

Why a Systematic Approach Works

“First we make our habits, then our habits make us.”
— Charles C. Noble


By switching our focus from specific goals to creating positive long-term habits, continuous improvement can become a way of life. This is evident from the documented habits of many successful people.

Warren Buffett reads all day to build the knowledge necessary for his investments.

Stephen King writes 1,000 words a day, 365 days a year (a habit he describes as “a sort of creative sleep”).

Athlete Eliud Kipchoge makes notes after each training session to identify areas which can be improved.

These habits, repeated hundreds of times over years, are not incidental. With consistency, the benefits of these non-negotiable actions compound and lead to extraordinary achievements.

While goals rely on extrinsic motivation, habits, once formed, are automatic. They literally rewire our brains.

When seeking to attain something in our lives, we would do well to invest our time into forming positive habits, rather than concentrating on a specific goal.

For further reading on this topic, look to Drive: The Surprising Truth About What Motivates Us, How to Fail at Almost Everything and Still Win Big, and The Power of Habit.