Tag: Karl Popper

The Central Mistake of Historicism: Karl Popper on Why Trend is Not Destiny

Philosophy can be a little dry in concept. The word itself conjures up images of thinking about thought, why we exist, and other metaphysical ideas that seem a little divorced from the everyday world.

One philosopher who bucked that trend was the Austrian-born philosopher of science Karl Popper.

Popper had at least three important lines of inquiry:

  1. How does progressive scientific thought actually happen?
  2. What type of society do we need to allow for scientific progress to be made?
  3. What can we say we actually know about the world?

Popper’s work led to his idea of falsifiability as the main criterion of a scientific theory. Simply put, an idea or theory doesn’t enter the realm of science until we can state it in such a way that a test could prove it wrong. This important identifier allowed him to distinguish between science and pseudoscience.

An interesting piece of Popper’s work was an attack on what he called historicism — the idea that history has fixed laws or trends that inevitably lead to certain outcomes. Included would be the Marxist interpretation of human history as a push and pull between classes, the Platonic ideals of the systemic “rise and fall” of cities and societies in a fundamentally predictable way, John Stuart Mill’s laws of succession, and even the theory that humanity inevitably progresses towards a “better” and happier outcome, however defined. Modern ideas in this category might well include Thomas Piketty’s theory of how capitalism leads to an accumulation of dangerous inequality, the “inevitability” of America’s fall from grace in the fashion of the Roman empire, or even Russell Brand's popular diatribe on utopian upheaval from a few years back.

Popper considered this kind of thinking pseudoscience, or worse — a dangerous ideology that tempts wannabe state planners and utopians to control society. (Perhaps through violent revolution, for example.) He did not consider such historicist doctrines falsifiable. There is no way, for example, to test whether Marxist theory is actually true or not, even in a thought experiment. We must simply take it on faith, based on a certain interpretation of history, that the bourgeoisie and the proletariat are at odds, and that the latter is destined to create uprisings. (Destined being the operative word — it implies inevitability.) If we’re to assert that there is a Law of Increasing Technological Complexity in human society, which many are tempted to do these days, is that actually a testable hypothesis? Too frequently, these Laws become immune to falsifying evidence — any new evidence is interpreted through the lens of the theory. Instead of calling them interpretations, we call them Laws, or some similarly connotative word.

More deeply, Popper realized the important point that history is a unique process — it only gets run once. We can’t derive Laws of History that predict the future the way we can with, say, a law of physics that carries predictive capability under stated conditions. (e.g., if I drop a ceramic coffee cup more than two feet, it will shatter.) We can merely deduce some tendencies of human nature, laws of the physical world, and so on, and generate some reasonable expectation that if X happens, Y is somewhat likely to follow. But viewing the process of human or organic history as possessing the regularity of a solar system is folly.

He discusses this in his book The Poverty of Historicism.

The evolution of life on earth, or of a human society, is a unique historical process. Such a process, we may assume, proceeds in accordance with all kinds of causal laws, for example, the laws of mechanics, of chemistry, of heredity and segregation, of natural selection, etc. Its description, however, is not a law, but only a single historical statement. Universal laws make assertions concerning some unvarying order[…] and although there is no reason why the observation of one single instance should not incite us to formulate a universal law, nor why, if we are lucky, we should not even hit upon the truth, it is clear that any law, formulated in this or in any other way, must be tested by new instances before it can be taken seriously by science. But we cannot hope to test a universal hypothesis nor to find a natural law acceptable to science if we are ever confined to the observation of one unique process. Nor can the observation of one unique process help us to foresee its future development. The most careful observation of one developing caterpillar will not help us to predict its transformation into a butterfly.

Popper realized that once we deduce a theory of the Laws of Human Development, carried into the ever-after, we are led into a gigantic confirmation bias problem. For example, we can certainly find confirmations for the idea that humans have progressed, in a specifically defined way, towards increasing technological complexity. But is that a Law of history, in the inviolable sense? For that, we really can’t say.

The problem is that to establish cause-and-effect, in a scientific sense, requires two things: A universal law (or a set of them) and some initial conditions (and ideally these are played out over a really large sample size to give us confidence). Popper explains:

I suggest that to give a causal explanation of a certain specific event means deducing a statement describing this event from two kinds of premises: from some universal laws, and from some singular or specific statements which we may call specific initial conditions.

For example, we can say that we have given a causal explanation of the breaking of a certain thread if we find this thread could carry a weight of only one pound, and that a weight of two pounds was put on it. If we analyze this causal explanation, then we find that two different constituents are involved. (1) Some hypotheses of the character of universal laws of nature; in this case, perhaps: ‘For every thread of a given structure s (determined by material, thickness, etc.) there is a characteristic weight w such that the thread will break if any weight exceeding w is suspended on it’ and ‘For every thread of the structure s, the characteristic weight w equals one pound.’ (2) Some specific statements—the initial conditions—pertaining to the particular event in question; in this case we may have two such statements: ‘This is a thread of structure s’, and ‘The weight put on this thread was a weight of two pounds’.
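Popper’s two-part schema is mechanical enough to sketch in code. In this illustrative Python fragment, the structure name `s` and the weights come straight from the quoted example, while the function name is invented: the universal law relates a thread’s structure to its characteristic breaking weight, and the initial conditions describe the particular thread and load; the specific event is deduced from the two together.

```python
# Popper's schema: deduce a specific event from (1) universal laws
# and (2) specific initial conditions.

# (1) Universal law: every thread of structure s has a characteristic
# breaking weight w, and breaks if the suspended weight exceeds w.
CHARACTERISTIC_WEIGHT = {"s": 1.0}  # pounds, per the quoted example

def predicts_break(structure: str, load_lbs: float) -> bool:
    """Deduce whether a thread breaks from the law plus initial conditions."""
    return load_lbs > CHARACTERISTIC_WEIGHT[structure]

# (2) Initial conditions: this thread has structure s; the load is 2 lb.
predicts_break("s", 2.0)   # True: the event (breaking) is deduced
predicts_break("s", 0.5)   # False: no break is predicted
```

Change either kind of premise, the law’s characteristic weight or the load in the initial conditions, and the deduced event changes with it; the explanation is only ever as good as both premises.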

The trend is not destiny

Here we hit on the problem of trying to assert any fundamental laws by which human history must inevitably progress. Trend is not destiny. Even if we can derive and understand certain laws of human biological nature, the trends of history themselves depend on conditions, and conditions change.

Explained trends do exist, but their persistence depends on the persistence of certain specific initial conditions (which in turn may sometimes be trends).

Mill and his fellow historicists overlook the dependence of trends on initial conditions. They operate with trends as if they were unconditional, like laws. Their confusion of laws with trends makes them believe in trends which are unconditional (and therefore general); or, as we may say, in ‘absolute trends’; for example, a general historical tendency towards progress—‘a tendency towards a better and happier state’. And if they at all consider a ‘reduction’ of their tendencies to laws, they believe that these tendencies can be immediately derived from universal laws alone, such as the laws of psychology (or dialectical materialism, etc.).

This, we may say, is the central mistake of historicism. Its “laws of development” turn out to be absolute trends; trends which, like laws, do not depend on initial conditions, and which carry us irresistibly in a certain direction into the future. They are the basis of unconditional prophecies, as opposed to conditional scientific predictions.


The point is that these (initial) conditions are so easily overlooked. There is, for example, a trend towards an ‘accumulation of means of production’ (as Marx puts it). But we should hardly expect it to persist in a population which is rapidly decreasing; and such a decrease may in turn depend on extra-economic conditions, for example, on chance interventions, or conceivably on the direct physiological (perhaps bio-chemical) impact of an industrial environment. There are, indeed, countless possible conditions; and in order to be able to examine these possibilities in our search for the true conditions of the trend, we have all the time to try to imagine conditions under which the trend in question would disappear. But this is just what the historicist cannot do. He firmly believes in his favorite trend, and conditions under which it would disappear to him are unthinkable. The poverty of historicism, we might say, is a poverty of imagination. The historicist continuously upbraids those who cannot imagine a change in their little worlds; yet it seems that the historicist is himself deficient in imagination, for he cannot imagine a change in the conditions of change.

Still interested? Check out our previous post on Popper’s theory of falsification, or check out The Poverty of Historicism to explore his idea more deeply. A warning: It’s not a beach read. I had to read it twice to get the basic idea. But, once grasped, it’s well worth the time.

Karl Popper on The Line Between Science and Pseudoscience

It's not immediately clear to the layman what the essential difference is between science and something masquerading as science: pseudoscience. The distinction gets at the core of what comprises human knowledge: How do we actually know something to be true? Is it simply because our powers of observation tell us so? Or is there more to it?

Sir Karl Popper (1902-1994), the scientific philosopher, was interested in the same problem. How do we actually define the scientific process? How do we know which theories can be said to be truly explanatory?


He began addressing it in a lecture, which is printed in the book Conjectures and Refutations: The Growth of Scientific Knowledge (also available online):

When I received the list of participants in this course and realized that I had been asked to speak to philosophical colleagues I thought, after some hesitation and consultation, that you would probably prefer me to speak about those problems which interest me most, and about those developments with which I am most intimately acquainted. I therefore decided to do what I have never done before: to give you a report on my own work in the philosophy of science, since the autumn of 1919 when I first began to grapple with the problem, ‘When should a theory be ranked as scientific?' or ‘Is there a criterion for the scientific character or status of a theory?'

Popper saw a problem: a number of theories he considered non-scientific seemed, on their surface, to have a lot in common with good, hard, rigorous science. But the question of how we decide which theories are compatible with the scientific method, and which are not, was harder than it seemed.


It is most common to say that science is done by collecting observations and grinding out theories from them. Charles Darwin once said, after working long and hard on the problem of the Origin of Species,

My mind seems to have become a kind of machine for grinding general laws out of large collections of facts.

This is a popularly accepted notion. We observe, observe, and observe, and we look for theories to best explain the mass of facts. (Although even this is not really true: Popper points out that we must start with some a priori knowledge to be able to generate new knowledge. Observation is always done with some hypotheses in mind–we can't understand the world from a totally blank slate. More on that another time.)

The problem, as Popper saw it, is that some bodies of knowledge more properly named pseudosciences would be considered scientific if the “Observe & Deduce” operating definition were left alone. For example, a believing astrologist can ably provide you with “evidence” that their theories are sound. The biographical information of a great many people can be explained this way, they'd say.

The astrologist would tell you, for example, about how “Leos” seek to be the centre of attention; ambitious, strong, seeking the limelight. As proof, they might follow up with a host of real-life Leos: World-leaders, celebrities, politicians, and so on. In some sense, the theory would hold up. The observations could be explained by the theory, which is how science works, right?

Sir Karl ran into this problem in a concrete way because he lived at a time when psychoanalytic theories were all the rage, just as Einstein was laying out a new foundation for the physical sciences with the concept of relativity. What made Popper uncomfortable were comparisons between the two. Why did he feel so uneasy putting Marxist theories and Freudian psychology in the same category of knowledge as Einstein's relativity? Did all three not have vast explanatory power in the world? Each theory's proponents certainly believed so, but Popper was not satisfied.

It was during the summer of 1919 that I began to feel more and more dissatisfied with these three theories–the Marxist theory of history, psychoanalysis, and individual psychology; and I began to feel dubious about their claims to scientific status. My problem perhaps first took the simple form, ‘What is wrong with Marxism, psycho-analysis, and individual psychology? Why are they so different from physical theories, from Newton's theory, and especially from the theory of relativity?'

I found that those of my friends who were admirers of Marx, Freud, and Adler, were impressed by a number of points common to these theories, and especially by their apparent explanatory power. These theories appeared to be able to explain practically everything that happened within the fields to which they referred. The study of any of them seemed to have the effect of an intellectual conversion or revelation, opening your eyes to a new truth hidden from those not yet initiated. Once your eyes were thus opened you saw confirming instances everywhere: the world was full of verifications of the theory.

Whatever happened always confirmed it. Thus its truth appeared manifest; and unbelievers were clearly people who did not want to see the manifest truth; who refused to see it, either because it was against their class interest, or because of their repressions which were still ‘un-analysed' and crying aloud for treatment.

Here was the salient problem: The proponents of these new sciences saw validations and verifications of their theories everywhere. If you were having trouble as an adult, it could always be explained by something your mother or father had done to you when you were young, some repressed something-or-other that hadn't been analysed and solved. They were confirmation bias machines.

What was the missing element? Popper had figured it out before long: The non-scientific theories could not be falsified. They were not testable in a legitimate way. There was no possible objection that could be raised which would show the theory to be wrong.

In a true science, the following statement can be easily made: “If X happens, it would show demonstrably that the theory is not true.” We can then design an experiment, a physical one or sometimes a simple thought experiment, to figure out whether X actually does happen. It's the opposite of looking for verification; you must try to show the theory is incorrect, and if you fail to do so, you thereby strengthen it.

Pseudosciences cannot and do not do this–they are not strong enough to hold up. As an example, Popper discussed Freud's theories of the mind in relation to Alfred Adler's so-called “individual psychology,” which was popular at the time:

I may illustrate this by two very different examples of human behaviour: that of a man who pushes a child into the water with the intention of drowning it; and that of a man who sacrifices his life in an attempt to save the child. Each of these two cases can be explained with equal ease in Freudian and in Adlerian terms. According to Freud the first man suffered from repression (say, of some component of his Oedipus complex), while the second man had achieved sublimation. According to Adler the first man suffered from feelings of inferiority (producing perhaps the need to prove to himself that he dared to commit some crime), and so did the second man (whose need was to prove to himself that he dared to rescue the child). I could not think of any human behaviour which could not be interpreted in terms of either theory. It was precisely this fact–that they always fitted, that they were always confirmed–which in the eyes of their admirers constituted the strongest argument in favour of these theories. It began to dawn on me that this apparent strength was in fact their weakness.

Popper contrasted these theories against Relativity, which made specific, verifiable predictions, giving the conditions under which the predictions could be shown false. It turned out that Einstein's predictions came to be true when tested, thus verifying the theory through attempts to falsify it. But the essential nature of the theory gave grounds under which it could have been wrong. To this day, physicists seek to figure out where Relativity breaks down in order to come to a more fundamental understanding of physical reality. And while the theory may eventually be proven incomplete or a special case of a more general phenomenon, it has still made accurate, testable predictions that have led to practical breakthroughs.

Thus, in Popper's words, science requires testability: “If observation shows that the predicted effect is definitely absent, then the theory is simply refuted.”  This means a good theory must have an element of risk to it. It must be able to be proven wrong under stated conditions.

From there, Popper laid out his essential conclusions, which are useful to any thinker trying to figure out if a theory they hold dear is something that can be put in the scientific realm:

1. It is easy to obtain confirmations, or verifications, for nearly every theory–if we look for confirmations.

2. Confirmations should count only if they are the result of risky predictions; that is to say, if, unenlightened by the theory in question, we should have expected an event which was incompatible with the theory–an event which would have refuted the theory.

3. Every ‘good' scientific theory is a prohibition: it forbids certain things to happen. The more a theory forbids, the better it is.

4. A theory which is not refutable by any conceivable event is nonscientific. Irrefutability is not a virtue of a theory (as people often think) but a vice.

5. Every genuine test of a theory is an attempt to falsify it, or to refute it. Testability is falsifiability; but there are degrees of testability: some theories are more testable, more exposed to refutation, than others; they take, as it were, greater risks.

6. Confirming evidence should not count except when it is the result of a genuine test of the theory; and this means that it can be presented as a serious but unsuccessful attempt to falsify the theory. (I now speak in such cases of ‘corroborating evidence'.)

7. Some genuinely testable theories, when found to be false, are still upheld by their admirers–for example by introducing ad hoc some auxiliary assumption, or by re-interpreting the theory ad hoc in such a way that it escapes refutation. Such a procedure is always possible, but it rescues the theory from refutation only at the price of destroying, or at least lowering, its scientific status. (I later described such a rescuing operation as a ‘conventionalist twist' or a ‘conventionalist stratagem'.)

One can sum up all this by saying that the criterion of the scientific status of a theory is its falsifiability, or refutability, or testability.
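Popper’s summary, especially points 3 and 4, can be made concrete with a toy sketch. In this Python fragment (the observation strings and function names are invented for illustration, not Popper’s), a theory is modeled as a predicate over conceivable observations: it is falsifiable only if some conceivable observation would violate it, i.e. only if it forbids something.

```python
# A theory modeled as a predicate over conceivable observations.
# It is falsifiable iff at least one conceivable observation would
# violate it -- in Popper's terms, iff the theory forbids something.

def is_falsifiable(theory, conceivable_observations):
    return any(not theory(obs) for obs in conceivable_observations)

observations = [
    "starlight bends near the sun",
    "starlight does not bend near the sun",
]

# A risky, relativity-style theory: it forbids one of the observations.
risky = lambda obs: obs != "starlight does not bend near the sun"

# An accommodating, pseudoscience-style theory: compatible with anything.
accommodating = lambda obs: True

is_falsifiable(risky, observations)          # True: it takes a risk
is_falsifiable(accommodating, observations)  # False: nothing could refute it
```

The accommodating theory “explains” every observation, and that very strength is what disqualifies it: no conceivable outcome could ever count against it.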

Finally, Popper was careful to say that it is not possible to prove that Freudianism was not true, at least in part. But we can say that we simply don't know whether it's true because it does not make specific testable predictions. It may have many kernels of truth in it, but we can't tell. The theory would have to be restated.

This is the essential “line of demarcation,” as Popper called it, between science and pseudoscience.

The Uses Of Being Wrong

Confessions of wrongness are the exception not the rule.

Daniel Drezner, a professor of international politics at the Fletcher School of Law and Diplomacy at Tufts University, pointing to the difference between being wrong in a prediction and making an error, writes:

Error, even if committed unknowingly, suggests sloppiness. That carries a more serious stigma than making a prediction that fails to come true.

The social sciences, unlike the physical and natural sciences, suffer from a shortage of high-quality data on which to base predictions.

How does Science Advance?

A theory may be scientific even if there is not a shred of evidence in its favour, and it may be pseudoscientific even if all the available evidence is in its favour. That is, the scientific or non-scientific character of a theory can be determined independently of the facts. A theory is ‘scientific' if one is prepared to specify in advance a crucial experiment (or observation) which can falsify it, and it is pseudoscientific if one refuses to specify such a ‘potential falsifier'. But if so, we do not demarcate scientific theories from pseudoscientific ones, but rather scientific methods from non-scientific method.

Karl Popper viewed the progression of science as falsification — that is, science advances by eliminating what doesn't hold up. But Popper's falsifiability criterion ignores the tenacity of scientific theories in the face of disconfirming evidence. Scientists, like the rest of us, do not abandon a theory simply because the evidence contradicts it.

The wake of science is littered with anomalies, not refutations.

Another theory of scientific advancement, proposed by Thomas Kuhn, the distinguished American philosopher of science, holds that science proceeds through a series of revolutions, each accompanied by something like a religious conversion.

Imre Lakatos, a Hungarian philosopher of mathematics and science, wrote:

(The) history of science, of course, is full of accounts of how crucial experiments allegedly killed theories. But all such accounts are fabricated long after the theory has been abandoned.

Lakatos bridged the gap between Popper and Kuhn by addressing what each had failed to solve.

The hallmark of empirical progress is not trivial verifications: Popper is right that there are millions of them. It is no success for Newtonian theory that stones, when dropped, fall towards the earth, no matter how often this is repeated. But, so-called ‘refutations' are not the hallmark of empirical failure, as Popper has preached, since all programmes grow in a permanent ocean of anomalies. What really counts are dramatic, unexpected, stunning predictions: a few of them are enough to tilt the balance; where theory lags behind the facts, we are dealing with miserable degenerating research programmes.

Now, how do scientific revolutions come about? If we have two rival research programmes, and one is progressing while the other is degenerating, scientists tend to join the progressive programme. This is the rationale of scientific revolutions. But while it is a matter of intellectual honesty to keep the record public, it is not dishonest to stick to a degenerating programme and try to turn it into a progressive one.

As opposed to Popper the methodology of scientific research programmes does not offer instant rationality. One must treat budding programmes leniently: programmes may take decades before they get off the ground and become empirically progressive. Criticism is not a Popperian quick kill, by refutation. Important criticism is always constructive: there is no refutation without a better theory. Kuhn is wrong in thinking that scientific revolutions are sudden, irrational changes in vision. [The history of science refutes both Popper and Kuhn: ] On close inspection both Popperian crucial experiments and Kuhnian revolutions turn out to be myths: what normally happens is that progressive research programmes replace degenerating ones.


A lot of the falsification effort is devoted to proving others wrong, not ourselves. “It’s rare,” Drezner writes, “for academics to publicly disavow their own theories and hypotheses.”

Indeed, a common lament in the social sciences is that negative findings—i.e., empirical tests that fail to support an author’s initial hypothesis—are never published.

Why is it so hard for us to see when we are wrong?

It is not necessarily concern for one’s reputation. Even predictions that turn out to be wrong can be intellectually profitable—all social scientists love a good straw-man argument to pummel in a literature review. Bold theories get cited a lot, regardless of whether they are right.

Part of the reason is simple psychology; we all like being right much more than being wrong.

As Kathryn Schulz observes in Being Wrong, “the thrill of being right is undeniable, universal, and (perhaps most oddly) almost entirely undiscriminating … . It’s more important to bet on the right foreign policy than the right racehorse, but we are perfectly capable of gloating over either one.”

As we create arguments and gather supporting evidence (while discarding evidence that does not fit) we increasingly persuade ourselves that we are right. We gain confidence and try to sway the opinions of others.

There are benefits to being wrong.

Schulz argues in Being Wrong that “the capacity to err is crucial to human cognition. Far from being a moral flaw, it is inextricable from some of our most humane and honorable qualities: empathy, optimism, imagination, conviction, and courage. And far from being a mark of indifference or intolerance, wrongness is a vital part of how we learn and change.”

Drezner argues that some of the tools of the information age give us hope that we might become increasingly likely to admit being wrong.

Blogging and tweeting encourages the airing of contingent and tentative arguments as events play out in real time. As a result, far less stigma attaches to admitting that one got it wrong in a blog post than in peer-reviewed research. Indeed, there appears to be almost no professional penalty for being wrong in the realm of political punditry. Regardless of how often pundits make mistakes in their predictions, they are invited back again to pontificate more.

As someone who has blogged for more than a decade, I’ve been wrong an awful lot, and I’ve grown somewhat more comfortable with the feeling. I don’t want to make mistakes, of course. But if I tweet or blog my half-formed supposition, and it then turns out to be wrong, I get more intrigued about why I was wrong. That kind of empirical and theoretical investigation seems more interesting than doubling down on my initial opinion. Younger scholars, weaned on the Internet, more comfortable with the push and pull of debate on social media, may well feel similarly.

Still curious? Daniel W. Drezner is the author of The System Worked: How the World Stopped Another Great Depression.

Falsification: How to Destroy Incorrect Ideas

“The human mind is a lot like the human egg,
and the human egg has a shut-off device.
When one sperm gets in, it shuts down so the next one can’t get in.”

— Charlie Munger


Sir Karl Popper wrote that the nature of scientific thought is that we can never be sure of anything. The only way to test the validity of any theory is to try to prove it wrong, a process he labeled falsification. And it turns out we're quite bad at falsification.

When it comes to testing a theory, we don't instinctively try to find evidence that we're wrong. It's much easier and more mentally satisfying to find information that confirms our intuition. This is known as confirmation bias.

In his book How Children Succeed: Grit, Curiosity, and the Hidden Power of Character, Paul Tough tells the story of the English psychologist Peter Cathcart Wason, who came up with an “ingenious experiment to demonstrate our natural tendency to confirm rather than disprove our own ideas.”

Subjects were told that they would be given a series of three numbers that followed a certain rule known only to the experimenter. Their assignment was to figure out what the rule was, which they could do by offering the experimenter other strings of three numbers and asking him whether or not these new strings met the rule.

The string of numbers the subjects were given was quite simple:

2-4-6

Try it: What’s your first instinct about the rule governing these numbers? And what’s another string you might test with the experimenter in order to find out if your guess is right? If you’re like most people, your first instinct is that the rule is “ascending even numbers” or “numbers increasing by two.” And so you guess something like:

8-10-12

And the experimenter says, “Yes! That string of numbers also meets the rule.” And your confidence rises. To confirm your brilliance, you test one more possibility, just as due diligence, something like:

20-22-24

“Yes!” says the experimenter. Another surge of dopamine. And you proudly make your guess: “The rule is: even numbers, ascending in twos.” “No!” says the experimenter. It turns out that the rule is “any ascending numbers.” So 8-10-12 does fit the rule, it’s true, but so does 1-2-3. Or 4-23-512. The only way to win the game is to guess strings of numbers that would prove your beloved hypothesis wrong—and that is something each of us is constitutionally driven to avoid.

In the study, only one in five people was able to guess the correct rule.

And the reason we’re all so bad at games like this is the tendency toward confirmation bias: It feels much better to find evidence that confirms what you believe to be true than to find evidence that falsifies what you believe to be true. Why go out in search of disappointment?
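The logic of Wason's task is easy to simulate. A minimal Python sketch, assuming the hidden rule and hypothesis described above (the function names are invented): confirmatory probes can never separate “even numbers ascending by twos” from the true rule “any ascending numbers,” because both rules agree on every string the hypothesis generates; only a probe built to violate the hypothesis can expose it.

```python
# Wason's 2-4-6 task: the experimenter's hidden rule vs. the subject's
# hypothesis. Both agree on every confirmatory probe, so "Yes!" answers
# carry no information; only a probe designed to violate the hypothesis
# can distinguish the two.

def hidden_rule(a, b, c):
    return a < b < c  # the true rule: any ascending numbers

def my_hypothesis(a, b, c):
    return a % 2 == 0 and b == a + 2 and c == b + 2  # even, ascending by twos

confirmatory_probes = [(8, 10, 12), (20, 22, 24)]  # strings that fit my guess
falsifying_probe = (1, 2, 3)                       # built to break my guess

# Every confirmatory probe gets a "Yes!" from both rules -- no information.
all(hidden_rule(*p) == my_hypothesis(*p) for p in confirmatory_probes)  # True

# The falsifying probe fits the hidden rule but violates my hypothesis:
# the "Yes!" it earns is the only answer that proves my guess wrong.
hidden_rule(*falsifying_probe)    # True
my_hypothesis(*falsifying_probe)  # False
```

A subject who only ever offers strings their hypothesis predicts will hear “Yes!” forever and still announce the wrong rule; the winning strategy is to spend probes trying to lose.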

There is also a video explaining Wason's work.

A Wonderfully Simple Heuristic to Recognize Charlatans

While we can learn a lot from what successful people do in the mornings, as Nassim Taleb points out, we can learn a lot from what failed people do before breakfast too.

Inversion is actually one of the most powerful mental models in our arsenal. Not only does inversion help us innovate but it also helps us deal with uncertainty.

“It is in the nature of things,” says Charlie Munger, “that many hard problems are best solved when they are addressed backward.”

Sometimes we can't articulate what we want. Sometimes we don't know. Sometimes there is so much uncertainty that the best approach is to attempt to avoid certain outcomes rather than attempt to guide towards the ones we desire. In short, we don't always know what we want but we know what we don't want.

Avoiding stupidity is often easier than seeking brilliance.

“For the Arab scholar and religious leader Ali Bin Abi-Taleb (no relation), keeping one’s distance from an ignorant person is equivalent to keeping company with a wise man.”

The “apophatic,” writes Nassim Taleb in Antifragile, “focuses on what cannot be said directly in words, from the Greek apophasis (saying no, or mentioning without meaning).”

The method began as an avoidance of direct description, leading to a focus on negative description, what is called in Latin via negativa, the negative way, after theological traditions, particularly in the Eastern Orthodox Church. Via negativa does not try to express what God is— leave that to the primitive brand of contemporary thinkers and philosophasters with scientistic tendencies. It just lists what God is not and proceeds by the process of elimination.

Statues are carved by subtraction.

Michelangelo was asked by the pope about the secret of his genius, particularly how he carved the statue of David, largely considered the masterpiece of all masterpieces. His answer was: “It’s simple. I just remove everything that is not David.”

Where Is the Charlatan?

Recall that the interventionista focuses on positive action—doing. Just like positive definitions, we saw that acts of commission are respected and glorified by our primitive minds and lead to, say, naive government interventions that end in disaster, followed by generalized complaints about naive government interventions, as these, it is now accepted, end in disaster, followed by more naive government interventions. Acts of omission, not doing something, are not considered acts and do not appear to be part of one’s mission.

I have used all my life a wonderfully simple heuristic: charlatans are recognizable in that they will give you positive advice, and only positive advice, exploiting our gullibility and sucker-proneness for recipes that hit you in a flash as just obvious, then evaporate later as you forget them. Just look at the “how to” books with, in their title, “Ten Steps for—” (fill in: enrichment, weight loss, making friends, innovation, getting elected, building muscles, finding a husband, running an orphanage, etc.).

We learn the most from the negative.

[I]n practice it is the negative that’s used by the pros, those selected by evolution: chess grandmasters usually win by not losing; people become rich by not going bust (particularly when others do); religions are mostly about interdicts; the learning of life is about what to avoid. You reduce most of your personal risks of accident thanks to a small number of measures.

Skill doesn't always win.

In anything requiring a combination of skill and luck, the most skillful don't always win. That's one of the key messages of Michael Mauboussin's book The Success Equation: Untangling Skill and Luck in Business, Sports, and Investing. This is hard for us to swallow because we intuitively feel that success implies skill, just as we assume a good outcome must have come from a good decision. We can't predict whether a person who has skills will succeed, but Taleb argues that we can “pretty much predict” that a person without skills will eventually have their luck run out.

Subtractive Knowledge

Taleb argues that the greatest “and most robust contribution to knowledge consists in removing what we think is wrong—subtractive epistemology.” He continues that “we know a lot more about what is wrong than what is right.” What does not work, that is, negative knowledge, is more robust than positive knowledge. This is because a single counterexample is enough to overturn something we think we know, while no amount of confirmation can establish it beyond doubt.

There is a whole book on the half-life of what we consider to be 'knowledge or fact' called The Half-Life of Facts. Basically, because of our partial understanding of the world, which is constantly evolving, we believe things that are not true. That's not the only reason that we believe things that are not true, but it's a big one.

The thing is, we're not so smart. If I've only seen white swans, saying “all swans are white” may be consistent with my limited view of the world, but we can never be sure there are no black swans until we've seen every swan.

Or as Taleb puts it: “since one small observation can disprove a statement, while millions can hardly confirm it, disconfirmation is more rigorous than confirmation.”
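Taleb's asymmetry can be made concrete with a toy sketch (the data here is hypothetical, purely for illustration): a million white swans leave the claim merely unrefuted, while a single black swan settles the question for good.

```python
# Toy illustration of the confirmation/disconfirmation asymmetry.
# Observing "white" a million times never proves "all swans are white";
# one "black" observation disproves it conclusively.
observations = ["white"] * 1_000_000

def claim_refuted(swans):
    """Return True once any non-white swan has been observed."""
    return any(s != "white" for s in swans)

print(claim_refuted(observations))              # False: still only unrefuted
print(claim_refuted(observations + ["black"]))  # True: one swan disproves it
```

The millionth white swan adds almost nothing; the first black one changes everything.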

Most people attribute this philosophical argument to Karl Popper but Taleb dug up some evidence that it goes back to the “skeptical-empirical” medical schools of the post-classical era in the Eastern Mediterranean.

Being antifragile isn't about what you do, but rather what you avoid. Avoid fragility. Avoid stupidity. Don't be the sucker. Be like Darwin.