
Tag Archives: Science

Naval Ravikant on Reading, Happiness, Systems for Decision Making, Habits, Honesty and More

Naval Ravikant (@naval) is the CEO and co-founder of AngelList. He’s invested in more than 100 companies, including Uber, Twitter, Yammer, and many others.

Don’t worry, we’re not going to talk about early stage investing. Naval’s an incredibly deep thinker who challenges the status quo on so many things.

In this wide-ranging interview, we talk about reading, habits, decision-making, mental models, and life.

Just a heads up, this is the longest podcast I’ve ever done. While it felt like only thirty minutes, our conversation lasted over two hours!

If you’re like me, you’re going to take a lot of notes so grab a pen and paper. I left some white space on the transcript below in case you want to take notes in the margin.

Enjoy this amazing conversation.




Normally only members of our learning community have access to transcripts, however, we wanted to make this one open to everyone. Here's the complete transcript of the interview with Naval.

How To Mentally Overachieve — Charles Darwin’s Reflections On His Own Mind

We’ve written quite a bit about the marvelous British naturalist Charles Darwin, who with his Origin of Species created perhaps the most intense intellectual debate in human history, one which continues up to this day.

Darwin’s Origin was a courageous and detailed thought piece on the nature and development of biological species. It's the starting point for nearly all of modern biology.

But, as we’ve noted before, Darwin was not a man of pure IQ. He was not Isaac Newton, or Richard Feynman, or Albert Einstein — breezing through complex mathematical physics at a young age.

Charlie Munger thinks Darwin would have placed somewhere in the middle of a good private high school class. He was also in notoriously bad health for most of his adult life and, by his son’s estimation, a terrible sleeper. He really only worked a few hours a day in the many years leading up to the Origin of Species.

Yet his “thinking work” outclassed almost everyone. An incredible story.

In his autobiography, Darwin reflected on this peculiar state of affairs. What was he good at that led to the result? What was he so weak at? Why did he achieve better thinking outcomes? As he put it, his goal was to:

“Try to analyse the mental qualities and the conditions on which my success has depended; though I am aware that no man can do this correctly.”

In studying Darwin ourselves, we hope to better appreciate our own strengths and weaknesses, not to mention understand the working methods of a “mental overachiever.”

Let's explore what Darwin saw in himself.


1. He did not have a quick intellect or an ability to follow long, complex, or mathematical reasoning. He may have been a bit hard on himself, but Darwin realized that he wasn't a “5 second insight” type of guy (and let's face it, most of us aren't). His life also proves how little that trait matters if you're aware of it and counter-weight it with other methods.

I have no great quickness of apprehension or wit which is so remarkable in some clever men, for instance, Huxley. I am therefore a poor critic: a paper or book, when first read, generally excites my admiration, and it is only after considerable reflection that I perceive the weak points. My power to follow a long and purely abstract train of thought is very limited; and therefore I could never have succeeded with metaphysics or mathematics. My memory is extensive, yet hazy: it suffices to make me cautious by vaguely telling me that I have observed or read something opposed to the conclusion which I am drawing, or on the other hand in favour of it; and after a time I can generally recollect where to search for my authority. So poor in one sense is my memory, that I have never been able to remember for more than a few days a single date or a line of poetry.

2. He did not feel easily able to write clearly and concisely. He compensated by getting things down quickly and then coming back to them later, thinking them through again and again. Slow, methodical… and ridiculously effective: For those who haven't read it, the Origin of Species is extremely readable and clear, even now, 150 years later.

I have as much difficulty as ever in expressing myself clearly and concisely; and this difficulty has caused me a very great loss of time; but it has had the compensating advantage of forcing me to think long and intently about every sentence, and thus I have been led to see errors in reasoning and in my own observations or those of others.

There seems to be a sort of fatality in my mind leading me to put at first my statement or proposition in a wrong or awkward form. Formerly I used to think about my sentences before writing them down; but for several years I have found that it saves time to scribble in a vile hand whole pages as quickly as I possibly can, contracting half the words; and then correct deliberately. Sentences thus scribbled down are often better ones than I could have written deliberately.

3. He forced himself to be an incredibly effective and organized collector of information. Darwin's system of reading and indexing facts in large portfolios is worth emulating, as is the habit of taking down conflicting ideas immediately.

As in several of my books facts observed by others have been very extensively used, and as I have always had several quite distinct subjects in hand at the same time, I may mention that I keep from thirty to forty large portfolios, in cabinets with labelled shelves, into which I can at once put a detached reference or memorandum. I have bought many books, and at their ends I make an index of all the facts that concern my work; or, if the book is not my own, write out a separate abstract, and of such abstracts I have a large drawer full. Before beginning on any subject I look to all the short indexes and make a general and classified index, and by taking the one or more proper portfolios I have all the information collected during my life ready for use.

4. He had possibly the most valuable trait in any sort of thinker: A passionate interest in understanding reality and putting it in useful order in his head. This “Reality Orientation” is hard to measure and certainly does not show up on IQ tests, but probably determines, to some extent, success in life.

On the favourable side of the balance, I think that I am superior to the common run of men in noticing things which easily escape attention, and in observing them carefully. My industry has been nearly as great as it could have been in the observation and collection of facts. What is far more important, my love of natural science has been steady and ardent.

This pure love has, however, been much aided by the ambition to be esteemed by my fellow naturalists. From my early youth I have had the strongest desire to understand or explain whatever I observed,–that is, to group all facts under some general laws. These causes combined have given me the patience to reflect or ponder for any number of years over any unexplained problem. As far as I can judge, I am not apt to follow blindly the lead of other men. I have steadily endeavoured to keep my mind free so as to give up any hypothesis, however much beloved (and I cannot resist forming one on every subject), as soon as facts are shown to be opposed to it.

Indeed, I have had no choice but to act in this manner, for with the exception of the Coral Reefs, I cannot remember a single first-formed hypothesis which had not after a time to be given up or greatly modified. This has naturally led me to distrust greatly deductive reasoning in the mixed sciences. On the other hand, I am not very sceptical—a frame of mind which I believe to be injurious to the progress of science. A good deal of scepticism in a scientific man is advisable to avoid much loss of time, but I have met with not a few men, who, I feel sure, have often thus been deterred from experiment or observations, which would have proved directly or indirectly serviceable.


Therefore my success as a man of science, whatever this may have amounted to, has been determined, as far as I can judge, by complex and diversified mental qualities and conditions. Of these, the most important have been—the love of science—unbounded patience in long reflecting over any subject—industry in observing and collecting facts—and a fair share of invention as well as of common sense.

5. Most inspirational to us of average intellect, he outperformed his own mental aptitude with these good habits, surprising even himself with the results.

With such moderate abilities as I possess, it is truly surprising that I should have influenced to a considerable extent the belief of scientific men on some important points.


Still Interested? Read his autobiography, The Origin of Species, or check out David Quammen's wonderful short biography of the most important period of Darwin's life. Also, if you missed it, check out our prior post on Darwin's Golden Rule.

The Island of Knowledge: Science and the Meaning of Life

“As the Island of Knowledge grows, so do the shores of our ignorance—the boundary between the known and unknown. Learning more about the world doesn't lead to a point closer to a final destination—whose existence is nothing but a hopeful assumption anyways—but to more questions and mysteries. The more we know, the more exposed we are to our ignorance, and the more we know to ask.”


Common across human history is our longing to better understand the world we live in, and how it works. But how much can we actually know about the world?

In his book The Island of Knowledge: The Limits of Science and the Search for Meaning, physicist Marcelo Gleiser traces the progress of modern science in its pursuit of the most fundamental questions about existence, the origin of the universe, and the limits of knowledge.

What we know of the world is limited by what we can see and what we can describe, but our tools have evolved over the years to reveal ever more pleats into our fabric of knowledge. Gleiser celebrates this persistent struggle to understand our place in the world and travels our history from ancient knowledge to our current understanding.

While science is not the only way to see and describe the world we live in, it is a response to the questions on who we are, where we are, and how we got here. “Science speaks directly to our humanity, to our quest for light, ever more light.”

To move forward, science needs to fail, which runs counter to our human desire for certainty. “We are surrounded by horizons, by incompleteness.” Rather than give up, we struggle along a scale of progress. What makes us human is this journey to understand more about the mysteries of the world and explain them with reason. This is the core of our nature.

While the pursuit is never ending, the curious journey offers insight not just into the natural world, but insight into ourselves.

“What I see in Nature is a magnificent structure that we can comprehend only very imperfectly, and that must fill a thinking person with a feeling of humility.”
— Albert Einstein

We tend to think that what we see is all there is — that there is nothing we cannot see. We know it isn't true when we stop and think, yet we still get lulled into a trap of omniscience.

Science is thus limited, offering only part of the story — the part we can see and measure. The other part remains beyond our immediate reach.

“What we see of the world,” Gleiser begins, “is only a sliver of what's out there.”

There is much that is invisible to the eye, even when we augment our sensorial perception with telescopes, microscopes, and other tools of exploration. Like our senses, every instrument has a range. Because much of Nature remains hidden from us, our view of the world is based only on the fraction of reality that we can measure and analyze. Science, as our narrative describing what we see and what we conjecture exists in the natural world, is thus necessarily limited, telling only part of the story. … We strive toward knowledge, always more knowledge, but must understand that we are, and will remain, surrounded by mystery. This view is neither antiscientific nor defeatist. … Quite the contrary, it is the flirting with this mystery, the urge to go beyond the boundaries of the known, that feeds our creative impulse, that makes us want to know more.

While we may broadly understand the map of what we call reality, we fail to understand its terrain. Reality, Gleiser argues, “is an ever-shifting mosaic of ideas.”


The incompleteness of knowledge and the limits of our scientific worldview only add to the richness of our search for meaning, as they align science with our human fallibility and aspirations.

What we call reality is a (necessarily) limited synthesis. It is certainly our reality, as it must be, but it is not the entire reality itself:

My perception of the world around me, as cognitive neuroscience teaches us, is synthesized within different regions of my brain. What I call reality results from the integrated sum of countless stimuli collected through my five senses, brought from the outside into my head via my nervous system. Cognition, the awareness of being here now, is a fabrication of a vast set of chemicals flowing through myriad synaptic connections between my neurons. … We have little understanding as to how exactly this neuronal choreography engenders us with a sense of being. We go on with our everyday activities convinced that we can separate ourselves from our surroundings and construct an objective view of reality.

The brain is a great filtering tool, deaf and blind to vast amounts of information around us that offer no evolutionary advantage. Part of it we can see and simply ignore. Other parts, like dust particles and bacteria, go unseen because of limitations of our sensory tools.

As the Fox said to the Little Prince in Antoine de Saint-Exupery's fable, “What is essential is invisible to the eye.” There is no better example than oxygen.

Science has expanded our view. Our measurement tools and instruments can see bacteria and radiation, subatomic particles and more. However precise these tools have become, their view is still limited.

There is no such thing as an exact measurement. Every measurement must be stated within its precision and quoted together with “error bars” estimating the magnitude of errors. High-precision measurements are simply measurements with small error bars or high confidence levels; there are no perfect, zero-error measurements.


Technology limits how deeply experiments can probe into physical reality. That is to say, machines determine what we can measure and thus what scientists can learn about the Universe and ourselves. Being human inventions, machines depend on our creativity and available resources. When successful, they measure with ever-higher accuracy and on occasion may also reveal the unexpected.

“All models are wrong, some are useful.”
— George Box

What we know about the world is only what we can detect and measure — even if we improve our “detecting and measuring” as time goes along. And thus we base our conclusions about reality on what we can currently “see.”

We see much more than Galileo, but we can't see it all. And this restriction is not limited to measurements: speculative theories and models that extrapolate into unknown realms of physical reality must also rely on current knowledge. When there is no data to guide intuition, scientists impose a “compatibility” criterion: any new theory attempting to extrapolate beyond tested ground should, in the proper limit, reproduce current knowledge.


If large portions of the world remain unseen or inaccessible to us, we must consider the meaning of the word “reality” with great care. We must consider whether there is such a thing as an “ultimate reality” out there — the final substrate of all there is — and, if so, whether we can ever hope to grasp it in its totality.


We thus must ask whether grasping reality's most fundamental nature is just a matter of pushing the limits of science or whether we are being quite naive about what science can and can't do.

Here is another way of thinking about this: if someone perceives the world through her senses only (as most people do), and another amplifies her perception through the use of instrumentation, who can legitimately claim to have a truer sense of reality? One “sees” microscopic bacteria, faraway galaxies, and subatomic particles, while the other is completely blind to such entities. Clearly they “see” different things and—if they take what they see literally—will conclude that the world, or at least the nature of physical reality, is very different.

Asking who is right misses the point, although surely the person using tools can see further into the nature of things. Indeed, to see more clearly what makes up the world and, in the process to make more sense of it and ourselves is the main motivation to push the boundaries of knowledge. … What we call “real” is contingent on how deeply we are able to probe reality. Even if there is such thing as the true or ultimate nature of reality, all we have is what we can know of it.


Our perception of what is real evolves with the instruments we use to probe Nature. Gradually, some of what was unknown becomes known. For this reason, what we call “reality” is always changing. … The version of reality we might call “true” at one time will not remain true at another. … Given that our instruments will always evolve, tomorrow's reality will necessarily include entities not known to exist today. … More to the point, as long as technology advances—and there is no reason to suppose that it will ever stop advancing for as long as we are around—we cannot foresee an end to this quest. The ultimate truth is elusive, a phantom.

Gleiser makes his point with a beautiful metaphor. The Island of Knowledge.

Consider, then, the sum total of our accumulated knowledge as constituting an island, which I call the “Island of Knowledge.” … A vast ocean surrounds the Island of Knowledge, the unexplored ocean of the unknown, hiding countless tantalizing mysteries.

The Island of Knowledge grows as we learn more about the world and ourselves. And as the island grows, so too “do the shores of our ignorance—the boundary between the known and unknown.”

Learning more about the world doesn't lead to a point closer to a final destination—whose existence is nothing but a hopeful assumption anyways—but to more questions and mysteries. The more we know, the more exposed we are to our ignorance, and the more we know to ask.

As we move forward we must remember that despite our quest, the shores of our ignorance grow as the Island of Knowledge grows. And while we will struggle with the fact that not all questions will have answers, we will continue to progress. “It is also good to remember,” Gleiser writes, “that science only covers part of the Island.”

Richard Feynman has pointed out before that science can only answer the subset of questions that go, roughly, “If I do this, what will happen?” Answers to questions like Why do the rules operate that way? and Should I do it? are not really questions of a scientific nature — they are moral, human questions, if they are knowable at all.

There are many ways of understanding and knowing that should, ideally, feed each other. “We are,” Gleiser concludes, “multidimensional creatures and search for answers in many, complementary ways. Each serves a purpose and we need them all.”

“The quest must go on. The quest is what makes us matter: to search for more answers, knowing that the significant ones will often generate surprising new questions.”

The Island of Knowledge is a wide-ranging tour through scientific history from planetary motions to modern scientific theories and how they affect our ideas on what is knowable.

What’s So Significant About Significance?


One of my favorite studies of all time took the 50 most common ingredients from a cookbook and searched the literature for a connection to cancer: 72% had a study linking them to increased or decreased risk of cancer. (Here's the link for the interested.)

Meta-analyses (studies examining multiple studies) quashed the effect pretty seriously, but how many of those single studies were probably reported on in multiple media outlets, permanently causing changes in readers' dietary habits? (We know from studying juries that people are often unable to “forget” things that are subsequently proven false or misleading — misleading data is sticky.)

The phrase “statistically significant” is one of the more unfortunately misleading ones of our time. The word significant in the statistical sense — meaning distinguishable from random chance — does not carry the same meaning in common parlance, in which we mean distinguishable from something that does not matter. We'll get to what that means.

Confusing the two gets at the heart of a lot of misleading headlines and it's worth a brief look into why they don't mean the same thing, so you can stop being scared that everything you eat or do is giving you cancer.


The term statistical significance is used to denote when an effect is found to be extremely unlikely to have occurred by chance. In order to make that determination, we have to propose a null hypothesis to be rejected. Let's say we propose that eating an apple a day reduces the incidence of colon cancer. The “null hypothesis” here would be that eating an apple a day does nothing to the incidence of colon cancer — that we'd be equally likely to get colon cancer if we ate that daily apple.

When we analyze the data of our study, we're technically not looking to say “Eating an apple a day prevents colon cancer” — that's a bit of a misconception. What we're actually doing is an inversion: we want the data to provide us with sufficient weight to reject the idea that apples have no effect on colon cancer.

And even when that happens, it's not an all-or-nothing determination. What we're actually saying is “It would be extremely unlikely for the data we have, which shows a daily apple reduces colon cancer by 50%, to have popped up by chance. Not impossible, but very unlikely.” The world does not quite allow us to have absolute conviction.

How unlikely? The currently accepted standard in many fields is 5% — there is a less than 5% chance the data would come up this way randomly. That threshold immediately tells you that roughly 1 in 20 “significant” results can be expected to be a fluke, but alas, that is where we're at. (The 5% p-value cutoff, and the associated problem of p-hacking, has been the subject of some intense debate, but we won't deal with that here.)

We'll get to why “significance can be insignificant,” and why that's so important, in a moment. But let's make sure we're fully on board with the importance of sorting chance events from real ones with another illustration, this one outlined by Jordan Ellenberg in his wonderful book How Not to Be Wrong. Pay close attention:

Suppose we're in null hypothesis land, where the chance of death is exactly the same (say, 10%) for the fifty patients who got your drug and the fifty who got [a] placebo. But that doesn't mean that five of the drug patients die and five of the placebo patients die. In fact, the chance that exactly five of the drug patients die is about 18.5%; not very likely, just as it's not very likely that a long series of coin tosses would yield precisely as many heads as tails. In the same way, it's not very likely that exactly the same number of drug patients and placebo patients expire during the course of the trial. I computed:

13.3% chance equally many drug and placebo patients die
43.3% chance fewer placebo patients than drug patients die
43.3% chance fewer drug patients than placebo patients die

Seeing better results among the drug patients than the placebo patients says very little, since this isn't at all unlikely, even under the null hypothesis that your drug doesn't work.
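Ellenberg's three probabilities can be checked exactly with a few lines of Python. This is a sketch using only the numbers from the passage above (50 patients per arm, 10% chance of death under the null); the function names are my own:

```python
from math import comb

def binom_pmf(n, k, p):
    # Probability of exactly k deaths among n patients, each dying with probability p
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 50, 0.10  # 50 patients per arm, 10% mortality under the null hypothesis

# P(equal deaths) = sum over k of P(drug arm = k) * P(placebo arm = k)
p_equal = sum(binom_pmf(n, k, p) ** 2 for k in range(n + 1))

# By symmetry, the two "fewer" outcomes split the remaining probability evenly
p_fewer = (1 - p_equal) / 2

print(f"equal deaths:      {p_equal:.3f}")  # ≈ 0.134 (Ellenberg's 13.3%)
print(f"fewer drug deaths: {p_fewer:.3f}")  # ≈ 0.433
```

The exact sum lands a hair above 13.3%; Ellenberg rounded down, but the point stands either way.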

But things are different if the drug patients do a lot better. Suppose five of the placebo patients die during the trial, but none of the drug patients do. If the null hypothesis is right, both classes of patients should have a 90% chance of survival. But in that case, it's highly unlikely that all fifty of the drug patients would survive. The first of the drug patients has a 90% chance; now the chance that not only the first but also the second patient survives is 90% of that 90%, or 81%–and if you want the third patient to survive as well, the chance of that happening is only 90% of that 81%, or 72.9%. Each new patient whose survival you stipulate shaves a little off the chances, and by the end of the process, where you're asking about the probability that all fifty will survive, the slice of probability that remains is pretty slim:

(0.9) x (0.9) x (0.9) x … fifty times! … x (0.9) x (0.9) = 0.00515 …

Under the null hypothesis, there's only one chance in two hundred of getting results this good. That's much more compelling. If I claim I can make the sun come up with my mind, and it does, you shouldn't be impressed by my powers; but if I claim I can make the sun not come up, and it doesn't, then I've demonstrated an outcome very unlikely under the null hypothesis, and you'd best take notice.
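The “one chance in two hundred” arithmetic from the quote above is worth doing yourself; it is a one-liner:

```python
# Each of 50 patients independently survives with probability 0.9 under the null
p_all_survive = 0.9 ** 50

print(f"{p_all_survive:.5f}")                 # 0.00515
print(f"about 1 in {1 / p_all_survive:.0f}")  # about 1 in 194
```

That 1-in-194 is far below the usual 5% threshold, which is why the all-fifty-survive result lets us reject the null hypothesis with confidence.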

So you see, all this null hypothesis stuff is pretty important because what you want to know is if an effect is really “showing up” or if it just popped up by chance.

A final illustration should make it clear:

Imagine you were flipping coins with a particular strategy of getting more heads, and after 30 flips you had 18 heads and 12 tails. Would you call it a miracle? Probably not — you'd realize immediately that it's perfectly possible for an 18/12 ratio to happen by chance. You wouldn't write an article in U.S. News and World Report proclaiming you'd figured out coin flipping.

Now let's say instead you flipped the coin 30,000 times and got 18,000 heads and 12,000 tails… well, then your case for statistical significance would be pretty tight. It would be nearly impossible to get that result by chance — your strategy must have something to it. The null hypothesis of “My coin flipping technique is no better than the usual one” would be easy to reject! (The p-value here would be orders of magnitude less than 5%, by the way.)
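Both coin-flip scenarios can be checked under the null hypothesis of a fair coin. This sketch computes the 30-flip case exactly and sizes the 30,000-flip case via a normal approximation; the 18-of-30 and 18,000-of-30,000 figures come from the text above:

```python
from math import comb, sqrt

def p_at_least(n, k):
    # Exact probability of at least k heads in n fair coin flips
    return sum(comb(n, j) for j in range(k, n + 1)) / 2 ** n

# 18 heads in 30 flips: entirely unremarkable
print(f"{p_at_least(30, 18):.3f}")  # 0.181 -- happens about 1 time in 5 by chance

# 18,000 heads in 30,000 flips: measure the distance from the mean in standard deviations
n = 30_000
z = (18_000 - n / 2) / sqrt(n * 0.25)  # variance of a fair-coin head count is n/4
print(f"{z:.1f} standard deviations")  # 34.6 -- a p-value astronomically below 5%
```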

That's what this whole business is about.


Now that we've got this idea down, we come to the big question that statistical significance cannot answer: Even if the result is distinguishable from chance, does it actually matter?

Statistical significance cannot tell you whether the result is worth paying attention to — even if you get the p-value down to a minuscule number, increasing your confidence that what you saw was not due to chance. 

In How Not to be Wrong, Ellenberg provides a perfect example:

A 1995 study published in a British journal indicated that a new birth control pill doubled the risk of venous thrombosis (potentially killer blood clot) in its users. Predictably, 1.5 million British women freaked out, and some meaningfully large percentage of them stopped taking the pill. In 1996, 26,000 more babies were born than the previous year and there were 13,600 more abortions. Whoops!

So what, right? Lots of mothers' lives were saved, right?

Not really. The initial probability of a woman getting a venous thrombosis with any old birth control pill was about 1 in 7,000, or about 0.01%. That means that the “Killer Pill,” even if it was indeed increasing thrombosis risk, only increased that risk to 2 in 7,000, or about 0.02%! Is that worth rearranging your life for? Probably not.

Ellenberg makes the excellent point that, at least in the case of health, the null hypothesis is unlikely to be right in most cases! The body is a complex system — of course what we put in it affects how it functions in some direction or another. It's unlikely to be absolute zero.

But numerical and scale-based thinking, indispensable for anyone looking to not be a sucker, tells us that we must distinguish between small and meaningless effects (like the connection between almost all individual foods and cancer so far) and real ones (like the connection between smoking and lung cancer).

And now we arrive at the problem of “significance” — even if an effect is really happening, it still may not matter!  We must learn to be wary of “relative” statistics (i.e., “the risk has doubled”), and look to favor “absolute” statistics, which tell us whether the thing is worth worrying about at all.
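The pill example reduces to two lines of arithmetic, and putting the relative and absolute numbers side by side shows why one alarms while the other reassures (the 1-in-7,000 baseline comes from the passage above):

```python
baseline_risk = 1 / 7_000  # thrombosis risk on the older pill
new_risk = 2 / 7_000       # risk on the "Killer Pill"

relative_increase = new_risk / baseline_risk  # the headline number
absolute_increase = new_risk - baseline_risk  # the number that matters

print(f"relative: {relative_increase:.0f}x")  # 2x -- "risk doubled!"
print(f"absolute: {absolute_increase:.4%}")   # 0.0143% -- about 1 extra case per 7,000 women
```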

So we have two important ideas:

A. Just like coin flips, many results are perfectly possible by chance. We use the concept of “statistical significance” to figure out how likely it is that the effect we're seeing is real and not just a random illusion, like seeing 18 heads in 30 coin tosses.

B. Even if it is really happening, it still may be unimportant – an effect so insignificant in real terms that it's not worth our attention.

These two ideas should combine to raise our level of skepticism when hearing about groundbreaking new studies! (A third and equally important problem is that correlation is not causation, a common problem in many fields of science, including nutritional epidemiology. Just because x is associated with y does not mean that x is causing y.)

Tread carefully and keep your thinking cap on.


Still Interested? Read Ellenberg's great book to get your head working correctly, and check out our posts on Bayesian thinking, another very useful statistical tool, and learn a little about how we distinguish science from pseudoscience.

A Few Useful Mental Tools from Richard Feynman

We've covered the brilliant physicist Richard Feynman many times here before. He was a genius. A true genius. But there have been many geniuses — physics has been fortunate to attract some of them — and few of them are as well known as Feynman. Why is Feynman so well known? It's likely because he had tremendous range outside of pure science, and although he won a Nobel Prize for his work in quantum mechanics, he's probably best known for other things, primarily his wonderful ability to explain and teach.

This ability was on display in a series of non-technical lectures in 1963, memorialized in a short book called The Meaning of it All: Thoughts of a Citizen Scientist. The lectures are a wonderful example of how well Feynman's brain worked outside of physics, talking through basic reasoning and some of the problems of his day.

Particularly useful are a series of “tricks of the trade” he gives in a section called This Unscientific Age. These tricks show Feynman taking the method of thought he learned in pure science and applying it to the more mundane topics most of us have to deal with every day. They're wonderfully instructive. Let's check them out.

Mental Tools from Richard Feynman

Before we start, it's worth noting that Feynman takes pains to mention that not everything needs to be considered with scientific accuracy; there's no reason to apply these tools where the question isn't a scientific one. Let's start with a deep breath:

Now, that there are unscientific things is not my grief. That's a nice word. I mean, that is not what I am worrying about, that there are unscientific things. That something is unscientific is not bad; there is nothing the matter with it. It is just unscientific. And scientific is limited, of course, to those things that we can tell about by trial and error. For example, there is the absurdity of the young these days chanting things about purple people eaters and hound dogs, something that we cannot criticize at all if we belong to the old flat foot floogie and a floy floy or the music goes down and around. Sons of mothers who sang about “come, Josephine, in my flying machine,” which sounds just about as modern as “I'd like to get you on a slow boat to China.” So in life, in gaiety, in emotion, in human pleasures and pursuits, and in literature and so on, there is no need to be scientific, there is no reason to be scientific. One must relax and enjoy life. That is not the criticism. That is not the point.

As we enter the realm of “knowable” things in a scientific sense, the first trick has to do with deciding whether someone truly knows their stuff or is mimicking:

The first one has to do with whether a man knows what he is talking about, whether what he says has some basis or not. And my trick that I use is very easy. If you ask him intelligent questions—that is, penetrating, interested, honest, frank, direct questions on the subject, and no trick questions—then he quickly gets stuck. It is like a child asking naive questions. If you ask naive but relevant questions, then almost immediately the person doesn't know the answer, if he is an honest man. It is important to appreciate that.

And I think that I can illustrate one unscientific aspect of the world which would be probably very much better if it were more scientific. It has to do with politics. Suppose two politicians are running for president, and one goes through the farm section and is asked, “What are you going to do about the farm question?” And he knows right away— bang, bang, bang.

Now he goes to the next campaigner who comes through. “What are you going to do about the farm problem?” “Well, I don't know. I used to be a general, and I don't know anything about farming. But it seems to me it must be a very difficult problem, because for twelve, fifteen, twenty years people have been struggling with it, and people say that they know how to solve the farm problem. And it must be a hard problem. So the way that I intend to solve the farm problem is to gather around me a lot of people who know something about it, to look at all the experience that we have had with this problem before, to take a certain amount of time at it, and then to come to some conclusion in a reasonable way about it. Now, I can't tell you ahead of time what conclusion, but I can give you some of the principles I'll try to use—not to make things difficult for individual farmers, if there are any special problems we will have to have some way to take care of them,” etc., etc., etc.

That's a wonderfully useful way to figure out whether someone is Max Planck or the chauffeur.

The second trick regards how to deal with uncertainty:

People say to me, “Well, how can you teach your children what is right and wrong if you don't know?” Because I'm pretty sure of what's right and wrong. I'm not absolutely sure; some experiences may change my mind. But I know what I would expect to teach them. But, of course, a child won't learn what you teach him.

I would like to mention a somewhat technical idea, but it's the way, you see, we have to understand how to handle uncertainty. How does something move from being almost certainly false to being almost certainly true? How does experience change? How do you handle the changes of your certainty with experience? And it's rather complicated, technically, but I'll give a rather simple, idealized example.

You have, we suppose, two theories about the way something is going to happen, which I will call “Theory A” and “Theory B.” Now it gets complicated. Theory A and Theory B. Before you make any observations, for some reason or other, that is, your past experiences and other observations and intuition and so on, suppose that you are very much more certain of Theory A than of Theory B—much more sure. But suppose that the thing that you are going to observe is a test. According to Theory A, nothing should happen. According to Theory B, it should turn blue. Well, you make the observation, and it turns sort of a greenish. Then you look at Theory A, and you say, “It's very unlikely,” and you turn to Theory B, and you say, “Well, it should have turned sort of blue, but it wasn't impossible that it should turn sort of greenish color.” So the result of this observation, then, is that Theory A is getting weaker, and Theory B is getting stronger. And if you continue to make more tests, then the odds on Theory B increase. Incidentally, it is not right to simply repeat the same test over and over and over and over, no matter how many times you look and it still looks greenish, you haven't made up your mind yet. But if you find a whole lot of other things that distinguish Theory A from Theory B that are different, then by accumulating a large number of these, the odds on Theory B increase.

Feynman is talking about Grey Thinking here, the ability to put things on a gradient from “probably true” to “probably false” and how we deal with that uncertainty. He isn't proposing a method of figuring out absolute, doctrinaire truth.

Another term for what he's proposing is Bayesian updating — starting with a priori odds, based on earlier understanding, and “updating” the odds of something based on what you learn thereafter. An extremely useful tool.
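To make the arithmetic of Bayesian updating concrete, here's a minimal Python sketch applied to Feynman's Theory A / Theory B story. The prior odds and likelihoods are made-up illustrative numbers, not anything from the lecture:

```python
# A sketch of Bayesian updating for Feynman's Theory A / Theory B story.
# All numbers here are illustrative guesses, not from the lecture.

def update_odds(prior_odds_a, likelihood_a, likelihood_b):
    """Multiply the prior odds on A (versus B) by the likelihood ratio."""
    return prior_odds_a * (likelihood_a / likelihood_b)

# Start out much more certain of Theory A than Theory B: odds 10 to 1.
odds_a = 10.0

# The test turns sort of greenish. Under Theory A ("nothing should
# happen") that is very unlikely; under Theory B ("it should turn
# blue") it is merely somewhat unlikely.
p_greenish_given_a = 0.02
p_greenish_given_b = 0.30

odds_a = update_odds(odds_a, p_greenish_given_a, p_greenish_given_b)
print(round(odds_a, 2))  # → 0.67: Theory A is getting weaker

# Each *independent* test favoring B shifts the odds further, which is
# why Feynman warns against simply repeating the same test.
```

Note that a single surprising observation doesn't flip certainty to zero or one; it just multiplies the odds, which is exactly the gradient from "probably false" to "probably true" Feynman describes.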

Feynman's third trick is the realization that as we investigate whether something is true or not, new evidence and new methods of experimentation should show the effect getting stronger and stronger, not weaker. He uses an excellent example here by analyzing mental telepathy:

I give an example. A professor, I think somewhere in Virginia, has done a lot of experiments for a number of years on the subject of mental telepathy, the same kind of stuff as mind reading. In his early experiments the game was to have a set of cards with various designs on them (you probably know all this, because they sold the cards and people used to play this game), and you would guess whether it's a circle or a triangle and so on while someone else was thinking about it. You would sit and not see the card, and he would see the card and think about the card and you'd guess what it was. And in the beginning of these researches, he found very remarkable effects. He found people who would guess ten to fifteen of the cards correctly, when it should be on the average only five. More even than that. There were some who would come very close to a hundred percent in going through all the cards. Excellent mind readers.

A number of people pointed out a set of criticisms. One thing, for example, is that he didn't count all the cases that didn't work. And he just took the few that did, and then you can't do statistics anymore. And then there were a large number of apparent clues by which signals inadvertently, or advertently, were being transmitted from one to the other.

Various criticisms of the techniques and the statistical methods were made by people. The technique was therefore improved. The result was that, although five cards should be the average, it averaged about six and a half cards over a large number of tests. Never did he get anything like ten or fifteen or twenty-five cards. Therefore, the phenomenon is that the first experiments are wrong. The second experiments proved that the phenomenon observed in the first experiment was nonexistent. The fact that we have six and a half instead of five on the average now brings up a new possibility, that there is such a thing as mental telepathy, but at a much lower level. It's a different idea, because, if the thing was really there before, having improved the methods of experiment, the phenomenon would still be there. It would still be fifteen cards. Why is it down to six and a half? Because the technique improved. Now it still is that the six and a half is a little bit higher than the average of statistics, and various people criticized it more subtly and noticed a couple of other slight effects which might account for the results.

It turned out that people would get tired during the tests, according to the professor. The evidence showed that they were getting a little bit lower on the average number of agreements. Well, if you take out the cases that are low, the laws of statistics don't work, and the average is a little higher than the five, and so on. So if the man was tired, the last two or three were thrown away. Things of this nature were improved still further. The results were that mental telepathy still exists, but this time at 5.1 on the average, and therefore all the experiments which indicated 6.5 were false. Now what about the five? . . . Well, we can go on forever, but the point is that there are always errors in experiments that are subtle and unknown. But the reason that I do not believe that the researchers in mental telepathy have led to a demonstration of its existence is that as the techniques were improved, the phenomenon got weaker. In short, the later experiments in every case disproved all the results of the former experiments. If remembered that way, then you can appreciate the situation.

This echoes Feynman's dictum about not fooling oneself: We must refine our process for probing and experimenting if we're to get at real truth, always watching out for little troubles. Otherwise, we torture the world so that results fit our expectations. If we carefully refine and re-test and the effect gets weaker every time, it's likely not true, or at least not of the magnitude originally hoped for.
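To see how discarding "bad" runs manufactures an effect out of pure chance, here's a small simulation in the spirit of the card-guessing experiments. The numbers and the cutoff are made up for illustration; this is not the professor's data:

```python
import random

# A toy version of the card-guessing experiment: pure chance, no
# telepathy. Dropping the "tired" low-scoring sessions still pushes
# the average above the chance level of 5.
random.seed(0)

def run_session(n_cards=25, n_symbols=5):
    """One session: guess blindly; expected hits = 25 / 5 = 5."""
    return sum(1 for _ in range(n_cards)
               if random.randrange(n_symbols) == random.randrange(n_symbols))

sessions = [run_session() for _ in range(10_000)]
overall = sum(sessions) / len(sessions)

# "The subject was tired": throw away sessions scoring below 4.
kept = [s for s in sessions if s >= 4]
selected = sum(kept) / len(kept)

print(round(overall, 2))   # close to the chance level of 5
print(round(selected, 2))  # noticeably above 5, from selection alone
```

With no telepathy anywhere in the code, the trimmed average still sits above chance, which is precisely the subtle error Feynman says the critics kept finding.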

The fourth trick is to ask the right question, which is not “Could this be the case?” but “Is this actually the case?” Many get so caught up with the former that they forget to ask the latter:

That brings me to the fourth kind of attitude toward ideas, and that is that the problem is not what is possible. That's not the problem. The problem is what is probable, what is happening. It does no good to demonstrate again and again that you can't disprove that this could be a flying saucer. We have to guess ahead of time whether we have to worry about the Martian invasion. We have to make a judgment about whether it is a flying saucer, whether it's reasonable, whether it's likely. And we do that on the basis of a lot more experience than whether it's just possible, because the number of things that are possible is not fully appreciated by the average individual. And it is also not clear, then, to them how many things that are possible must not be happening. That it's impossible that everything that is possible is happening. And there is too much variety, so most likely anything that you think of that is possible isn't true. In fact that's a general principle in physics theories: no matter what a guy thinks of, it's almost always false. So there have been five or ten theories that have been right in the history of physics, and those are the ones we want. But that doesn't mean that everything's false. We'll find out.

The fifth trick is a very, very common one, even 50 years after Feynman pointed it out. You cannot judge the probability of something happening after it's already happened. That's cherry-picking. You have to run the experiment forward for it to mean anything:

I now turn to another kind of principle or idea, and that is that there is no sense in calculating the probability or the chance that something happens after it happens. A lot of scientists don't even appreciate this. In fact, the first time I got into an argument over this was when I was a graduate student at Princeton, and there was a guy in the psychology department who was running rat races. I mean, he has a T-shaped thing, and the rats go, and they go to the right, and the left, and so on. And it's a general principle of psychologists that in these tests they arrange so that the odds that the things that happen happen by chance is small, in fact, less than one in twenty. That means that one in twenty of their laws is probably wrong. But the statistical ways of calculating the odds, like coin flipping if the rats were to go randomly right and left, are easy to work out.

This man had designed an experiment which would show something which I do not remember, if the rats always went to the right, let's say. I can't remember exactly. He had to do a great number of tests, because, of course, they could go to the right accidentally, so to get it down to one in twenty by odds, he had to do a number of them. And it's hard to do, and he did his number. Then he found that it didn't work. They went to the right, and they went to the left, and so on. And then he noticed, most remarkably, that they alternated, first right, then left, then right, then left. And then he ran to me, and he said, “Calculate the probability for me that they should alternate, so that I can see if it is less than one in twenty.” I said, “It probably is less than one in twenty, but it doesn't count.”

He said, “Why?” I said, “Because it doesn't make any sense to calculate after the event. You see, you found the peculiarity, and so you selected the peculiar case.”

For example, I had the most remarkable experience this evening. While coming in here, I saw license plate ANZ 912. Calculate for me, please, the odds that of all the license plates in the state of Washington I should happen to see ANZ 912. Well, it's a ridiculous thing. And, in the same way, what he must do is this: The fact that the rat directions alternate suggests the possibility that rats alternate. If he wants to test this hypothesis, one in twenty, he cannot do it from the same data that gave him the clue. He must do another experiment all over again and then see if they alternate. He did, and it didn't work.
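A quick simulation makes the license-plate point vivid. Everything here is illustrative: twenty imaginary rats, random choices, no real data:

```python
import random

# Illustrating why post-hoc probabilities prove nothing: twenty
# imaginary rats choose left (0) or right (1) at random.
random.seed(1)
first_run = [random.randrange(2) for _ in range(20)]

# After the fact, the probability of the exact sequence we observed is
# 2**-20 -- astronomically "significant" -- no matter what it was.
# That is Feynman's license-plate point.
p_exact = 0.5 ** len(first_run)
print(p_exact < 1 / 20)  # True for *every* possible sequence

def alternates(seq):
    """The pattern noticed after the fact: right, left, right, left."""
    return all(a != b for a, b in zip(seq, seq[1:]))

# The honest procedure: treat the pattern as a hypothesis and test it
# on fresh data, as Feynman's psychologist finally did.
second_run = [random.randrange(2) for _ in range(20)]
print(alternates(second_run))  # almost certainly False: chance is 2**-19
```

Any particular sequence looks impossibly unlikely in hindsight; only a prediction checked against new data tells you anything.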

The sixth trick is one that's familiar to almost all of us, yet almost all of us forget about every day: The plural of anecdote is not data. We must use proper statistical sampling to know whether or not we know what we're talking about:

The next kind of technique that's involved is statistical sampling. I referred to that idea when I said they tried to arrange things so that they had one in twenty odds. The whole subject of statistical sampling is somewhat mathematical, and I won't go into the details. The general idea is kind of obvious. If you want to know how many people are taller than six feet tall, then you just pick a hundred people out at random, and you see that maybe forty of them are more than six feet, so you guess that maybe forty percent of everybody is. Sounds stupid.

Well, it is and it isn't. If you pick the hundred out by seeing which ones come through a low door, you're going to get it wrong. If you pick the hundred out by looking at your friends you'll get it wrong because they're all in one place in the country. But if you pick out a way that as far as anybody can figure out has no connection with their height at all, then if you find forty out of a hundred, then, in a hundred million there will be more or less forty million. How much more or how much less can be worked out quite accurately. In fact, it turns out that to be more or less correct to 1 percent, you have to have 10,000 samples. People don't realize how difficult it is to get the accuracy high. For only 1 or 2 percent you need 10,000 tries.
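Feynman's claim that roughly 10,000 samples are needed for 1 percent accuracy follows from the standard error of a proportion. Here's a short sketch using the common two-standard-error approximation (our own framing, not Feynman's derivation):

```python
import math

# The standard error of an estimated proportion p from n independent
# samples is sqrt(p * (1 - p) / n); about two standard errors gives a
# rough 95 percent margin of error.

def margin_of_error(p, n, z=2.0):
    return z * math.sqrt(p * (1 - p) / n)

# Worst case p = 0.5: how many samples for a 1 percent margin?
for n in (100, 1_000, 10_000):
    print(n, round(margin_of_error(0.5, n), 3))
# 100 samples give about 10 percent, 1,000 about 3 percent, and
# 10,000 about 1 percent -- matching Feynman's figure.
```

Because the error shrinks only with the square root of the sample size, each extra digit of accuracy costs a hundredfold more samples, which is why high accuracy is so hard to get.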

The last trick is to realize that many errors people make simply come from lack of information. They don't even know they're missing the tools they need. This can be a very tough one to guard against — it's hard to know when you're missing information that would change your mind — but Feynman gives the simple case of astrology to prove the point:

Now, looking at the troubles that we have with all the unscientific and peculiar things in the world, there are a number of them which cannot be associated with difficulties in how to think, I think, but are just due to some lack of information. In particular, there are believers in astrology, of which, no doubt, there are a number here. Astrologists say that there are days when it's better to go to the dentist than other days. There are days when it's better to fly in an airplane, for you, if you are born on such a day and such and such an hour. And it's all calculated by very careful rules in terms of the position of the stars. If it were true it would be very interesting. Insurance people would be very interested to change the insurance rates on people if they follow the astrological rules, because they have a better chance when they are in the airplane. Tests to determine whether people who go on the day that they are not supposed to go are worse off or not have never been made by the astrologers. The question of whether it's a good day for business or a bad day for business has never been established. Now what of it? Maybe it's still true, yes.

On the other hand, there's an awful lot of information that indicates that it isn't true. Because we have a lot of knowledge about how things work, what people are, what the world is, what those stars are, what the planets are that you are looking at, what makes them go around more or less, where they're going to be in the next 2000 years is completely known. They don't have to look up to find out where it is. And furthermore, if you look very carefully at the different astrologers they don't agree with each other, so what are you going to do? Disbelieve it. There's no evidence at all for it. It's pure nonsense.

The only way you can believe it is to have a general lack of information about the stars and the world and what the rest of the things look like. If such a phenomenon existed it would be most remarkable, in the face of all the other phenomena that exist, and unless someone can demonstrate it to you with a real experiment, with a real test, took people who believe and people who didn't believe and made a test, and so on, then there's no point in listening to them.


Still Interested? Check out the (short) book: The Meaning of it All: Thoughts of a Citizen-Scientist.

Richard Feynman on Teaching Math to Kids and the Lessons of Knowledge

Legendary scientist Richard Feynman was famous for his penetrating insight and clarity of thought. He was famous not only for the work that garnered him a Nobel Prize, but also for the lucidity of his explanations of ordinary things: why trains stay on the tracks as they go around a curve, how we look for new laws of science, how rubber bands work, and the beauty of the natural world.

Feynman knew the difference between knowing the name of something and knowing something. He was also prone to telling the emperor he had no clothes, as this illuminating example from James Gleick's book Genius: The Life and Science of Richard Feynman shows.

Educating his children gave him pause as to how the elements of teaching should be employed. By the time his son Carl was four, Feynman was “actively lobbying against a first-grade science book proposed for California schools.”

It began with pictures of a mechanical wind-up dog, a real dog, and a motorcycle, and for each the same question: “What makes it move?” The proposed answer—“Energy makes it move”—enraged him.

That was tautology, he argued—empty definition. Feynman, having made a career of understanding the deep abstractions of energy, said it would be better to begin a science course by taking apart a toy dog, revealing the cleverness of the gears and ratchets. To tell a first-grader that “energy makes it move” would be no more helpful, he said, than saying “God makes it move” or “moveability makes it move.”

Feynman proposed a simple test for whether one is teaching ideas or mere definitions: “Without using the new word which you have just learned, try to rephrase what you have just learned in your own language. Without using the word energy, tell me what you know now about the dog’s motion.”

The other standard explanations were equally horrible: gravity makes it fall, or friction makes it wear out. You didn't get a pass on learning because you were a first-grader, and Feynman's explanations not only captured the attention of his audience—from Nobel winners to first-graders—but also offered true knowledge. “Shoe leather wears out because it rubs against the sidewalk and the little notches and bumps on the sidewalk grab pieces and pull them off.” That is knowledge. “To simply say, ‘It is because of friction,’ is sad, because it’s not science.”

Richard Feynman on Teaching

Choosing Textbooks for Grade Schools

In 1964 Feynman made the rare decision to serve on a public commission for choosing mathematics textbooks for California's grade schools. As Gleick describes it:

Traditionally this commissionership was a sinecure that brought various small perquisites under the table from textbook publishers. Few commissioners—as Feynman discovered—read many textbooks, but he determined to read them all, and had scores of them delivered to his house.

This was the era of new math in children's textbooks: introducing high-level concepts, such as set theory and non-decimal number systems, into grade school.

Feynman was skeptical of this approach, but rather than simply letting it go, he popped the balloon.

He argued to his fellow commissioners that sets, as presented in the reformers’ textbooks, were an example of the most insidious pedantry: new definitions for the sake of definition, a perfect case of introducing words without introducing ideas.

A proposed primer instructed first-graders: “Find out if the set of the lollipops is equal in number to the set of the girls.”

To Feynman this was a disease. It confused without adding precision to the normal sentence: “Find out if there are just enough lollipops for the girls.”

According to Feynman, specialized language should wait until it is needed. (In case you're wondering, he argued the peculiar language of set theory is rarely, if ever, needed—only in understanding different degrees of infinity—which certainly wasn't necessary at a grade-school level.)

Feynman convincingly argued this was knowledge of words without actual knowledge. He wrote:

It is an example of the use of words, new definitions of new words, but in this particular case a most extreme example because no facts whatever are given…. It will perhaps surprise most people who have studied this textbook to discover that the symbol ∪ or ∩ representing union and intersection of sets … all the elaborate notation for sets that is given in these books, almost never appear in any writings in theoretical physics, in engineering, business, arithmetic, computer design, or other places where mathematics is being used.

The point became philosophical.

It was crucial, he argued, to distinguish clear language from precise language. The textbooks placed a new emphasis on precise language: distinguishing “number” from “numeral,” for example, and separating the symbol from the real object in the modern critical fashion—pilpul for schoolchildren, it seemed to Feynman. He objected to a book that tried to teach a distinction between a ball and a picture of a ball—the book insisting on such language as “color the picture of the ball red.”

“I doubt that any child would make an error in this particular direction,” Feynman said, adding:

As a matter of fact, it is impossible to be precise … whereas before there was no difficulty. The picture of a ball includes a circle and includes a background. Should we color the entire square area in which the ball image appears all red? … Precision has only been pedantically increased in one particular corner when there was originally no doubt and no difficulty in the idea.

In the real world absolute precision can never be reached and the search for degrees of precision that are not possible (but are desirable) causes a lot of folly.

Feynman had his own ideas for teaching children mathematics.


Process vs. Outcome

Feynman proposed that first-graders learn to add and subtract more or less the way he worked out complicated integrals: free to select any method that seems suitable for the problem at hand. A modern-sounding notion was that the answer isn't what matters, so long as you use the right method. To Feynman no educational philosophy could have been more wrong. The answer is all that does matter, he said.

He listed some of the techniques available to a child making the transition from being able to count to being able to add. A child can combine two groups into one and simply count the combined group: to add 5 ducks and 3 ducks, one counts 8 ducks. The child can use fingers or count mentally: 6, 7, 8. One can memorize the standard combinations. Larger numbers can be handled by making piles—one groups pennies into fives, for example—and counting the piles. One can mark numbers on a line and count off the spaces—a method that becomes useful, Feynman noted, in understanding measurement and fractions. One can write larger numbers in columns and carry sums larger than 10.

To Feynman the standard texts were flawed. The problem 29 + 3 was considered a third-grade problem because it involved the concept of carrying. However, Feynman pointed out that most first-graders could easily solve it by counting 30, 31, 32.

He proposed that kids be given simple algebra problems (2 times what plus 3 is 7) and be encouraged to solve them through the scientific method, which is tantamount to trial and error. This, he argued, is what real scientists do.
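Feynman's guess-and-check idea is easy to sketch in code. Here's a hypothetical solver (the function name and search range are our own, purely for illustration) that attacks "2 times what plus 3 is 7" by trial and error:

```python
# A guess-and-check solver in the spirit of Feynman's suggestion that
# kids attack "2 times what plus 3 is 7" by trial and error.

def solve_by_trial(target, candidates=range(100)):
    """Try each candidate and keep the one that makes 2x + 3 = target."""
    for x in candidates:
        if 2 * x + 3 == target:
            return x
    return None  # no whole-number guess works

print(solve_by_trial(7))  # → 2, found by sheer trial and error
```

The method is crude, but that is the point: the child gets the right answer by any means available, which is what Feynman says real users of mathematics do.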

“We must,” Feynman said, “remove the rigidity of thought.” He continued “We must leave freedom for the mind to wander about in trying to solve the problems…. The successful user of mathematics is practically an inventor of new ways of obtaining answers in given situations. Even if the ways are well known, it is usually much easier for him to invent his own way— a new way or an old way— than it is to try to find it by looking it up.”

It was better in the end to have a bag of tricks at your disposal that could be used to solve problems than one orthodox method. Indeed, part of Feynman's genius was his ability to solve problems that were baffling others because they were using the standard method to try and solve them. He would come along and approach the problem with a different tool, which often led to simple and beautiful solutions.


If you give some thought to how Farnam Street helps you, one of the ways is by adding to your bag of tricks so that you can pull them out when you need them to solve problems. We call these tricks mental models, and they work kinda like Lego: interconnecting and reinforcing one another. The more pieces you have, the more things you can build.

Complement this post with Feynman's excellent advice on how to learn anything.