Fun with Logical Fallacies

We came across a cool book recently called Logically Fallacious: The Ultimate Collection of Over 300 Logical Fallacies, by the social psychologist Bo Bennett. We were a bit skeptical at first — lists like that can lack thoughtfulness and synthesis — but then we were hooked by a sentence in the introduction that brought the book near and dear to our hearts:

This book is a crash course, meant to catapult you into a world where you start to see things how they really are, not how you think they are.

We could use the same tag line for Farnam Street. (What was that thing about great artists stealing?)

Logically Fallacious is a fun little reference guide to bad thinking, but let’s try to highlight a few fallacies that seem to arise quite often without enough recognition. (To head off any objections at the pass: most of these are not strict logical fallacies in the technical sense, but rather examples of bad reasoning.)

Logical Fallacies

No True Scotsman

This one is a favorite. It arises when someone makes a broad, sweeping claim that a “real” or “true” so-and-so would only do X or would never do Y.

Example: “No true Scotsman would drink an ale like that!”

“I know dyed-in-the-wool Scotsmen who drink many such ales!”

“Well then he’s not a True Scotsman!”

Problem: The problem should be obvious: it’s a circular definition! A True Scotsman is defined as anyone who would never drink such ales, so any counterexample is simply defined away as not a True Scotsman. It’s non-falsifiable. There’s a puritanical streak to this line of reasoning that almost always leads to circularity.

Genetic Fallacy

This doesn’t have to do with genetics per se, but with the genetic origin of an argument. The “genetic fallacy” is committed when you dismiss someone’s argument based solely on some aspect of their background or the motivation behind the claim.

Example: “Of course Joe’s arguing that unions are good for the world, he’s the head of the Local 147 Chapter!”

Problem: Whether or not Joe is the head of his local union chapter has nothing to do with whether unions are good or bad. It certainly may influence his argument, but it doesn’t invalidate it. You must address the merits of the argument, rather than the merits of Joe, to figure out whether it’s true.

Failure to Elucidate

This is when someone tries to “explain” something slippery by redefining it in an equally nebulous way, instead of actually explaining it. Hearing something stated this way is usually a strong indicator that the person doesn’t know what they’re talking about.

Example: “The Secret works because of the vibration of sub-lingual frequencies.”

“What the heck are sub-lingual frequencies?”

“They’re waves of energy that exist below the level of our consciousness.”


Problem: The claimant thinks they have explained the thing in a satisfactory way, but they haven’t — they’ve simply offered another useless definition that does no work in explaining why the claim makes sense. Too often the challenger will simply accept the follow-up, or worse, repeat it to others, without getting a satisfactory explanation. In a Feynman-like way, you must keep probing, and if the probes reveal more failures to elucidate, you can likely reject the claim, at least until real evidence is presented.

Causal Reductionism

This reflects closely on Nassim Taleb’s work and the concept of the Narrative Fallacy — an undue simplification of reality into a single cause → effect chain.

Example: “Warren Buffett was successful because his dad was a Congressman. He had a leg up I don’t have!”

Problem: This form of argument is used pretty frequently because the claimant wishes it was true or is otherwise comfortable with the narrative. It resolves reality into a neat little box, when actual reality is complicated. To address this particular example, extreme success on the level of a Buffett clearly would have multiple causes acting in the same direction. His father’s political affiliation is probably way down the list.

This fallacy is common in conspiracy theory-type arguments, where the proponent is convinced that because they have some inarguable facts — Howard Buffett was a congressman; being politically connected offers some advantages — their conclusion must also be correct. They ignore other explanations that are likely to be more correct, or refuse to admit that we don’t quite know the answer. Reductionism leads to a lot of wrong thinking — the antidote is learning to think more broadly and be skeptical of narratives.

Fallacy of Composition / Fallacy of Division

These two fallacies are two sides of the same coin. The first is thinking that if some part of a greater whole has certain properties, the whole must share those properties. The second is the reverse: thinking that because a whole is judged to have certain properties, its constituent parts must necessarily share them.

Examples: “Your brain is made of molecules, and molecules are not conscious, so your brain must not be the source of consciousness.”

“Wall Street is a dishonest place, and so my neighbor Steve, who works at Goldman Sachs, must be a crook.”

Problem: In the first example, taken directly from the book, we’re ignoring emergent properties: qualities that appear only when elements with more mundane innate qualities combine. (Like a great corporate culture.) In the second example, we make the mirrored mistake: we forget that greed may be emergent in the system itself, arising even from a group of otherwise fairly honest people, while assuming that each constituent part of the system must share the traits of the whole. (I.e., because Wall Street is a dishonest system, your neighbor must be dishonest.)


Still Interested? Check out the whole book. It’s fun to pick up regularly and see which fallacies you can start recognizing all around you.

Our Genes and Our Behavior

“But now we are starting to show genetic influence on individual differences using DNA. DNA is a game changer; it’s a lot harder to argue with DNA than it is with a twin study or an adoption study.”
— Robert Plomin


It’s not controversial to say that our genetics help explain our physical traits. Tall parents will, on average, have tall children. Overweight parents will, on average, have overweight children. Irish parents have Irish-looking kids. This is true to the point of banality, and only the willfully ignorant would dispute it.

It’s slightly more controversial to talk about genes influencing behavior. For a long time, it was denied entirely. For most of the 20th century, the “experts” in human behavior had decided that “nurture” beat “nature” by a score of 100–0. Particularly influential was the child’s early life — the way their parents treated them in the womb and throughout early childhood. (Thanks, Freud!)

So, where are we now?

Genes and Behavior

Developmental scientists and behavioral scientists eventually got to work with twin studies and adoption studies, which tended to show that certain traits were almost certainly heritable and not reliant on environment, thanks to the natural controlled experiments of twins separated at birth. (This eventually provided fodder for Judith Rich Harris’s wonderful work on development and personality.)

All throughout, the geneticists, starting with Gregor Mendel and his peas, kept on working. As behavioral geneticist Robert Plomin explains, the genetic camp split early on. Some people wanted to understand the gene itself in detail, using very simple traits to figure it out (eye color, long or short wings, etc.) and others wanted to study the effect of genes on complex behavior, generally:

People realized these two views of genetics could come together. Nonetheless, the two worlds split apart because Mendelians became geneticists who were interested in understanding genes. They would take a convenient phenotype, a dependent measure, like eye color in flies, just something that was easy to measure. They weren’t interested in the measure, they were interested in how genes work. They wanted a simple way of seeing how genes work.

By contrast, the geneticists studying complex traits—the Galtonians—became quantitative geneticists. They were interested in agricultural traits or human traits, like cardiovascular disease or reading ability, and would use genetics only insofar as it helped them understand that trait. They were behavior centered, while the molecular geneticists were gene centered. The molecular geneticists wanted to know everything about how a gene worked. For almost a century these two worlds of genetics diverged.

Eventually, the two began to converge. The gene-focused camp figured out that once the genome could be sequenced, they might understand more complicated behavior by looking directly at the genes of specific people with unique DNA and contrasting them against one another.

The reason why this whole gene-behavior game is hard is because, as Plomin makes clear, complex traits like intelligence are not like eye color. There’s no “smart gene” — it comes from the interaction of thousands of different genes and can occur in a variety of combinations. Basic Mendel-style counting (the sort of dominant/recessive eye color gene thing you learned in high school biology) doesn’t work in analyzing the influence of genes on complex traits:

The word gene wasn’t invented until 1903. Mendel did his work in the mid-19th century. In the early 1900s, when Mendel was rediscovered, people finally realized the impact of what he did, which was to show the laws of inheritance of a single gene. At that time, these Mendelians went around looking for Mendelian 3:1 segregation ratios, which was the essence of what Mendel showed, that inheritance was discrete. Most of the socially, behaviorally, or agriculturally important traits aren’t either/or traits, like a single-gene disorder. Huntington’s disease, for example, is a single-gene dominant disorder, which means that if you have that mutant form of the Huntington’s gene, you will have Huntington’s disease. It’s necessary and sufficient. But that’s not the way complex traits work.

The importance of genetics is hard to overstate, but until the right technology came along, we could only observe it indirectly. A study might show that 50% of the variance in cognitive ability was due to genetics, but we had no idea which specific genes, in which combinations, actually produced smarter people.

But the Moore’s-law-style improvement in genetic testing means that we can now cheaply and effectively map entire genomes. And with that, geneticists have a lot of data to work with, and a lot of correlations to begin sussing out. The good thing about finding strong correlations between genes and human traits is that we know which one is causative: the gene! Obviously, your reading ability doesn’t cause you to have certain DNA; it must be the other way around. So “Big Data”-style screening is extremely useful, once we get a little better at it.


The problem is that, so far, the successes have been modest. There are millions of “ATCG” base pairs to check. As Plomin points out, we can pinpoint only about 20% of the specific genetic influence for something simple like height, which we know is about 90% heritable. Complex traits like schizophrenia are going to take a lot more work:

We’ve got to be able to figure out where the so-called missing heritability is, that is, the gap between the DNA variants that we are able to identify and the estimates we have from twin and adoption studies. For example, height is about 90 percent heritable, meaning, of the differences between people in height, about 90 percent of those differences can be explained by genetic differences. With genome-wide association studies, we can account for 20 percent of the variance of height, or a quarter of the heritability of height. That’s still a lot of missing heritability, but 20 percent of the variance is impressive.

With schizophrenia, for example, people say they can explain 15 percent of the genetic liability. The jury is still out on how that translates into the real world. What you want to be able to do is get this polygenic score for schizophrenia that would allow you to look at the entire population and predict who’s going to become schizophrenic. That’s tricky because the studies are case-control studies based on extreme, well-diagnosed schizophrenics, versus clean controls who have no known psychopathology. We’ll know soon how this polygenic score translates to predicting who will become schizophrenic or not.
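The polygenic score Plomin mentions is, at bottom, just a weighted sum of allele counts across many genetic variants. A minimal sketch in Python — the variant names and effect weights here are invented for illustration, not real GWAS estimates:

```python
# Minimal sketch of a polygenic score: each variant contributes its
# per-allele effect weight times the number of risk alleles carried (0-2).
# Variant IDs and weights below are hypothetical, for illustration only.

def polygenic_score(genotype, weights):
    """genotype maps variant -> allele count (0, 1, or 2);
    weights maps variant -> estimated per-allele effect size."""
    return sum(w * genotype.get(variant, 0) for variant, w in weights.items())

weights = {"rs_alpha": 0.12, "rs_beta": -0.05, "rs_gamma": 0.30}  # hypothetical
person  = {"rs_alpha": 2,    "rs_beta": 1,     "rs_gamma": 0}

print(round(polygenic_score(person, weights), 2))  # 2*0.12 - 0.05 + 0 = 0.19
```

Real scores sum thousands of such tiny weights, which is why each individual variant explains so little of the trait on its own.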

It brings up an interesting question that returns us to the beginning of the piece: if we know that genetics influences some complex behavioral traits (and we do), and if, with the continuing progress of science and technology, we can sequence a baby’s genome and predict to a certain extent their reading level, facility with math, facility with social interaction, and so on — do we do it?

Well, we can’t until there is general recognition that genes do indeed influence behavior and do have predictive power for how children perform. So far, the track record on getting educators to see that it’s all quite real is pretty bad. As with the Freudians before them, there’s resistance to the “nature” side of the debate, probably influenced by some strong ideologies:

If you look at the books and the training that teachers get, genetics doesn’t get a look-in. Yet if you ask teachers, as I’ve done, about why they think children are so different in their ability to learn to read, they know that genetics is important. When it comes to governments and educational policymakers, the knee-jerk reaction is that if kids aren’t doing well, you blame the teachers and the schools; if that doesn’t work, you blame the parents; if that doesn’t work, you blame the kids because they’re just not trying hard enough. An important message for genetics is that you’ve got to recognize that children are different in their ability to learn. We need to respect those differences because they’re genetic. Not that we can’t do anything about it.

It’s like obesity. The NHS is thinking about charging people to be fat because, like smoking, they say it’s your fault. Weight is not as heritable as height, but it’s highly heritable. Maybe 60 percent of the differences in weight are heritable. That doesn’t mean you can’t do anything about it. If you stop eating, you won’t gain weight, but given the normal life in a fast-food culture, with our Stone Age brains that want to eat fat and sugar, it’s much harder for some people.

We need to respect the fact that genetic differences are important, not just for body mass index and weight, but also for things like reading disability. I know personally how difficult it is for some children to learn to read. Genetics suggests that we need to have more recognition that children differ genetically, and to respect those differences. My grandson, for example, had a great deal of difficulty learning to read. His parents put a lot of energy into helping him learn to read. We also have a granddaughter who taught herself to read. Both of them now are not just learning to read but reading to learn.

Genetic influence is just influence; it’s not deterministic like a single gene. At government levels—I’ve consulted with the Department for Education—I don’t think they’re as hostile to genetics as I had feared, they’re just ignorant of it. Education just doesn’t consider genetics, whereas teachers on the ground can’t ignore it. I never get static from them because they know that these children are different when they start. Some just go off on very steep trajectories, while others struggle all the way along the line. When the government sees that, they tend to blame the teachers, the schools, or the parents, or the kids. The teachers know. They’re not ignoring this one child. If anything, they’re putting more energy into that child.

It’s frustrating for Plomin because he knows that eventually DNA mapping will get good enough that real, and helpful, predictions will be possible. We’ll be able to target kids early enough to make real differences — earlier than problems actually manifest — and hopefully change the course of their lives for the better. But so far, no dice.

Education is the last backwater of anti-genetic thinking. It’s not even anti-genetic. It’s as if genetics doesn’t even exist. I want to get people in education talking about genetics because the evidence for genetic influence is overwhelming. The things that interest them—learning abilities, cognitive abilities, behavior problems in childhood—are the most heritable things in the behavioral domain. Yet it’s like Alice in Wonderland. You go to educational conferences and it’s as if genetics does not exist.

I’m wondering about where the DNA revolution will take us. If we are explaining 10 percent of the variance of GCSE scores with a DNA chip, it becomes real. People will begin to use it. It’s important that we begin to have this conversation. I’m frustrated at having so little success in convincing people in education of the possibility of genetic influence. It is ignorance as much as it is antagonism.

Here’s one call for more reality recognition.


Still Interested? Check out John Brockman’s curated collection of articles published on genetics.

J.K. Rowling On People’s Intolerance of Alternative Viewpoints

At the PEN America Literary Gala & Free Expression Awards, J.K. Rowling, of Harry Potter fame, received the 2016 PEN/Allen Foundation Literary Service Award. Embedded in her acceptance speech is some timeless wisdom on tolerance and acceptance:

Intolerance of alternative viewpoints is spreading to places that make me, a moderate and a liberal, most uncomfortable. Only last year, we saw an online petition to ban Donald Trump from entry to the U.K. It garnered half a million signatures.

Just a moment.

I find almost everything that Mr. Trump says objectionable. I consider him offensive and bigoted. But he has my full support to come to my country and be offensive and bigoted there. His freedom to speak protects my freedom to call him a bigot. His freedom guarantees mine. Unless we take that absolute position without caveats or apologies, we have set foot upon a road with only one destination. If my offended feelings can justify a travel ban on Donald Trump, I have no moral ground on which to argue that those offended by feminism or the fight for transgender rights or universal suffrage should not oppress campaigners for those causes. If you seek the removal of freedoms from an opponent simply on the grounds that they have offended you, you have crossed the line to stand alongside tyrants who imprison, torture and kill on exactly the same justification.

Too often we look at the world through our own eyes and fail to acknowledge the eyes of others. In so doing we often lose touch with reality.

The quick reaction our brains have to people who disagree with us is often that they are idiots. They shouldn’t be allowed to talk or have a platform. They should lose.

This reminds me of Kathryn Schulz’s insightful view on what we do when someone disagrees with us.

As a result we dismiss the views of others, failing to even consider that our view of the world might be wrong.

It’s easy to be dismissive and intolerant of others. It’s easy to say they’re idiots and wish they didn’t have the same rights you have. It’s harder to map that to the very freedoms we enjoy and relate it to the world we want to live in.

A Few Useful Mental Tools from Richard Feynman

We’ve covered the brilliant physicist Richard Feynman many times here before. He was a genius. A true genius. But there have been many geniuses — physics has been fortunate to attract some of them — and few of them are as well known as Feynman. Why is Feynman so well known? It’s likely because he had tremendous range outside of pure science, and although he won a Nobel Prize for his work in quantum mechanics, he’s probably best known for other things, primarily his wonderful ability to explain and teach.

This ability was on display in a series of non-technical lectures in 1963, memorialized in a short book called The Meaning of It All: Thoughts of a Citizen-Scientist. The lectures are a wonderful example of how well Feynman’s brain worked outside of physics, talking through basic reasoning and some of the problems of his day.

Particularly useful are a series of “tricks of the trade” he gives in a section called This Unscientific Age. These tricks show Feynman taking the method of thought he learned in pure science and applying it to the more mundane topics most of us have to deal with every day. They’re wonderfully instructive. Let’s check them out.

Mental Tools from Richard Feynman

Before we start, it’s worth noting that Feynman takes pains to mention that not everything needs to be held to scientific accuracy — there’s no need for these tools outside of genuinely scientific matters. So let’s start with a deep breath:

Now, that there are unscientific things is not my grief. That’s a nice word. I mean, that is not what I am worrying about, that there are unscientific things. That something is unscientific is not bad; there is nothing the matter with it. It is just unscientific. And scientific is limited, of course, to those things that we can tell about by trial and error. For example, there is the absurdity of the young these days chanting things about purple people eaters and hound dogs, something that we cannot criticize at all if we belong to the old flat foot floogie and a floy floy or the music goes down and around. Sons of mothers who sang about “come, Josephine, in my flying machine,” which sounds just about as modern as “I’d like to get you on a slow boat to China.” So in life, in gaiety, in emotion, in human pleasures and pursuits, and in literature and so on, there is no need to be scientific, there is no reason to be scientific. One must relax and enjoy life. That is not the criticism. That is not the point.

As we enter the realm of “knowable” things in a scientific sense, the first trick has to do with deciding whether someone truly knows their stuff or is mimicking:

The first one has to do with whether a man knows what he is talking about, whether what he says has some basis or not. And my trick that I use is very easy. If you ask him intelligent questions—that is, penetrating, interested, honest, frank, direct questions on the subject, and no trick questions—then he quickly gets stuck. It is like a child asking naive questions. If you ask naive but relevant questions, then almost immediately the person doesn’t know the answer, if he is an honest man. It is important to appreciate that.

And I think that I can illustrate one unscientific aspect of the world which would be probably very much better if it were more scientific. It has to do with politics. Suppose two politicians are running for president, and one goes through the farm section and is asked, “What are you going to do about the farm question?” And he knows right away— bang, bang, bang.

Now he goes to the next campaigner who comes through. “What are you going to do about the farm problem?” “Well, I don’t know. I used to be a general, and I don’t know anything about farming. But it seems to me it must be a very difficult problem, because for twelve, fifteen, twenty years people have been struggling with it, and people say that they know how to solve the farm problem. And it must be a hard problem. So the way that I intend to solve the farm problem is to gather around me a lot of people who know something about it, to look at all the experience that we have had with this problem before, to take a certain amount of time at it, and then to come to some conclusion in a reasonable way about it. Now, I can’t tell you ahead of time what conclusion, but I can give you some of the principles I’ll try to use—not to make things difficult for individual farmers, if there are any special problems we will have to have some way to take care of them,” etc., etc., etc.

That’s a wonderfully useful way to figure out whether someone is Max Planck or the chauffeur.

The second trick regards how to deal with uncertainty:

People say to me, “Well, how can you teach your children what is right and wrong if you don’t know?” Because I’m pretty sure of what’s right and wrong. I’m not absolutely sure; some experiences may change my mind. But I know what I would expect to teach them. But, of course, a child won’t learn what you teach him.

I would like to mention a somewhat technical idea, but it’s the way, you see, we have to understand how to handle uncertainty. How does something move from being almost certainly false to being almost certainly true? How does experience change? How do you handle the changes of your certainty with experience? And it’s rather complicated, technically, but I’ll give a rather simple, idealized example.

You have, we suppose, two theories about the way something is going to happen, which I will call “Theory A” and “Theory B.” Now it gets complicated. Theory A and Theory B. Before you make any observations, for some reason or other, that is, your past experiences and other observations and intuition and so on, suppose that you are very much more certain of Theory A than of Theory B—much more sure. But suppose that the thing that you are going to observe is a test. According to Theory A, nothing should happen. According to Theory B, it should turn blue. Well, you make the observation, and it turns sort of a greenish. Then you look at Theory A, and you say, “It’s very unlikely,” and you turn to Theory B, and you say, “Well, it should have turned sort of blue, but it wasn’t impossible that it should turn sort of greenish color.” So the result of this observation, then, is that Theory A is getting weaker, and Theory B is getting stronger. And if you continue to make more tests, then the odds on Theory B increase. Incidentally, it is not right to simply repeat the same test over and over and over and over, no matter how many times you look and it still looks greenish, you haven’t made up your mind yet. But if you find a whole lot of other things that distinguish Theory A from Theory B that are different, then by accumulating a large number of these, the odds on Theory B increase.

Feynman is talking about Grey Thinking here — the ability to place things on a gradient from “probably true” to “probably false” and to reason under that uncertainty. He isn’t proposing a method of figuring out absolute, doctrinaire truth.

Another term for what he’s proposing is Bayesian updating — starting with a priori odds, based on earlier understanding, and “updating” the odds of something based on what you learn thereafter. An extremely useful tool.
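Feynman’s Theory A/Theory B story can be sketched as a single Bayes update. The priors and likelihoods below are invented numbers, chosen only to match the flavor of his example:

```python
# One Bayesian update over Feynman's two theories.
# All numbers are invented for illustration.

def update(prior_a, prior_b, like_a, like_b):
    """Return posterior probabilities of Theory A and B after one observation."""
    pa, pb = prior_a * like_a, prior_b * like_b
    total = pa + pb
    return pa / total, pb / total

# Start much more certain of Theory A.
p_a, p_b = 0.9, 0.1

# Observation: the test turns "sort of greenish" -- very unlikely under A
# ("nothing should happen"), not impossible under B ("should turn blue").
p_a, p_b = update(p_a, p_b, like_a=0.05, like_b=0.5)

print(round(p_a, 3), round(p_b, 3))  # Theory A weakens, Theory B strengthens
```

Each genuinely independent, distinguishing test multiplies in another likelihood ratio, which is why the odds on Theory B keep climbing — and why, as Feynman warns, merely repeating the identical test (with its identical, possibly correlated errors) adds less evidence than it appears to.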

Feynman’s third trick is the realization that as we investigate whether something is true or not, new evidence and new methods of experimentation should show the effect getting stronger and stronger, not weaker. He uses an excellent example here by analyzing mental telepathy:

I give an example. A professor, I think somewhere in Virginia, has done a lot of experiments for a number of years on the subject of mental telepathy, the same kind of stuff as mind reading. In his early experiments the game was to have a set of cards with various designs on them (you probably know all this, because they sold the cards and people used to play this game), and you would guess whether it’s a circle or a triangle and so on while someone else was thinking about it. You would sit and not see the card, and he would see the card and think about the card and you’d guess what it was. And in the beginning of these researches, he found very remarkable effects. He found people who would guess ten to fifteen of the cards correctly, when it should be on the average only five. More even than that. There were some who would come very close to a hundred percent in going through all the cards. Excellent mind readers.

A number of people pointed out a set of criticisms. One thing, for example, is that he didn’t count all the cases that didn’t work. And he just took the few that did, and then you can’t do statistics anymore. And then there were a large number of apparent clues by which signals inadvertently, or advertently, were being transmitted from one to the other.

Various criticisms of the techniques and the statistical methods were made by people. The technique was therefore improved. The result was that, although five cards should be the average, it averaged about six and a half cards over a large number of tests. Never did he get anything like ten or fifteen or twenty-five cards. Therefore, the phenomenon is that the first experiments are wrong. The second experiments proved that the phenomenon observed in the first experiment was nonexistent. The fact that we have six and a half instead of five on the average now brings up a new possibility, that there is such a thing as mental telepathy, but at a much lower level. It’s a different idea, because, if the thing was really there before, having improved the methods of experiment, the phenomenon would still be there. It would still be fifteen cards. Why is it down to six and a half? Because the technique improved. Now it still is that the six and a half is a little bit higher than the average of statistics, and various people criticized it more subtly and noticed a couple of other slight effects which might account for the results.

It turned out that people would get tired during the tests, according to the professor. The evidence showed that they were getting a little bit lower on the average number of agreements. Well, if you take out the cases that are low, the laws of statistics don’t work, and the average is a little higher than the five, and so on. So if the man was tired, the last two or three were thrown away. Things of this nature were improved still further. The results were that mental telepathy still exists, but this time at 5.1 on the average, and therefore all the experiments which indicated 6.5 were false. Now what about the five? . . . Well, we can go on forever, but the point is that there are always errors in experiments that are subtle and unknown. But the reason that I do not believe that the researchers in mental telepathy have led to a demonstration of its existence is that as the techniques were improved, the phenomenon got weaker. In short, the later experiments in every case disproved all the results of the former experiments. If remembered that way, then you can appreciate the situation.

This echoes Feynman’s dictum about not fooling oneself: we must refine our process for probing and experimenting if we’re to get at real truth, always watching out for subtle errors. Otherwise, we torture the world so that the results fit our expectations. If we carefully refine and re-test and the effect keeps getting weaker, it’s likely not real, or at least not of the magnitude originally hoped for.
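The card-guessing arithmetic, and the bias introduced by throwing away the “tired” runs, is easy to simulate. A sketch under simple assumptions (25 cards, 5 equally likely symbols, purely random guessing):

```python
import random

# Random guessing on 25 Zener-style cards with 5 symbols: 5 hits expected.
# Discarding below-average ("tired") sessions after the fact pushes the
# surviving average above chance -- no telepathy required.

random.seed(42)

def session(n_cards=25, n_symbols=5):
    """Number of correct guesses in one purely random session."""
    return sum(random.randrange(n_symbols) == 0 for _ in range(n_cards))

sessions = [session() for _ in range(10_000)]
honest_avg = sum(sessions) / len(sessions)

kept = [s for s in sessions if s >= 5]          # throw away the "tired" runs
biased_avg = sum(kept) / len(kept)

print(round(honest_avg, 2), round(biased_avg, 2))
```

The honest average sits at about 5; the censored average drifts toward 6 and beyond — enough of a bump to keep a claim like the professor’s 6.5 alive without any real effect.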

The fourth trick is to ask the right question, which is not “Could this be the case?” but “Is this actually the case?” Many get so caught up with the former that they forget to ask the latter:

That brings me to the fourth kind of attitude toward ideas, and that is that the problem is not what is possible. That’s not the problem. The problem is what is probable, what is happening. It does no good to demonstrate again and again that you can’t disprove that this could be a flying saucer. We have to guess ahead of time whether we have to worry about the Martian invasion. We have to make a judgment about whether it is a flying saucer, whether it’s reasonable, whether it’s likely. And we do that on the basis of a lot more experience than whether it’s just possible, because the number of things that are possible is not fully appreciated by the average individual. And it is also not clear, then, to them how many things that are possible must not be happening. That it’s impossible that everything that is possible is happening. And there is too much variety, so most likely anything that you think of that is possible isn’t true. In fact that’s a general principle in physics theories: no matter what a guy thinks of, it’s almost always false. So there have been five or ten theories that have been right in the history of physics, and those are the ones we want. But that doesn’t mean that everything’s false. We’ll find out.

The fifth trick is a very, very common one, even 50 years after Feynman pointed it out. You cannot judge the probability of something happening after it’s already happened. That’s cherry-picking. You have to run the experiment forward for it to mean anything:

I now turn to another kind of principle or idea, and that is that there is no sense in calculating the probability or the chance that something happens after it happens. A lot of scientists don’t even appreciate this. In fact, the first time I got into an argument over this was when I was a graduate student at Princeton, and there was a guy in the psychology department who was running rat races. I mean, he has a T-shaped thing, and the rats go, and they go to the right, and the left, and so on. And it’s a general principle of psychologists that in these tests they arrange so that the odds that the things that happen happen by chance is small, in fact, less than one in twenty. That means that one in twenty of their laws is probably wrong. But the statistical ways of calculating the odds, like coin flipping if the rats were to go randomly right and left, are easy to work out.

This man had designed an experiment which would show something which I do not remember, if the rats always went to the right, let’s say. I can’t remember exactly. He had to do a great number of tests, because, of course, they could go to the right accidentally, so to get it down to one in twenty by odds, he had to do a number of them. And it’s hard to do, and he did his number. Then he found that it didn’t work. They went to the right, and they went to the left, and so on. And then he noticed, most remarkably, that they alternated, first right, then left, then right, then left. And then he ran to me, and he said, “Calculate the probability for me that they should alternate, so that I can see if it is less than one in twenty.” I said, “It probably is less than one in twenty, but it doesn’t count.”

He said, “Why?” I said, “Because it doesn’t make any sense to calculate after the event. You see, you found the peculiarity, and so you selected the peculiar case.”

For example, I had the most remarkable experience this evening. While coming in here, I saw license plate ANZ 912. Calculate for me, please, the odds that of all the license plates in the state of Washington I should happen to see ANZ 912. Well, it’s a ridiculous thing. And, in the same way, what he must do is this: The fact that the rat directions alternate suggests the possibility that rats alternate. If he wants to test this hypothesis, one in twenty, he cannot do it from the same data that gave him the clue. He must do another experiment all over again and then see if they alternate. He did, and it didn’t work.
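Feynman’s selection-effect point is easy to demonstrate with a toy simulation (a sketch of ours, not from the lecture; the run counts and function names are illustrative): if you scan many random runs for the most striking pattern, you will almost always find something that looks remarkable, which is exactly why the hypothesis must then be tested on a fresh run.

```python
import random

random.seed(42)  # fixed seed so the sketch is repeatable

def alternations(seq):
    # count trials where the rat switched direction from the previous trial
    return sum(a != b for a, b in zip(seq, seq[1:]))

def random_run(n=20):
    # a rat choosing left or right at random, twenty times
    return [random.choice("LR") for _ in range(n)]

# Post hoc: scan a hundred random runs and report the most "remarkable" one.
runs = [random_run() for _ in range(100)]
best = max(runs, key=alternations)
print("most alternations found after the fact:", alternations(best), "out of 19")

# Pre-registered: test the alternation hypothesis on one fresh, independent run.
fresh = random_run()
print("alternations in a fresh run:", alternations(fresh), "out of 19")
```

The post-hoc maximum will look impressive nearly every time; the fresh run will usually hover around the chance level of nine or ten.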


The sixth trick is one that’s familiar to almost all of us, yet almost all of us forget about every day: The plural of anecdote is not data. We must use proper statistical sampling to know whether or not we know what we’re talking about:

The next kind of technique that’s involved is statistical sampling. I referred to that idea when I said they tried to arrange things so that they had one in twenty odds. The whole subject of statistical sampling is somewhat mathematical, and I won’t go into the details. The general idea is kind of obvious. If you want to know how many people are taller than six feet tall, then you just pick a hundred people out at random, and you see that maybe forty of them are more than six feet tall, so you guess that maybe forty percent of everybody is. Sounds stupid.

Well, it is and it isn’t. If you pick the hundred out by seeing which ones come through a low door, you’re going to get it wrong. If you pick the hundred out by looking at your friends you’ll get it wrong because they’re all in one place in the country. But if you pick out a way that as far as anybody can figure out has no connection with their height at all, then if you find forty out of a hundred, then, in a hundred million there will be more or less forty million. How much more or how much less can be worked out quite accurately. In fact, it turns out that to be more or less correct to 1 percent, you have to have 10,000 samples. People don’t realize how difficult it is to get the accuracy high. For only 1 or 2 percent you need 10,000 tries.
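Feynman’s 10,000 figure matches the standard formula for the sampling error of a proportion. Here is a quick check (our arithmetic, using the usual normal approximation; nothing in the lecture itself):

```python
import math

def margin_of_error(p, n, z=1.96):
    # half-width of an approximate 95% confidence interval
    # for a sample proportion p estimated from n observations
    return z * math.sqrt(p * (1 - p) / n)

# Feynman's example: forty of a hundred people are over six feet tall.
for n in (100, 1000, 10000):
    moe = margin_of_error(0.4, n)
    print(f"n={n:>6}: estimate 40% ± {moe * 100:.1f} points")
```

Only at n = 10,000 does the margin shrink to about one percentage point, just as Feynman says; with a hundred samples it is nearly ten points.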

The last trick is to realize that many errors people make simply come from lack of information. They don’t even know they’re missing the tools they need. This can be a very tough one to guard against — it’s hard to know when you’re missing information that would change your mind — but Feynman gives the simple case of astrology to prove the point:

Now, looking at the troubles that we have with all the unscientific and peculiar things in the world, there are a number of them which cannot be associated with difficulties in how to think, I think, but are just due to some lack of information. In particular, there are believers in astrology, of which, no doubt, there are a number here. Astrologists say that there are days when it’s better to go to the dentist than other days. There are days when it’s better to fly in an airplane, for you, if you are born on such a day and such and such an hour. And it’s all calculated by very careful rules in terms of the position of the stars. If it were true it would be very interesting. Insurance people would be very interested to change the insurance rates on people if they follow the astrological rules, because they have a better chance when they are in the airplane. Tests to determine whether people who go on the day that they are not supposed to go are worse off or not have never been made by the astrologers. The question of whether it’s a good day for business or a bad day for business has never been established. Now what of it? Maybe it’s still true, yes.

On the other hand, there’s an awful lot of information that indicates that it isn’t true. Because we have a lot of knowledge about how things work, what people are, what the world is, what those stars are, what the planets are that you are looking at, what makes them go around more or less, where they’re going to be in the next 2000 years is completely known. They don’t have to look up to find out where it is. And furthermore, if you look very carefully at the different astrologers they don’t agree with each other, so what are you going to do? Disbelieve it. There’s no evidence at all for it. It’s pure nonsense.

The only way you can believe it is to have a general lack of information about the stars and the world and what the rest of the things look like. If such a phenomenon existed it would be most remarkable, in the face of all the other phenomena that exist, and unless someone can demonstrate it to you with a real experiment, with a real test, took people who believe and people who didn’t believe and made a test, and so on, then there’s no point in listening to them.


Still Interested? Check out the (short) book: The Meaning of it All: Thoughts of a Citizen-Scientist.

Richard Feynman on Teaching Math to Kids and the Lessons of Knowledge

Legendary scientist Richard Feynman was famous for his penetrating insight and clarity of thought. He was famous not only for the work that garnered him a Nobel Prize, but also for his lucid explanations of ordinary things: why trains stay on the tracks as they go around a curve, how we look for new laws of science, how rubber bands work, and the beauty of the natural world.

Feynman knew the difference between knowing the name of something and knowing something, and he was often prone to telling the emperor he had no clothes, as this illuminating example from James Gleick’s book Genius: The Life and Science of Richard Feynman shows.

Educating his children gave him pause as to how the elements of teaching should be employed. By the time his son Carl was four, Feynman was “actively lobbying against a first-grade science book proposed for California schools.”

It began with pictures of a mechanical wind-up dog, a real dog, and a motorcycle, and for each the same question: “What makes it move?” The proposed answer—“Energy makes it move”—enraged him.

That was tautology, he argued—empty definition. Feynman, having made a career of understanding the deep abstractions of energy, said it would be better to begin a science course by taking apart a toy dog, revealing the cleverness of the gears and ratchets. To tell a first-grader that “energy makes it move” would be no more helpful, he said, than saying “God makes it move” or “moveability makes it move.”

Feynman proposed a simple test for whether one is teaching ideas or mere definitions: “Without using the new word which you have just learned, try to rephrase what you have just learned in your own language. Without using the word energy, tell me what you know now about the dog’s motion.”

The other standard explanations were equally horrible: gravity makes it fall, or friction makes it wear out. You didn’t get a pass on learning just because you were a first-grader. Feynman’s explanations not only captured the attention of his audience—from Nobel winners to first-graders—but also offered true knowledge. “Shoe leather wears out because it rubs against the sidewalk and the little notches and bumps on the sidewalk grab pieces and pull them off.” That is knowledge. “To simply say, ‘It is because of friction,’ is sad, because it’s not science.”

Richard Feynman on Teaching

Choosing Textbooks for Grade Schools

In 1964 Feynman made the rare decision to serve on a public commission for choosing mathematics textbooks for California’s grade schools. As Gleick describes it:

Traditionally this commissionership was a sinecure that brought various small perquisites under the table from textbook publishers. Few commissioners— as Feynman discovered— read many textbooks, but he determined to read them all, and had scores of them delivered to his house.

This was the era of new math in children’s textbooks: introducing high-level concepts, such as set theory and nondecimal number systems, into grade school.

Feynman was skeptical of this approach but rather than simply let it go, he popped the balloon.

He argued to his fellow commissioners that sets, as presented in the reformers’ textbooks, were an example of the most insidious pedantry: new definitions for the sake of definition, a perfect case of introducing words without introducing ideas.

A proposed primer instructed first-graders: “Find out if the set of the lollipops is equal in number to the set of the girls.”

To Feynman this was a disease. It confused without adding precision to the normal sentence: “Find out if there are just enough lollipops for the girls.”

According to Feynman, specialized language should wait until it is needed. (In case you’re wondering, he argued that the peculiar language of set theory is rarely, if ever, needed—only in understanding different degrees of infinity—which certainly wasn’t necessary at a grade-school level.)

Feynman convincingly argued this was knowledge of words without actual knowledge. He wrote:

It is an example of the use of words, new definitions of new words, but in this particular case a most extreme example because no facts whatever are given…. It will perhaps surprise most people who have studied this textbook to discover that the symbol ∪ or ∩ representing union and intersection of sets … all the elaborate notation for sets that is given in these books, almost never appear in any writings in theoretical physics, in engineering, business, arithmetic, computer design, or other places where mathematics is being used.

The point became philosophical.

It was crucial, he argued, to distinguish clear language from precise language. The textbooks placed a new emphasis on precise language: distinguishing “number” from “numeral,” for example, and separating the symbol from the real object in the modern critical fashion— pupil for schoolchildren, it seemed to Feynman. He objected to a book that tried to teach a distinction between a ball and a picture of a ball— the book insisting on such language as “color the picture of the ball red.”

“I doubt that any child would make an error in this particular direction,” Feynman said, adding:

As a matter of fact, it is impossible to be precise … whereas before there was no difficulty. The picture of a ball includes a circle and includes a background. Should we color the entire square area in which the ball image appears all red? … Precision has only been pedantically increased in one particular corner when there was originally no doubt and no difficulty in the idea.

In the real world absolute precision can never be reached and the search for degrees of precision that are not possible (but are desirable) causes a lot of folly.

Feynman had his own ideas for teaching children mathematics.


Process vs. Outcome

Feynman proposed that first-graders learn to add and subtract more or less the way he worked out complicated integrals—free to select any method that seems suitable for the problem at hand. A modern-sounding notion held that the answer isn’t what matters, so long as you use the right method. To Feynman no educational philosophy could have been more wrong. The answer is all that does matter, he said. He listed some of the techniques available to a child making the transition from being able to count to being able to add. A child can combine two groups into one and simply count the combined group: to add 5 ducks and 3 ducks, one counts 8 ducks. The child can use fingers or count mentally: 6, 7, 8. One can memorize the standard combinations. Larger numbers can be handled by making piles—one groups pennies into fives, for example—and counting the piles. One can mark numbers on a line and count off the spaces—a method that becomes useful, Feynman noted, in understanding measurement and fractions. One can write larger numbers in columns and carry sums larger than 10.
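The methods in Gleick’s list are easy to render as code, each a different route to the same sum, which is Feynman’s point (a toy sketch of ours; the function names are illustrative):

```python
def add_by_counting_all(a, b):
    # combine two groups into one and count the combined group
    combined = ["duck"] * a + ["duck"] * b
    return len(combined)

def add_by_counting_on(a, b):
    # start from the first number and count upward: 6, 7, 8
    total = a
    for _ in range(b):
        total += 1
    return total

def add_by_number_line(a, b):
    # mark numbers on a line and count off the spaces
    line = list(range(a + b + 1))
    position = a
    for _ in range(b):
        position = line[position + 1]
    return position

# To add 5 ducks and 3 ducks, every method counts 8 ducks.
print(add_by_counting_all(5, 3), add_by_counting_on(5, 3), add_by_number_line(5, 3))  # prints 8 8 8
```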

To Feynman the standard texts were flawed. One problem (pictured in the original text) was considered a third-grade problem because it involved the concept of carrying. However, Feynman pointed out that most first-graders could easily solve it by counting 30, 31, 32.

He proposed that kids be given simple algebra problems (2 times what plus 3 is 7) and be encouraged to solve them through the scientific method, which is tantamount to trial and error. This, he argued, is what real scientists do.
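Feynman’s guess-and-check approach to “2 times what plus 3 is 7” can be sketched in a few lines (an illustration of ours, not his notation):

```python
def solve_by_trial(f, target, candidates=range(100)):
    # the scientific method for a first-grader: guess, check, repeat
    for guess in candidates:
        if f(guess) == target:
            return guess
    return None  # no candidate worked

# 2 times what plus 3 is 7?
print(solve_by_trial(lambda x: 2 * x + 3, 7))  # prints 2
```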

“We must,” Feynman said, “remove the rigidity of thought.” He continued “We must leave freedom for the mind to wander about in trying to solve the problems…. The successful user of mathematics is practically an inventor of new ways of obtaining answers in given situations. Even if the ways are well known, it is usually much easier for him to invent his own way— a new way or an old way— than it is to try to find it by looking it up.”

It was better in the end to have a bag of tricks at your disposal for solving problems than one orthodox method. Indeed, part of Feynman’s genius was his ability to solve problems that baffled others precisely because they were wedded to the standard method. He would come at the problem with a different tool, which often led to simple and beautiful solutions.


If you give some thought to how Farnam Street helps you, one of the ways is by adding to your bag of tricks so that you can pull them out when you need them to solve problems. We call these tricks mental models, and they work kind of like Lego — interconnecting and reinforcing one another. The more pieces you have, the more things you can build.

Complement this post with Feynman’s excellent advice on how to learn anything.

Warren Berger’s Three-Part Method for More Creativity

“A problem well stated is a problem half-solved.”
— Charles “Boss” Kettering


The whole scientific method is built on a very simple structure: If I do this, then what will happen? That’s the basic question on which more complicated, intricate, and targeted lines of inquiry are built, across a wide variety of subjects. This simple form helps us push deeper and deeper into knowledge of the world. (On a sidenote, science has become such a loaded, political word that this basic truth of how it works frequently seems to be lost!)

Individuals learn this way too. From the time you were a child, you were asking why (maybe even too much), trying to figure out all the right questions to ask to get better information about how the world works and what to do about it.

Because question-asking is such an integral part of how we come to know the world, both institutionally and individually, it seems worthwhile to understand how creative inquiry works. If we want to do things that haven’t been done or learn things that have never been learned — in short, be more creative — we must learn to ask the right questions, ones so good that they’re half-answered in the asking. And to do that, it helps to understand the process.

Warren Berger proposes a simple method in his book A More Beautiful Question: an interesting three-part system to help (partially) solve the problem of inquiry. He calls it The Why, What If, and How of Innovative Questioning, and he reminds us why it’s worth learning about.

Each stage of the problem solving process has distinct challenges and issues–requiring a different mind-set, along with different types of questions. Expertise is helpful at certain points, not so helpful at others; wide-open, unfettered divergent thinking is critical at one stage, discipline and focus is called for at another. By thinking of questioning and problem solving in a more structured way, we can remind ourselves to shift approaches, change tools, and adjust our questions according to which stage we’re entering.

Three-Part Method for More Creativity


It starts with the Why?

A good Why? seeks true understanding. Why are things the way they are currently? Why do we do it that way? Why do we believe what we believe?

This start is essential because it gives us permission to continue down a line of inquiry fully equipped. Although we may think we have a brilliant idea in our heads for a new product, or a new answer to an old question, or a new way of doing an old thing, unless we understand why things are the way they are, we’re not yet on solid ground. We never want to operate from a position of ignorance, wasting our time on an idea that hasn’t been pushed and fleshed out. Before we say “I already know” the answer, maybe we need to step back and look for the truth.

At the same time, starting with a strong Why also opens up the idea that the current way (whether it’s our way or someone else’s) might be wrong, or at least inefficient. Let’s say a friend proposes you go to the same restaurant you’ve been to a thousand times. It might be a little agitating, but a simple “Why do we always go there?” allows two things to happen:

A. Your friend can explain why, and this gives him or her a legitimate chance at persuasion (if you’re open-minded).

B. The two of you may agree you only go there out of habit, and might like to go somewhere else.

This whole Why? business is the realm of contrarian thinking, which not everyone enjoys doing. But Berger cites the case of George Lois:

George Lois, the renowned designer of iconic magazine covers and celebrated advertising campaigns, was also known for being a disruptive force in business meetings. It wasn’t just that he was passionate in arguing for his ideas; the real issue, Lois recalls, was that often he was the only person in the meeting willing to ask why. The gathered business executives would be anxious to proceed on a course of action assumed to be sensible. While everyone else nodded in agreement, “I would be the only guy raising his hand to say, ‘Wait a minute, this thing you want to do doesn’t make any sense. Why the hell are you doing it this way?’”

Others in the room saw Lois to be slowing the meeting and stopping the group from moving forward. But Lois understood that the group was apt to be operating on habit–trotting out an idea or approach similar to what had been done in similar situations before, without questioning whether it was the best idea or the right approach in this instance. The group needed to be challenged to “step back” by someone like Lois–who had a healthy enough ego to withstand being the lone questioner in the room.

The truth is that a really good Why? type question tends to be threatening. That’s also what makes it useful. It challenges us to step back and stop thinking on autopilot. It also requires what Berger calls a step back from knowing — that recognizable feeling of knowing something but not knowing how you know it. This forced perspective is, of course, as valuable a thing as you can do.

Berger describes a valuable exercise that’s sometimes used to force perspective on people who think they already have a complete answer. After showing a drawing of a large square (seemingly) divided into 16 smaller squares, the facilitator (a questioner Berger calls Srinivas) asks the audience, “How many squares do you see?”

The easy answer is sixteen. But the more observant people in the group are apt to notice–especially after Srinivas allows them to have a second, longer, look–that you can find additional squares by configuring them differently. In addition to the sixteen single squares, there are nine two-by-two squares, four three-by-three squares, and one large four-by-four square, which brings the total to thirty squares.

“The squares were always there, but you didn’t find them until you looked for them.”

Point being, until you step back, re-examine, and look a little harder, you might not have seen all the damn squares yet!
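The arithmetic behind the exercise checks out, and it generalizes: an n-by-n grid contains (n − k + 1)² squares of side k, so the total is the sum of the first n square numbers. A quick check (a sketch of ours, not from Berger’s book):

```python
def count_squares(n):
    # squares of side k fit in (n - k + 1) ** 2 positions,
    # so the total is 1**2 + 2**2 + ... + n**2
    return sum((n - k + 1) ** 2 for k in range(1, n + 1))

print(count_squares(4))  # prints 30: the 16 + 9 + 4 + 1 from the exercise
```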

What If?

The second part is where a good questioner, after using Why? to understand as deeply as possible and open a new line of inquiry, proposes a new type of solution, usually an audacious one — all great ideas tend to be, almost by definition — by asking What If…?

Berger illustrates this one well with the story of Pandora Music. The founder Tim Westergren wanted to know why good music wasn’t making it out to the masses. His search didn’t lead to a satisfactory answer, so he eventually asked himself, What if we could map the DNA of music? The result has been pretty darn good, with something close to 80 million listeners at present:

The Pandora story, like many stories of inquiry-driven startups, started with someone’s wondering about an unmet need. It concluded with the questioner, Westergren, figuring out how to bring a fully realized version of the answer into the world.

But what happened in between? That’s when the lightning struck. In Westergren’s case, ideas and influences began to come together; he combined what he knew about music with what he was learning about technology. Inspiration was drawn from a magazine article, and from a seemingly unrelated world (biology). A vision of the new possibility began to form in the mind. It all resulted in an audacious hypothetical question that might or might not have been feasible–but was exciting enough to rally people to the challenge of trying to make it work.

The What If stage is the blue-sky moment of questioning, when anything is possible. Those possibilities may not survive the more practical How stage; but it’s critical to innovation that there be time for wild, improbable ideas to surface and to inspire.

If the word Why has penetrative power, enabling the questioner to get past assumptions and dig deep into problems, the words What if have a more expansive effect–allowing us to think without limits or constraints, firing the imagination.

Clearly, Westergren had engaged in serious combinatorial creativity, pulling from multiple disciplines, which led him to ask the right kind of questions. This seems to be a pretty common feature at this stage of the game, and an extremely common feature of all new ideas:

Smart recombinations are all around us. Pandora, for example, is a combination of a radio station and search engine; it also takes the biological method of genetic coding and transfers it to the domain of music […] In today’s tech world, many of the most successful products–Apple’s iPhone being just one notable example–are hybrids, melding functions and features in new ways.

Companies, too, can be smart recombinations. Netflix started as a video-rental business that operated like a monthly membership health club (and now it has added “TV production studio” to the mix). Airbnb is a combination of an online travel agency, a social media platform, and a good old-fashioned bed-and-breakfast (the B&B itself is a smart combination from way back).

It may be that the Why? –> What If? line of inquiry is common to all types of innovative thinking because it engages the part of our mind that turns old ideas over in new ways, combining them with other, seemingly unrelated ideas, many of them previously sitting idle in our subconscious. That churning is where new ideas really arise.

The idea then has to be “reality-tested”, and that’s where the last major question comes in.


How?

Once we think we’ve hit on a brilliant new idea, it’s time to see if the thing actually works. Most of the time, the answer is no. But often enough to make it worth our while, we discover that the new idea has legs.

The most common problem here is that we try to perfect a new idea all at once, leading to stagnation and paralysis. That’s usually the wrong approach.

Another, often better, way is to try the idea quickly and start getting feedback. As much as possible. In the book, Berger describes a fun little experiment that drives home the point, and serves as a fairly useful business metaphor besides:

A software designer shared a story about an interesting experiment in which the organizers brought together a group of kindergarten children who were divided into small teams and given a challenge: Using uncooked spaghetti sticks, string, tape, and a marshmallow, they had to assemble the tallest structure they could, within a time limit (the marshmallow was supposed to be placed on top of the completed structure.)

Then, in a second phase of the experiment, the organizers added a new wrinkle. They brought in teams of Harvard MBA grad students to compete in the challenge against the kindergartners. The grad students, I’m told, took it seriously. They brought a highly analytical approach to the challenge, debating among themselves about how best to combine the sticks, the string, and the tape to achieve maximum altitude.

Perhaps you’ll have guessed this already, but the MBA students were no match for the kindergartners. For all their planning and discussion, the structures they carefully conceived invariably fell apart–and then they were out of time before they could get in more attempts.

The kids used their time much more efficiently by constructing right away. They tried one way of building, and if it didn’t work, they quickly tried another. They got in a lot more tries. They learned from their mistakes as they went along, instead of attempting to figure out everything in advance.

This little experiment gets run in the real world all the time by startups looking to outcompete ponderous old bureaucracies. They simply substitute velocity for scale and see what happens — it often works well.

The point is to move along the axis of Why? –> What If? –> How? without too much self-censoring in the last phase. Being afraid to fail can often mean a great What If? proposition gets stuck there forever. Analysis paralysis, as it’s sometimes called. But if you can instead enter the How? testing stage quickly, even by showing that an idea won’t work, then you can start the loop over again, either asking a new Why? or proposing a new What If? to an existing Why?

Thus moving your creative engine forward.


Berger’s point is that there is an intense practical end to understanding productive inquiry. Just like “If I do this, then what will happen?” is a basic structure on which all manner of complex scientific questioning and testing is built, so can a simple Why, What If, and How structure catalyze a litany of new ideas.

Still Interested? Check out the book, or check out some related posts: Steve Jobs on Creativity; Seneca on Gathering Ideas and Combinatorial Creativity; or, for some fun with question-asking, What If? Serious Scientific Answers to Absurd Hypothetical Questions.