We came across a cool book recently called Logically Fallacious: The Ultimate Collection of Over 300 Logical Fallacies, by a social psychologist named Bo Bennett. We were a bit skeptical at first (lists like that can lack thoughtfulness and synthesis), but then we were hooked by a sentence in the introduction that brought the book near and dear to our hearts:
This book is a crash course, meant to catapult you into a world where you start to see things how they really are, not how you think they are.
We could use the same tag line for Farnam Street. (What was that thing about great artists stealing?)
Logically Fallacious is a fun little reference guide to bad thinking, but let’s try to highlight a few fallacies that seem to arise quite often without enough recognition. (To head off any objections at the pass: most of these are not strict logical fallacies in the technical sense, but rather examples of bad reasoning.)
No True Scotsman
This one is a favorite. It arises when someone makes a sweeping claim that a “real” or “true” so-and-so would only do X or would never do Y.
Example: “No true Scotsman would drink an ale like that!”
“I know dyed-in-the-wool Scotsmen who drink many such ales!”
“Well then he’s not a True Scotsman!”
Problem: The problem should be obvious: it’s a circular definition! A True Scotsman is defined as anyone who would not drink such ales, so any Scotsman who does drink them is simply declared not a True Scotsman, and the claim can never be disproven. It’s non-falsifiable. There’s a puritanical aspect to this line of reasoning that almost always leads to circularity.
Genetic Fallacy
This doesn’t have to do with genetics per se so much as the genetic origin of an argument. The “genetic fallacy” is when you dismiss someone’s argument based solely on some aspect of their background or the motivation behind the claim.
Example: “Of course Joe’s arguing that unions are good for the world, he’s the head of the Local 147 Chapter!”
Problem: Whether or not Joe is the head of his local union chapter has nothing to do with whether unions are good or bad. It may well influence his argument, but it doesn’t invalidate it. You must assess the merits of the argument, rather than the merits of Joe, to figure out whether it’s true.
Failure to Elucidate
This is when someone tries to “explain” something slippery by redefining it in an equally nebulous way, instead of actually explaining it. Hearing something stated this way is usually a strong indicator that the person doesn’t know what they’re talking about.
Example: “The Secret works because of the vibration of sub-lingual frequencies.”
“What the heck are sub-lingual frequencies?”
“They’re waves of energy that exist below the level of our consciousness.”
Problem: The claimant thinks they have explained the thing satisfactorily, but they haven’t: they’ve simply offered another useless definition that does no work in explaining why the claim makes any sense. Too often the challenger will simply accept the follow-up, or worse, repeat it to others, without ever getting a satisfactory explanation. In a Feynman-like way, you must keep probing, and if the probes reveal more failures to elucidate, you can likely reject the claim, at least until real evidence is presented.
Fallacy of the Single Cause
This relates closely to Nassim Taleb’s work and the concept of the Narrative Fallacy: an undue simplifying of reality into a single cause-and-effect chain.
Example: “Warren Buffett was successful because his dad was a Congressman. He had a leg up I don’t have!”
Problem: This form of argument is used pretty frequently because the claimant wishes it were true or is otherwise comfortable with the narrative. It resolves reality into a neat little box, when actual reality is complicated. To address this particular example: extreme success on the level of a Buffett clearly has multiple causes acting in the same direction. His father’s political career is probably way down the list.
This fallacy is common in conspiracy theory-type arguments, where the proponent is convinced that because they have some inarguable facts — Howard Buffett was a congressman; being politically connected offers some advantages — their conclusion must also be correct. They ignore other explanations that are likely to be more correct, or refuse to admit that we don’t quite know the answer. Reductionism leads to a lot of wrong thinking — the antidote is learning to think more broadly and be skeptical of narratives.
Fallacy of Composition/Fallacy of Division
These two fallacies are two sides of the same coin. The first is thinking that if some part of a greater whole has certain properties, the whole must share those properties. The second is the reverse: thinking that because a whole has certain properties, its constituent parts must necessarily share them.
Examples: “Your brain is made of molecules, and molecules are not conscious, so your brain must not be the source of consciousness.”
“Wall Street is a dishonest place, and so my neighbor Steve, who works at Goldman Sachs, must be a crook.”
Problem: In the first example, taken directly from the book, we’re ignoring emergent properties: qualities that emerge from the combination of elements with more mundane innate qualities. (Like a great corporate culture.) In the second example, we make the mirrored mistake: we assume that each constituent part of a system must share the traits of the whole (i.e., because Wall Street is a dishonest system, your neighbor must be dishonest), forgetting that the greed may be emergent in the system itself, arising even from a group of otherwise fairly honest people.
Still Interested? Check out the whole book. It’s fun to pick up regularly and see which fallacies you can start recognizing all around you.