
The Truth Wears Off: Is there something wrong with the scientific method?

Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true.

Much of what medical researchers conclude in their studies is misleading, exaggerated, or flat-out wrong. The findings are noise. Why? Because we're human.

Human nature supplies many forces nudging us to prove ourselves right rather than wrong: insensitivity to regression, commitment and consistency bias, confirmation bias, and, of course, incentives. If you've spent years testing a hypothesis, you have a lot invested in the outcome, and it becomes psychologically easier to fudge some of the data to conform to your beliefs than to admit that the last few years of your life failed to prove your hypothesis.

Sometimes we're so blind that even after a claim has been systematically disproven, some stubborn researchers still cite the first few studies that showed a strong effect. We see this outside the science world too. After DNA evidence has exonerated a defendant, police officers and prosecutors who have a lot invested in the case will still cling to the belief that he's guilty. They can't explain why they know the person is guilty despite the evidence.

Jonah Lehrer penned an interesting article in the New Yorker bringing to light the decline effect: all sorts of well-established, multiply confirmed scientific findings are now starting to look increasingly uncertain. According to Lehrer, "it's as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable."

…Different scientists in different labs need to repeat the protocols and publish their results. The test of replicability, as it’s known, is the foundation of modern research. Replicability is how the community enforces itself. It’s a safeguard for the creep of subjectivity. Most of the time, scientists know what results they want, and that can influence the results they get. The premise of replicability is that the scientific community can correct for these flaws.

But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable. This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology. In the field of medicine, the phenomenon seems extremely widespread, affecting not only antipsychotics but also therapies ranging from cardiac stents to Vitamin E and antidepressants: Davis has a forthcoming analysis demonstrating that the efficacy of antidepressants has gone down as much as threefold in recent decades.

For many scientists, the effect is especially troubling because of what it exposes about the scientific process. If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved? Which results should we believe? Francis Bacon, the early-modern philosopher and pioneer of the scientific method, once declared that experiments were essential, because they allowed us to “put nature to the question.” But it appears that nature often gives us different answers.
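One mechanism behind the decline effect is regression to the mean combined with selective publication: noisy studies that happen to show impressive effects get published, and honest replications then drift back toward the truth. The toy simulation below illustrates this; every number in it (true effect, noise level, publication threshold) is an illustrative assumption, not anything measured in the article.

```python
import random

# Toy simulation of the "decline effect" as regression to the mean
# plus publication bias. All parameters below are assumed for
# illustration, not taken from any real study.
random.seed(42)

TRUE_EFFECT = 0.2            # the real (modest) effect size
NOISE = 0.15                 # standard error of a single study
PUBLICATION_THRESHOLD = 0.4  # only "impressive" results get published

def run_study():
    """One noisy measurement of the true effect."""
    return random.gauss(TRUE_EFFECT, NOISE)

# Initial findings: out of many studies, journals publish only
# those whose measured effect clears the threshold.
published = [e for e in (run_study() for _ in range(10_000))
             if e > PUBLICATION_THRESHOLD]

# Replications are fresh, unfiltered measurements of the same effect.
replications = [run_study() for _ in published]

initial_mean = sum(published) / len(published)
replication_mean = sum(replications) / len(replications)

print(f"mean published initial effect: {initial_mean:.2f}")
print(f"mean replication effect:       {replication_mean:.2f}")
# The published effects systematically overstate the truth, and the
# replications "decline" back toward the true effect -- no fraud required.
```

Note that no researcher in this simulation does anything dishonest; the apparent decay of the effect is produced entirely by which results survive the filter.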

Continue Reading

If you liked this, you'll probably like our post on Lies, Damned Lies, and Medical Science.

Jonah Lehrer is the author of How We Decide and Proust Was a Neuroscientist.