
Action vs. Inaction



Two weeks ago, the U.S. Preventive Services Task Force released new recommendations on breast cancer screening, including: “The USPSTF recommends against routine screening mammography in women aged 40 to 49 years.”


So, what cognitive bias led this government agency to move the decimal point in their heads at least four places from where they would normally put it?

I think the key is that this report recommended inaction rather than action. In certain contexts, inaction seems safer than action.

Imagine what would happen if the FDA faced an identical choice, but with action and inaction flipped. Say you have an anti-anxiety drug that, in 15% of the patients who take it, eliminates anxiety of the same level caused by a false positive on a mammogram, and that kills only 1 out of every 2,000 patients who take it. Per week.
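To make the "per week" sting concrete, here is a quick back-of-the-envelope annualization. The 1-in-2,000 weekly rate comes from the text; compounding it over 52 weeks is my own illustrative sketch, assuming independent weekly risk:

```python
# Annualize a 1-in-2000 weekly death risk (weekly figure from the text;
# the compounding over 52 independent weeks is an illustrative assumption).
weekly_risk = 1 / 2000

# Probability of surviving all 52 weekly exposures, then its complement:
survival_year = (1 - weekly_risk) ** 52
annual_risk = 1 - survival_year

print(f"Annual risk of death: {annual_risk:.2%}")  # roughly 2.6%, about 1 in 39
```

A drug that kills roughly one patient in forty per year to relieve mild anxiety in 15% of them makes the hypothetical's answer obvious, which is the point of the comparison.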

Would the FDA approve this drug? Approval, after all, does not mean recommending it; it means that the decision to use it can be left to the doctor and patient. The USPSTF report stressed that such decisions must always be left up to the doctor and patient; by the same standards, the FDA should certainly approve the drug. Yet I think it would not.

The puzzle is why we have the opposite bias in other contexts. When Congress was debating the bank bailouts and the stimulus package, a lot could have been said in favor of doing nothing; but no one even suggested it. Empirically, we have a much higher success rate at intervening in health than in economics. Yet in health, we regulate actions as if they were inherently dangerous, while in economics, we see inaction as inherently dangerous. Why?