


If we are to intervene in what would otherwise happen, we need an idea not only of the benefits of our interventions but also of the harms. Otherwise, how will we know when, despite our best intentions, we cause more harm than good? This is iatrogenics.

Intervening when we have no idea of the break-even point is “naive interventionism,” a phrase first brought to my attention by Nassim Taleb.

In Antifragile, he writes:

In the case of tonsillectomies, the harm to the children undergoing unnecessary treatment is coupled with the trumpeted gain for some others. The name for such net loss, the (usually hidden or delayed) damage from treatment in excess of the benefits, is iatrogenics, literally, “caused by the healer,” iatros being a healer in Greek.

Why would people intervene even when the evidence shows that intervening causes more harm than good?

I can think of a few factors that account for most of why this pattern repeats itself over and over.

The first thing that goes through my mind is incentive-caused bias. What is the incentive for action? Is there an agency gap, where the outcome for the person doing the intervening is disconnected from the outcome for the person experiencing it?

Another big reason I think this happens is a lack of clear feedback loops between action and outcome. It’s hard to know you’re causing harm if you can’t trace the action to the outcome. This allows, even encourages, self-delusion. Given that we are prone to confirming our beliefs—and presumably we took action because we believed it would help—we’re unlikely to notice evidence that contradicts those beliefs. We should be seeking disconfirming evidence about our actions, but we don’t, because if we did, we’d discover we’re a lot less smart than we think we are.

And the third major contributor, I’d say, is our bias for action (especially what we consider positive action). This is also known, to paraphrase Charlie Munger, as do-something syndrome. If you’re a policy advisor or politician, or heck, even a modern office worker, social norms make it hard for you to say “I don’t know.” You’re expected to have an answer for everything.

Think about how a typical meeting starts. In response to a new product from a competitor, for example, the first question people usually ask is “What are we going to do about this?” The hidden assumption that goes unexplored is that you need to do something. It could be that the cost of doing something outweighs the benefits.

Medicine has known about iatrogenics since at least the fourth century before our era: primum non nocere (“first, do no harm”) is a first principle attributed to Hippocrates and integrated into the so-called Hippocratic Oath taken by every medical doctor on commencement day.

The very notion of iatrogenics is quite absent from the discourse outside medicine (which, to repeat, has been a rather slow learner). (Source: Antifragile)

The concept of iatrogenics applies to domains well beyond medicine: it covers any situation in which we cause more harm than good under the guise of knowledge.

Follow your curiosity and read about inversion: stop trying to be brilliant and start trying to avoid obvious stupidity.