Iatrogenics: Why Intervention Often Leads to Worse Outcomes

Iatrogenics occurs when a treatment causes more harm than benefit. Since iatros means healer in Greek, the word literally means “caused by the healer” or “brought by the healer.” The healer need not be a doctor; it can be anyone intervening to solve a problem: a person, a government, or a coalition of the willing.

Today we use the term iatrogenics to refer to any harm from an intervention that exceeds its benefit. Some examples are easier to recognize than others. When the negative effects are immediate, visible, and appear to follow directly from the intervention, we can reasonably conclude that the intervention caused them. When the negative effects are delayed, however, or could be explained by multiple causes, we are far less likely to draw that conclusion.

A great example of iatrogenics in action is the death of George Washington. In 1799, as he lay dying from a bacterial infection, his well-intentioned doctors hastened his death with the standard treatment of the time: bloodletting (at least five pints of blood, according to Ron Chernow).

More controversial examples exist as well, such as military interventions in the Middle East. In these cases the linkages between cause and effect are clouded by narratives and moral arguments. (A great book to read on this is Perilous Interventions.) And when those linkages are murky, the very people who caused the harm (intentionally or not) are often the ones rewarded for improving the situation.

The key lesson here is that if we are to intervene, we need a solid idea not only of the benefits of our interventions but also of the harm we may cause: the second-order effects. Otherwise, how will we know when, despite our best intentions, we do more harm than good?

Intervening when we have no idea of the break-even point is “naive interventionism,” a phrase first brought to my attention by Nassim Taleb.

In Antifragile, he writes:

In the case of tonsillectomies, the harm to the children undergoing unnecessary treatment is coupled with the trumpeted gain for some others. The name for such net loss, the (usually hidden or delayed) damage from treatment in excess of the benefits, is iatrogenics.

***

Why would people keep intervening even when the evidence shows that the intervention is causing more harm than good?

I can think of a few factors that carry most of the weight in explaining why this pattern repeats itself over and over.

The first thing that goes through my mind is incentive-caused bias. What is the incentive for action? Is there an agency gap, where the outcome for the person doing the intervening is disconnected from the outcome for the person experiencing it? Does the healer have skin in the game?

Another reason is time. When there are no clear feedback loops between action and outcome, it's hard to know you're causing harm. This allows, even encourages, some self-delusion. Given that we are prone to confirming our beliefs (and presumably we took action because we believed it would help), we're unlikely to see evidence that contradicts them. We should be seeking evidence that disconfirms our actions, but we don't, because if we did we'd discover we're a lot less smart than we think we are.

And the third major contributor, I'd say, is our bias for action (especially what we consider positive action). To paraphrase Charlie Munger, this is “do-something syndrome.” If you're a policy advisor or a politician, or heck, even a modern office worker, social norms make it hard for you to say “I don't know.” You're expected to have an opinion on everything.

***

Hippocrates gave medicine its first principle, primum non nocere (“first, do no harm”), which is precisely about avoiding iatrogenic effects. This is inversion. Outside of medicine, however, the concept is little known.

Think about how a typical meeting starts. In response to a new product from a competitor, for example, the first question people usually ask is “What are we going to do about this?” The hidden assumption that goes unexplored is that you need to do something. Rarely do we even consider that the cost of doing something might outweigh the benefits. And if you do nothing, it will look to your boss as though you're not doing anything. You have an incentive to be seen doing something, even when the cost of taking action is high.

***

So what can we learn from all of this?

Intervention, whether by people or governments, should be undertaken only when the benefits visibly outweigh the negatives. A great example is saving a life. “Otherwise,” Nassim Taleb writes in Antifragile, “in situations in which the benefits of a particular medicine, procedure, or nutritional or lifestyle modification appear small—say, those aiming for comfort—we have a large potential sucker problem (hence putting us on the wrong side of convexity effects).”

A simple rule for the decision maker is that an intervention needs to prove its benefits, and those benefits need to be orders of magnitude higher than those of the natural (that is, non-interventionist) path. We intuitively know this already: we won't switch apps or brands for a marginal improvement over the status quo. Only when the benefits are orders of magnitude higher do we switch.

We must also recognize that some systems self-correct; this is the essence of homeostasis. Naive interventionists deny that natural homeostatic mechanisms are sufficient, insisting that “something needs to be done,” yet often the best course of action is nothing at all.