Tag: Atul Gawande

Atul Gawande and the Mistrust of Science

Continuing with Commencement Season, Atul Gawande gave an address to the students of Caltech last Friday, delivering a message to future scientists, but one that applies equally to all of us as thinkers:

“Even more than what you think, how you think matters.”

Gawande addresses the current growing mistrust of “scientific authority” — the thought that because science creaks along one mistake at a time, it isn't to be trusted. The misunderstanding of what scientific thinking is and how it works is at the root of much problematic ideology, and it's up to those who do understand it to promote its virtues.

It's important to realize that individual scientists are as fallible as the rest of us. Thinking otherwise only sets you up for disappointment. The point of science is the collective, the forward advance of the hive, not the bee. Seen up close, it's something of a sausage-making factory, but when you pull back the view, it looks like a beautifully humming engine, steadily giving us more and more information about ourselves and the world around us. Science is, above all, a method of thought: a way of figuring out what's true and what we're just fooling ourselves about.

So explains Gawande:

Few working scientists can give a ground-up explanation of the phenomenon they study; they rely on information and techniques borrowed from other scientists. Knowledge and the virtues of the scientific orientation live far more in the community than the individual. When we talk of a “scientific community,” we are pointing to something critical: that advanced science is a social enterprise, characterized by an intricate division of cognitive labor. Individual scientists, no less than the quacks, can be famously bull-headed, overly enamored of pet theories, dismissive of new evidence, and heedless of their fallibility. (Hence Max Planck’s observation that science advances one funeral at a time.) But as a community endeavor, it is beautifully self-correcting.

Beautifully organized, however, it is not. Seen up close, the scientific community—with its muddled peer-review process, badly written journal articles, subtly contemptuous letters to the editor, overtly contemptuous subreddit threads, and pompous pronouncements of the academy—looks like a rickety vehicle for getting to truth. Yet the hive mind swarms ever forward. It now advances knowledge in almost every realm of existence—even the humanities, where neuroscience and computerization are shaping understanding of everything from free will to how art and literature have evolved over time.

He echoes Steven Pinker in the thought that science, traditionally left to the realm of discovering “physical” reality, is now making great inroads into what might have previously been considered philosophy, by exploring why and how our minds work the way they do. This can only be accomplished by deep critical thinking across a broad range of disciplines, and by the dual attack of specialists uncovering highly specific nuggets and great synthesizers able to suss out meaning from the big pile of facts.

The whole speech is worth a read and reflection, but Gawande's conclusion is particularly poignant for an educated individual in a Republic:

The mistake, then, is to believe that the educational credentials you get today give you any special authority on truth. What you have gained is far more important: an understanding of what real truth-seeking looks like. It is the effort not of a single person but of a group of people—the bigger the better—pursuing ideas with curiosity, inquisitiveness, openness, and discipline. As scientists, in other words.

Even more than what you think, how you think matters. The stakes for understanding this could not be higher than they are today, because we are not just battling for what it means to be scientists. We are battling for what it means to be citizens.

Still Interested? Read the rest, and read a few of this year's other commencement addresses by Nassim Taleb and Gary Taubes. Or read about E.O. Wilson, the great Harvard biologist, and what he thought it took to become a great scientist. (Hint: The same stuff it takes for anyone to become a great critical thinker.)

Atul Gawande: The Building Industry’s Strategy for Getting Things Right in Complexity


Checklists establish a higher level of baseline performance.

***

A useful reminder from Atul Gawande, in The Checklist Manifesto:

In a complex environment, experts are up against two main difficulties. The first is the fallibility of human memory and attention, especially when it comes to mundane, routine matters that are easily overlooked under the strain of more pressing events. (When you’ve got a patient throwing up and an upset family member asking you what’s going on, it can be easy to forget that you have not checked her pulse.) Faulty memory and distraction are a particular danger in what engineers call all-or-none processes: whether running to the store to buy ingredients for a cake, preparing an airplane for takeoff, or evaluating a sick person in the hospital, if you miss just one key thing, you might as well not have made the effort at all.

A further difficulty, just as insidious, is that people can lull themselves into skipping steps even when they remember them. In complex processes, after all, certain steps don’t always matter. … “This has never been a problem before,” people say. Until one day it is.

Checklists seem to provide protection against such failures. They remind us of the minimum necessary steps and make them explicit. They not only offer the possibility of verification but also instill a kind of discipline of higher performance.

***

How you employ the checklist is also important. In the face of complexity, most organizations tend to centralize decisions, which reduces the risk of egregious error. But the costs of this approach are high, too. Most employees loathe feeling like they need a hall pass to use the washroom. That's why these next comments were so inspiring.

There is a particularly tantalizing aspect to the building industry’s strategy for getting things right in complex situations: it’s that it gives people power. In response to risk, most authorities tend to centralize power and decision making. That’s usually what checklists are about—dictating instructions to the workers below to ensure they do things the way we want. Indeed, the first building checklist I saw, the construction schedule on the right-hand wall of O’Sullivan’s conference room, was exactly that. It spelled out to the tiniest detail every critical step the tradesmen were expected to follow and when—which is logical if you’re confronted with simple and routine problems; you want the forcing function.

But the list on O’Sullivan’s other wall revealed an entirely different philosophy about power and what should happen to it when you’re confronted with complex, nonroutine problems—such as what to do when a difficult, potentially dangerous, and unanticipated anomaly suddenly appears on the fourteenth floor of a thirty-two-story skyscraper under construction. The philosophy is that you push the power of decision making out to the periphery and away from the center. You give people the room to adapt, based on their experience and expertise. All you ask is that they talk to one another and take responsibility. That is what works.

The strategy is unexpectedly democratic, and it has become standard nowadays, O’Sullivan told me, even in building inspections. The inspectors do not recompute the wind-force calculations or decide whether the joints in a given building should be bolted or welded, he said. Determining whether a structure like Russia Wharf or my hospital’s new wing is built to code and fit for occupancy involves more knowledge and complexity than any one inspector could possibly have. So although inspectors do what they can to oversee a building’s construction, mostly they make certain the builders have the proper checks in place and then have them sign affidavits attesting that they themselves have ensured that the structure is up to code. Inspectors disperse the power and the responsibility.

“It makes sense,” O’Sullivan said. “The inspectors have more troubles with the safety of a two-room addition from a do-it-yourselfer than they do with projects like ours. So that’s where they focus their efforts.” Also, I suspect, at least some authorities have recognized that when they don’t let go of authority they fail.

No Risky Chances: The Conversation That Matters Most

Lacking a coherent view of how people might live successfully all the way to the very end, we have allowed our fates to be controlled by medicine, technology, and strangers.

 

Atul Gawande is one of my favorite writers. Aside from the amazing work he did getting us talking about the power of simple checklists, he's also pointed out why most of us should have coaches. Now he's out with a new book, Being Mortal: Medicine and What Matters in the End, which adds to our ongoing conversation on what it means to be mortal.

I learned about a lot of things in medical school, but mortality wasn’t one of them.

Although I was given a dry, leathery corpse to dissect in anatomy class in my first term, our textbooks contained almost nothing about aging or frailty or dying. The purpose of medical schooling was to teach how to save lives, not how to tend to their demise.

I had never seen anyone die before I became a doctor, and when I did, it came as a shock. I’d seen multiple family members—my wife, my parents, and my children—go through serious, life-threatening illnesses, but medicine had always pulled them through. I knew theoretically that my patients could die, of course, but every actual instance seemed like a violation, as if the rules I thought we were playing by were broken.

Dying and death confront every new doctor and nurse. The first times, some cry. Some shut down. Some hardly notice. When I saw my first deaths, I was too guarded to weep. But I had recurring nightmares in which I’d find my patients’ corpses in my house—even in my bed.

I felt as if I’d failed. But death, of course, is not a failure. Death is normal. Death may be the enemy, but it is also the natural order of things. I knew these truths abstractly, but I didn’t know them concretely—that they could be truths not just for everyone but also for this person right in front of me, for this person I was responsible for.

You don’t have to spend much time with the elderly or those with terminal illness to see how often medicine fails the people it is supposed to help. The waning days of our lives are given over to treatments that addle our brains and sap our bodies for a sliver’s chance of benefit. These days are spent in institutions—nursing homes and intensive-care units—where regimented, anonymous routines cut us off from all the things that matter to us in life.

As recently as 1945, most deaths occurred in the home. By the 1980s, just 17 percent did. Lacking a coherent view of how people might live successfully all the way to the very end, we have allowed our fates to be controlled by medicine, technology, and strangers.

But not all of us have. That takes, however, at least two kinds of courage. The first is the courage to confront the reality of mortality—the courage to seek out the truth of what is to be feared and what is to be hoped when one is seriously ill. Such courage is difficult enough, but even more daunting is the second kind of courage—the courage to act on the truth we find.

A few years ago, I got a late night page: Jewel Douglass, a 72-year-old patient of mine receiving chemotherapy for metastatic ovarian cancer, was back in the hospital, unable to hold food down. For a week, her symptoms had mounted: They started with bloating, became waves of crampy abdominal pain, then nausea and vomiting.

Her oncologist sent her to the hospital. A scan showed that, despite treatment, her ovarian cancer had multiplied, grown, and partly obstructed her intestine. Her abdomen had also filled with fluid. The deposits of tumor had stuffed up her lymphatic system, which serves as a kind of storm drain for the lubricating fluids that the body’s internal linings secrete. When the system is blocked, the fluid has nowhere to go. The belly fills up like a rubber ball until you feel as if you will burst.

But walking into Douglass’ hospital room, I’d never have known she was so sick if I hadn’t seen the scan. “Well, look who’s here!” she said, as if I’d just arrived at a cocktail party. “How are you, doctor?”

“I think I’m supposed to ask you that,” I said.

She smiled brightly and pointed around the room. “This is my husband, Arthur, whom you know, and my son, Brett.” She got me grinning. Here it was, 11 at night, she couldn’t hold down an ounce of water, and she still had her lipstick on, her silver hair was brushed straight, and she was insisting on making introductions.

Her oncologist and I had a menu of options. A range of alternative chemotherapy regimens could be tried to shrink the tumor burden, and I had a few surgical options too. I wouldn’t be able to remove the intestinal blockage, but I might be able to bypass it, I told her. Or I could give her an ileostomy, disconnecting the bowel above the blockage and bringing it through the skin to empty into a bag. I would also put in a couple of drainage catheters—permanent spigots that could be opened to release the fluids from her blocked-up drainage ducts or intestines when necessary. Surgery risked serious complications—wound breakdown, leakage of bowel into her abdomen, infections—but it was the only way she might regain her ability to eat.

I also told her that we did not have to do either chemo or surgery. We could provide medications to control her pain and nausea and arrange for hospice care at home.

This is the moment when I would normally have reviewed the pros and cons. But we are only gradually learning in the medical profession that this is not what we need to do. The options overwhelmed her. They all sounded terrifying. So I stepped back and asked her a few questions I learned from hospice and palliative care physicians, hoping to better help both of us know what to do: What were her biggest fears and concerns? What goals were most important to her? What trade-offs was she willing to make?

Not all can answer such questions, but she did. She said she wanted to be without pain, nausea, or vomiting. She wanted to eat. Most of all, she wanted to get back on her feet. Her biggest fear was that she wouldn’t be able to return home and be with the people she loved.

I asked what sacrifices she was willing to endure now for the possibility of more time later. “Not a lot,” she said. Uppermost in her mind was a wedding that weekend that she was desperate not to miss. “Arthur’s brother is marrying my best friend,” she said. She’d set them up on their first date. The wedding was just two days away. She was supposed to be a bridesmaid. She was willing to do anything to make it, she said.

Suddenly, with just a few simple questions, I had some guidance about her priorities. So we made a plan to see if we could meet them. With a long needle, we tapped a liter of tea-colored fluid from her abdomen, which made her feel at least temporarily better. We gave her medication to control her nausea. We discharged her with instructions to drink nothing thicker than apple juice and to return to see me after the wedding.

She didn’t make it. She came back to the hospital that same night. Just the car ride, with its swaying and bumps, made her vomit, and things only got worse at home.

We agreed that surgery was the best course now and scheduled it for the next day. I would focus on restoring her ability to eat and putting drainage tubes in. Afterward, she could decide if she wanted more chemotherapy or to go on hospice.

She was as clear as I’ve seen anyone be about her goals, but she was still in doubt. The following morning, she canceled the operation. “I’m afraid,” she said. She’d tossed all night, imagining the pain, the tubes, the horrors of possible complications. “I don’t want to take risky chances,” she said.

Her difficulty wasn’t lack of courage to act in the face of risks; it was sorting out how to think about them. Her greatest fear was of suffering, she said. Couldn’t the operation make it worse rather than better?

It could, I said. Surgery offered her the possibility of being able to eat again and a very good likelihood of controlling her nausea, but it carried substantial risk of giving her only pain without improvement or adding new miseries. She had, I estimated, a 75 percent chance that surgery would make her future better, at least for a little while, and a 25 percent chance it’d make it worse.

The brain gives us two ways to evaluate experiences like suffering—how we apprehend such experiences in the moment and how we look at them afterward. People seem to have two different selves—an experiencing self who endures every moment equally and a remembering self who, as the Nobel Prize–winning researcher Daniel Kahneman has shown, gives almost all the weight of judgment afterward to just two points in time: the worst moment of an ordeal and the last moment of it. The remembering self and the experiencing self can come to radically different opinions about the same experience—so which one should we listen to?

This, at bottom, was Jewel Douglass’ torment. Should she heed her remembering self—or, in this case, anticipating self—which was focused on the worst things she might endure? Or should she listen to her experiencing self, which would likely endure a lower average amount of suffering in the days to come if she underwent surgery rather than just going home—and might even get to eat again for a while?

In the end, a person doesn’t view his life as merely the average of its moments—which, after all, is mostly nothing much, plus some sleep. Life is meaningful because it is a story, and a story’s arc is determined by the moments when something happens. Unlike your experiencing self, which is absorbed in the moment, your remembering self is attempting to recognize not only the peaks of joy and valleys of misery but also how the story works out as a whole. That is profoundly affected by how things ultimately turn out. Football fans will let a few flubbed minutes at the end of a game ruin three hours of bliss—because a football game is a story, and in stories, endings matter.
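The peak-end weighting described above can be made concrete with a toy calculation. This is only an illustration of Kahneman's finding in code, not something from Gawande's essay; the scores and scenarios are invented:

```python
def experiencing_self(moments):
    """Moment-by-moment evaluation: every instant counts equally."""
    return sum(moments) / len(moments)

def remembering_self(moments):
    """Peak-end heuristic: the judgment afterward is dominated by
    the worst moment of an ordeal and its final moment."""
    return (max(moments) + moments[-1]) / 2

# Discomfort scores (0 = none, 10 = worst) for two hypothetical ordeals.
short_harsh = [2, 4, 8]           # shorter, but ends at its worst
longer_taper = [2, 4, 8, 5, 2]    # same peak, more total discomfort, gentle ending

# The longer ordeal contains strictly more total discomfort (21 vs. 14),
# yet the remembering self rates it far less harshly because it ends mildly.
print(remembering_self(short_harsh))   # 8.0
print(remembering_self(longer_taper))  # 5.0
```

This is the paradox the excerpt turns on: adding moments of mild suffering to the end of an ordeal can make the memory of it better, even though the experiencing self endured more in total.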

Jewel Douglass didn’t know if she was willing to face the suffering that surgery might inflict and feared being left worse off. “I don’t want to take risky chances,” she said. She didn’t want to take a high-stakes gamble on how her story would end. Suddenly I realized, she was telling me everything I needed to know.

We should go to surgery, I told her, but with the directions she’d just spelled out—to do what I could to enable her to return home to her family while not taking “risky chances.” I’d put in a small laparoscope. I’d look around. And I’d attempt to unblock her intestine only if I saw that I could do it fairly easily. If it looked risky, I’d just put in tubes to drain her backed-up pipes. I’d aim for what might sound like a contradiction in terms: a palliative operation—an operation whose overriding priority was to do only what was likely to make her feel immediately better.


She remained quiet, thinking.

Her daughter took her hand. “We should do this, Mom,” she said.

“OK,” Douglass said. “But no risky chances.”

When she was under anesthesia, I made a half-inch incision above her belly button. I slipped my gloved finger inside to feel for space to insert the fiberoptic scope. But a hard loop of tumor-caked bowel blocked entry. I wasn’t even going to be able to put in a camera.

I had the resident take the knife and extend the incision upward until it was large enough to see in directly and get a hand inside. There were too many tumors to do anything to help her eat again, and now we were risking creating holes we’d never be able to repair. Leakage inside the abdomen would be a calamity. So we stopped.

No risky chances. We shifted focus and put in two long, plastic drainage tubes. One we inserted directly into her stomach to empty the contents backed up there; the other we laid in the open abdominal cavity to empty the fluid outside her gut. Then we closed up, and we were done.

I told her family we hadn’t been able to help her eat again, and when Douglass woke up, I told her too. Her daughter wept. Her husband thanked us for trying. Douglass tried to put a brave face on it. “I was never obsessed with food anyway,” she said.

The tubes relieved her nausea and abdominal pain greatly—“90 percent,” she said. The nurses taught her how to open the gastric tube into a bag when she felt sick and the abdominal tube when her belly felt too tight. We told her she could drink whatever she wanted and even eat soft food for the taste. Three days after surgery, she went home with hospice care to look after her.

Before she left, her oncologist and oncology nurse practitioner saw her. Douglass asked them how long they thought she had. “They both filled up with tears,” she told me. “It was kind of my answer.”

A few days later, she and her family allowed me to stop by her home after work. She answered the door, wearing a robe because of the tubes, for which she apologized. We sat in her living room, and I asked how she was doing.

OK, she said. “I think I have a measure that I’m slip, slip, slipping,” but she had been seeing old friends and relatives all day, and she loved it. She was taking just Tylenol for pain. Narcotics made her drowsy and weak, and that interfered with seeing people.

She said she didn’t like all the contraptions sticking out of her. But the first time she found that just opening a tube could take away her nausea, she said, “I looked at the tube and said, ‘Thank you for being there.’ ”

Mostly, we talked about good memories. She was at peace with God, she said. I left feeling that, at least this once, we had done it right. Douglass’ story was not ending the way she ever envisioned, but it was nonetheless ending with her being able to make the choices that meant the most to her.

Two weeks later, her daughter Susan sent me a note. “Mom died on Friday morning. She drifted quietly to sleep and took her last breath. It was very peaceful. My dad was alone by her side with the rest of us in the living room. This was such a perfect ending and in keeping with the relationship they shared.”

I am leery of suggesting that endings are controllable. No one ever really has control; physics and biology and accident ultimately have their way in our lives. But as Jewel Douglass taught me, we are not helpless either—and courage is the strength to recognize both of those realities. We have room to act and shape our stories—although as we get older, we do so within narrower and narrower confines.

That makes a few conclusions clear: that our most cruel failure in how we treat the sick and the aged is the failure to recognize that they have priorities beyond merely being safe and living longer; that the chance to shape one’s story is essential to sustaining meaning in life; and that we have the opportunity to refashion our institutions, culture, and conversations to transform the possibilities for the last chapters of all of our lives.

Being Mortal: Medicine and What Matters in the End

The Checklist Manifesto: How to Get Things Right


It's no secret that I'm a huge fan of Atul Gawande. A reader recently pointed out that I hadn't covered his most recent book, The Checklist Manifesto: How to Get Things Right. I had only covered an interesting subset of the book—why we fail.


To put us in the proper context: we're smart. Not scary smart, but smart enough. We have skills, and we generally put them in the hands of the most highly trained and hardworking people we can find. We have the most educated society in history. And we've accomplished a lot. Nevertheless, sometimes success escapes us for avoidable reasons. Not only are these failures common—across everything from medicine to finance—but they are also frustrating. We should know better, but we don't. The reason we don't learn, Gawande argues, is evident:

the volume and complexity of what we know has exceeded our individual ability to deliver its benefits correctly, safely, or reliably. Knowledge has both saved us and burdened us.

To overcome this we need a strategy. Something that “builds on experience and takes advantage of the knowledge people have but somehow also makes up for our inevitable human inadequacies.” We need a checklist.

In response to increasing complexity, we've become more specialized. We divide the problem up. It's not just the growing breadth and quantity of knowledge that makes things more complicated, although they are certainly significant contributors. It is also execution. In every field from medicine to construction, there is a slew of practical procedures, policies, and best practices. Gawande breaks this down for the modern medical case:

[Y]ou have a desperately sick patient and in order to have a chance of saving him you have to get the knowledge right and then you have to make sure that the 178 daily tasks that follow are done correctly—despite some monitor’s alarm going off for God knows what reason, despite the patient in the next bed crashing, despite a nurse poking his head around the curtain to ask whether someone could help “get this lady’s chest open.” There is complexity upon complexity. And even specialization has begun to seem inadequate. So what do you do?

The response of the medical profession, like most others, is to move from specialization to super-specialization. Gawande argues that these super-specialists have two advantages over ordinary specialists: greater knowledge of the things that matter and “a learned ability to handle the complexities of that particular job.” But even for these super-specialists, avoiding mistakes is proving impossible.

Modern professions like medicine, with their dazzling successes and spectacular failures, pose a significant challenge: “What do you do when expertise is not enough? What do you do when even the super-specialists fail?”

The origins of the checklist.

On October 30, 1935, at Wright Air Field in Dayton, Ohio, the U.S. Army Air Corps held a competition for airplane manufacturers vying to build the next generation of long-range bomber. Only it wasn't supposed to be much of a competition at all. The Boeing Corporation's gleaming aluminum-alloy Model 299 was expected to steal the show, its design far superior to those of the competition. In other words, it was just a formality.

As the Model 299 test plane taxied onto the runway, a small group of army brass and manufacturing executives watched. The plane took off without a hitch. Then suddenly, at about 300 feet, it stalled, turned on one wing, and crashed, killing two of the five crew members, including the pilot, Major Hill.

Of course, everyone wanted to know what had happened. An investigation revealed nothing mechanically wrong with the plane. It was “pilot error.” The problem with the new plane, if there was one, was that it was substantially more complex than previous aircraft. Among other things, it had four engines, each with its own fuel mix, plus wing flaps, trim that needed constant adjustment, and propellers requiring pitch adjustment. While trying to keep up with the increased complexity, Hill had forgotten to release a new locking mechanism on the rudder controls. The new plane was too much for anyone to fly. The unexpected winner was the smaller Douglas design.

Here is where it really gets interesting. The army, convinced of the technical superiority of the plane, ordered a few anyway. If you're thinking they'd just put the pilots through more training to fly the plane, you'd be wrong. Major Hill, the chief of flight testing, was an experienced pilot, so longer training was unlikely to result in improvement. Instead, they created a pilot's checklist.

The pilots made the list simple and short. It fit on an index card, with step-by-step instructions for takeoff, flying, landing, and taxiing. It was as if someone had suddenly handed an experienced automobile driver a checklist of things that would be obvious to them. There was nothing on the checklist they didn't know: check that the instruments are set, the door closed. Basics. That checklist changed the course of history, and quite possibly the war. The pilots went on to fly the Model 299 a “total of 1.8 million miles” without a single accident, and as a result the army ordered over 13,000 of them.

In The Checklist Manifesto, Gawande argues that most of today's work has entered a checklist phase.

Substantial parts of what software designers, financial managers, firefighters, police officers, lawyers, and most certainly clinicians do are now too complex for them to carry out reliably from memory alone.

Yet no one wants to use a checklist. We believe “our jobs are too complicated to reduce to a checklist.” After all, we don't work at McDonald's, right?

In a complex environment, experts are up against two main difficulties. The first is the fallibility of human memory and attention, especially when it comes to mundane, routine matters that are easily overlooked under the strain of more pressing events. (When you’ve got a patient throwing up and an upset family member asking you what’s going on, it can be easy to forget that you have not checked her pulse.) Faulty memory and distraction are a particular danger in what engineers call all-or-none processes: whether running to the store to buy ingredients for a cake, preparing an airplane for takeoff, or evaluating a sick person in the hospital, if you miss just one key thing, you might as well not have made the effort at all.

A further difficulty, just as insidious, is that people can lull themselves into skipping steps even when they remember them. In complex processes, after all, certain steps don’t always matter. … “This has never been a problem before,” people say. Until one day it is.

Checklists seem to provide protection against such failures. They remind us of the minimum necessary steps and make them explicit. They not only offer the possibility of verification but also instill a kind of discipline of higher performance.

In news that would shock bureaucracies and governments alike, the strategy a lot of industries use to get things right in complex environments is to give employees power. Most authorities, in response to risk, tend to centralize power and decision making.

Sometimes that's even the point of a checklist—to make sure the people below you are doing things in the manner in which you want. These checklists, the McDonald's type checklists, spell out the tiniest detail of every critical step. These serve their purpose but they also create a group of employees no longer able to adapt. When things change, as they always do, you're now faced with a non-routine problem. “The philosophy,” writes Gawande, “is that you push the power of decision making out to the periphery and away from the center. You give people the room to adapt, based on their experience and expertise. All you ask is that they talk to one another and take responsibility. That is what works.”

…the real lesson is that under conditions of true complexity—where the knowledge required exceeds that of any individual and unpredictability reigns—efforts to dictate every step from the center will fail. People need room to act and adapt. Yet they cannot succeed as isolated individuals, either—that is anarchy. Instead, they require a seemingly contradictory mix of freedom and expectation—expectation to coordinate, for example, and also to measure progress toward common goals.

This was the understanding people in the skyscraper-building industry had grasped. More remarkably, they had learned to codify that understanding into simple checklists. They had made the reliable management of complexity a routine.

That routine requires balancing a number of virtues: freedom and discipline, craft and protocol, specialized ability and group collaboration. And for checklists to help achieve that balance, they have to take two almost opposing forms. They supply a set of checks to ensure the stupid but critical stuff is not overlooked, and they supply another set of checks to ensure people talk and coordinate and accept responsibility while nonetheless being left the power to manage the nuances and unpredictabilities the best they know how.
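Those "two almost opposing forms" can be made concrete with a small sketch. Everything below is purely illustrative and mine, not from the book: a hypothetical `Checklist` class holding one list of task checks for the stupid-but-critical steps, one list of coordination checks that prompt the team to talk and take responsibility, and an all-or-none test in which any skipped item counts as failure.

```python
# Illustrative sketch only -- the class, field names, and sample items are
# invented to mirror Gawande's two kinds of checks, not taken from the book.
from dataclasses import dataclass, field


@dataclass
class Checklist:
    task_checks: list[str]          # concrete steps that must never be skipped
    coordination_checks: list[str]  # prompts that make the team communicate
    completed: set[str] = field(default_factory=set)

    def mark_done(self, item: str) -> None:
        self.completed.add(item)

    def missing(self) -> list[str]:
        """All-or-none logic: any skipped check means the run has failed."""
        return [c for c in self.task_checks + self.coordination_checks
                if c not in self.completed]


# Hypothetical example items, loosely in the spirit of a surgical checklist.
surgery = Checklist(
    task_checks=["confirm patient identity", "antibiotics given"],
    coordination_checks=["team members introduce themselves",
                         "surgeon states anticipated critical steps"],
)
surgery.mark_done("confirm patient identity")
print(surgery.missing())  # the three checks not yet confirmed
```

The point of the design is that the coordination checks are first-class items alongside the task checks: talking to one another is verified with the same all-or-none rigor as giving the antibiotics, while everything *between* the checks is left to judgment.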

I came away from Katrina and the builders with a kind of theory: under conditions of complexity, not only are checklists a help, they are required for success. There must always be room for judgment, but judgment aided—and even enhanced—by procedure.

The Checklist Manifesto: How to Get Things Right is a fascinating read about an interesting subject by an amazing writer. Nuff said.

Atul Gawande: Why We Fail

“Failures of ignorance we can forgive. If the knowledge of the best thing to do in a given situation does not exist, we are happy to have people simply make their best effort. But if the knowledge exists and is not applied correctly, it is difficult not to be infuriated.”
— Atul Gawande

***

We fail for two reasons. The first is ignorance and the second is ineptitude.  In The Checklist Manifesto: How to Get Things Right, Atul Gawande explains:

In the 1970s, the philosophers Samuel Gorovitz and Alasdair MacIntyre published a short essay on the nature of human fallibility that I read during my surgical training and haven’t stopped pondering since. The question they sought to answer was why we fail at what we set out to do in the world. One reason, they observed, is “necessary fallibility” — some things we want to do are simply beyond our capacity. We are not omniscient or all-powerful. Even enhanced by technology, our physical and mental powers are limited. Much of the world and universe is—and will remain—outside our understanding and control.

There are substantial realms, however, in which control is within our reach. We can build skyscrapers, predict snowstorms, save people from heart attacks and stab wounds. In such realms, Gorovitz and MacIntyre point out, we have just two reasons that we may nonetheless fail.

The first is ignorance—we may err because science has given us only a partial understanding of the world and how it works. There are skyscrapers we do not yet know how to build, snowstorms we cannot predict, heart attacks we still haven’t learned how to stop. The second type of failure the philosophers call ineptitude—because in these instances the knowledge exists, yet we fail to apply it correctly. This is the skyscraper that is built wrong and collapses, the snowstorm whose signs the meteorologist just plain missed, the stab wound from a weapon the doctors forgot to ask about.

For most of history, we've failed because of ignorance. We had only a partial understanding of how things worked.

In Taking the Medicine, Druin Burch writes:

Doctors, for most of human history, have killed their patients far more often than they have saved them. Their drugs and their advice have been poisonous. They have been sincere, well-meaning and murderous.

We used to know very little about the illnesses that befell us and even less about how to treat them. But, for the most part, that's changed. Over the last several decades our knowledge has improved. This advance means that ineptitude plays a more central role in failure than ever before.

Heart attacks are a great example. “Even as recently as the 1950s,” Gawande writes, “we had little idea of how to prevent or treat them.” Back then, and some would argue even today, we knew very little about what caused heart attacks. Worse, even if we had been aware of the causes, we probably wouldn't have known what to do about them. Sure, we'd give patients morphine for the pain and put them on bed rest, to the point where they couldn't even get out of bed to use the bathroom; we didn't want to stress a damaged heart. When knowledge doesn't exist, we do what we've always done. We pray and cross our fingers.

Fast-forward to today and Gawande says “we have at least a dozen effective ways to reduce your likelihood of having a heart attack—for instance, controlling your blood pressure, prescribing a statin to lower cholesterol and inflammation, limiting blood sugar levels, encouraging exercise regularly, helping with smoking cessation, and, if there are early signs of heart disease, getting you to a cardiologist for still further recommendations.”

If you should have a heart attack, we have a whole panel of effective therapies that can not only save your life but also limit the damage to your heart: we have clot-busting drugs that can reopen your blocked coronary arteries; we have cardiac catheters that can balloon them open; we have open heart surgery techniques that let us bypass the obstructed vessels; and we’ve learned that in some instances all we really have to do is send you to bed with some oxygen, an aspirin, a statin, and blood pressure medications—in a couple days you’ll generally be ready to go home and gradually back to your usual life.

Today we know more about heart attacks but, according to Gawande, the odds a hospital deals with them correctly and in time are less than 50%. We know what we should do and we still don't do it.

So if we know so much, why do we fail? The problem today is ineptitude. Or, to flip the word around, the challenge is “eptitude” — applying the knowledge we have correctly and consistently.

The modern world has dumped a lot of complexity on us, and we're struggling to keep our heads above water. Not only is the volume of knowledge increasing, but so is the velocity at which it arrives. This challenge is not limited to medicine. It applies to nearly everything.

Know-how and sophistication have increased remarkably across almost all our realms of endeavor, and as a result so has our struggle to deliver on them. You see it in the frequent mistakes authorities make when hurricanes or tornadoes or other disasters hit. You see it in the 36 percent increase between 2004 and 2007 in lawsuits against attorneys for legal mistakes—the most common being simple administrative errors, like missed calendar dates and clerical screw ups, as well as errors in applying the law. You see it in flawed software design, in foreign intelligence failures, in our tottering banks—in fact, in almost any endeavor requiring mastery of complexity and of large amounts of knowledge.

Such failures carry an emotional valence that seems to cloud how we think about them. Failures of ignorance we can forgive. If the knowledge of the best thing to do in a given situation does not exist, we are happy to have people simply make their best effort. But if the knowledge exists and is not applied correctly, it is difficult not to be infuriated. What do you mean half of heart attack patients don’t get their treatment on time? What do you mean that two-thirds of death penalty cases are overturned because of errors? It is not for nothing that the philosophers gave these failures so unmerciful a name—ineptitude. Those on the receiving end use other words, like negligence or even heartlessness.

Those of us who make mistakes in areas where the knowledge is known feel that these judgments ignore how difficult today's jobs are. The failure wasn't intentional, and the situation is rarely as black and white as it seems.

Today there is more to know, more to manage, more to keep track of. More systems to learn and unlearn as new ones come online. More emails. More calls. More distractions. On top of that, there is more to get right. And this, of course, creates more opportunity for mistakes.

Our typical response, rather than recognizing the inherent complexity of the system by which judgments are made, is to increase training and experience. Doctors, for example, go to school for many years. Engineers too. Accountants the same. And countless others. All of these professions have certifications, continuing education, some method of apprenticeship. You need to practice to achieve mastery.

In the medical field, training is longer and more intense than ever. Yet preventable failures remain.

So here we are today, at the start of the twenty-first century. We have more knowledge than ever. We put that knowledge into the hands of the most highly trained, hardest-working, and most skilled people we can find. Doing so has created impressive outcomes. As a society, we've done some amazing things.

Yet despite this, avoidable failures are common and persistent. Organizations make serious mistakes even when the knowledge exists that would lead them to better decisions. People do the same. The know-how has somehow become unmanageable. Perhaps the velocity and complexity of information has exceeded our individual ability to deal with it. We are becoming inept.

Gawande's solution to deal with ineptitude is a checklist. The Checklist Manifesto: How to Get Things Right is fascinating and eye-opening in its entirety.

Why Boston’s Hospitals Were Ready

Dr. Atul Gawande, writing in the New Yorker on why Boston's hospitals were prepared.

Around a hundred nurses, doctors, X-ray staff, transport staff, you name it showed up as soon as they heard the news. They wanted to help, and they knew how. As one colleague put it, they did on a large scale what they knew how to do on a small scale.

… Talking to people about that day, I was struck by how ready and almost rehearsed they were for this event. A decade earlier, nothing approaching their level of collaboration and efficiency would have occurred. We have, as one colleague put it to me, replaced our pre-9/11 naïveté with post-9/11 sobriety. Where before we’d have been struck dumb with shock about such events, now we are almost calculating about them. When ball bearings and nails were found in the wounds of the victims, everyone understood the bombs had been packed with them as projectiles. At every hospital, clinicians considered the possibility of chemical or radiation contamination, a second wave of attacks, or a direct attack on a hospital. Even nonmedical friends e-mailed and texted me to warn people about secondary and tertiary explosive devices aimed at responders. Everyone’s imaginations have come to encompass these once unimaginable events.

As Gawande concludes, “[t]his is not cause for either celebration or satisfaction. That we have come to this state of existence is a great sadness. But it is our great fortune.”
