Tag: Failure

Chris Dixon on Why Companies Fail, The State of Venture Capital, Artificial Intelligence

Chris Dixon

“We think of ourselves the way that maybe a law firm or a talent agency or someone would where our first job is to provide services for the entrepreneur. Our secondary job is to pick the right company.”

This is the fifth episode of The Knowledge Project, our podcast aimed at acquiring wisdom through interviews with fascinating people to gain insights into how they think, live, and connect ideas.

***

On this episode, I have Chris Dixon.

Chris is a partner at perhaps the most famous venture capital firm in the world, Andreessen Horowitz, commonly known as a16z.

We talk about the history of venture capital, why companies fail, the future of artificial intelligence and the Idea Maze. I hope you like this interview as much as I did.

***


Show Notes

Transcript:

A complete transcript is available for members.

Turning Towards Failure

Our resistance to thinking about failure is especially curious in light of the fact that failure is so ubiquitous. ‘Failure is the distinguishing feature of corporate life,’ writes the economist Paul Ormerod, at the start of his book Why Most Things Fail, but in this sense corporate life is merely a microcosm of the whole of life. Evolution itself is driven by failure; we think of it as a matter of survival and adaptation, but it makes equal sense to think of it as a matter of not surviving and not adapting. Or perhaps more sense: of all the species that have ever existed, after all, fewer than 1 per cent of them survive today. The others failed. On an individual level, too, no matter how much success you may experience in life, your eventual story – no offence intended – will be one of failure. Your bodily organs will fail, and you’ll die. (Source: The Antidote.)

If failure is so ubiquitous, you would think it would be treated as a more natural phenomenon: not exactly something to celebrate, but not something to be hidden away either. In the book The Antidote: Happiness for People Who Can't Stand Positive Thinking, Oliver Burkeman visits a ‘Museum of Failed Products’ and comes away with quite a few insights into our reluctance to accept, or even acknowledge, our less successful ventures.

By far the most striking thing about the museum of failed products, though, has to do with the fact that it exists as a viable, profit-making business in the first place. You might have assumed that any consumer product manufacturer worthy of the name would have its own such collection, a carefully stewarded resource to help it avoid repeating errors its rivals had already made. Yet the executives arriving every week … are evidence of how rarely this happens. Product developers are so focused on their next hoped-for-success – and so unwilling to invest time or energy in thinking about their industry’s past failures – that they only belatedly realize how much they need, and are willing to pay, to access (the museum of failed products). Most surprising of all is the fact that many of the designers who have found their way to the museum of failed products, over the years, have come there in order to examine – or, alternatively, have been surprised to discover – products that their own companies had created and then abandoned. These firms were apparently so averse to thinking about the unpleasant business of failure that they had neglected even to keep samples of their own disasters.

I’ve spoken about Burkeman’s book before. There is a great chapter on the flaws of goal setting and another on the Stoic technique of negative visualisation, but they all come back to the concept of turning towards the possibility of failure.

The Stoic technique of negative visualisation is, precisely, about turning towards the possibility of failure. The critics of goal setting are effectively proposing a new attitude towards failure, too, since an improvisational, trial-and-error approach necessarily entails being frequently willing to fail.

So what does it all mean? If avoiding failure is as natural as failure itself, why should you embrace it (or even attempt an Antifragile way of life)?

… it is also worth considering the subject of failure directly, in order to see how the desperate efforts of the ‘cult of optimism’ to avoid it are so often counterproductive, and how we might be better off learning to embrace it. The first reason to turn towards failure is that our efforts not to think about failure leave us with a severely distorted understanding of what it takes to be successful. The second is that an openness to the emotional experience of failure can be a stepping-stone to a much richer kind of happiness than can be achieved by focusing only on success.

It’s almost jarring how simple and sensible that is, considering our aversion to failure.

Accepting failure is becoming part of the conversation, even if we're a long way from embracing it. ‘Learning from our mistakes’ has become the new business mantra, replacing ‘being innovative.’ Although I can see this quickly losing its shine when the mistake is an idiotic one.

As Burkeman notes, it’s just too easy to imagine how the Museum of Failed Products gets populated. (It is also worth noting that successful products have a lot to do with luck.)

Back in Ann Arbor, at the museum of failed products, it wasn’t hard to imagine how a similar aversion to confronting failure might have been responsible for the very existence of many of the products lining its shelves. Each one must have made it through a series of meetings at which nobody realised that the product was doomed. Perhaps nobody wanted to contemplate the prospect of failure; perhaps someone did, but didn’t want to bring it up for discussion. Even if the product’s likely failure was recognised … those responsible for marketing it might well have responded by ploughing more money into it. This is a common reaction when a product looks like it’s going to be a lemon, since with a big enough marketing spend, a marketing manager can at least guarantee a few sales, sparing the company total humiliation. By the time reality sets in, (Robert) McMath notes in What Were They Thinking?, it is quite possible that ‘the executives will have been promoted to another brand, or recruited by another company.’ Thanks to a collective unwillingness to face up to failure, more money will have been invested in the doomed product, and little energy will have been dedicated to examining what went wrong. Everyone involved will have conspired – perhaps without realising what they’re doing – never to think or speak of it again.

The Antidote: Happiness for People Who Can't Stand Positive Thinking is an eye-opening look at how the pursuit of happiness is causing us to be more unhappy than ever.

How Complex Systems Fail

A bit of a preface to this post: please read the definition of Antifragile first. While the article below is interesting, read it with a critical mind. Complexity ‘solved’ with increased complexity generally only creates hidden risks, slowness, or fragility.

A short treatise on the nature of failure; how failure is evaluated; how failure is attributed to proximate cause; and the resulting new understanding of patient safety by Richard I. Cook.

1. Complex systems are intrinsically hazardous systems

All of the interesting systems (e.g. transportation, healthcare, power generation) are inherently and unavoidably hazardous by their own nature. The frequency of hazard exposure can sometimes be changed but the processes involved in the system are themselves intrinsically and irreducibly hazardous. It is the presence of these hazards that drives the creation of defenses against hazard that characterize these systems.

2. Complex systems are heavily and successfully defended against failure

The high consequences of failure lead over time to the construction of multiple layers of defense against failure. These defenses include obvious technical components (e.g. backup systems, ‘safety’ features of equipment) and human components (e.g. training, knowledge) but also a variety of organizational, institutional, and regulatory defenses (e.g. policies and procedures, certification, work rules, team training). The effect of these measures is to provide a series of shields that normally divert operations away from accidents.

3. Catastrophe requires multiple failures – single point failures are not enough

The array of defenses works. System operations are generally successful. Overt catastrophic failure occurs when small, apparently innocuous failures join to create opportunity for a systemic accident. Each of these small failures is necessary to cause catastrophe but only the combination is sufficient to permit failure. Put another way, there are many more failure opportunities than overt system accidents. Most initial failure trajectories are blocked by designed system safety components. Trajectories that reach the operational level are mostly blocked, usually by practitioners.
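
To make the arithmetic behind this point concrete, here is a minimal Monte Carlo sketch of my own (an illustration, not something from Cook's treatise). It assumes a hypothetical system with four independent layers of defense and made-up per-layer breach probabilities; a failure trajectory becomes a catastrophe only when every layer is breached at once.

```python
import random

# Toy illustration (not from Cook's paper): four hypothetical, independent
# defense layers, each with an assumed probability of failing to block a
# failure trajectory. A catastrophe requires every layer to be breached.
LAYER_BREACH_PROB = [0.05, 0.10, 0.02, 0.08]  # assumed values for illustration
TRIALS = 1_000_000

catastrophes = 0
partial_breaches = 0  # at least one layer breached, but the trajectory was still blocked

for _ in range(TRIALS):
    breached = [random.random() < p for p in LAYER_BREACH_PROB]
    if all(breached):
        catastrophes += 1
    elif any(breached):
        partial_breaches += 1

print(f"trajectories with at least one breached layer: {partial_breaches / TRIALS:.4f}")
print(f"trajectories that became catastrophes:         {catastrophes / TRIALS:.6f}")
# Expected catastrophe rate is the product 0.05 * 0.10 * 0.02 * 0.08 = 8e-6,
# far rarer than any single layer's failure rate.
```

Under these assumptions, roughly a quarter of simulated trajectories breach at least one layer, yet only about eight in a million breach all four; that is the sense in which single-point failures are not enough. Real systems violate the independence assumption, and the following points on latent failures and degraded-mode operation describe how.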

4. Complex systems contain changing mixtures of failures latent within them

The complexity of these systems makes it impossible for them to run without multiple flaws being present. Because these are individually insufficient to cause failure they are regarded as minor factors during operations. Eradication of all latent failures is limited primarily by economic cost but also because it is difficult before the fact to see how such failures might contribute to an accident. The failures change constantly because of changing technology, work organization, and efforts to eradicate failures.

5. Complex systems run in degraded mode

A corollary to the preceding point is that complex systems run as broken systems. The system continues to function because it contains so many redundancies and because people can make it function, despite the presence of many flaws. After-accident reviews nearly always note that the system has a history of prior ‘proto-accidents’ that nearly generated catastrophe. Arguments that these degraded conditions should have been recognized before the overt accident are usually predicated on naïve notions of system performance. System operations are dynamic, with components (organizational, human, technical) failing and being replaced continuously.

6. Catastrophe is always just around the corner

Complex systems possess potential for catastrophic failure. Human practitioners are nearly always in close physical and temporal proximity to these potential failures – disaster can occur at any time and in nearly any place. The potential for catastrophic outcome is a hallmark of complex systems. It is impossible to eliminate the potential for such catastrophic failure; the potential for such failure is always present by the system’s own nature.

7. Post-accident attribution to a ‘root cause’ is fundamentally wrong

Because overt failure requires multiple faults, there is no isolated ‘cause’ of an accident. There are multiple contributors to accidents. Each of these is necessarily insufficient in itself to create an accident. Only jointly are these causes sufficient to create an accident. Indeed, it is the linking of these causes together that creates the circumstances required for the accident. Thus, no isolation of the ‘root cause’ of an accident is possible. The evaluations based on such reasoning as ‘root cause’ do not reflect a technical understanding of the nature of failure but rather the social, cultural need to blame specific, localized forces or events for outcomes.

8. Hindsight biases post-accident assessments of human performance

Knowledge of the outcome makes it seem that events leading to the outcome should have appeared more salient to practitioners at the time than was actually the case. This means that ex post facto accident analysis of human performance is inaccurate. The outcome knowledge poisons the ability of after-accident observers to recreate the view of practitioners before the accident of those same factors. It seems that practitioners “should have known” that the factors would “inevitably” lead to an accident. Hindsight bias remains the primary obstacle to accident investigation, especially when expert human performance is involved.

9. Human operators have dual roles: as producers & as defenders against failure

The system practitioners operate the system in order to produce its desired product and also work to forestall accidents. This dynamic quality of system operation, the balancing of demands for production against the possibility of incipient failure is unavoidable. Outsiders rarely acknowledge the duality of this role. In non-accident filled times, the production role is emphasized. After accidents, the defense against failure role is emphasized. At either time, the outsider’s view misapprehends the operator’s constant, simultaneous engagement with both roles.

10. All practitioner actions are gambles

After accidents, the overt failure often appears to have been inevitable and the practitioner’s actions as blunders or deliberate willful disregard of certain impending failure. But all practitioner actions are actually gambles, that is, acts that take place in the face of uncertain outcomes. The degree of uncertainty may change from moment to moment. That practitioner actions are gambles appears clear after accidents; in general, post hoc analysis regards these gambles as poor ones. But the converse, that successful outcomes are also the result of gambles, is not widely appreciated.

11. Actions at the sharp end resolve all ambiguity

Organizations are ambiguous, often intentionally, about the relationship between production targets, efficient use of resources, economy and costs of operations, and acceptable risks of low and high consequence accidents. All ambiguity is resolved by actions of practitioners at the sharp end of the system. After an accident, practitioner actions may be regarded as ‘errors’ or ‘violations’ but these evaluations are heavily biased by hindsight and ignore the other driving forces, especially production pressure.

12. Human practitioners are the adaptable element of complex systems

Practitioners and first line management actively adapt the system to maximize production and minimize accidents. These adaptations often occur on a moment by moment basis. Some of these adaptations include: (1) Restructuring the system in order to reduce exposure of vulnerable parts to failure. (2) Concentrating critical resources in areas of expected high demand. (3) Providing pathways for retreat or recovery from expected and unexpected faults. (4) Establishing means for early detection of changed system performance in order to allow graceful cutbacks in production or other means of increasing resiliency.

13. Human expertise in complex systems is constantly changing

Complex systems require substantial human expertise in their operation and management. This expertise changes in character as technology changes but it also changes because of the need to replace experts who leave. In every case, training and refinement of skill and expertise is one part of the function of the system itself. At any moment, therefore, a given complex system will contain practitioners and trainees with varying degrees of expertise. Critical issues related to expertise arise from (1) the need to use scarce expertise as a resource for the most difficult or demanding production needs and (2) the need to develop expertise for future use.

14. Change introduces new forms of failure

The low rate of overt accidents in reliable systems may encourage changes, especially the use of new technology, to decrease the number of low consequence but high frequency failures. These changes may actually create opportunities for new, low frequency but high consequence failures. When new technologies are used to eliminate well understood system failures or to gain high precision performance they often introduce new pathways to large scale, catastrophic failures. Not uncommonly, these new, rare catastrophes have even greater impact than those eliminated by the new technology. These new forms of failure are difficult to see before the fact; attention is paid mostly to the putative beneficial characteristics of the changes. Because these new, high consequence accidents occur at a low rate, multiple system changes may occur before an accident, making it hard to see the contribution of technology to the failure.

15. Views of ‘cause’ limit the effectiveness of defenses against future events

Post-accident remedies for “human error” are usually predicated on obstructing activities that can “cause” accidents. These end-of-the-chain measures do little to reduce the likelihood of further accidents. In fact, the likelihood of an identical accident is already extraordinarily low because the pattern of latent failures changes constantly. Instead of increasing safety, post-accident remedies usually increase the coupling and complexity of the system. This increases the potential number of latent failures and also makes the detection and blocking of accident trajectories more difficult.

16. Safety is a characteristic of systems and not of their components

Safety is an emergent property of systems; it does not reside in a person, device or department of an organization or system. Safety cannot be purchased or manufactured; it is not a feature that is separate from the other components of the system. This means that safety cannot be manipulated like a feedstock or raw material. The state of safety in any system is always dynamic; continuous systemic change insures that hazard and its management are constantly changing.

17. People continuously create safety

Failure free operations are the result of activities of people who work to keep the system within the boundaries of tolerable performance. These activities are, for the most part, part of normal operations and superficially straightforward. But because system operations are never trouble free, human practitioner adaptations to changing conditions actually create safety from moment to moment. These adaptations often amount to just the selection of a well-rehearsed routine from a store of available responses; sometimes, however, the adaptations are novel combinations or de novo creations of new approaches.

18. Failure free operations require experience with failure

Recognizing hazard and successfully manipulating system operations to remain inside the tolerable performance boundaries requires intimate contact with failure. More robust system performance is likely to arise in systems where operators can discern the “edge of the envelope”. This is where system performance begins to deteriorate, becomes difficult to predict, or cannot be readily recovered. In intrinsically hazardous systems, operators are expected to encounter and appreciate hazards in ways that lead to overall performance that is desirable. Improved safety depends on providing operators with calibrated views of the hazards. It also depends on providing calibration about how their actions move system performance towards or away from the edge of the envelope.

Here is a video of Richard talking about how complex systems don't fail.

Being Wrong: Adventures in the Margin of Error

"It infuriates me to be wrong when I know I’m right." — Molière
“It infuriates me to be wrong when I know I’m right.” — Molière

“Why is it so fun to be right?”

That's the opening line from Kathryn Schulz's excellent book Being Wrong: Adventures in the Margin of Error.

As pleasures go, it is, after all, a second-order one at best. Unlike many of life’s other delights—chocolate, surfing, kissing—it does not enjoy any mainline access to our biochemistry: to our appetites, our adrenal glands, our limbic systems, our swoony hearts. And yet, the thrill of being right is undeniable, universal, and (perhaps most oddly) almost entirely undiscriminating.

While we take pleasure in being right, we take as much, if not more, in feeling we are right.

A whole lot of us go through life assuming that we are basically right, basically all the time, about basically everything: about our political and intellectual convictions, our religious and moral beliefs, our assessment of other people, our memories, our grasp of facts. As absurd as it sounds when we stop to think about it, our steady state seems to be one of unconsciously assuming that we are very close to omniscient.

Schulz argues this makes sense. We're right most of the time and in these moments we affirm “our sense of being smart.” But Being Wrong is about … well being wrong.

If we relish being right and regard it as our natural state, you can imagine how we feel about being wrong. For one thing, we tend to view it as rare and bizarre—an inexplicable aberration in the normal order of things. For another, it leaves us feeling idiotic and ashamed.

In our collective imagination, error is associated not just with shame and stupidity but also with ignorance, indolence, psychopathology, and moral degeneracy. This set of associations was nicely summed up by the Italian cognitive scientist Massimo Piattelli-Palmarini, who noted that we err because of (among other things) “inattention, distraction, lack of interest, poor preparation, genuine stupidity, timidity, braggadocio, emotional imbalance,…ideological, racial, social or chauvinistic prejudices, as well as aggressive or prevaricatory instincts.” In this rather despairing view—and it is the common one—our errors are evidence of our gravest social, intellectual, and moral failings.

But of all the things we are wrong about, “this idea of error might well top the list.”

It is our meta-mistake: we are wrong about what it means to be wrong. Far from being a sign of intellectual inferiority, the capacity to err is crucial to human cognition. Far from being a moral flaw, it is inextricable from some of our most humane and honorable qualities: empathy, optimism, imagination, conviction, and courage. And far from being a mark of indifference or intolerance, wrongness is a vital part of how we learn and change. Thanks to error, we can revise our understanding of ourselves and amend our ideas about the world.

“As with dying,” Schulz pens, “we recognize erring as something that happens to everyone, without feeling that it is either plausible or desirable that it will happen to us.”

Being wrong is something we have a hard time culturally admitting.

As a culture, we haven’t even mastered the basic skill of saying “I was wrong.” This is a startling deficiency, given the simplicity of the phrase, the ubiquity of error, and the tremendous public service that acknowledging it can provide. Instead, what we have mastered are two alternatives to admitting our mistakes that serve to highlight exactly how bad we are at doing so. The first involves a small but strategic addendum: “I was wrong, but…”—a blank we then fill in with wonderfully imaginative explanations for why we weren’t so wrong after all. The second (infamously deployed by, among others, Richard Nixon regarding Watergate and Ronald Reagan regarding the Iran-Contra affair) is even more telling: we say, “mistakes were made.” As that evergreen locution so concisely demonstrates, all we really know how to do with our errors is not acknowledge them as our own.

Being wrong feels a lot like being right.

This is the problem of error-blindness. Whatever falsehoods each of us currently believes are necessarily invisible to us. Think about the telling fact that error literally doesn’t exist in the first person present tense: the sentence “I am wrong” describes a logical impossibility. As soon as we know that we are wrong, we aren’t wrong anymore, since to recognize a belief as false is to stop believing it. Thus we can only say “I was wrong.” Call it the Heisenberg Uncertainty Principle of Error: we can be wrong, or we can know it, but we can’t do both at the same time.

Error-blindness goes some way toward explaining our persistent difficulty with imagining that we could be wrong. It’s easy to ascribe this difficulty to various psychological factors—arrogance, insecurity, and so forth—and these plainly play a role. But error-blindness suggests that another, more structural issue might be at work as well. If it is literally impossible to feel wrong—if our current mistakes remain imperceptible to us even when we scrutinize our innermost being for signs of them—then it makes sense for us to conclude that we are right.

If our current mistakes are necessarily invisible to us, our past errors have an oddly slippery status as well. Generally speaking, they are either impossible to remember or impossible to forget. This wouldn’t be particularly strange if we consistently forgot our trivial mistakes and consistently remembered the momentous ones, but the situation isn’t quite that simple.

It’s hard to say which is stranger: the complete amnesia for the massive error, or the perfect recall for the trivial one. On the whole, though, our ability to forget our mistakes seems keener than our ability to remember them.

Part of what’s going on here is, in essence, a database-design flaw. Most of us don’t have a mental category called “Mistakes I Have Made.”

Like our inability to say “I was wrong,” this lack of a category called “error” is a communal as well as an individual problem. As someone who tried to review the literature on wrongness, I can tell you that, first, it is vast; and, second, almost none of it is filed under classifications having anything to do with error. Instead, it is distributed across an extremely diverse set of disciplines: philosophy, psychology, behavioral economics, law, medicine, technology, neuroscience, political science, and the history of science, to name just a few. So too with the errors in our own lives. We file them under a range of headings—“embarrassing moments,” “lessons I’ve learned,” “stuff I used to believe”—but very seldom does an event live inside us with the simple designation “wrong.”

This category problem is only one reason why our past mistakes can be so elusive. Another is that (as we’ll see in more detail later) realizing that we are wrong about a belief almost always involves acquiring a replacement belief at the same time: something else instantly becomes the new right.

What with error-blindness, our amnesia for our mistakes, the lack of a category called “error,” and our tendency to instantly overwrite rejected beliefs, it’s no wonder we have so much trouble accepting that wrongness is a part of who we are. Because we don’t experience, remember, track, or retain mistakes as a feature of our inner landscape, wrongness always seems to come at us from left field—that is, from outside ourselves. But the reality could hardly be more different. Error is the ultimate inside job.

For us to learn from error, we have to see it differently. The goal of Being Wrong then is “to foster an intimacy with our own fallibility, to expand our vocabulary for and interest in talking about our mistakes, and to linger for a while inside the normally elusive and ephemeral experience of being wrong.”

Goal-Induced Blindness

Everest

In 1996, a disaster of historic proportions happened near the peak of Mount Everest. Fifteen climbers died during that climbing season, and eight of those deaths took place on a single day. Journalist and mountain climber Jon Krakauer captured this story in his breathtaking book Into Thin Air. Krakauer didn't just uncover the story after the fact; he was on the mountain that day.

You would think that by now Everest would have become so commercialized that anyone with sufficient money and a little climbing ability could make it to the summit and back. While that's largely true, it's still not unusual to hear of people dying. The 1996 disaster was different. Aside from the number of people who died on the same day, it was inexplicable.

The weather on the summit can kill you in the blink of an eye; it changes everything. But the weather that day was no different than usual. No sudden avalanche pushed a group towards death. No freak snowstorm blew them away. No, their failure was entirely human.

Into Thin Air puts part of the blame on the stubbornness of Anatoli Boukreev, a Kazakhstani climbing guide. While there is some evidence to support this claim, most climbers are, by definition, stubborn and arrogant. Despite this, disasters of this magnitude are rare. There was something more at play.

We'll never know for sure what happened, but it looks like an example of mass irrationality.

In an event that has since become known as ‘the traffic jam,’ teams from New Zealand, the United States, and Taiwan, representing 34 climbers in total, all attempted to summit that day and converged only 720 feet from the top. Their departure point was Camp 4, at 26,000 feet; the summit sits at 29,000 feet. Those 3,000 feet are quite possibly one of the most dangerous stretches on the planet, so preparation is key. The Americans and New Zealanders co-ordinated their efforts. The last thing you want is climbers getting in each other's way, impeding a smooth progression up and, if you're fortunate, back down the mountain. The Taiwanese climbers, however, were not supposed to climb that day. Either reneging on the arrangement or misunderstanding it, they proceeded anyway.

The advance team also made a mistake, perhaps from confusion about the number of climbers: they failed to secure safety ropes at the Hillary Step. This wouldn't have been such a big deal had there not been 34 climbers trying to reach the summit at the same time. Because the ropes had not been laid, progress was choppy and bottlenecked.

The most important thing to keep in mind in any attempt at Everest is time. Climbers have limited oxygen. Weather can change in a heartbeat and you don't want to be on the summit at night. If you leave Camp 4 at midnight and things go your way, you might be able to reach the summit 12 hours later. But, importantly, you also have a turnaround time, which depends on weather and oxygen levels.

This is the time at which, no matter where you are, you're supposed to turn around and head home. If you're 200 feet from the summit when your turnaround time hits, you have a very important choice to make. You can attempt to climb the last 200 feet or you can turn around. If you don't turn around, you increase the odds of running out of oxygen and descending in some of Everest's most dangerous weather.

In this case, the teams encountered a traffic jam at the Hillary Step that slowed their progress. They disregarded their turnaround time, which had already passed. American Ed Viesturs, watching through a telescope at Camp 4, was in disbelief. ‘They’ve already been climbing for hours, and they still aren’t on the summit,’ he said to himself, with rising alarm. ‘Why haven’t they turned around?’

On that day, and with those oxygen supplies, the last safe turnaround time was two o'clock. Climbers, however, continued on, reaching the summit upwards of two hours past that time. Doug Hansen, a postal service worker from the New Zealand group, was the last to summit, just after four. While he made it to the top, the odds were against him ever coming back.

Like seven others, he died on the descent. Descents are normally difficult and prone to mistakes: you're tired, oxygen is low, and you drop your guard. In this case, the weather added another variable. A blizzard had come in quickly, and going down was nearly impossible. Rescue workers saved as many people as they could, but the combination of minus-40-degree temperatures, blizzard, and darkness was too much.

The death toll on Everest in 1996 was the highest recorded in history, and we still don't clearly understand why. Chris Kayes, a former stockbroker turned expert on organizational behaviour, has an idea, though.

Kayes suspected the Everest climbers had been ‘lured into destruction by their passion for goals.' They were too fixated on achieving their goal of successfully summiting the mountain. The closer they got to their goal, he reasons, the harder it would be to turn around. This isn't just an external goal. It's an internal one. The more we see ourselves as accomplished climbers or guides, the harder it is to turn around.

“In theology,” writes Oliver Burkeman in The Antidote: Happiness for People Who Can't Stand Positive Thinking, where a version of this Everest story appears, “the term ‘theodicy’ refers to the effort to maintain belief in a benevolent god, despite the prevalence of evil in the world; the phrase is occasionally used to describe the effort to maintain any belief in the face of contradictory evidence.”

Borrowing from that, Chris Kayes coined the term ‘goalodicy.’ He also wrote a book on it called Destructive Goal Pursuit: The Mount Everest Disaster.

In the corporate world we're often focused on achieving our goals at all costs. This eventually reaches the status of dogma.

This insight is the core of an important chapter in Burkeman's book, The Antidote:

[W]hat motivates our investment in goals and planning for the future, much of the time, isn’t any sober recognition of the virtues of preparation and looking ahead. Rather, it’s something much more emotional: how deeply uncomfortable we are made by feelings of uncertainty. Faced with the anxiety of not knowing what the future holds, we invest ever more fiercely in our preferred vision of that future – not because it will help us achieve it, but because it helps rid us of feelings of uncertainty in the present. ‘Uncertainty prompts us to idealise the future,’ Kayes told me. ‘We tell ourselves that everything will be OK, just as long as I can reach this projection of the future.’


We fear the feeling of uncertainty to an extraordinary degree – the psychologist Dorothy Rowe argues that we fear it more than death itself – and we will go to extraordinary lengths, even fatal ones, to get rid of it.

There is an alternative, of course. Burkeman argues that “we could learn to become more comfortable with uncertainty, and to exploit the potential hidden within it, both to feel better in the present and to achieve more success in the future.” (In fact, this is the strategy Henry Singleton, one of the most successful businessmen ever, pursued.)

Burkeman argues that a lot of our major life decisions are made with the goal of minimizing the “present-moment emotional discomfort.” Try this “potentially mortifying” exercise in self-examination:

Consider any significant decision you’ve ever taken that you subsequently came to regret: a relationship you entered despite being dimly aware that it wasn’t for you, or a job you accepted even though, looking back, it’s clear that it was mismatched to your interests or abilities. If it felt like a difficult decision at the time, then it’s likely that, prior to taking it, you felt the gut-knotting ache of uncertainty; afterwards, having made a decision, did those feelings subside? If so, this points to the troubling possibility that your primary motivation in taking the decision wasn’t any rational consideration of its rightness for you, but simply the urgent need to get rid of your feelings of uncertainty.

Goals Gone Wild

The goalsetting that worked so well in (Gary) Latham and (Edwin) Locke’s studies, … had various nasty side effects in their own experiments. For example: clearly defined goals seemed to motivate people to cheat. In one such study, participants were given the task of making words from a set of random letters, as in Scrabble; the experiment gave them opportunities to report their progress anonymously. Those given a target to reach lied far more frequently than did those instructed merely to ‘do your best’. More important, though, (Lisa) Ordóñez and her fellow heretics argued, goalsetting worked vastly less well outside the psychology lab settings in which such studies took place. In real life, an obsession with goals seemed far more often to land people and organisations in trouble.

The General Motors Example

One illuminating example of the problem concerns the American automobile behemoth General Motors. The turn of the millennium found GM in a serious predicament, losing customers and profits to more nimble, primarily Japanese, competitors. Following Latham and Locke’s philosophy to the letter, executives at GM’s headquarters in Detroit came up with a goal, crystallised in a number: twenty-nine. Twenty-nine, the company announced amid much media fanfare, was the percentage of the American car market that it would recapture, reasserting its old dominance. Twenty-nine was also the number displayed upon small gold lapel pins, worn by senior figures at GM to demonstrate their commitment to the plan. At corporate gatherings, and in internal GM documents, twenty-nine was the target drummed into everyone from salespeople to engineers to public-relations officers.

Yet the plan not only failed to work – it made things worse. Obsessed with winning back market share, GM spent its dwindling finances on money-off schemes and clever advertising, trying to lure drivers into purchasing its unpopular cars, rather than investing in the more speculative and open-ended – and thus more uncertain – research that might have resulted in more innovative and more popular vehicles.

When we reach our goals but fail to achieve the intended results, we usually chalk this up to having chosen the wrong goals. It's true that some goals are better than others; how could it be otherwise? But the “more profound hazard here affects virtually any form of future planning.”

Formulating a vision of the future requires, by definition, that you isolate some aspect or aspects of your life, or your organisation, or your society, and focus on those at the expense of others. But problems arise thanks to the law of unintended consequences, sometimes expressed using the phrase ‘you can never change only one thing’. In any even slightly complex system, it’s extremely hard to predict how altering one variable will affect the others. ‘When we try to pick out any thing by itself,’ the naturalist and philosopher John Muir observed, ‘we find it hitched to everything else in the universe.’

Turning Towards Uncertainty

What would it look like to embrace uncertainty?

For this, Burkeman turns to Saras Sarasvathy, who interviewed forty-five “successful” entrepreneurs. Sarasvathy's findings are surprising. She found a disconnect between reality and our image of entrepreneurs as people who succeed by pursuing a goal-oriented approach.

We tend to imagine that the special skill of an entrepreneur lies in having a powerfully original idea and then fighting to turn that vision into reality. But the outlook of (Saras) Sarasvathy’s interviewees rarely bore this out. Their precise endpoint was often mysterious to them, and their means of proceeding reflected this. Overwhelmingly, they scoffed at the goals-first doctrine of Locke and Latham. Almost none of them suggested creating a detailed business plan or doing comprehensive market research to hone the details of the product they were aiming to release.

The most valuable skill of a successful entrepreneur “isn't vision or passion or a steadfast insistence on destroying every barrier between yourself and some prize.”

Rather, it’s the ability to adopt an unconventional approach to learning: an improvisational flexibility not merely about which route to take towards some predetermined objective, but also a willingness to change the destination itself. This is a flexibility that might be squelched by rigid focus on any one goal.

Underpinning Sarasvathy's “anti-goal” approach is a set of principles she calls ‘effectuation.'

‘Causally minded’ people, to use Sarasvathy’s terminology, are those who select or are given a specific goal, and then choose from whatever means are available to make a plan for achieving it. Effectually minded people, on the other hand, examine what means and materials are at their disposal, then imagine what possible ends or provisional next directions those means might make possible. The effectualists include the cook who scours the fridge for leftover ingredients; the chemist who figured out that the insufficiently sticky glue he had developed could be used to create the Post-it note; or the unhappy lawyer who realises that her spare-time photography hobby, for which she already possesses the skills and the equipment, could be turned into a job. One foundation of effectuation is the “bird in hand” principle: “Start with your means. Don’t wait for the perfect opportunity. Start taking action, based on what you have readily available: what you are, what you know and who you know.” A second is the “principle of affordable loss”: Don’t be guided by thoughts of how wonderful the rewards might be if you were spectacularly successful at any given next step. Instead — and there are distinct echoes, here, of the Stoic focus on the worst-case scenario — ask how big the loss would be if you failed. So long as it would be tolerable, that’s all you need to know. Take that next step, and see what happens.

Burkeman concludes:

‘See what happens’, indeed, might be the motto of this entire approach to working and living, and it is a hard-headed message, not a woolly one. ‘The quest for certainty blocks the search for meaning,’ argued the social psychologist Erich Fromm. ‘Uncertainty is the very condition to impel man to unfold his powers.’ Uncertainty is where things happen. It is where the opportunities – for success, for happiness, for really living – are waiting.

The Antidote: Happiness for People Who Can't Stand Positive Thinking is a counter-balance to our modern belief that happiness is only a click away.

Image source: http://tr.wikipedia.org

Inspired by Brain Pickings

Atul Gawande: Why We Fail

“Failures of ignorance we can forgive. If the knowledge of the best thing to do in a given situation does not exist, we are happy to have people simply make their best effort. But if the knowledge exists and is not applied correctly, it is difficult not to be infuriated.”
— Atul Gawande

***

We fail for two reasons. The first is ignorance, and the second is ineptitude. In The Checklist Manifesto: How to Get Things Right, Atul Gawande explains:

In the 1970s, the philosophers Samuel Gorovitz and Alasdair MacIntyre published a short essay on the nature of human fallibility that I read during my surgical training and haven’t stopped pondering since. The question they sought to answer was why we fail at what we set out to do in the world. One reason, they observed, is “necessary fallibility” — some things we want to do are simply beyond our capacity. We are not omniscient or all-powerful. Even enhanced by technology, our physical and mental powers are limited. Much of the world and universe is—and will remain—outside our understanding and control.

There are substantial realms, however, in which control is within our reach. We can build skyscrapers, predict snowstorms, save people from heart attacks and stab wounds. In such realms, Gorovitz and MacIntyre point out, we have just two reasons that we may nonetheless fail.

The first is ignorance—we may err because science has given us only a partial understanding of the world and how it works. There are skyscrapers we do not yet know how to build, snowstorms we cannot predict, heart attacks we still haven’t learned how to stop. The second type of failure the philosophers call ineptitude—because in these instances the knowledge exists, yet we fail to apply it correctly. This is the skyscraper that is built wrong and collapses, the snowstorm whose signs the meteorologist just plain missed, the stab wound from a weapon the doctors forgot to ask about.

For most of history, we've failed because of ignorance. We had only a partial understanding of how things worked.

In Taking the Medicine, Druin Burch writes:

Doctors, for most of human history, have killed their patients far more often than they have saved them. Their drugs and their advice have been poisonous. They have been sincere, well-meaning and murderous.

We used to know very little about the illnesses that befell us and even less about how to treat them. But, for the most part, that's changed. Over the last several decades our knowledge has improved. This advance means that ineptitude plays a more central role in failure than ever before.

Heart attacks are a great example. “Even as recently as the 1950s,” Gawande writes, “we had little idea of how to prevent or treat them.” Back then, and some would argue even today, we knew very little about what caused heart attacks. Worse, even if we had been aware of the causes, we probably wouldn't have known what to do about them. Sure, we'd give patients morphine for pain and put them on bed rest, to the point where they couldn't even get out of bed to use the bathroom; we didn't want to stress a damaged heart. When knowledge doesn't exist, we do what we've always done: we pray and cross our fingers.

Fast-forward to today and Gawande says “we have at least a dozen effective ways to reduce your likelihood of having a heart attack—for instance, controlling your blood pressure, prescribing a statin to lower cholesterol and inflammation, limiting blood sugar levels, encouraging exercise regularly, helping with smoking cessation, and, if there are early signs of heart disease, getting you to a cardiologist for still further recommendations.”

If you should have a heart attack, we have a whole panel of effective therapies that can not only save your life but also limit the damage to your heart: we have clot-busting drugs that can reopen your blocked coronary arteries; we have cardiac catheters that can balloon them open; we have open heart surgery techniques that let us bypass the obstructed vessels; and we’ve learned that in some instances all we really have to do is send you to bed with some oxygen, an aspirin, a statin, and blood pressure medications—in a couple days you’ll generally be ready to go home and gradually back to your usual life.

Today we know more about heart attacks but, according to Gawande, the odds a hospital deals with them correctly and in time are less than 50%. We know what we should do and we still don't do it.

So if we know so much, why do we fail? The problem today is ineptitude. Or, maybe, simply “eptitude”: applying knowledge correctly and consistently.

The modern world has dumped a lot of complexity on us, and we're struggling to keep our heads above water. Not only is the complexity of knowledge increasing, but so is the velocity at which it grows. This challenge is not limited to medicine; it applies to nearly everything.

Know-how and sophistication have increased remarkably across almost all our realms of endeavor, and as a result so has our struggle to deliver on them. You see it in the frequent mistakes authorities make when hurricanes or tornadoes or other disasters hit. You see it in the 36 percent increase between 2004 and 2007 in lawsuits against attorneys for legal mistakes—the most common being simple administrative errors, like missed calendar dates and clerical screw ups, as well as errors in applying the law. You see it in flawed software design, in foreign intelligence failures, in our tottering banks—in fact, in almost any endeavor requiring mastery of complexity and of large amounts of knowledge.

Such failures carry an emotional valence that seems to cloud how we think about them. Failures of ignorance we can forgive. If the knowledge of the best thing to do in a given situation does not exist, we are happy to have people simply make their best effort. But if the knowledge exists and is not applied correctly, it is difficult not to be infuriated. What do you mean half of heart attack patients don’t get their treatment on time? What do you mean that two-thirds of death penalty cases are overturned because of errors? It is not for nothing that the philosophers gave these failures so unmerciful a name—ineptitude. Those on the receiving end use other words, like negligence or even heartlessness.

Those of us who make mistakes in areas where the knowledge is known feel that these judgments ignore how difficult today's jobs are. The failure wasn't intentional, and the situation is rarely as black and white as it seems.

Today there is more to know, more to manage, more to keep track of. More systems to learn and unlearn as new ones come online. More emails. More calls. More distractions. On top of that, there is more to get right. And this, of course, creates more opportunity for mistakes.

Our typical response, rather than recognising the inherent complexity of the system by which judgments are made, is to increase training and experience. Doctors, for example, go to school for many years. Engineers too. Accountants the same. And countless others. All of these professions have certifications, continuous training, some method of apprenticeship. You need to practice to achieve mastery.

In the medical field, training is longer and more intense than ever. Yet preventable failures remain.

So here we are today, at the start of the twenty-first century. We have more knowledge than ever, and we put that knowledge into the hands of the most highly trained, hardest-working, and skilled people we can find. Doing so has created impressive outcomes. As a society, we've done some amazing things.

Yet despite this, avoidable failures are common and persistent. Organisations make poor decisions even when knowledge exists that would lead them to better ones. People do the same. The know-how has somehow become unmanageable. Perhaps the velocity and complexity of information have exceeded our individual ability to deal with them. We are becoming inept.

Gawande's solution for dealing with ineptitude is the checklist. The Checklist Manifesto: How to Get Things Right is fascinating and eye-opening in its entirety.