No Risky Chances: The Conversation That Matters Most

Lacking a coherent view of how people might live successfully all the way to the very end, we have allowed our fates to be controlled by medicine, technology, and strangers.


Atul Gawande is one of my favorite writers. Aside from the amazing work he did getting us talking about the power of simple checklists, he’s also pointed out why most of us should have coaches. Now he’s out with a new book, Being Mortal: Medicine and What Matters in the End, which adds to our ongoing conversation on what it means to be mortal.

I learned about a lot of things in medical school, but mortality wasn’t one of them.

Although I was given a dry, leathery corpse to dissect in anatomy class in my first term, our textbooks contained almost nothing about aging or frailty or dying. The purpose of medical schooling was to teach how to save lives, not how to tend to their demise.

I had never seen anyone die before I became a doctor, and when I did, it came as a shock. I’d seen multiple family members—my wife, my parents, and my children—go through serious, life-threatening illnesses, but medicine had always pulled them through. I knew theoretically that my patients could die, of course, but every actual instance seemed like a violation, as if the rules I thought we were playing by were broken.

Dying and death confront every new doctor and nurse. The first times, some cry. Some shut down. Some hardly notice. When I saw my first deaths, I was too guarded to weep. But I had recurring nightmares in which I’d find my patients’ corpses in my house—even in my bed.

I felt as if I’d failed. But death, of course, is not a failure. Death is normal. Death may be the enemy, but it is also the natural order of things. I knew these truths abstractly, but I didn’t know them concretely—that they could be truths not just for everyone but also for this person right in front of me, for this person I was responsible for.

You don’t have to spend much time with the elderly or those with terminal illness to see how often medicine fails the people it is supposed to help. The waning days of our lives are given over to treatments that addle our brains and sap our bodies for a sliver’s chance of benefit. These days are spent in institutions—nursing homes and intensive-care units—where regimented, anonymous routines cut us off from all the things that matter to us in life.

As recently as 1945, most deaths occurred in the home. By the 1980s, just 17 percent did. Lacking a coherent view of how people might live successfully all the way to the very end, we have allowed our fates to be controlled by medicine, technology, and strangers.

But not all of us have. That takes, however, at least two kinds of courage. The first is the courage to confront the reality of mortality—the courage to seek out the truth of what is to be feared and what is to be hoped when one is seriously ill. Such courage is difficult enough, but even more daunting is the second kind of courage—the courage to act on the truth we find.

A few years ago, I got a late night page: Jewel Douglass, a 72-year-old patient of mine receiving chemotherapy for metastatic ovarian cancer, was back in the hospital, unable to hold food down. For a week, her symptoms had mounted: They started with bloating, became waves of crampy abdominal pain, then nausea and vomiting.

Her oncologist sent her to the hospital. A scan showed that, despite treatment, her ovarian cancer had multiplied, grown, and partly obstructed her intestine. Her abdomen had also filled with fluid. The deposits of tumor had stuffed up her lymphatic system, which serves as a kind of storm drain for the lubricating fluids that the body’s internal linings secrete. When the system is blocked, the fluid has nowhere to go. The belly fills up like a rubber ball until you feel as if you will burst.

But walking into Douglass’ hospital room, I’d never have known she was so sick if I hadn’t seen the scan. “Well, look who’s here!” she said, as if I’d just arrived at a cocktail party. “How are you, doctor?”

“I think I’m supposed to ask you that,” I said.

She smiled brightly and pointed around the room. “This is my husband, Arthur, whom you know, and my son, Brett.” She got me grinning. Here it was, 11 at night, she couldn’t hold down an ounce of water, and she still had her lipstick on, her silver hair was brushed straight, and she was insisting on making introductions.

Her oncologist and I had a menu of options. A range of alternative chemotherapy regimens could be tried to shrink the tumor burden, and I had a few surgical options too. I wouldn’t be able to remove the intestinal blockage, but I might be able to bypass it, I told her. Or I could give her an ileostomy, disconnecting the bowel above the blockage and bringing it through the skin to empty into a bag. I would also put in a couple of drainage catheters—permanent spigots that could be opened to release the fluids from her blocked-up drainage ducts or intestines when necessary. Surgery risked serious complications—wound breakdown, leakage of bowel into her abdomen, infections—but it was the only way she might regain her ability to eat.

I also told her that we did not have to do either chemo or surgery. We could provide medications to control her pain and nausea and arrange for hospice care at home.

This is the moment when I would normally have reviewed the pros and cons. But we are only gradually learning in the medical profession that this is not what we need to do. The options overwhelmed her. They all sounded terrifying. So I stepped back and asked her a few questions I learned from hospice and palliative care physicians, hoping to better help both of us know what to do: What were her biggest fears and concerns? What goals were most important to her? What trade-offs was she willing to make?

Not all can answer such questions, but she did. She said she wanted to be without pain, nausea, or vomiting. She wanted to eat. Most of all, she wanted to get back on her feet. Her biggest fear was that she wouldn’t be able to return home and be with the people she loved.

I asked what sacrifices she was willing to endure now for the possibility of more time later. “Not a lot,” she said. Uppermost in her mind was a wedding that weekend that she was desperate not to miss. “Arthur’s brother is marrying my best friend,” she said. She’d set them up on their first date. The wedding was just two days away. She was supposed to be a bridesmaid. She was willing to do anything to make it, she said.

Suddenly, with just a few simple questions, I had some guidance about her priorities. So we made a plan to see if we could meet them. With a long needle, we tapped a liter of tea-colored fluid from her abdomen, which made her feel at least temporarily better. We gave her medication to control her nausea. We discharged her with instructions to drink nothing thicker than apple juice and to return to see me after the wedding.

She didn’t make it. She came back to the hospital that same night. Just the car ride, with its swaying and bumps, made her vomit, and things only got worse at home.

We agreed that surgery was the best course now and scheduled it for the next day. I would focus on restoring her ability to eat and putting drainage tubes in. Afterward, she could decide if she wanted more chemotherapy or to go on hospice.

She was as clear as I’ve seen anyone be about her goals, but she was still in doubt. The following morning, she canceled the operation. “I’m afraid,” she said. She’d tossed all night, imagining the pain, the tubes, the horrors of possible complications. “I don’t want to take risky chances,” she said.

Her difficulty wasn’t lack of courage to act in the face of risks; it was sorting out how to think about them. Her greatest fear was of suffering, she said. Couldn’t the operation make it worse rather than better?

It could, I said. Surgery offered her the possibility of being able to eat again and a very good likelihood of controlling her nausea, but it carried substantial risk of giving her only pain without improvement or adding new miseries. She had, I estimated, a 75 percent chance that surgery would make her future better, at least for a little while, and a 25 percent chance it’d make it worse.

The brain gives us two ways to evaluate experiences like suffering—how we apprehend such experiences in the moment and how we look at them afterward. People seem to have two different selves—an experiencing self who endures every moment equally and a remembering self who, as the Nobel Prize–winning researcher Daniel Kahneman has shown, gives almost all the weight of judgment afterward to just two points in time: the worst moment of an ordeal and the last moment of it. The remembering self and the experiencing self can come to radically different opinions about the same experience—so which one should we listen to?
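To make the contrast concrete, here is a toy sketch; the pain scores and the equal peak/end weighting are invented illustrations, not Kahneman's actual model.

```python
# Toy contrast between Kahneman's two selves.
# Pain scores (0-10) per minute are invented for illustration.
pain_by_minute = [2, 3, 7, 8, 6, 4, 1]

# Experiencing self: every moment weighs equally.
experienced = sum(pain_by_minute) / len(pain_by_minute)

# Remembering self: judgment dominated by the worst moment and
# the last moment (equal weighting is a simplifying assumption).
remembered = (max(pain_by_minute) + pain_by_minute[-1]) / 2

print(f"experienced average:   {experienced:.2f}")  # 4.43
print(f"remembered (peak-end): {remembered:.2f}")   # 4.50
```

Trim the series differently and the two scores diverge: end the ordeal at its worst moment and the remembering self rates it 8.0, even though the experiencing self endured less total pain. The two selves can reach opposite verdicts about the same experience.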

This, at bottom, was Jewel Douglass’ torment. Should she heed her remembering self—or, in this case, anticipating self—which was focused on the worst things she might endure? Or should she listen to her experiencing self, which would likely endure a lower average amount of suffering in the days to come if she underwent surgery rather than just going home—and might even get to eat again for a while?

In the end, a person doesn’t view his life as merely the average of its moments—which, after all, is mostly nothing much, plus some sleep. Life is meaningful because it is a story, and a story’s arc is determined by the moments when something happens. Unlike your experiencing self, which is absorbed in the moment, your remembering self is attempting to recognize not only the peaks of joy and valleys of misery but also how the story works out as a whole. That is profoundly affected by how things ultimately turn out. Football fans will let a few flubbed minutes at the end of a game ruin three hours of bliss—because a football game is a story, and in stories, endings matter.

Jewel Douglass didn’t know if she was willing to face the suffering that surgery might inflict and feared being left worse off. “I don’t want to take risky chances,” she said. She didn’t want to take a high-stakes gamble on how her story would end. Suddenly I realized, she was telling me everything I needed to know.

We should go to surgery, I told her, but with the directions she’d just spelled out—to do what I could to enable her to return home to her family while not taking “risky chances.” I’d put in a small laparoscope. I’d look around. And I’d attempt to unblock her intestine only if I saw that I could do it fairly easily. If it looked risky, I’d just put in tubes to drain her backed-up pipes. I’d aim for what might sound like a contradiction in terms: a palliative operation—an operation whose overriding priority was to do only what was likely to make her feel immediately better.


She remained quiet, thinking.

Her daughter took her hand. “We should do this, Mom,” she said.

“OK,” Douglass said. “But no risky chances.”

When she was under anesthesia, I made a half-inch incision above her belly button. I slipped my gloved finger inside to feel for space to insert the fiberoptic scope. But a hard loop of tumor-caked bowel blocked entry. I wasn’t even going to be able to put in a camera.

I had the resident take the knife and extend the incision upward until it was large enough to see in directly and get a hand inside. There were too many tumors to do anything to help her eat again, and now we were risking creating holes we’d never be able to repair. Leakage inside the abdomen would be a calamity. So we stopped.

No risky chances. We shifted focus and put in two long, plastic drainage tubes. One we inserted directly into her stomach to empty the contents backed up there; the other we laid in the open abdominal cavity to empty the fluid outside her gut. Then we closed up, and we were done.

I told her family we hadn’t been able to help her eat again, and when Douglass woke up, I told her too. Her daughter wept. Her husband thanked us for trying. Douglass tried to put a brave face on it. “I was never obsessed with food anyway,” she said.

The tubes relieved her nausea and abdominal pain greatly—“90 percent,” she said. The nurses taught her how to open the gastric tube into a bag when she felt sick and the abdominal tube when her belly felt too tight. We told her she could drink whatever she wanted and even eat soft food for the taste. Three days after surgery, she went home with hospice care to look after her.

Before she left, her oncologist and oncology nurse practitioner saw her. Douglass asked them how long they thought she had. “They both filled up with tears,” she told me. “It was kind of my answer.”

A few days later, she and her family allowed me to stop by her home after work. She answered the door, wearing a robe because of the tubes, for which she apologized. We sat in her living room, and I asked how she was doing.

OK, she said. “I think I have a measure that I’m slip, slip, slipping,” but she had been seeing old friends and relatives all day, and she loved it. She was taking just Tylenol for pain. Narcotics made her drowsy and weak, and that interfered with seeing people.

She said she didn’t like all the contraptions sticking out of her. But the first time she found that just opening a tube could take away her nausea, she said, “I looked at the tube and said, ‘Thank you for being there.’ ”

Mostly, we talked about good memories. She was at peace with God, she said. I left feeling that, at least this once, we had done it right. Douglass’ story was not ending the way she ever envisioned, but it was nonetheless ending with her being able to make the choices that meant the most to her.

Two weeks later, her daughter Susan sent me a note. “Mom died on Friday morning. She drifted quietly to sleep and took her last breath. It was very peaceful. My dad was alone by her side with the rest of us in the living room. This was such a perfect ending and in keeping with the relationship they shared.”

I am leery of suggesting that endings are controllable. No one ever really has control; physics and biology and accident ultimately have their way in our lives. But as Jewel Douglass taught me, we are not helpless either—and courage is the strength to recognize both of those realities. We have room to act and shape our stories—although as we get older, we do so within narrower and narrower confines.

That makes a few conclusions clear: that our most cruel failure in how we treat the sick and the aged is the failure to recognize that they have priorities beyond merely being safe and living longer; that the chance to shape one’s story is essential to sustaining meaning in life; and that we have the opportunity to refashion our institutions, culture, and conversations to transform the possibilities for the last chapters of all of our lives.

Being Mortal: Medicine and What Matters in the End

To Give or Take? The Surprising Science Behind Success


“The principle of give and take; that is diplomacy—give one and take ten” — Mark Twain

Was Twain right? It certainly seems so. The world is full of people who operate with that fuel. For them it’s all about taking. Lest you lose your faith in humanity, the world is also full of people who believe that on some level, karma or otherwise, it pays to be nice. So which is the better strategy: to take or to give?

So much of life depends on how we interact with others. We all want to be friends with givers. We have a way of eliminating takers from our social circles and generally filtering them out of our life. Yet when it comes to the workplace, things change. We can’t rid ourselves of the takers and they often seem to get ahead at the expense of the givers. Even givers often behave differently in the workplace, argues Adam Grant in Give and Take: A Revolutionary Approach to Success.

According to conventional wisdom, highly successful people have three things in common: motivation, ability, and opportunity. If we want to succeed, we need a combination of hard work, talent, and luck. [Yet there is] a fourth ingredient, one that’s critical but often neglected: success depends heavily on how we approach our interactions with other people. Every time we interact with another person at work, we have a choice to make: do we try to claim as much value as we can, or contribute value without worrying about what we receive in return?

And part of how we approach our interactions with others has to do with our preference for reciprocity — our desired mix of taking and giving.

Grant introduces us to two kinds of people that fall at opposite ends of the reciprocity spectrum: givers and takers.

Takers have a distinctive signature: they like to get more than they give. They tilt reciprocity in their own favor, putting their own interests ahead of others’ needs. Takers believe that the world is a competitive, dog-eat-dog place. They feel that to succeed, they need to be better than others. To prove their competence, they self-promote and make sure they get plenty of credit for their efforts. Garden-variety takers aren’t cruel or cutthroat; they’re just cautious and self-protective. “If I don’t look out for myself first,” takers think, “no one will.”

[...]

In the workplace, givers are a relatively rare breed. They tilt reciprocity in the other direction, preferring to give more than they get. Whereas takers tend to be self-focused, evaluating what other people can offer them, givers are other-focused, paying more attention to what other people need from them. These preferences aren’t about money: givers and takers aren’t distinguished by how much they donate to charity or the compensation that they command from their employers. Rather, givers and takers differ in their attitudes and actions toward other people. If you’re a taker, you help others strategically, when the benefits to you outweigh the personal costs. If you’re a giver, you might use a different cost-benefit analysis: you help whenever the benefits to others exceed the personal costs. Alternatively, you might not think about the personal costs at all, helping others without expecting anything in return. If you’re a giver at work, you simply strive to be generous in sharing your time, energy, knowledge, skills, ideas, and connections with other people who can benefit from them.

… being a giver doesn’t require extraordinary acts of sacrifice. It just involves a focus on acting in the interests of others, such as by giving help, providing mentoring, sharing credit, or making connections for others. Outside the workplace, this type of behavior is quite common. According to research led by Yale psychologist Margaret Clark, most people act like givers in close relationships. In marriages and friendships, we contribute whenever we can without keeping score.

In the workplace things change. Things get more complicated. Subconsciously employing game theory, we become matchers.

Professionally, few of us act purely like givers or takers, adopting a third style instead. We become matchers, striving to preserve an equal balance of giving and getting. Matchers operate on the principle of fairness: when they help others, they protect themselves by seeking reciprocity. If you’re a matcher, you believe in tit for tat, and your relationships are governed by even exchanges of favors.
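Tit for tat is the canonical matching strategy in game theory. Here is a minimal sketch of how it behaves in an iterated prisoner’s dilemma (the payoff numbers are the standard textbook ones, not anything from Grant’s book):

```python
# Tit for tat: open by cooperating, then mirror the partner's
# previous move. Payoffs are standard textbook values for an
# iterated prisoner's dilemma.

PAYOFFS = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(their_history):
    """Matcher: cooperate first, then copy their last move."""
    return "C" if not their_history else their_history[-1]

def always_take(their_history):
    """Taker: defect every round."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each sees the other's past
        move_b = strategy_b(history_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, always_take))  # (9, 14): exploited once, then even
print(play(tit_for_tat, tit_for_tat))  # (30, 30): sustained cooperation
```

One exploited round is all a matcher concedes; after that, the taker faces even exchanges, while two matchers cooperate profitably for as long as the game runs.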

Despite that, we develop a “primary reciprocity style” at work, which “captures how (we) approach most of the people most of the time. And that style can play as much of a role in our success as hard work, talent, and luck.”

If you had to guess who ends up at the bottom of the success ladder, what would you say? Givers? Takers? Matchers?

Research demonstrates that givers sink to the bottom of the success ladder. Across a wide range of important occupations, givers are at a disadvantage: they make others better off but sacrifice their own success in the process.

But if givers are at the bottom, who is at the top? It’s the givers.

This pattern holds up across the board. The Belgian medical students with the lowest grades have unusually high giver scores, but so do the students with the highest grades. Over the course of medical school, being a giver accounts for 11 percent higher grades. Even in sales, I found that the least productive salespeople had 25 percent higher giver scores than average performers—but so did the most productive salespeople. The top performers were givers, and they averaged 50 percent more annual revenue than the takers and matchers. Givers dominate the bottom and the top of the success ladder. Across occupations, if you examine the link between reciprocity styles and success, the givers are more likely to become champs—not chumps.

A lot of life strategies that work in the hundred-yard dash fail in the marathon. Grant convincingly argues that we underestimate the success of givers. We stereotype them as “chumps and doormats,” yet they also turn out to be some of the most successful people. So what separates the champs from the chumps?

The answer is less about raw talent or aptitude, and more about the strategies givers use and the choices they make. … We all have goals for our own individual achievements, and it turns out that successful givers are every bit as ambitious as takers and matchers. They simply have a different way of pursuing their goals.

Givers are the win-win people. When takers win, someone loses. As the venture capitalist Randy Komisar remarks, “It’s easier to win if everybody wants you to win. If you don’t make enemies out there, it’s easier to succeed.” Or as Charlie Munger says, “The best way to get success is to deserve success.”

Givers are non-linear.

[g]ivers, takers, and matchers all can—and do—achieve success. But there’s something distinctive that happens when givers succeed: it spreads and cascades. When takers win, there’s usually someone else who loses. Research shows that people tend to envy successful takers and look for ways to knock them down a notch. In contrast, when [givers] win, people are rooting for them and supporting them, rather than gunning for them. Givers succeed in a way that creates a ripple effect, enhancing the success of people around them. You’ll see that the difference lies in how giver success creates value, instead of just claiming it.

And Grant argues that we live in a world where giving matters more than ever.

The fact that the long run is getting shorter isn’t the only force that makes giving more professionally productive today. We live in an era when massive changes in the structure of work—and the technology that shapes it—have further amplified the advantages of being a giver.

Givers thrive in teams; takers thrive as lone wolves. As the structure of success changes—as we move out of school and into the workplace—a new sense of teamwork emerges that favors the givers. Takers focus on wealth, power, pleasure, and winning, values that constantly get attention from the media. Givers are interested in helping, being dependable, social justice, and compassion (notably, things that get much less attention in today’s sensationalist, page-view-driven world).

In the first part of Give and Take, Grant shows us what makes giving “both powerful and dangerous.” The second part shows us the benefits and costs of giving and how they can be managed. Before you put the book down, you’ll be rethinking your assumptions about success.

What Book Has the Most Page-for-Page Wisdom?

Here is what happened when I asked twenty-seven thousand people, “What is, page for page, the book with the most wisdom you’ve ever read?”

My thinking was, and still is, that you need to filter what you read. Reading, I mean really reading, is not simple. It’s time-consuming. So aside from finding time and remembering what you read, you want to make sure you’re reading the right things. There are a few approaches to this filtering. One is to employ the Lindy Effect. Another, which I use personally and which is really going to sound simple, is to ask smart people what they’re reading, what they learned from, or, in this case, what book has the most page-for-page wisdom.

The results are often surprising and I usually find one or two books that I’ve never heard of that offer a lot of value.

In no particular order, here is what Twitter had to say:

Seeking Wisdom, by Peter Bevelin
This is number 8 on the list of books that changed my life. It is also the book I give away most often, sending innumerable copies around the globe.

Cosmos, by Carl Sagan
This is one of the best-selling science books of all time. I’ve never read it, so I ordered it after reading the blurb: “retraces the fourteen billion years of cosmic evolution that have transformed matter into consciousness, exploring such topics as the origin of life, the human brain, Egyptian hieroglyphics, spacecraft missions, the death of the Sun, the evolution of galaxies, and the forces and individuals who helped to shape modern science.”

To Kill a Mockingbird, by Harper Lee
A book that a lot of people, myself included, talk about but have never read. It’s time to change that.

Do the Work!, by Steven Pressfield
I liked Pressfield’s The War of Art enough to pick up this manifesto, which argues that ideas are not enough; you actually have to do the work.

Zen and the Art of Motorcycle Maintenance by Robert Pirsig
I’ve picked this book up at least three different times in my life and stopped reading it for one reason or another. Many consider it a cult classic; I haven’t found the right time to read it … yet.

The Conquest of Happiness, Bertrand Russell
First published in 1930, this book attempts to “diagnose the myriad causes of unhappiness in modern life and chart a path out of the seemingly inescapable malaise.” The book remains as relevant today as ever, and in this edition Daniel Dennett, who showed us how to criticize with kindness, re-introduces Russell’s wisdom to a new generation of readers and thinkers, calling the work “a prototype of the flood of self-help books that have more recently been published, few of them as well worth reading today as Russell’s little book.”

This is Water by David Foster Wallace
This is one of the best things you will ever read (and hopefully periodically re-read). I wholeheartedly agree with this selection.

Meditations, by Marcus Aurelius
Another of the books that changed my life, and one of the books I gave away at the Re:Think Innovation workshop. Translation matters enormously with this book; get this one.

Letters from a Stoic, Seneca
Love love love. As relevant today as it was when it was written.

Influence: The Psychology of Persuasion by Robert Cialdini
The person who recommended this book said “you can’t throw away any one page of this book.” You can read a quick overview of the book, but I’d recommend digging in.

Oh, The Places You’ll Go!, by Dr. Seuss
I agree. Don’t write it off because it’s a kids’ book. I love this book.

An Intimate History of Humanity, by Theodore Zeldin
I’d never heard of this work exploring the evolution of emotions before. Time magazine called it “An intellectually dazzling view of our past and future.”

The Road Less Traveled, M. Scott Peck
I’d never heard of this book (seriously) either and it’s sold 7 million copies. A book to “help us explore the very nature of loving relationships and lead us toward a new serenity and fullness of life.”

The Hitchhiker’s Guide to the Galaxy, by Douglas Adams
“For all the answers, stick your thumb to the stars!”

Google and Combinatorial Innovation

In his new book, How Google Works, Eric Schmidt argues that “we are entering … a new period of combinatorial innovation.” This happens, he says, when “there is a great availability of different component parts that can be combined or recombined to create new inventions.”

For example, in the 1800s, the standardization of design of mechanical devices such as gears, pulleys, chains, and cams led to a manufacturing boom. In the 1900s, the gasoline engine led to innovations in automobiles, motorcycles, and airplanes. By the 1950s, it was the integrated circuit proliferating in numerous applications. In each of these cases, the development of complementary components led to a wave of inventions.

Today’s components are often about information, technology, and computing.

Would-be inventors have all the world’s information, global reach, and practically infinite computing power. They have open-source software and abundant APIs that allow them to build easily on each other’s work. They can use standard protocols and languages. They can access information platforms with data about things ranging from traffic to weather to economic transactions to human genetics to who is socially connected with whom, either on an aggregate or (with permission) individual basis. So one way of developing technical insights is to use some of these accessible technologies and data and apply them in an industry to solve an existing problem in a new way.

Regardless of your business, there is a core of knowledge and conventional wisdom that your industry is based upon. Maybe it’s logistics; maybe it’s biology, chemistry, or storytelling. Whatever that core is, “that’s your technology. Find the geeks, find the stuff, and that’s where you’ll find the technical insights you need to drive success.”

That’s also where to look for places conventional wisdom might be wrong: what was once common sense hardens into common practice. When everyone agrees on some fundamental assumption about how the industry works, the opposite point of view can lead toward disruption.

Another possible source of innovation is to start with a solution to one problem and then look at ways to use the same solution on other problems.

New technologies tend to come into the world in a very primitive condition, often designed for very specific problems. The steam engine was used as a nifty way to pump water out of mines long before it found its calling powering locomotives. Marconi sold radio as a means of ship-to-shore communications, not as a place to hear phrases like “Baba Booey!” and “all the children are above average.” Bell Labs was so underwhelmed by the commercial potential of the laser when it was invented in the ‘60s that it initially put off patenting it. Even the Internet was initially conceived as a way for scientists and academics to share research. As smart as its creators were, they could never have imagined its future functionality as a place to share pictures and videos, stay in touch with friends, learn anything about anything, or do the other amazing things we use it for today.

Schmidt gives his favorite example of building upon a solution developed for a narrow problem.

When Google search started to ramp up, some of our most popular queries were related to adult-oriented topics. Porn filters at the time were notoriously ineffective, so we put a small team of engineers on the problem of algorithmically capturing Supreme Court Justice Potter Stewart’s definition of porn, “I know it when I see it.” They were successful by combining a couple of technical insights: They got very good at understanding the content of an image (aka skin), and could judge its context by seeing how users interacted with it. (When someone searches for a pornography-related term and the image is from a medical textbook, they are unlikely to click on it, and if they do they won’t stay on the site for long.) Soon we had a filter called SafeSearch that was far more effective in blocking inappropriate images than anything else on the web—a solution (SafeSearch) to a narrow problem (filtering adult content).

But why stop there? Over the next couple of years we took the technology that had been developed to address the porn problem and used it to serve broader purposes. We improved our ability to rate the relevance of images (any images, not just porn) to search queries by using the millions of content-based models (the models of how users react to different images) that we had developed for SafeSearch. Then we added features that let users search for images similar to the ones they find in their search results (“I like that shot of Yosemite—go find more that look just like that”). Finally, we developed the ability to start a search not with a written query (“half dome, yosemite”), but a photograph (that snapshot you took of Half Dome when you visited Yosemite). All of these features evolved from technology that had initially been developed for the SafeSearch porn filter. So when you are looking at screen upon screen of Yosemite photos that are nearly identical to the ones you took, you can thank the adult entertainment industry for helping launch the technology that is bringing them to you.
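As a rough illustration of the approach Schmidt describes (combining a content signal with a behavioral one), here is a toy filter. Every score, threshold, weight, and field name below is invented; Google’s actual models are vastly more sophisticated.

```python
# Toy sketch of the two signals Schmidt describes:
# (1) a content score computed from the image itself, and
# (2) a context score inferred from user behavior.
# All numbers, weights, and names are invented.

def adult_score(content_score, click_rate, avg_dwell_seconds):
    """Blend image content with how users interact with it."""
    # Users who search an adult term but skip an image, or
    # bounce quickly, signal it's benign (e.g., a medical
    # textbook figure), whatever its skin score says.
    context = min(click_rate * avg_dwell_seconds / 30.0, 1.0)
    return 0.6 * content_score + 0.4 * context

def safe_search(images, threshold=0.5):
    """Keep only images scoring below the filter threshold."""
    return [img["url"] for img in images
            if adult_score(*img["signals"]) < threshold]

images = [
    {"url": "textbook_figure.png", "signals": (0.7, 0.05, 4)},
    {"url": "explicit.png",        "signals": (0.8, 0.60, 90)},
]
print(safe_search(images))  # ['textbook_figure.png']
```

The repurposing the excerpt describes then falls out naturally: swap the adult-content score for relevance to any query, and the behavioral half of the signal carries over unchanged.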

How Google Works is full of interesting insights into the inner workings of a company we’re all fascinated with.

E.B. White’s Beautiful Letter to Someone Who Lost Faith in Humanity


In March of 1973, a Mr. Nadeau sent a letter to E. B. White, the author of greats such as Charlotte’s Web and Stuart Little, expressing his bleak view of humanity’s future.

White’s beautiful reply, found in Letters of Note, attempts to raise the man’s spirits.

North Brooklin, Maine,
30 March 1973

Dear Mr. Nadeau:

As long as there is one upright man, as long as there is one compassionate woman, the contagion may spread and the scene is not desolate. Hope is the thing that is left to us, in a bad time. I shall get up Sunday morning and wind the clock, as a contribution to order and steadfastness.

Sailors have an expression about the weather: they say, the weather is a great bluffer. I guess the same is true of our human society — things can look dark, then a break shows in the clouds, and all is changed, sometimes rather suddenly. It is quite obvious that the human race has made a queer mess of life on this planet. But as a people we probably harbor seeds of goodness that have lain for a long time waiting to sprout when the conditions are right. Man’s curiosity, his relentlessness, his inventiveness, his ingenuity have led him into deep trouble. We can only hope that these same traits will enable him to claw his way out.

Hang on to your hat. Hang on to your hope. And wind the clock, for tomorrow is another day.

Sincerely,
E. B. White

What If? Serious Scientific Answers to Absurd Hypothetical Questions


Randall Munroe, creator of xkcd, has written a book: What If?: Serious Scientific Answers to Absurd Hypothetical Questions

Here are a few questions, which I loved, that are sure to spark your curiosity and imagination.

What would happen if you tried to hit a baseball pitched at 90 percent the speed of light?


The answer turns out to be “a lot of things,” and they all happen very quickly, and it doesn’t end well for the batter (or the pitcher). I sat down with some physics books, a Nolan Ryan action figure, and a bunch of videotapes of nuclear tests and tried to sort it all out. What follows is my best guess at a nanosecond-by-nanosecond portrait.

The ball would be going so fast that everything else would be practically stationary. Even the molecules in the air would stand still. Air molecules would vibrate back and forth at a few hundred miles per hour, but the ball would be moving through them at 600 million miles per hour. This means that as far as the ball is concerned, they would just be hanging there, frozen.

The ideas of aerodynamics wouldn’t apply here. Normally, air would flow around anything moving through it. But the air molecules in front of this ball wouldn’t have time to be jostled out of the way. The ball would smack into them so hard that the atoms in the air molecules would actually fuse with the atoms in the ball’s surface. Each collision would release a burst of gamma rays and scattered particles.


These gamma rays and debris would expand outward in a bubble centered on the pitcher’s mound. They would start to tear apart the molecules in the air, ripping the electrons from the nuclei and turning the air in the stadium into an expanding bubble of incandescent plasma. The wall of this bubble would approach the batter at about the speed of light—only slightly ahead of the ball itself.

The constant fusion at the front of the ball would push back on it, slowing it down, as if the ball were a rocket flying tail-first while firing its engines. Unfortunately, the ball would be going so fast that even the tremendous force from this ongoing thermonuclear explosion would barely slow it down at all. It would, however, start to eat away at the surface, blasting tiny fragments of the ball in all directions. These fragments would be going so fast that when they hit air molecules, they would trigger two or three more rounds of fusion.

After about 70 nanoseconds the ball would arrive at home plate. The batter wouldn’t even have seen the pitcher let go of the ball, since the light carrying that information would arrive at about the same time the ball would. Collisions with the air would have eaten the ball away almost completely, and it would now be a bullet-shaped cloud of expanding plasma (mainly carbon, oxygen, hydrogen, and nitrogen) ramming into the air and triggering more fusion as it went. The shell of x-rays would hit the batter first, and a handful of nanoseconds later the debris cloud would hit.

When it would reach home plate, the center of the cloud would still be moving at an appreciable fraction of the speed of light. It would hit the bat first, but then the batter, plate, and catcher would all be scooped up and carried backward through the backstop as they disintegrated. The shell of x-rays and superheated plasma would expand outward and upward, swallowing the backstop, both teams, the stands, and the surrounding neighborhood— all in the first microsecond.

Suppose you’re watching from a hilltop outside the city. The first thing you would see would be a blinding light, far outshining the sun. This would gradually fade over the course of a few seconds, and a growing fireball would rise into a mushroom cloud. Then, with a great roar, the blast wave would arrive, tearing up trees and shredding houses.

Everything within roughly a mile of the park would be leveled, and a firestorm would engulf the surrounding city. The baseball diamond, now a sizable crater, would be centered a few hundred feet behind the former location of the backstop.


Major League Baseball Rule 6.08(b) suggests that in this situation, the batter would be considered “hit by pitch,” and would be eligible to advance to first base.
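Munroe’s 70-nanosecond figure is easy to sanity-check: at 90 percent of the speed of light, the regulation 60.5 feet from mound to plate takes about 68 nanoseconds.

```python
# Back-of-the-envelope check on the "about 70 nanoseconds" claim.
C = 299_792_458          # speed of light, m/s
MOUND_TO_PLATE = 18.44   # regulation 60.5 ft, in meters

flight_time_ns = MOUND_TO_PLATE / (0.9 * C) * 1e9
print(f"{flight_time_ns:.0f} ns")  # ~68 ns
```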

***

What would happen if everyone on Earth stood as close to each other as they could and jumped, everyone landing on the ground at the same instant?

This is one of the most popular questions submitted through my website. It’s been examined before, including by ScienceBlogs and The Straight Dope. They cover the kinematics pretty well. However, they don’t tell the whole story.

Let’s take a closer look.

At the start of the scenario, the entire Earth’s population has been magically transported together into one place.


This crowd takes up an area the size of Rhode Island. But there’s no reason to use the vague phrase “an area the size of Rhode Island.” This is our scenario; we can be specific. They’re actually in Rhode Island.

At the stroke of noon, everyone jumps.


As discussed elsewhere, it doesn’t really affect the planet. Earth outweighs us by a factor of over ten trillion. On average, we humans can vertically jump maybe half a meter on a good day. Even if the Earth were rigid and responded instantly, it would be pushed down by less than an atom’s width.

Next, everyone falls back to the ground.

Technically, this delivers a lot of energy into the Earth, but it’s spread out over a large enough area that it doesn’t do much more than leave footprints in a lot of gardens. A slight pulse of pressure spreads through the North American continental crust and dissipates with little effect. The sound of all those feet hitting the ground creates a loud, drawn-out roar lasting many seconds.

Eventually, the air grows quiet.

Seconds pass. Everyone looks around. There are a lot of uncomfortable glances. Someone coughs.

A cell phone comes out of a pocket. Within seconds, the rest of the world’s five billion phones follow. All of them—even those compatible with the region’s towers—are displaying some version of “NO SIGNAL.” The cell networks have all collapsed under the unprecedented load. Outside Rhode Island, abandoned machinery begins grinding to a halt.

The T. F. Green Airport in Warwick, Rhode Island, handles a few thousand passengers a day. Assuming they got things organized (including sending out scouting missions to retrieve fuel), they could run at 500 percent capacity for years without making a dent in the crowd.

The addition of all the nearby airports doesn’t change the equation much. Nor does the region’s light rail system. Crowds climb on board container ships in the deep-water port of Providence, but stocking sufficient food and water for a long sea voyage proves a challenge.

Rhode Island’s half-million cars are commandeered. Moments later, I-95, I-195, and I-295 become the sites of the largest traffic jam in the history of the planet. Most of the cars are engulfed by the crowds, but a lucky few get out and begin wandering the abandoned road network.

Some make it past New York or Boston before running out of fuel. Since the electricity is probably not on at this point, rather than find a working gas pump, it’s easier to just abandon the car and steal a new one. Who can stop you? All the cops are in Rhode Island.

The edge of the crowd spreads outward into southern Massachusetts and Connecticut. Any two people who meet are unlikely to have a language in common, and almost nobody knows the area. The state becomes a chaotic patchwork of coalescing and collapsing social hierarchies. Violence is common. Everybody is hungry and thirsty. Grocery stores are emptied. Fresh water is hard to come by and there’s no efficient system for distributing it.

Within weeks, Rhode Island is a graveyard of billions.

The survivors spread out across the face of the world and struggle to build a new civilization atop the pristine ruins of the old. Our species staggers on, but our population has been greatly reduced. Earth’s orbit is completely unaffected—it spins along exactly as it did before our species-wide jump.

But at least now we know.
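The “ten trillion” claim from earlier in the excerpt also survives a quick check; the population and average body mass below are round-number guesses.

```python
# Rough check: by how much does Earth outweigh humanity?
EARTH_MASS = 5.97e24   # kg
population = 7e9       # roughly the world population at the time
avg_mass = 60          # kg per person, a round-number guess

ratio = EARTH_MASS / (population * avg_mass)
print(f"{ratio:.1e}")  # ~1.4e13 -- comfortably over ten trillion
```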

What If?: Serious Scientific Answers to Absurd Hypothetical Questions is sure to spark your imagination and reignite your creativity.

Eight Things I Learned from Peter Thiel’s Zero To One

Peter Thiel is an entrepreneur and investor. He co-founded PayPal and Palantir. He also made the first outside investment in Facebook and was an early investor in companies like SpaceX and LinkedIn. And now he’s written a book, Zero to One: Notes on Startups, or How to Build the Future, with the goal of helping us “see beyond the tracks laid down” to the “broader future that there is to create.”

The book is an exercise in thinking. It’s about questioning and rethinking received wisdom in order to create the future.

Here are eight lessons I took away from the book.

1. Like Heraclitus, who said that you can only step into the same river once, Thiel believes that each moment in business happens only once.

The next Bill Gates will not build an operating system. The next Larry Page or Sergey Brin won’t make a search engine. And the next Mark Zuckerberg won’t create a social network. If you are copying these guys, you aren’t learning from them.

Of course, it’s easier to copy a model than to make something new. Doing what we already know how to do takes the world from 1 to n, adding more of something familiar. But every time we create something new, we go from 0 to 1. The act of creation is singular, as is the moment of creation, and the result is something fresh and strange.

2. There is no formula for innovation.

The paradox of teaching entrepreneurship is that such a formula (for innovation) cannot exist; because every innovation is new and unique, no authority can prescribe in concrete terms how to be more innovative. Indeed, the single most powerful pattern I have noticed is that successful people find value in unexpected places, and they do this by thinking about business from first principles instead of formulas.

3. The best interview question you can ask.

Whenever I interview someone for a job, I like to ask this question: “What important truth do very few people agree with you on?”

This is a question that sounds easy because it’s straightforward. Actually, it’s very hard to answer. It’s intellectually difficult because the knowledge that everyone is taught in school is by definition agreed upon. And it’s psychologically difficult because anyone trying to answer must say something she knows to be unpopular. Brilliant thinking is rare, but courage is in even shorter supply than genius.

Most commonly, I hear answers like the following:

“Our educational system is broken and urgently needs to be fixed.”

“America is exceptional.”

“There is no God.”

These are bad answers. The first and the second statements might be true, but many people already agree with them. The third statement simply takes one side in a familiar debate. A good answer takes the following form: “Most people believe in x, but the truth is the opposite of x.”

What does this have to do with the future?

In the most minimal sense, the future is simply the set of all moments yet to come. But what makes the future distinctive and important isn’t that it hasn’t happened yet, but rather that it will be a time when the world looks different from today. … Most answers to the contrarian questions are different ways of seeing the present; good answers are as close as we can come to looking into the future.

4. A new company’s most important strength

Properly defined, a startup is the largest group of people you can convince of a plan to build a different future. A new company’s most important strength is new thinking: even more important than nimbleness, small size affords space to think.

5. The first step to thinking clearly

Our contrarian question—What important truth do very few people agree with you on?—is difficult to answer directly. It may be easier to start with a preliminary: what does everybody agree on?

“Madness is rare in individuals
—but in groups, parties, nations and ages it is the rule.”
— Nietzsche (before he went mad)

If you can identify a delusional popular belief, you can find what lies hidden behind it: the contrarian truth.

[…]

Conventional beliefs only ever come to appear arbitrary and wrong in retrospect; whenever one collapses we call the old belief a bubble, but the distortions caused by bubbles don’t disappear when they pop. The internet bubble of the ‘90s was the biggest of the last two decades, and the lessons learned afterward define and distort almost all thinking about technology today. The first step to thinking clearly is to question what we think we know about the past.

Here is an example Thiel gives to help illuminate this idea.

The entrepreneurs who stuck with Silicon Valley learned four big lessons from the dot-com crash that still guide business thinking today:

1. Make incremental advances — “Grand visions inflated the bubble, so they should not be indulged. Anyone who claims to be able to do something great is suspect, and anyone who wants to change the world should be more humble. Small, incremental steps are the only safe path forward.”

2. Stay lean and flexible — “All companies must be lean, which is code for unplanned. You should not know what your business will do; planning is arrogant and inflexible. Instead you should try things out, iterate, and treat entrepreneurship as agnostic experimentation.”

3. Improve on the competition — “Don’t try to create a new market prematurely. The only way to know that you have a real business is to start with an already existing customer, so you should build your company by improving on recognizable products already offered by successful competitors.”

4. Focus on product, not sales — “If your product requires advertising or salespeople to sell it, it’s not good enough: technology is primarily about product development, not distribution. Bubble-era advertising was obviously wasteful, so the only sustainable growth is viral growth.”

These lessons have become dogma in the startup world; those who would ignore them are presumed to invite the justified doom visited upon technology in the great crash of 2000. And yet the opposite principles are probably more correct.

1. It is better to risk boldness than triviality.
2. A bad plan is better than no plan.
3. Competitive markets destroy profits.
4. Sales matters just as much as product.

To build the future, we need to challenge the dogmas that shape our view of the past. That doesn’t mean the opposite of what is believed is necessarily true; it means you need to rethink what is and is not true and determine how that shapes how we see the world today. As Thiel says, “The most contrarian thing of all is not to oppose the crowd but to think for yourself.”

6. Progress comes from monopoly, not competition.

The problem with a competitive business goes beyond lack of profits. Imagine you’re running one of those restaurants in Mountain View. You’re not that different from dozens of your competitors, so you’ve got to fight hard to survive. If you offer affordable food with low margins, you can probably pay employees only minimum wage. And you’ll need to squeeze out every efficiency: That is why small restaurants put Grandma to work at the register and make the kids wash dishes in the back.

A monopoly like Google is different. Since it doesn’t have to worry about competing with anyone, it has wider latitude to care about its workers, its products and its impact on the wider world. Google’s motto—”Don’t be evil”—is in part a branding ploy, but it is also characteristic of a kind of business that is successful enough to take ethics seriously without jeopardizing its own existence. In business, money is either an important thing or it is everything. Monopolists can afford to think about things other than making money; non-monopolists can’t. In perfect competition, a business is so focused on today’s margins that it can’t possibly plan for a long-term future. Only one thing can allow a business to transcend the daily brute struggle for survival: monopoly profits.

So a monopoly is good for everyone on the inside, but what about everyone on the outside? Do outsize profits come at the expense of the rest of society? Actually, yes: Profits come out of customers’ wallets, and monopolies deserve their bad reputation—but only in a world where nothing changes.

In a static world, a monopolist is just a rent collector. If you corner the market for something, you can jack up the price; others will have no choice but to buy from you. Think of the famous board game: Deeds are shuffled around from player to player, but the board never changes. There is no way to win by inventing a better kind of real-estate development. The relative values of the properties are fixed for all time, so all you can do is try to buy them up.

But the world we live in is dynamic: We can invent new and better things. Creative monopolists give customers more choices by adding entirely new categories of abundance to the world. Creative monopolies aren’t just good for the rest of society; they’re powerful engines for making it better.

7. Rivalry causes us to overemphasize old opportunities and slavishly copy what has worked in the past.

Marx and Shakespeare provide two models that we can use to understand almost every kind of conflict.

According to Marx, people fight because they are different. The proletariat fights the bourgeoisie because they have completely different ideas and goals (generated, for Marx, by their very different material circumstances). The greater the difference, the greater the conflict.

To Shakespeare, by contrast, all combatants look more or less alike. It’s not at all clear why they should be fighting, since they have nothing to fight about. Consider the opening to Romeo and Juliet: “Two households, both alike in dignity.” The two houses are alike, yet they hate each other. They grow even more similar as the feud escalates. Eventually, they lose sight of why they started fighting in the first place.

In the world of business, at least, Shakespeare proves the superior guide. Inside a firm, people become obsessed with their competitors for career advancement. Then the firms themselves become obsessed with their competitors in the marketplace. Amid all the human drama, people lose sight of what matters and focus on their rivals instead.

[…]

Rivalry causes us to overemphasize old opportunities and slavishly copy what has worked in the past.

8. Last can be first

You’ve probably heard about “first mover advantage”: if you’re the first entrant into a market, you can capture significant market share while competitors scramble to get started. That can work, but moving first is a tactic, not a goal. What really matters is generating cash flows in the future, so being the first mover doesn’t do you any good if someone else comes along and unseats you. It’s much better to be the last mover – that is, to make the last great development in a specific market and enjoy years or even decades of monopoly profits.

Grandmaster José Raúl Capablanca put it well: to succeed, “you must study the endgame before everything else.”

Zero to One is full of counterintuitive insights that will help your thinking and ignite possibility.


The History of Cognitive Overload


The Organized Mind: Thinking Straight in the Age of Information Overload, a book by Daniel Levitin, has an interesting section on cognitive overload.

Each day we are confronted with hundreds, probably thousands, of decisions, most of which are insignificant, unimportant, or both. Do we really need a whole aisle for toothpaste?

In response to all of these decisions, most of us adopt a strategy of satisficing, a term coined by Nobel Prize winner Herbert Simon to describe settling for something that is perhaps not the best but good enough. For things that don’t matter, this is a good approach. You don’t know which pizza place is the best, but you know which ones are good enough.

Satisficing is one of the foundations of productive human behavior; it prevails when we don’t waste time on decisions that don’t matter, or more accurately, when we don’t waste time trying to find improvements that are not going to make a significant difference in our happiness or satisfaction.
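In code, the distinction is a stopping rule. Here is a minimal sketch with invented pizza ratings: the satisficer quits at the first option that clears a “good enough” bar, while the maximizer pays to inspect every option.

```python
# Satisficing vs. maximizing over the same options.
# Names and ratings are invented for illustration.
pizza_places = [("Gino's", 7.1), ("Slice Co", 8.4),
                ("Napoli", 9.6), ("Corner Pie", 8.9)]

def maximize(options):
    """Inspect every option and return the best one."""
    return max(options, key=lambda o: o[1])[0]

def satisfice(options, good_enough=8.0):
    """Return the first option that clears the bar."""
    for name, rating in options:
        if rating >= good_enough:
            return name          # stops after only two looks here
    return maximize(options)     # fallback: nothing cleared the bar

print(satisfice(pizza_places))  # Slice Co -- good enough, chosen fast
print(maximize(pizza_places))   # Napoli   -- best, but costs 4 looks
```

The satisficer’s savings come from stopping early; the cost is that Napoli, the best option, is never even seen.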

All of us, Levitin argues, engage in satisficing every time we clean our homes.

If we got down on the floor with a toothbrush every day to clean the grout, if we scrubbed the windows and walls every single day, the house would be spotless. But few of us go to this much trouble even on a weekly basis (and when we do, we’re likely to be labeled obsessive-compulsive). For most of us, we clean our houses until they are clean enough, reaching a kind of equilibrium between effort and benefit. It is this cost-benefits analysis that is at the heart of satisficing.

The easiest way to be happy is to want what you already have. “Happy people engage in satisficing all the time, even if they don’t know it.”

Satisficing is a tool that allows you not to waste time on things that don’t really matter. Who cares if you pick Colgate or Crest? For other decisions, “the old-fashioned pursuit of excellence remains the right strategy.”

We now spend an unusual amount of time and energy ignoring and filtering. Consider the supermarket.

In 1976, the average supermarket stocked 9,000 unique products; today that number has ballooned to 40,000 of them, yet the average person gets 80%–85% of their needs in only 150 different supermarket items. That means that we need to ignore 39,850 items in the store.

This comes with a cost.

Neuroscientists have discovered that unproductivity and loss of drive can result from decision overload. Although most of us have no trouble ranking the importance of decisions if asked to do so, our brains don’t automatically do this.

We can make only so many decisions in a day. Once we’ve hit that limit, it doesn’t matter how important the remaining ones are.

The decision-making network in our brain doesn’t prioritize.

Our world has exploded. Information is abundant. I didn’t think we could process it all, but Levitin argues that we can, at a cost.

We can have trouble separating the trivial from the important, and all this information processing makes us tired. Neurons are living cells with a metabolism; they need oxygen and glucose to survive and when they’ve been working hard, we experience fatigue. Every status update you read on Facebook, every tweet or text message you get from a friend, is competing for resources in your brain with important things like whether to put your savings in stocks or bonds, where you left your passport, or how best to reconcile with a close friend you just had an argument with.

The processing capacity of the conscious mind has been estimated at 120 bits per second. That bandwidth, or window, is the speed limit for the traffic of information we can pay conscious attention to at any one time. While a great deal occurs below the threshold of our awareness, and this has an impact on how we feel and what our life is going to be like, in order for something to become encoded as part of your experience, you need to have paid conscious attention to it.

What does this mean?

In order to understand one person speaking to us, we need to process 60 bits of information per second. With a processing limit of 120 bits per second, this means you can barely understand two people talking to you at the same time. Under most circumstances, you will not be able to understand three people talking at the same time. …
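Taking Levitin’s figures at face value, the arithmetic behind that claim is simple (a back-of-the-envelope check, not a precise model of cognition):

```latex
\[
\frac{120 \text{ bits/s of conscious bandwidth}}{60 \text{ bits/s per speaker}}
= 2 \text{ speakers, at most}
\]
```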

With such attentional restrictions, it’s clear why many of us feel overwhelmed by managing some of the most basic aspects of life. Part of the reason is that our brains evolved to help us deal with life during the hunter-gatherer phase of human history, a time when we might encounter no more than a thousand people across the entire span of our lifetime. Walking around midtown Manhattan, you’ll pass that number of people in half an hour.

Attention is the most essential mental resource for any organism. It determines which aspects of the environment we deal with, and most of the time, various automatic, subconscious processes make the correct choice about what gets passed through to our conscious awareness. For this to happen, millions of neurons are constantly monitoring the environment to select the most important things for us to focus on. These neurons are collectively the attentional filter. They work largely in the background, outside of our conscious awareness. This is why most of the perceptual detritus of our daily lives doesn’t register, or why, when you’ve been driving on the freeway for several hours at a stretch, you don’t remember much of the scenery that has whizzed by: Your attentional system “protects” you from registering it because it isn’t deemed important. This unconscious filter follows certain principles about what it will let through to your conscious awareness.

The attentional filter is one of evolution’s greatest achievements. In nonhumans, it ensures that they don’t get distracted by irrelevancies. Squirrels are interested in nuts and predators, and not much else. Dogs, whose olfactory sense is one million times more sensitive than ours, use smell to gather information about the world more than they use sound, and their attentional filter has evolved to make that so. If you’ve ever tried to call your dog while he is smelling something interesting, you know that it is very difficult to grab his attention with sound—smell trumps sound in the dog brain. No one has yet worked out all of the hierarchies and trumping factors in the human attentional filter, but we’ve learned a great deal about it. When our protohuman ancestors left the cover of the trees to seek new sources of food, they simultaneously opened up a vast range of new possibilities for nourishment and exposed themselves to a wide range of new predators. Being alert and vigilant to threatening sounds and visual cues is what allowed them to survive; this meant allowing an increasing amount of information through the attentional filter.

Levitin points out an interesting fact about how highly successful people (HSP) differ from the rest of us when it comes to attentional filters.

Successful people—or people who can afford it—employ layers of people whose job it is to narrow the attentional filter. That is, corporate heads, political leaders, spoiled movie stars, and others whose time and attention are especially valuable have a staff of people around them who are effectively extensions of their own brains, replicating and refining the functions of the prefrontal cortex’s attentional filter.

These highly successful persons have many of the daily distractions of life handled for them, allowing them to devote all of their attention to whatever is immediately before them. They seem to live completely in the moment. Their staff handle correspondence, make appointments, interrupt those appointments when a more important one is waiting, and help to plan their days for maximum efficiency (including naps!). Their bills are paid on time, their car is serviced when required, they’re given reminders of projects due, and their assistants send suitable gifts to the HSP’s loved ones on birthdays and anniversaries. Their ultimate prize if it all works? A Zen-like focus.

Levitin argues that if we organize our minds and our lives “following the new neuroscience of attention and memory, we can all deal with the world in ways that provide the sense of freedom that these highly successful people enjoy.”

To do that, however, we need to understand the architecture of our attentional system. “To better organize our mind, we need to know how it has organized itself.”

Change and importance are two crucial principles used by our attentional filter.

The brain’s change detector is at work all the time, whether you know it or not. If a close friend or relative calls on the phone, you might detect that her voice sounds different and ask if she’s congested or sick with the flu. When your brain detects the change, this information is sent to your consciousness, but your brain doesn’t explicitly send a message when there is no change. If your friend calls and her voice sounds normal, you don’t immediately think, “Oh, her voice is the same as always.” Again, this is the attentional filter doing its job, detecting change, not constancy.

Importance can also filter information. Here it’s not objective or absolute importance that matters but importance that is personal and relevant to you.

If you’re driving, a billboard for your favorite music group might catch your eye (really, we should say catch your mind) while other billboards go ignored. If you’re in a crowded room, at a party for instance, certain words to which you attach high importance might suddenly catch your attention, even if spoken from across the room. If someone says “fire” or “sex” or your own name, you’ll find that you’re suddenly following a conversation far away from where you’re standing, with no awareness of what those people were talking about before your attention was captured.

The attentional filter lets us live on autopilot most of the time, coming out of it only when we need to. In so doing, we “do not register the complexities, nuances, and often the beauty of what is right in front of us.”
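As a loose software analogy (not Levitin’s model; the events, weights, and thresholds below are invented for illustration), you can picture the filter as a gate that passes a signal to awareness only when it detects change or personal importance:

```python
# A toy attentional filter, illustrative only: an event reaches "awareness"
# if it differs enough from what came before (change) or if it matters
# enough to this particular person (importance).

PERSONAL_IMPORTANCE = {"your_name": 1.0, "fire": 1.0, "billboard": 0.2}

def attend(event, previous_level, change_threshold=0.5, importance_threshold=0.8):
    changed = abs(event["level"] - previous_level) > change_threshold
    important = PERSONAL_IMPORTANCE.get(event["kind"], 0.0) >= importance_threshold
    return changed or important

# A friend's voice at its usual pitch: no change, not flagged important -> filtered out.
print(attend({"kind": "voice", "level": 0.4}, previous_level=0.4))      # False
# The same voice suddenly hoarse: change detected -> passed through.
print(attend({"kind": "voice", "level": 1.1}, previous_level=0.4))      # True
# Your name spoken across a crowded room: important -> passed through.
print(attend({"kind": "your_name", "level": 0.3}, previous_level=0.3))  # True
```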

A great number of failures of attention occur because we are not using these two principles to our advantage.

Simply put, attention is limited.

A critical point that bears repeating is that attention is a limited-capacity resource—there are definite limits to the number of things we can attend to at once. We see this in everyday activities. If you’re driving, under most circumstances, you can play the radio or carry on a conversation with someone else in the car. But if you’re looking for a particular street to turn onto, you instinctively turn down the radio or ask your friend to hang on for a moment, to stop talking. This is because you’ve reached the limits of your attention in trying to do these three things. The limits show up whenever we try to do too many things at once.

Our brain hides things from us.

The human brain has evolved to hide from us those things we are not paying attention to. In other words, we often have a cognitive blind spot: We don’t know what we’re missing because our brain can completely ignore things that are not its priority at the moment—even if they are right in front of our eyes. Cognitive psychologists have called this blind spot various names, including inattentional blindness.

One of the most famous demonstrations of this is the basketball video (for more, see The Invisible Gorilla: How Our Intuitions Deceive Us).

A lot of instances of losing things like car keys, passports, money, receipts, and so on occur because our attentional systems are overloaded and they simply can’t keep track of everything. The average American owns thousands of times more possessions than the average hunter-gatherer. In a real biological sense, we have more things to keep track of than our brains were designed to handle. Even towering intellectuals such as Kant and Wordsworth complained of information excess and sheer mental exhaustion induced by too much sensory input or mental overload.

But we need not fear this cognitive overload, Levitin argues. “More than ever, effective external systems are available for organizing, categorizing, and keeping track of things.”

Information Overload, Then and Now

We’ve been around a long time. For most of that time we didn’t do much of anything other than “procreate and survive.” Then we discovered farming and irrigation and gave up our fairly nomadic lifestyle. Farming allowed us to specialize. I could grow potatoes and you could grow tomatoes and we could trade. This created a dependency on each other and markets for trading. All of this trading, in turn, required an accounting system to keep tabs on inventory and trades. This was the birthplace of writing.

With the growth of trade, cities, and writing, people soon discovered architecture, government, and the other refinements of being that collectively add up to what we think of as civilization. The appearance of writing some 5,000 years ago was not met with unbridled enthusiasm; many contemporaries saw it as technology gone too far, a demonic invention that would rot the mind and needed to be stopped. Then, as now, printed words were promiscuous—it was impossible to control where they went or who would receive them, and they could circulate easily without the author’s knowledge or control. Lacking the opportunity to hear information directly from a speaker’s mouth, the antiwriting contingent complained that it would be impossible to verify the accuracy of the writer’s claims, or to ask follow-up questions. Plato was among those who voiced these fears; his King Thamus decried that the dependence on written words would “weaken men’s characters and create forgetfulness in their souls.” Such externalization of facts and stories meant people would no longer need to mentally retain large quantities of information themselves and would come to rely on stories and facts as conveyed, in written form, by others. Thamus, king of Egypt, argued that the written word would infect the Egyptian people with fake knowledge. The Greek poet Callimachus said books are “a great evil.” The Roman philosopher Seneca the Younger (tutor to Nero) complained that his peers were wasting time and money accumulating too many books, admonishing that “the abundance of books is a distraction.” Instead, Seneca recommended focusing on a limited number of good books, to be read thoroughly and repeatedly. Too much information could be harmful to your mental health.

Cue the printing press, which allowed for the rapid copying of books. This further complicated intellectual life.

The printing press was introduced in the mid-1400s, allowing for the more rapid proliferation of writing, replacing laborious (and error-prone) hand copying. Yet again, many complained that intellectual life as we knew it was done for. Erasmus, in 1525, went on a tirade against the “swarms of new books,” which he considered a serious impediment to learning. He blamed printers whose profit motive sought to fill the world with books that were “foolish, ignorant, malignant, libelous, mad, impious and subversive.” Leibniz complained about “that horrible mass of books that keeps on growing” and that would ultimately end in nothing less than a “return to barbarism.” Descartes famously recommended ignoring the accumulated stock of texts and instead relying on one’s own observations. Presaging what many say today, Descartes complained that “even if all knowledge could be found in books, where it is mixed in with so many useless things and confusingly heaped in such large volumes, it would take longer to read those books than we have to live in this life and more effort to select the useful things than to find them oneself.”

A steady flow of complaints about the proliferation of books reverberated into the late 1600s. Intellectuals warned that people would stop talking to each other, burying themselves in books, polluting their minds with useless, fatuous ideas.

There is an argument that this generation is at the same crossroads — our Gutenberg moment.

iPhones and iPads, email, and Twitter are the new revolution.

Each was decried as an addiction, an unnecessary distraction, a sign of weak character, feeding an inability to engage with real people and the real-time exchange of ideas.

The industrial revolution brought a rapid rise in discovery and advancement; scientific information increased at a staggering clip.

Today, someone with a PhD in biology can’t even know all that is known about the nervous system of the squid! Google Scholar reports 30,000 research articles on that topic, with the number increasing exponentially. By the time you read this, the number will have increased by at least 3,000. The amount of scientific information we’ve discovered in the last twenty years is more than all the discoveries up to that point, from the beginning of language.

This is taxing all of us as we filter what we need to know from what we don’t. This ties in nicely with Tyler Cowen’s argument that the future of work is changing and we will need to add value to computers.

To cope with information overload, we create to-do lists and email ourselves reminders. I have lists of lists. Right now there are over 800 unread emails in my inbox. Many of these are reminders to myself to look into something or to do something, links that I need to go back and read, or books I want to add to my wishlist. I see those emails and think, yes, I want to do that, but not right now. So they sit in my inbox. Occasionally I’ll create a to-do list, which starts off with the best intentions and rapidly becomes a brain dump. Eventually I remember the 18-minute plan for managing your day and I refocus, scheduling time for the most important things. No matter what I do, I always feel like I’m on the border between order and chaos.

A large part of this feeling of being overwhelmed can be traced back to our evolutionarily outdated attentional system. I mentioned earlier the two principles of the attentional filter: change and importance. There is a third principle of attention—not specific to the attentional filter—that is relevant now more than ever. It has to do with the difficulty of attentional switching. We can state the principle this way: Switching attention comes with a high cost.

Our brains evolved to focus on one thing at a time. This enabled our ancestors to hunt animals, to create and fashion tools, to protect their clan from predators and invading neighbors. The attentional filter evolved to help us to stay on task, letting through only information that was important enough to deserve disrupting our train of thought. But a funny thing happened on the way to the twenty-first century: The plethora of information and the technologies that serve it changed the way we use our brains. Multitasking is the enemy of a focused attentional system. Increasingly, we demand that our attentional system try to focus on several things at once, something that it was not evolved to do. We talk on the phone while we’re driving, listening to the radio, looking for a parking place, planning our mom’s birthday party, trying to avoid the road construction signs, and thinking about what’s for lunch. We can’t truly think about or attend to all these things at once, so our brains flit from one to the other, each time with a neurobiological switching cost. The system does not function well that way. Once on a task, our brains function best if we stick to that task.
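To make the cost concrete, here is a toy accounting model (the numbers are invented; real switching costs are neurobiological and variable, not a fixed constant):

```python
# A toy model of attention switching, illustrative only:
# total time = time spent on the tasks themselves + a fixed cost per switch.

def total_minutes(task_minutes, switches, switch_cost=0.5):
    return sum(task_minutes) + switches * switch_cost

tasks = [30, 30, 30]  # three half-hour tasks

print(total_minutes(tasks, switches=2))   # 91.0  -> finish each task, then move on
print(total_minutes(tasks, switches=40))  # 110.0 -> flitting among them all day
```

The same work takes longer the more often attention jumps between tasks; nothing changed except the number of switches.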

When you pay attention to something, it means you don’t see something else. David Foster Wallace hit upon this in his speech, The Truth With A Whole Lot Of Rhetorical Bullshit Pared Away. He said:

Learning how to think really means learning how to exercise some control over how and what you think. It means being conscious and aware enough to choose what you pay attention to and to choose how you construct meaning from experience. Because if you cannot exercise this kind of choice in adult life, you will be totally hosed. Think of the old cliché about the mind being an excellent servant but a terrible master. This, like many clichés, so lame and unexciting on the surface, actually expresses a great and terrible truth.

And Winifred Gallagher, author of the book Rapt: Attention and the Focused Life, wrote:

That your experience largely depends on the material objects and mental subjects that you choose to pay attention to or ignore is not an imaginative notion, but a physiological fact. When you focus on a stop sign or a sonnet, a waft of perfume or a stock-market tip, your brain registers that “target,” which enables it to affect your behavior. In contrast, the things that you don’t attend to in a sense don’t exist, at least for you.

All day long, you are selectively paying attention to something, and much more often than you may suspect, you can take charge of this process to good effect. Indeed, your ability to focus on this and suppress that is the key to controlling your experience and, ultimately, your well-being.

When you walk in the front door of your house after a long day of work to screaming kids and a ringing phone, you’re not thinking about where you left your car keys.

Attention is created by networks of neurons in the prefrontal cortex (just behind your forehead) that are sensitive only to dopamine. When dopamine is released, it unlocks them, like a key in your front door, and they start firing tiny electrical impulses that stimulate other neurons in their network. But what causes that initial release of dopamine? Typically, one of two different triggers:

1. Something can grab your attention automatically, usually something that is salient to your survival, with evolutionary origins. This vigilance system incorporating the attentional filter is always at work, even when you’re asleep, monitoring the environment for important events. This can be a loud sound or bright light (the startle reflex), something moving quickly (that might indicate a predator), a beverage when you’re thirsty, or an attractively shaped potential sexual partner.

2. You effectively will yourself to focus only on that which is relevant to a search or scan of the environment. This deliberate filtering has been shown in the laboratory to actually change the sensitivity of neurons in the brain. If you’re trying to find your lost daughter at the state fair, your visual system reconfigures to look only for things of about her height, hair color, and body build, filtering everything else out. Simultaneously, your auditory system retunes itself to hear only frequencies in that band where her voice registers. You could call it the Where’s Waldo? filtering network.

It all comes back to Waldo.

If it has red in it, our red-sensitive neurons are involved in the imagining. They then automatically tune themselves, and inhibit other neurons (the ones for the colors you’re not interested in) to facilitate the search. Where’s Waldo? trains children to set and exercise their visual attentional filters to locate increasingly subtle cues in the environment, much as our ancestors might have trained their children to track animals through the forest, starting with easy-to-see and easy-to-differentiate animals and working up to camouflaging animals that are more difficult to pick out from the surrounding environment. The system also works for auditory filtering—if we are expecting a particular pitch or timbre in a sound, our auditory neurons become selectively tuned to those characteristics.

When we willfully retune sensory neurons in this way, our brains engage in top-down processing, originating in a higher, more advanced part of the brain than sensory processing.
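A crude way to see the Where’s Waldo? idea in code (the features and candidates are made up; real perceptual tuning is continuous, not an exact dictionary match) is a search that attends only to the target’s known features:

```python
# A crude "Where's Waldo?" analogy, illustrative only: scan a crowd while
# attending to just the features the searcher is "tuned" for.

TARGET = {"height": "short", "hair": "brown", "shirt": "striped"}

crowd = [
    {"height": "tall",  "hair": "blond", "shirt": "plain"},
    {"height": "short", "hair": "brown", "shirt": "striped"},
    {"height": "short", "hair": "black", "shirt": "plain"},
]

def matches(person, target):
    # Everything outside the tuned features is simply never examined.
    return all(person.get(feature) == value for feature, value in target.items())

print([p for p in crowd if matches(p, TARGET)])
# [{'height': 'short', 'hair': 'brown', 'shirt': 'striped'}]
```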

But if we have an effective attentional filter, why do we find it so hard to filter out distractions? Cue technology.

For one thing, we’re doing more work than ever before. The promise of a computerized society, we were told, was that it would relegate to machines all of the repetitive drudgery of work, allowing us humans to pursue loftier purposes and to have more leisure time. It didn’t work out this way. Instead of more time, most of us have less. Companies large and small have off-loaded work onto the backs of consumers. Things that used to be done for us, as part of the value-added service of working with a company, we are now expected to do ourselves. With air travel, we’re now expected to complete our own reservations and check-in, jobs that used to be done by airline employees or travel agents. At the grocery store, we’re expected to bag our own groceries and, in some supermarkets, to scan our own purchases. We pump our own gas at filling stations. Telephone operators used to look up numbers for us. Some companies no longer send out bills for their services—we’re expected to log in to their website, access our account, retrieve our bill, and initiate an electronic payment; in effect, do the job of the company for them. Collectively, this is known as shadow work—it represents a kind of parallel, shadow economy in which a lot of the service we expect from companies has been transferred to the customer. Each of us is doing the work of others and not getting paid for it. It is responsible for taking away a great deal of the leisure time we thought we would all have in the twenty-first century.

Beyond doing more work, we are dealing with more changes in information technology than our parents did, and more as adults than we did as children. The average American replaces her cell phone every two years, and that often means learning new software, new buttons, new menus. We change our computer operating systems every three years, and that requires learning new icons and procedures, and learning new locations for old menu items.

It’s not a coincidence that highly successful people tend to offload these tasks to others, allowing them to focus.

As knowledge becomes more available— and decentralized through the Internet— the notions of accuracy and authoritativeness have become clouded. Conflicting viewpoints are more readily available than ever, and in many cases they are disseminated by people who have no regard for facts or truth. Many of us find we don’t know whom to believe, what is true, what has been modified, and what has been vetted.

[...]

My teacher, the Stanford cognitive psychologist Amos Tversky, encapsulates this in “the Volvo story.” A colleague was shopping for a new car and had done a great deal of research. Consumer Reports showed through independent tests that Volvos were among the best built and most reliable cars in their class. Customer satisfaction surveys showed that Volvo owners were far happier with their purchase after several years. The surveys were based on tens of thousands of customers. The sheer number of people polled meant that any anomaly—like a specific vehicle that was either exceptionally good or exceptionally bad—would be drowned out by all the other reports. In other words, a survey such as this has statistical and scientific legitimacy and should be weighted accordingly when one makes a decision. It represents a stable summary of the average experience, and the most likely best guess as to what your own experience will be (if you’ve got nothing else to go on, your best guess is that your experience will be most like the average).

Amos ran into his colleague at a party and asked him how his automobile purchase was going. The colleague had decided against the Volvo in favor of a different, lower-rated car. Amos asked him what made him change his mind after all that research pointed to the Volvo. Was it that he didn’t like the price? The color options? The styling? No, it was none of those reasons, the colleague said. Instead, the colleague said, he found out that his brother-in-law had owned a Volvo and that it was always in the shop.

From a strictly logical point of view, the colleague is being irrational. The brother-in-law’s bad Volvo experience is a single data point swamped by tens of thousands of good experiences—it’s an unusual outlier. But we are social creatures. We are easily swayed by first-person stories and vivid accounts of a single experience. Although this is statistically wrong and we should learn to overcome the bias, most of us don’t. Advertisers know this, and this is why we see so many first-person testimonial advertisements on TV. “I lost twenty pounds in two weeks by eating this new yogurt—and it was delicious, too!” Or “I had a headache that wouldn’t go away. I was barking at the dog and snapping at my loved ones. Then I took this new medication and I was back to my normal self.” Our brains focus on vivid, social accounts more than dry, boring, statistical accounts.
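The statistical half of that point is easy to verify (the figures below are invented for illustration, not taken from the actual surveys): one terrible report barely moves an average built from thousands of good ones.

```python
# Illustrative only: how much does one awful data point move a large average?
reports = [9.0] * 10_000      # hypothetical survey: 10,000 satisfied owners
reports.append(1.0)           # the brother-in-law's lemon

print(round(sum(reports) / len(reports), 4))  # 8.9992 -- the outlier is swamped
```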

So knowledge has not only become easier than ever to access (frictionless); as it grows more abundant, our brains must cope with it, and they do so by magnifying our pre-existing cognitive biases.

[Image: Roger Shepard’s version of the Ponzo illusion and the Ebbinghaus illusion]

In Roger Shepard’s version of the famous “Ponzo illusion,” the monster at the top seems larger than the one at the bottom, but a ruler will show that they’re the same size. In the Ebbinghaus illusion below it, the white circle on the left seems larger than the white circle on the right, but they’re the same size. We say that our eyes are playing tricks on us, but in fact, our eyes aren’t playing tricks on us, our brain is. The visual system uses heuristics or shortcuts to piece together an understanding of the world, and it sometimes gets things wrong.

We are prone to cognitive illusions when we make decisions, and the same kinds of shortcuts are at play.

The Organized Mind: Thinking Straight in the Age of Information Overload is a wholly fascinating look at our minds.