What If? Serious Scientific Answers to Absurd Hypothetical Questions

xkcd-title

Randall Munroe, creator of xkcd, has written a book: What If?: Serious Scientific Answers to Absurd Hypothetical Questions.

Here are a few of the questions I loved, ones sure to spark your curiosity and imagination.

What would happen if you tried to hit a baseball pitched at 90 percent the speed of light?

xkcd-baseball 1

The answer turns out to be “a lot of things,” and they all happen very quickly, and it doesn’t end well for the batter (or the pitcher). I sat down with some physics books, a Nolan Ryan action figure, and a bunch of videotapes of nuclear tests and tried to sort it all out. What follows is my best guess at a nanosecond-by-nanosecond portrait.

The ball would be going so fast that everything else would be practically stationary. Even the molecules in the air would stand still. Air molecules would vibrate back and forth at a few hundred miles per hour, but the ball would be moving through them at 600 million miles per hour. This means that as far as the ball is concerned, they would just be hanging there, frozen.

The ideas of aerodynamics wouldn’t apply here. Normally, air would flow around anything moving through it. But the air molecules in front of this ball wouldn’t have time to be jostled out of the way. The ball would smack into them so hard that the atoms in the air molecules would actually fuse with the atoms in the ball’s surface. Each collision would release a burst of gamma rays and scattered particles.

xkcd-baseball 2

These gamma rays and debris would expand outward in a bubble centered on the pitcher’s mound. They would start to tear apart the molecules in the air, ripping the electrons from the nuclei and turning the air in the stadium into an expanding bubble of incandescent plasma. The wall of this bubble would approach the batter at about the speed of light— only slightly ahead of the ball itself.

The constant fusion at the front of the ball would push back on it, slowing it down, as if the ball were a rocket flying tail-first while firing its engines. Unfortunately, the ball would be going so fast that even the tremendous force from this ongoing thermonuclear explosion would barely slow it down at all. It would, however, start to eat away at the surface, blasting tiny fragments of the ball in all directions. These fragments would be going so fast that when they hit air molecules, they would trigger two or three more rounds of fusion.

After about 70 nanoseconds the ball would arrive at home plate. The batter wouldn’t even have seen the pitcher let go of the ball, since the light carrying that information would arrive at about the same time the ball would. Collisions with the air would have eaten the ball away almost completely, and it would now be a bullet-shaped cloud of expanding plasma (mainly carbon, oxygen, hydrogen, and nitrogen) ramming into the air and triggering more fusion as it went. The shell of x-rays would hit the batter first, and a handful of nanoseconds later the debris cloud would hit.

When it reached home plate, the center of the cloud would still be moving at an appreciable fraction of the speed of light. It would hit the bat first, but then the batter, plate, and catcher would all be scooped up and carried backward through the backstop as they disintegrated. The shell of x-rays and superheated plasma would expand outward and upward, swallowing the backstop, both teams, the stands, and the surrounding neighborhood— all in the first microsecond.

Suppose you’re watching from a hilltop outside the city. The first thing you would see would be a blinding light, far outshining the sun. This would gradually fade over the course of a few seconds, and a growing fireball would rise into a mushroom cloud. Then, with a great roar, the blast wave would arrive, tearing up trees and shredding houses.

Everything within roughly a mile of the park would be leveled, and a firestorm would engulf the surrounding city. The baseball diamond, now a sizable crater, would be centered a few hundred feet behind the former location of the backstop.

xkcd-baseball 3

Major League Baseball Rule 6.08(b) suggests that in this situation, the batter would be considered “hit by pitch,” and would be eligible to advance to first base.
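
The two headline numbers check out with a few lines of arithmetic. A minimal sketch in Python, assuming the standard 60-foot-6-inch distance from the pitcher’s rubber to home plate (a detail the excerpt doesn’t state):

    # Sanity-check the excerpt's figures: "600 million miles per hour"
    # and "about 70 nanoseconds." Assumption: MLB's 60 ft 6 in distance
    # from the pitcher's rubber to home plate.
    C = 299_792_458                  # speed of light, m/s
    MPH_PER_MPS = 2.236936           # 1 m/s in miles per hour
    distance_m = 60.5 * 0.3048       # 60 ft 6 in, in meters

    v = 0.9 * C                      # the pitch, at 90% of light speed

    print(f"{v * MPH_PER_MPS / 1e6:.0f} million mph")   # ~604 ("600 million")
    print(f"{distance_m / v * 1e9:.0f} ns to the plate")  # ~68 ("about 70")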

***

What would happen if everyone on Earth stood as close to each other as they could and jumped, everyone landing on the ground at the same instant?

This is one of the most popular questions submitted through my website. It’s been examined before, including by ScienceBlogs and The Straight Dope. They cover the kinematics pretty well. However, they don’t tell the whole story.

Let’s take a closer look.

At the start of the scenario, the entire Earth’s population has been magically transported together into one place.

xkcd-prejump

This crowd takes up an area the size of Rhode Island. But there’s no reason to use the vague phrase “an area the size of Rhode Island.” This is our scenario; we can be specific. They’re actually in Rhode Island.

At the stroke of noon, everyone jumps.

xkcd-jumping

As discussed elsewhere, it doesn’t really affect the planet. Earth outweighs us by a factor of over ten trillion. On average, we humans can vertically jump maybe half a meter on a good day. Even if the Earth were rigid and responded instantly, it would be pushed down by less than an atom’s width.
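
Both claims survive a momentum-conservation check. A minimal sketch, assuming a world population of about seven billion, an average body mass of 60 kg, and a half-meter jump (none of these figures appear in the excerpt):

    import math

    # Assumed figures: ~7 billion people, ~60 kg each, a 0.5 m vertical jump.
    POP, M_PERSON, M_EARTH, G = 7e9, 60.0, 5.97e24, 9.81

    # "Earth outweighs us by a factor of over ten trillion"
    print(f"mass ratio: {M_EARTH / (POP * M_PERSON):.1e}")   # ~1.4e13

    # Momentum conservation: Earth recoils only while everyone is airborne.
    v_people = math.sqrt(2 * G * 0.5)             # takeoff speed, ~3.1 m/s
    v_earth = POP * M_PERSON * v_people / M_EARTH  # Earth's recoil speed
    t_apex = v_people / G                          # time for jumpers to peak
    print(f"Earth shifts ~{v_earth * t_apex:.0e} m")
    # ~7e-14 m, well under a thousandth of an atom's ~1e-10 m width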

Next, everyone falls back to the ground.

Technically, this delivers a lot of energy into the Earth, but it’s spread out over a large enough area that it doesn’t do much more than leave footprints in a lot of gardens. A slight pulse of pressure spreads through the North American continental crust and dissipates with little effect. The sound of all those feet hitting the ground creates a loud, drawn-out roar lasting many seconds.

Eventually, the air grows quiet.

Seconds pass. Everyone looks around. There are a lot of uncomfortable glances. Someone coughs.

A cell phone comes out of a pocket. Within seconds, the rest of the world’s five billion phones follow. All of them—even those compatible with the region’s towers—are displaying some version of “NO SIGNAL.” The cell networks have all collapsed under the unprecedented load. Outside Rhode Island, abandoned machinery begins grinding to a halt.

T. F. Green Airport in Warwick, Rhode Island, handles a few thousand passengers a day. Assuming they got things organized (including sending out scouting missions to retrieve fuel), they could run at 500 percent capacity for years without making a dent in the crowd.
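
The “years” claim holds up to rough arithmetic. If “a few thousand” means about 5,000 passengers a day (my assumption, not a figure from the excerpt):

    crowd = 7e9                  # assumed world population
    boosted_daily = 5_000 * 5    # "500 percent capacity"
    print(f"{crowd / boosted_daily / 365:,.0f} years")   # ~767 years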

The addition of all the nearby airports doesn’t change the equation much. Nor does the region’s light rail system. Crowds climb on board container ships in the deep-water port of Providence, but stocking sufficient food and water for a long sea voyage proves a challenge.

Rhode Island’s half-million cars are commandeered. Moments later, I-95, I-195, and I-295 become the sites of the largest traffic jam in the history of the planet. Most of the cars are engulfed by the crowds, but a lucky few get out and begin wandering the abandoned road network.

Some make it past New York or Boston before running out of fuel. Since the electricity is probably not on at this point, rather than find a working gas pump, it’s easier to just abandon the car and steal a new one. Who can stop you? All the cops are in Rhode Island.

The edge of the crowd spreads outward into southern Massachusetts and Connecticut. Any two people who meet are unlikely to have a language in common, and almost nobody knows the area. The state becomes a chaotic patchwork of coalescing and collapsing social hierarchies. Violence is common. Everybody is hungry and thirsty. Grocery stores are emptied. Fresh water is hard to come by and there’s no efficient system for distributing it.

Within weeks, Rhode Island is a graveyard of billions.

The survivors spread out across the face of the world and struggle to build a new civilization atop the pristine ruins of the old. Our species staggers on, but our population has been greatly reduced. Earth’s orbit is completely unaffected— it spins along exactly as it did before our species-wide jump.

But at least now we know.

What If?: Serious Scientific Answers to Absurd Hypothetical Questions is sure to spark your imagination and reignite your creativity.

Harper Lee on Reading and Loving Books

Harper Lee

Harper Lee, author of the much-loved novel To Kill a Mockingbird, wrote the following letter to Oprah Winfrey:

May 7, 2006

Dear Oprah,

Do you remember when you learned to read, or like me, can you not even remember a time when you didn’t know how? I must have learned from having been read to by my family. My sisters and brother, much older, read aloud to keep me from pestering them; my mother read me a story every day, usually a children’s classic, and my father read from the four newspapers he got through every evening. Then, of course, it was Uncle Wiggily at bedtime.

So I arrived in the first grade, literate, with a curious cultural assimilation of American history, romance, the Rover Boys, Rapunzel, and The Mobile Press. Early signs of genius? Far from it. Reading was an accomplishment I shared with several local contemporaries. Why this endemic precocity? Because in my hometown, a remote village in the early 1930s, youngsters had little to do but read. A movie? Not often — movies weren’t for small children. A park for games? Not a hope. We’re talking unpaved streets here, and the Depression.

Books were scarce. There was nothing you could call a public library, we were a hundred miles away from a department store’s books section, so we children began to circulate reading material among ourselves until each child had read another’s entire stock. There were long dry spells broken by the new Christmas books, which started the rounds again.

As we grew older, we began to realize what our books were worth: Anne of Green Gables was worth two Bobbsey Twins; two Rover Boys were an even swap for two Tom Swifts. Aesthetic frissons ran a poor second to the thrills of acquisition. The goal, a full set of a series, was attained only once by an individual of exceptional greed — he swapped his sister’s doll buggy.

We were privileged. There were children, mostly from rural areas, who had never looked into a book until they went to school. They had to be taught to read in the first grade, and we were impatient with them for having to catch up. We ignored them.

And it wasn’t until we were grown, some of us, that we discovered what had befallen the children of our African-American servants. In some of their schools, pupils learned to read three-to-one — three children to one book, which was more than likely a cast-off primer from a white grammar school. We seldom saw them until, older, they came to work for us.

Now, 75 years later in an abundant society where people have laptops, cell phones, iPods, and minds like empty rooms, I still plod along with books. Instant information is not for me. I prefer to search library stacks because when I work to learn something, I remember it.

And, Oprah, can you imagine curling up in bed to read a computer? Weeping for Anna Karenina and being terrified by Hannibal Lecter, entering the heart of darkness with Mistah Kurtz, having Holden Caulfield ring you up — some things should happen on soft pages, not cold metal.

The village of my childhood is gone, with it most of the book collectors, including the dodgy one who swapped his complete set of Seckatary Hawkinses for a shotgun and kept it until it was retrieved by an irate parent.

Now we are three in number and live hundreds of miles away from each other. We still keep in touch by telephone conversations of recurrent theme: “What is your name again?” followed by “What are you reading?” We don’t always remember.

Much love,
Harper

(Sources: Letters of Note; image)

Mistakes

“Forgetting your mistakes is a terrible error if you are trying to improve your cognition… Why not celebrate stupidities!” — Charlie Munger

“If anyone can refute me – show me I’m making a mistake or looking at things from the wrong perspective – I’ll gladly change. It’s the truth I’m after and the truth never harmed anyone. What harms us is to persist in self-deceit and ignorance.” — Marcus Aurelius in Meditations

Sometimes we lose our way.

We make mistakes. We focus on the wrong things. We pursue goals at all costs. We teeter on ethical and moral cliffs. We get too far down a slippery slope. We steal. We cheat. We lie. We deceive others. We deceive ourselves. We don’t open ourselves up to our friends. We see crime or fraud and don’t speak out.

You can be a good person and still exercise poor judgment.

In these moments we’re not the friend others deserve, the partner others choose, the child our parents raised, the exemplar we wish to be, nor the person we’re capable of being.

It can happen to the best of us. We’re human. We all make mistakes.

Just because we’ve lost our way doesn’t mean that we are lost forever. In the end, it’s not the failures that define us so much as how we respond.

Many of us get steered off course at some point in our lives, but what really counts is the choices that follow those mistakes. A teen who gets in trouble with the law, for example, can accept responsibility for his actions, change his behaviour, and go on to lead the nation, or he can see only failure and tumble into a vicious cycle of committing ever-larger crimes.

It’s not that you stumble, it’s that you get back up. It’s not that you did something wrong but that you realize what’s happening and change. It’s not that you messed up as a friend or lover, it’s that you see ways you can be better. Having the wrong priorities is bad enough, but realizing that and refusing to change is worse. It’s not that you never took the time to smell the roses and admire the sunset, it’s that once you realize this you take the time to notice.

Mistakes are bad, no doubt, but not learning from them is worse. The key to learning from mistakes is to admit them without excuses or defensiveness, rub your nose in them a little, and make the changes you need to make to grow going forward. If you can’t admit your mistakes, you won’t grow.

The Darwin Economy – Why Smith’s Invisible Hand Breaks Down

The Darwin Economy

In The Darwin Economy: Liberty, Competition, and the Common Good, Robert H. Frank, an economics professor at Cornell’s Johnson Graduate School of Management, takes on the debate over who was the better economist—Adam Smith or Charles Darwin. Frank, surprisingly, sides with Darwin, arguing that within the next century Darwin will unseat Smith as the intellectual founder of economics.

Why does the invisible hand, which says that competition channels self-interest toward the common good, break down?

Without question, Adam Smith’s invisible hand was a genuinely groundbreaking insight. Producers rush to introduce improved product designs and cost-saving innovations for the sole purpose of capturing market share and profits from their rivals. In the short run, these steps work just as the producers had hoped. But rival firms are quick to mimic the innovations, and the resulting competition quickly causes prices to fall in line with the new, lower costs. In the end, Smith argued, consumers are the ultimate beneficiaries of all this churning.

But many of Smith’s modern disciples believe he made the much bolder claim that markets always harness individual self-interest to produce the greatest good for society as a whole. Smith’s own account, however, was far more circumspect. He wrote, for example, that the profit-seeking business owner “intends only his own gain, and he is in this, as in many other cases, led by an invisible hand to promote an end which was no part of his intention. Nor is it always the worse for the society that it was not part of it [emphasis added].”

Smith never believed that the invisible hand guaranteed good outcomes in all circumstances. His skepticism was on full display, for example, when he wrote, “People of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public, or in some contrivance to raise prices.” To him, what was remarkable was that self-interested actions often led to socially benign outcomes.

Like Smith, modern progressive critics of the market system tend to attribute its failings to conspiracies to restrain competition. But competition was much more easily restrained in Smith’s day than it is now. The real challenge to the invisible hand is rooted in the very logic of the competitive process itself.

Charles Darwin was one of the first to perceive the underlying problem clearly. One of his central insights was that natural selection favors traits and behaviors primarily according to their effect on individual organisms, not larger groups. Sometimes individual and group interests coincide, he recognized, and in such cases we often get invisible hand-like results. A mutation that codes for keener eyesight in one particular hawk, for example, serves the interests of that individual, but its inevitable spread also makes hawks as a species more successful.

In other cases, however, mutations that help the individual prove quite harmful to the larger group. This is in fact the expected result for mutations that confer advantage in head-to-head competition among members of the same species. Male body mass is a case in point. Most vertebrate species are polygynous, meaning that males take more than one mate if they can. The qualifier is important, because when some take multiple mates, others get none. The latter don’t pass their genes along, making them the ultimate losers in Darwinian terms. So it’s no surprise that males often battle furiously for access to mates. Size matters in those battles, and hence the evolutionary arms races that produce larger males.

Elephant seals are an extreme but instructive example. Bulls of the species often weigh almost six thousand pounds, more than five times as much as females and almost as much as a Lincoln Navigator SUV. During the mating season, pairs of mature bulls battle one another ferociously for hours on end, until one finally trudges off in defeat, bloodied and exhausted. The victor claims near-exclusive sexual access to a harem that may number as many as a hundred cows. But while being larger than his rival makes an individual bull more likely to prevail in such battles, prodigious size is a clear handicap for bulls as a group, making them far more vulnerable to sharks and other predators.

Given an opportunity to vote on a proposal to reduce every animal’s weight by half, bulls would have every reason to favor it. Since it’s relative size, not absolute size, that matters in battle, the change would not affect the outcome of any given head-to-head contest, but it would reduce each animal’s risk of being eaten by sharks. There’s no practical way, of course, that elephant seals could implement such a proposal. Nor could any bull solve this problem unilaterally, since a bull that weighed much less than others would never win a mate.

Similar conflicts pervade human interactions when individual rewards depend on relative performance. Their essence is nicely captured in a celebrated example by the economist Thomas Schelling. Schelling noted that hockey players who are free to choose for themselves invariably skate without helmets, yet when they’re permitted to vote on the matter, they support rules that require them. If helmets are so great, he wondered, why don’t players just wear them? Why do they need a rule?

His answer began with the observation that skating without a helmet confers a small competitive edge—perhaps by enabling players to see or hear a little better, or perhaps by enabling them to intimidate their opponents. The immediate lure of gaining a competitive edge trumps more abstract concerns about the possibility of injury, so players eagerly embrace the additional risk. The rub, of course, is that when every player skates without a helmet, no one gains a competitive advantage—hence the attraction of the rule.

As Schelling’s diagnosis makes clear, the problem confronting hockey players has nothing to do with imperfect information, lack of self-control, or poor cognitive skills—shortcomings that are often cited as grounds for government intervention. And it clearly does not stem from exploitation or any insufficiency of competition. Rather, it’s a garden-variety collective action problem. Players favor helmet rules because that’s the only way they’re able to play under reasonably safe conditions. A simple nudge—say, a sign in the locker room reminding players that helmets reduce the risk of serious injury—just won’t solve their problem. They need a mandate.
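
Schelling’s setup is a textbook two-player collective action problem, and the dominance argument is easy to make concrete. Below is a minimal sketch with invented payoff numbers, chosen only so that the competitive edge looms larger than the perceived injury risk; the same template fits the workplace-safety arms race discussed later:

    from itertools import product

    EDGE = 3   # relative advantage of skating bareheaded vs. a helmeted rival
    RISK = 2   # perceived injury cost of playing without a helmet

    def payoff(me, rival):
        """Payoff to 'me'; the edge is zero-sum, the risk is private."""
        edge = {("bare", "helmet"): EDGE, ("helmet", "bare"): -EDGE}.get((me, rival), 0)
        return edge - (RISK if me == "bare" else 0)

    for me, rival in product(("helmet", "bare"), repeat=2):
        print(f"me={me:7} rival={rival:7} -> {payoff(me, rival):+d}")

    # Whatever the rival does, "bare" pays more (+1 vs 0, and -2 vs -3), so it
    # is the dominant strategy. Yet the all-bare equilibrium (-2 each) is worse
    # than the all-helmet outcome (0 each) -- hence the vote for a mandate.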

What about the libertarians’ complaint that helmet rules deprive individuals of the right to choose? This objection is akin to objecting that a military arms control agreement robs the signatories of their right to choose for themselves how much to spend on bombs. Of course, but that’s the whole point of such agreements! Parties who confront a collective action problem often realize that the only way to get what they want is to constrain their own ability to do as they please.

As John Stuart Mill argued in On Liberty, it’s permissible to constrain an individual’s freedom of action only when there’s no less intrusive way to prevent undue harm to others. The hockey helmet rule appears to meet this test. By skating without a helmet, a player imposes harm on rival players by making them less likely to win the game, an outcome that really matters to them. If the helmet rule itself somehow imposed even greater harm, it wouldn’t be justified. But that’s a simple practical question, not a matter of deep philosophical principle.

Rewards that depend on relative performance spawn collective action problems that can cause markets to fail. For instance, the same wedge that separates individual and group interests in Darwinian arms races also helps explain why the invisible hand might not automatically lead to the best possible levels of safety in the workplace. The traditional invisible-hand account begins with the observation that, all other factors the same, riskier jobs tend to pay more, for two reasons. Because of the money employers save by not installing additional safety equipment, they can pay more; and because workers like safety, they will choose safer jobs unless riskier jobs do, in fact, pay more. According to the standard invisible-hand narrative, the fact that a worker is willing to accept lower safety for higher wages implies that the extra income was sufficient compensation for the decrement in safety. But that account rests on the assumption that extra income is valued only for the additional absolute consumption it makes possible. When a worker gets a higher wage, however, there is also a second important benefit. He is able to consume more in absolute terms, yes—but he is also able to consume more relative to others.

Most parents, for example, want to send their children to the best possible schools. Some workers might thus decide to accept a riskier job at a higher wage because that would enable them to meet the monthly payments on a house in a better school district. But other workers are in the same boat, and school quality is an inherently relative concept. So if other workers also traded safety for higher wages, the ultimate outcome would be merely to bid up the prices of houses in better school districts. Everyone would end up with less safety, yet no one would achieve the goal that made that trade seem acceptable in the first place. As in a military arms race, when all parties build more arms, none is any more secure than before.

Workers confronting these incentives might well prefer an alternative state of the world in which all enjoyed greater safety, even at the expense of all having lower wages. But workers can control only their own job choices, not the choices of others. If any individual worker accepted a safer job while others didn’t, that worker would be forced to send her children to inferior schools. To get the outcome they desire, workers must act in unison. Again, a mere nudge won’t do. Merely knowing that individual actions are self-canceling doesn’t eliminate the incentive to take those actions.

The Darwin Economy goes on to explore the consequences and implications of Darwin’s theory being a better model for economics than Smith’s invisible hand.

Eight Things I Learned from Peter Thiel’s Zero To One

peter-thiel
Peter Thiel is an entrepreneur and investor. He co-founded PayPal and Palantir. He also made the first outside investment in Facebook and was an early investor in companies like SpaceX and LinkedIn. And now he’s written a book, Zero to One: Notes on Startups, or How to Build the Future, with the goal of helping us “see beyond the tracks laid down” to the “broader future that there is to create.”

The book is an exercise in thinking. It’s about questioning and rethinking received wisdom in order to create the future.

Here are eight lessons I took away from the book.

1. Like Heraclitus, who said that you can only step into the same river once, Thiel believes that each moment in business happens only once.

The next Bill Gates will not build an operating system. The next Larry Page or Sergey Brin won’t make a search engine. And the next Mark Zuckerberg won’t create a social network. If you are copying these guys, you aren’t learning from them.

Of course, it’s easier to copy a model than to make something new. Doing what we already know how to do takes the world from 1 to n, adding more of something familiar. But every time we create something new, we go from 0 to 1. The act of creation is singular, as is the moment of creation, and the result is something fresh and strange.

2. There is no formula for innovation.

The paradox of teaching entrepreneurship is that such a formula (for innovation) cannot exist; because every innovation is new and unique, no authority can prescribe in concrete terms how to be more innovative. Indeed, the single most powerful pattern I have noticed is that successful people find value in unexpected places, and they do this by thinking about business from first principles instead of formulas.

3. The best interview question you can ask.

Whenever I interview someone for a job, I like to ask this question: “What important truth do very few people agree with you on?”

This is a question that sounds easy because it’s straightforward. Actually, it’s very hard to answer. It’s intellectually difficult because the knowledge that everyone is taught in school is by definition agreed upon. And it’s psychologically difficult because anyone trying to answer must say something she knows to be unpopular. Brilliant thinking is rare, but courage is in even shorter supply than genius.

Most commonly, I hear answers like the following:

“Our educational system is broken and urgently needs to be fixed.”

“America is exceptional.”

“There is no God.”

These are bad answers. The first and the second statements might be true, but many people already agree with them. The third statement simply takes one side in a familiar debate. A good answer takes the following form: “Most people believe in x, but the truth is the opposite of x.”

What does this have to do with the future?

In the most minimal sense, the future is simply the set of all moments yet to come. But what makes the future distinctive and important isn’t that it hasn’t happened yet, but rather that it will be a time when the world looks different from today. … Most answers to the contrarian questions are different ways of seeing the present; good answers are as close as we can come to looking into the future.

4. A new company’s most important strength

Properly defined, a startup is the largest group of people you can convince of a plan to build a different future. A new company’s most important strength is new thinking: even more important than nimbleness, small size affords space to think.

5. The first step to thinking clearly

Our contrarian question—What important truth do very few people agree with you on?—is difficult to answer directly. It may be easier to start with a preliminary: what does everybody agree on?

“Madness is rare in individuals
—but in groups, parties, nations and ages it is the rule.”
— Nietzsche (before he went mad)

If you can identify a delusional popular belief, you can find what lies hidden behind it: the contrarian truth.

[…]

Conventional beliefs only ever come to appear arbitrary and wrong in retrospect; whenever one collapses we call the old belief a bubble, but the distortions caused by bubbles don’t disappear when they pop. The internet bubble of the ‘90s was the biggest of the last two decades, and the lessons learned afterward define and distort almost all thinking about technology today. The first step to thinking clearly is to question what we think we know about the past.

Here is an example Thiel gives to help illuminate this idea.

The entrepreneurs who stuck with Silicon Valley learned four big lessons from the dot-com crash that still guide business thinking today:

1. Make incremental advances — “Grand visions inflated the bubble, so they should not be indulged. Anyone who claims to be able to do something great is suspect, and anyone who wants to change the world should be more humble. Small, incremental steps are the only safe path forward.”

2. Stay lean and flexible — “All companies must be lean, which is code for unplanned. You should not know what your business will do; planning is arrogant and inflexible. Instead you should try things out, iterate, and treat entrepreneurship as agnostic experimentation.”

3. Improve on the competition — “Don’t try to create a new market prematurely. The only way to know that you have a real business is to start with an already existing customer, so you should build your company by improving on recognizable products already offered by successful competitors.”

4. Focus on product, not sales — “If your product requires advertising or salespeople to sell it, it’s not good enough: technology is primarily about product development, not distribution. Bubble-era advertising was obviously wasteful, so the only sustainable growth is viral growth.”

These lessons have become dogma in the startup world; those who would ignore them are presumed to invite the justified doom visited upon technology in the great crash of 2000. And yet the opposite principles are probably more correct.

1. It is better to risk boldness than triviality.
2. A bad plan is better than no plan.
3. Competitive markets destroy profits.
4. Sales matters just as much as product.

To build the future we need to challenge the dogmas that shape our view of the past. That doesn’t mean the opposite of what is believed is necessarily true; it means that you need to rethink what is and is not true and determine how that shapes how we see the world today. As Thiel says, “The most contrarian thing of all is not to oppose the crowd but to think for yourself.”

6. Progress comes from monopoly, not competition.

The problem with a competitive business goes beyond lack of profits. Imagine you’re running one of those restaurants in Mountain View. You’re not that different from dozens of your competitors, so you’ve got to fight hard to survive. If you offer affordable food with low margins, you can probably pay employees only minimum wage. And you’ll need to squeeze out every efficiency: That is why small restaurants put Grandma to work at the register and make the kids wash dishes in the back.

A monopoly like Google is different. Since it doesn’t have to worry about competing with anyone, it has wider latitude to care about its workers, its products and its impact on the wider world. Google’s motto—”Don’t be evil”—is in part a branding ploy, but it is also characteristic of a kind of business that is successful enough to take ethics seriously without jeopardizing its own existence. In business, money is either an important thing or it is everything. Monopolists can afford to think about things other than making money; non-monopolists can’t. In perfect competition, a business is so focused on today’s margins that it can’t possibly plan for a long-term future. Only one thing can allow a business to transcend the daily brute struggle for survival: monopoly profits.

So a monopoly is good for everyone on the inside, but what about everyone on the outside? Do outsize profits come at the expense of the rest of society? Actually, yes: Profits come out of customers’ wallets, and monopolies deserve their bad reputation—but only in a world where nothing changes.

In a static world, a monopolist is just a rent collector. If you corner the market for something, you can jack up the price; others will have no choice but to buy from you. Think of the famous board game: Deeds are shuffled around from player to player, but the board never changes. There is no way to win by inventing a better kind of real-estate development. The relative values of the properties are fixed for all time, so all you can do is try to buy them up.

But the world we live in is dynamic: We can invent new and better things. Creative monopolists give customers more choices by adding entirely new categories of abundance to the world. Creative monopolies aren’t just good for the rest of society; they’re powerful engines for making it better.

7. Rivalry causes us to overemphasize old opportunities and slavishly copy what has worked in the past.

Marx and Shakespeare provide two models that we can use to understand almost every kind of conflict.

According to Marx, people fight because they are different. The proletariat fights the bourgeoisie because they have completely different ideas and goals (generated, for Marx, by their very different material circumstances). The greater the difference, the greater the conflict.

To Shakespeare, by contrast, all combatants look more or less alike. It’s not at all clear why they should be fighting since they have nothing to fight about. Consider the opening to Romeo and Juliet: “Two households, both alike in dignity.” The two houses are alike, yet they hate each other. They grow even more similar as the feud escalates. Eventually, they lose sight of why they started fighting in the first place.

In the world of business, at least, Shakespeare proves the superior guide. Inside a firm, people become obsessed with their competitors for career advancement. Then the firms themselves become obsessed with their competitors in the marketplace. Amid all the human drama, people lose sight of what matters and focus on their rivals instead.

[…]

Rivalry causes us to overemphasize old opportunities and slavishly copy what has worked in the past.

8. Last can be first

You’ve probably heard about “first mover advantage”: if you’re the first entrant into a market, you can capture significant market share while competitors scramble to get started. That can work, but moving first is a tactic, not a goal. What really matters is generating cash flows in the future, so being the first mover doesn’t do you any good if someone else comes along and unseats you. It’s much better to be the last mover – that is, to make the last great development in a specific market and enjoy years or even decades of monopoly profits.

Grandmaster José Raúl Capablanca put it well: to succeed, “you must study the endgame before everything else.”

Zero to One is full of counterintuitive insights that will help your thinking and ignite possibility.

(image source)

The History of Cognitive Overload

The Organized Mind

The Organized Mind: Thinking Straight in the Age of Information Overload, a book by Daniel Levitin, has an interesting section on cognitive overload.

Each day we are confronted with hundreds, probably thousands, of decisions, most of which are insignificant or unimportant or both. Do we really need a whole aisle for toothpaste?

In response to all of these decisions most of us adopt a strategy of satisficing, a term coined by Nobel Prize winner Herbert Simon to describe something that is perhaps not the best but good enough. For things that don’t matter, this is a good approach. You don’t know which pizza place is the best but you know which ones are good enough.

Satisficing is one of the foundations of productive human behavior; it prevails when we don’t waste time on decisions that don’t matter, or more accurately, when we don’t waste time trying to find improvements that are not going to make a significant difference in our happiness or satisfaction.

All of us, Levitin argues, engage in satisficing every time we clean our homes.

If we got down on the floor with a toothbrush every day to clean the grout, if we scrubbed the windows and walls every single day, the house would be spotless. But few of us go to this much trouble even on a weekly basis (and when we do, we’re likely to be labeled obsessive-compulsive). For most of us, we clean our houses until they are clean enough, reaching a kind of equilibrium between effort and benefit. It is this cost-benefits analysis that is at the heart of satisficing.

The easiest way to be happy is to want what you already have. “Happy people engage in satisficing all the time, even if they don’t know it.”

Satisficing is a tool that allows you not to waste time on things that don’t really matter. Who cares if you pick Colgate or Crest? For other decisions, “the old-fashioned pursuit of excellence remains the right strategy.”
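
The contrast between satisficing and “the old-fashioned pursuit of excellence” is, at bottom, the contrast between two search strategies. A toy sketch, with made-up pizza ratings:

    # Satisficing vs. maximizing as search strategies (scores are invented).
    pizza_places = [("Gino's", 6.8), ("Slice House", 7.9),
                    ("Napoli", 9.1), ("Corner Pie", 8.4)]

    def satisfice(options, good_enough):
        """Return the first option that clears the bar, then stop looking."""
        for name, score in options:
            if score >= good_enough:
                return name

    def maximize(options):
        """Examine every option and return the single best."""
        return max(options, key=lambda o: o[1])[0]

    print(satisfice(pizza_places, good_enough=7.5))  # Slice House, 2 checked
    print(maximize(pizza_places))                    # Napoli, all 4 checked

Satisficing stops after two evaluations; maximizing always pays for all four. Multiply that saving across thousands of daily decisions and the appeal is obvious.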

We now spend an unusual amount of time and energy ignoring and filtering. Consider the supermarket.

In 1976, the average supermarket stocked 9,000 unique products; today that number has ballooned to 40,000 of them, yet the average person gets 80%–85% of their needs in only 150 different supermarket items. That means that we need to ignore 39,850 items in the store.

This comes with a cost.

Neuroscientists have discovered that unproductivity and loss of drive can result from decision overload. Although most of us have no trouble ranking the importance of decisions if asked to do so, our brains don’t automatically do this.

We have a limited number of decisions in us: there are only so many we can make in a day, and once we’ve hit that limit, it doesn’t matter how important the rest are.

The decision-making network in our brain doesn’t prioritize.

Our world has exploded and information is abundant. I didn’t think we could process it all, but Levitin argues that we can, at a cost.

We can have trouble separating the trivial from the important, and all this information processing makes us tired. Neurons are living cells with a metabolism; they need oxygen and glucose to survive and when they’ve been working hard, we experience fatigue. Every status update you read on Facebook, every tweet or text message you get from a friend, is competing for resources in your brain with important things like whether to put your savings in stocks or bonds, where you left your passport, or how best to reconcile with a close friend you just had an argument with.

The processing capacity of the conscious mind has been estimated at 120 bits per second. That bandwidth, or window, is the speed limit for the traffic of information we can pay conscious attention to at any one time. While a great deal occurs below the threshold of our awareness, and this has an impact on how we feel and what our life is going to be like, in order for something to become encoded as part of your experience, you need to have paid conscious attention to it.

What does this mean?

In order to understand one person speaking to us, we need to process 60 bits of information per second. With a processing limit of 120 bits per second, this means you can barely understand two people talking to you at the same time. Under most circumstances, you will not be able to understand three people talking at the same time. …

With such attentional restrictions, it’s clear why many of us feel overwhelmed by managing some of the most basic aspects of life. Part of the reason is that our brains evolved to help us deal with life during the hunter-gatherer phase of human history, a time when we might encounter no more than a thousand people across the entire span of our lifetime. Walking around midtown Manhattan, you’ll pass that number of people in half an hour.

Attention is the most essential mental resource for any organism. It determines which aspects of the environment we deal with, and most of the time, various automatic, subconscious processes make the correct choice about what gets passed through to our conscious awareness. For this to happen, millions of neurons are constantly monitoring the environment to select the most important things for us to focus on. These neurons are collectively the attentional filter. They work largely in the background, outside of our conscious awareness. This is why most of the perceptual detritus of our daily lives doesn’t register, or why, when you’ve been driving on the freeway for several hours at a stretch, you don’t remember much of the scenery that has whizzed by: Your attentional system “protects” you from registering it because it isn’t deemed important. This unconscious filter follows certain principles about what it will let through to your conscious awareness.

The attentional filter is one of evolution’s greatest achievements. In nonhumans, it ensures that they don’t get distracted by irrelevancies. Squirrels are interested in nuts and predators, and not much else. Dogs, whose olfactory sense is one million times more sensitive than ours, use smell to gather information about the world more than they use sound, and their attentional filter has evolved to make that so. If you’ve ever tried to call your dog while he is smelling something interesting, you know that it is very difficult to grab his attention with sound— smell trumps sound in the dog brain. No one has yet worked out all of the hierarchies and trumping factors in the human attentional filter, but we’ve learned a great deal about it. When our protohuman ancestors left the cover of the trees to seek new sources of food, they simultaneously opened up a vast range of new possibilities for nourishment and exposed themselves to a wide range of new predators. Being alert and vigilant to threatening sounds and visual cues is what allowed them to survive; this meant allowing an increasing amount of information through the attentional filter.

Levitin points out an interesting difference between highly successful people (HSPs) and the rest of us when it comes to attentional filters.

Successful people— or people who can afford it— employ layers of people whose job it is to narrow the attentional filter. That is, corporate heads, political leaders, spoiled movie stars, and others whose time and attention are especially valuable have a staff of people around them who are effectively extensions of their own brains, replicating and refining the functions of the prefrontal cortex’s attentional filter.

These highly successful persons have many of the daily distractions of life handled for them, allowing them to devote all of their attention to whatever is immediately before them. They seem to live completely in the moment. Their staff handle correspondence, make appointments, interrupt those appointments when a more important one is waiting, and help to plan their days for maximum efficiency (including naps!). Their bills are paid on time, their car is serviced when required, they’re given reminders of projects due, and their assistants send suitable gifts to the HSP’s loved ones on birthdays and anniversaries. Their ultimate prize if it all works? A Zen-like focus.

Levitin argues that if we organize our minds and our lives “following the new neuroscience of attention and memory, we can all deal with the world in ways that provide the sense of freedom that these highly successful people enjoy.”

To do that, however, we need to understand the architecture of our attentional system. “To better organize our mind, we need to know how it has organized itself.”

Change and importance are two crucial principles used by our attentional filter.

The brain’s change detector is at work all the time, whether you know it or not. If a close friend or relative calls on the phone, you might detect that her voice sounds different and ask if she’s congested or sick with the flu. When your brain detects the change, this information is sent to your consciousness, but your brain doesn’t explicitly send a message when there is no change. If your friend calls and her voice sounds normal, you don’t immediately think, “Oh, her voice is the same as always.” Again, this is the attentional filter doing its job, detecting change, not constancy.

Importance can also filter information. But the filter keys not on objective or absolute importance, only on what is personal and relevant to you.

If you’re driving, a billboard for your favorite music group might catch your eye (really, we should say catch your mind) while other billboards go ignored. If you’re in a crowded room, at a party for instance, certain words to which you attach high importance might suddenly catch your attention, even if spoken from across the room. If someone says “fire” or “sex” or your own name, you’ll find that you’re suddenly following a conversation far away from where you’re standing, with no awareness of what those people were talking about before your attention was captured.

The attentional filter lets us live on autopilot most of the time, pulling us out of it only when we need to. In so doing, we “do not register the complexities, nuances, and often the beauty of what is right in front of us.”

A great number of failures of attention occur because we are not using these two principles to our advantage.

Simply put, attention is limited.

A critical point that bears repeating is that attention is a limited-capacity resource— there are definite limits to the number of things we can attend to at once. We see this in everyday activities. If you’re driving, under most circumstances, you can play the radio or carry on a conversation with someone else in the car. But if you’re looking for a particular street to turn onto, you instinctively turn down the radio or ask your friend to hang on for a moment, to stop talking. This is because you’ve reached the limits of your attention in trying to do these three things. The limits show up whenever we try to do too many things at once.

Our brain hides things from us.

The human brain has evolved to hide from us those things we are not paying attention to. In other words, we often have a cognitive blind spot: We don’t know what we’re missing because our brain can completely ignore things that are not its priority at the moment— even if they are right in front of our eyes. Cognitive psychologists have called this blind spot various names, including inattentional blindness.

One of the most famous demonstrations of this is the basketball video (for more, see The Invisible Gorilla: How Our Intuitions Deceive Us).

A lot of instances of losing things like car keys, passports, money, receipts, and so on occur because our attentional systems are overloaded and they simply can’t keep track of everything. The average American owns thousands of times more possessions than the average hunter-gatherer. In a real biological sense, we have more things to keep track of than our brains were designed to handle. Even towering intellectuals such as Kant and Wordsworth complained of information excess and sheer mental exhaustion induced by too much sensory input or mental overload.

But we need not fear this cognitive overload, Levitin argues. “More than ever, effective external systems are available for organizing, categorizing, and keeping track of things.”

Information Overload, Then and Now

We’ve been around a long time. For most of that time we didn’t do much of anything other than “procreate and survive.” Then we discovered farming and irrigation and gave up our fairly nomadic lifestyle. Farming allowed us to specialize: I could grow potatoes and you could grow tomatoes, and we could trade. This created a dependency on each other and markets for trading. All of this trading, in turn, required an accounting system to keep tabs on inventory and trades. This was the birthplace of writing.

With the growth of trade, cities, and writing, people soon discovered architecture, government, and the other refinements of being that collectively add up to what we think of as civilization. The appearance of writing some 5,000 years ago was not met with unbridled enthusiasm; many contemporaries saw it as technology gone too far, a demonic invention that would rot the mind and needed to be stopped. Then, as now, printed words were promiscuous— it was impossible to control where they went or who would receive them, and they could circulate easily without the author’s knowledge or control. Lacking the opportunity to hear information directly from a speaker’s mouth, the antiwriting contingent complained that it would be impossible to verify the accuracy of the writer’s claims, or to ask follow-up questions. Plato was among those who voiced these fears; his King Thamus decried that the dependence on written words would “weaken men’s characters and create forgetfulness in their souls.” Such externalization of facts and stories meant people would no longer need to mentally retain large quantities of information themselves and would come to rely on stories and facts as conveyed, in written form, by others. Thamus, king of Egypt, argued that the written word would infect the Egyptian people with fake knowledge. The Greek poet Callimachus said books are “a great evil.” The Roman philosopher Seneca the Younger (tutor to Nero) complained that his peers were wasting time and money accumulating too many books, admonishing that “the abundance of books is a distraction.” Instead, Seneca recommended focusing on a limited number of good books, to be read thoroughly and repeatedly. Too much information could be harmful to your mental health.

Cue the printing press, which allowed for the rapid copying of books. This further complicated intellectual life.

The printing press was introduced in the mid-1400s, allowing for the more rapid proliferation of writing, replacing laborious (and error-prone) hand copying. Yet again, many complained that intellectual life as we knew it was done for. Erasmus, in 1525, went on a tirade against the “swarms of new books,” which he considered a serious impediment to learning. He blamed printers whose profit motive sought to fill the world with books that were “foolish, ignorant, malignant, libelous, mad, impious and subversive.” Leibniz complained about “that horrible mass of books that keeps on growing” and that would ultimately end in nothing less than a “return to barbarism.” Descartes famously recommended ignoring the accumulated stock of texts and instead relying on one’s own observations. Presaging what many say today, Descartes complained that “even if all knowledge could be found in books, where it is mixed in with so many useless things and confusingly heaped in such large volumes, it would take longer to read those books than we have to live in this life and more effort to select the useful things than to find them oneself.”

A steady flow of complaints about the proliferation of books reverberated into the late 1600s. Intellectuals warned that people would stop talking to each other, burying themselves in books, polluting their minds with useless, fatuous ideas.

There is an argument that this generation is at the same crossroads — our Gutenberg moment.

iPhones and iPads, email, and Twitter are the new revolution.

Each was decried as an addiction, an unnecessary distraction, a sign of weak character, feeding an inability to engage with real people and the real-time exchange of ideas.

The industrial revolution brought a rapid rise in discovery and advancement. Scientific information increased at a staggering clip.

Today, someone with a PhD in biology can’t even know all that is known about the nervous system of the squid! Google Scholar reports 30,000 research articles on that topic, with the number increasing exponentially. By the time you read this, the number will have increased by at least 3,000. The amount of scientific information we’ve discovered in the last twenty years is more than all the discoveries up to that point, from the beginning of language.

This is taxing all of us as we filter what we need to know from what we don’t. This ties in nicely with Tyler Cowen’s argument that the future of work is changing and we will need to add value to computers.

To cope with information overload we create to-do lists and email ourselves reminders. I have lists of lists. Right now there are over 800 unread emails in my inbox. Many of these are reminders to myself to look into something or to do something, links that I need to go back and read, or books I want to add to my wishlist. I see those emails and think, yes, I want to do that, but not right now. So they sit in my inbox. Occasionally I’ll create a to-do list, which starts off with the best intentions and rapidly becomes a brain dump. Eventually I remember the 18 minute plan for managing your day and I re-focus, scheduling time for the most important things. No matter what I do, I always feel like I’m on the border between order and chaos.

A large part of this feeling of being overwhelmed can be traced back to our evolutionarily outdated attentional system. I mentioned earlier the two principles of the attentional filter: change and importance. There is a third principle of attention— not specific to the attentional filter— that is relevant now more than ever. It has to do with the difficulty of attentional switching. We can state the principle this way: Switching attention comes with a high cost.

Our brains evolved to focus on one thing at a time. This enabled our ancestors to hunt animals, to create and fashion tools, to protect their clan from predators and invading neighbors. The attentional filter evolved to help us to stay on task, letting through only information that was important enough to deserve disrupting our train of thought. But a funny thing happened on the way to the twenty-first century: The plethora of information and the technologies that serve it changed the way we use our brains. Multitasking is the enemy of a focused attentional system. Increasingly, we demand that our attentional system try to focus on several things at once, something that it was not evolved to do. We talk on the phone while we’re driving, listening to the radio, looking for a parking place, planning our mom’s birthday party, trying to avoid the road construction signs, and thinking about what’s for lunch. We can’t truly think about or attend to all these things at once, so our brains flit from one to the other, each time with a neurobiological switching cost. The system does not function well that way. Once on a task, our brains function best if we stick to that task.

When you pay attention to something it means you don’t see something else. David Foster Wallace hit upon this in his speech, The Truth With A Whole Lot Of Rhetorical Bullshit Pared Away. He said:

Learning how to think really means learning how to exercise some control over how and what you think. It means being conscious and aware enough to choose what you pay attention to and to choose how you construct meaning from experience. Because if you cannot exercise this kind of choice in adult life, you will be totally hosed. Think of the old cliché about the mind being an excellent servant but a terrible master. This, like many clichés, so lame and unexciting on the surface, actually expresses a great and terrible truth.

And Winifred Gallagher, author of the book Rapt: Attention and the Focused Life, wrote:

That your experience largely depends on the material objects and mental subjects that you choose to pay attention to or ignore is not an imaginative notion, but a physiological fact. When you focus on a stop sign or a sonnet, a waft of perfume or a stock-market tip, your brain registers that “target,” which enables it to affect your behavior. In contrast, the things that you don’t attend to in a sense don’t exist, at least for you.

All day long, you are selectively paying attention to something, and much more often than you may suspect, you can take charge of this process to good effect. Indeed, your ability to focus on this and suppress that is the key to controlling your experience and, ultimately, your well-being.

When you walk in the front door of your house after a long day of work, greeted by screaming kids and a ringing phone, you’re not thinking about where you left your car keys.

Attention is created by networks of neurons in the prefrontal cortex (just behind your forehead) that are sensitive only to dopamine. When dopamine is released, it unlocks them, like a key in your front door, and they start firing tiny electrical impulses that stimulate other neurons in their network. But what causes that initial release of dopamine? Typically, one of two different triggers:

1. Something can grab your attention automatically, usually something that is salient to your survival, with evolutionary origins. This vigilance system incorporating the attentional filter is always at work, even when you’re asleep, monitoring the environment for important events. This can be a loud sound or bright light (the startle reflex), something moving quickly (that might indicate a predator), a beverage when you’re thirsty, or an attractively shaped potential sexual partner.

2. You effectively will yourself to focus only on that which is relevant to a search or scan of the environment. This deliberate filtering has been shown in the laboratory to actually change the sensitivity of neurons in the brain. If you’re trying to find your lost daughter at the state fair, your visual system reconfigures to look only for things of about her height, hair color, and body build, filtering everything else out. Simultaneously, your auditory system retunes itself to hear only frequencies in that band where her voice registers. You could call it the Where’s Waldo? filtering network.

It all comes back to Waldo.

If the image we’re imagining has red in it, our red-sensitive neurons are involved in the imagining. They then automatically tune themselves, and inhibit other neurons (the ones for the colors we’re not interested in), to facilitate the search. Where’s Waldo? trains children to set and exercise their visual attentional filters to locate increasingly subtle cues in the environment, much as our ancestors might have trained their children to track animals through the forest, starting with easy-to-see and easy-to-differentiate animals and working up to camouflaged animals that are more difficult to pick out from the surrounding environment. The system also works for auditory filtering: if we are expecting a particular pitch or timbre in a sound, our auditory neurons become selectively tuned to those characteristics.

When we willfully retune sensory neurons in this way, our brains engage in top-down processing, originating in a higher, more advanced part of the brain than sensory processing.
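As a loose software analogy (entirely invented here, not Levitin’s model; the Percept fields and the tolerance are made up), top-down filtering is like building a predicate from the target’s features and letting only matching percepts through:

```python
from dataclasses import dataclass

# A loose software analogy, not a model of the visual system.
# The Percept fields and the tolerance value are invented for illustration.

@dataclass
class Percept:
    label: str
    height_cm: int
    hair: str

def tune_filter(target_height_cm, target_hair, tolerance_cm=10):
    """Build a predicate from the target's features, the way top-down
    attention retunes the senses to the sought height and hair color."""
    def matches(p):
        return (abs(p.height_cm - target_height_cm) <= tolerance_cm
                and p.hair == target_hair)
    return matches

crowd = [
    Percept("adult", 175, "brown"),
    Percept("child A", 120, "blonde"),
    Percept("child B", 118, "red"),    # the child being searched for
    Percept("child C", 122, "brown"),
]

looks_right = tune_filter(target_height_cm=118, target_hair="red")
print([p.label for p in crowd if looks_right(p)])  # ['child B']
```

The point of the analogy is only that the filter is set from the top down, by the searcher’s goal, rather than from the bottom up by whatever happens to be salient in the scene.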

But if we have an effective attentional filter, why do we find it so hard to filter out distractions? Cue technology.

For one thing, we’re doing more work than ever before. The promise of a computerized society, we were told, was that it would relegate to machines all of the repetitive drudgery of work, allowing us humans to pursue loftier purposes and to have more leisure time. It didn’t work out this way. Instead of more time, most of us have less. Companies large and small have off-loaded work onto the backs of consumers. Things that used to be done for us, as part of the value-added service of working with a company, we are now expected to do ourselves. With air travel, we’re now expected to complete our own reservations and check-in, jobs that used to be done by airline employees or travel agents. At the grocery store, we’re expected to bag our own groceries and, in some supermarkets, to scan our own purchases. We pump our own gas at filling stations. Telephone operators used to look up numbers for us. Some companies no longer send out bills for their services— we’re expected to log in to their website, access our account, retrieve our bill, and initiate an electronic payment; in effect, do the job of the company for them. Collectively, this is known as shadow work— it represents a kind of parallel, shadow economy in which a lot of the service we expect from companies has been transferred to the customer. Each of us is doing the work of others and not getting paid for it. It is responsible for taking away a great deal of the leisure time we thought we would all have in the twenty-first century.

Beyond doing more work, we are dealing with more changes in information technology than our parents did, and more as adults than we did as children. The average American replaces her cell phone every two years, and that often means learning new software, new buttons, new menus. We change our computer operating systems every three years, and that requires learning new icons and procedures, and learning new locations for old menu items.

It’s not a coincidence that highly successful people tend to offload these tasks to others, allowing them to focus.

As knowledge becomes more available— and decentralized through the Internet— the notions of accuracy and authoritativeness have become clouded. Conflicting viewpoints are more readily available than ever, and in many cases they are disseminated by people who have no regard for facts or truth. Many of us find we don’t know whom to believe, what is true, what has been modified, and what has been vetted.

[...]

My teacher, the Stanford cognitive psychologist Amos Tversky, encapsulates this in “the Volvo story.” A colleague was shopping for a new car and had done a great deal of research. Consumer Reports showed through independent tests that Volvos were among the best built and most reliable cars in their class. Customer satisfaction surveys showed that Volvo owners were far happier with their purchase after several years. The surveys were based on tens of thousands of customers. The sheer number of people polled meant that any anomaly— like a specific vehicle that was either exceptionally good or exceptionally bad— would be drowned out by all the other reports. In other words, a survey such as this has statistical and scientific legitimacy and should be weighted accordingly when one makes a decision. It represents a stable summary of the average experience, and the most likely best guess as to what your own experience will be (if you’ve got nothing else to go on, your best guess is that your experience will be most like the average).

Amos ran into his colleague at a party and asked him how his automobile purchase was going. The colleague had decided against the Volvo in favor of a different, lower-rated car. Amos asked him what made him change his mind after all that research pointed to the Volvo. Was it that he didn’t like the price? The color options? The styling? No, it was none of those reasons, the colleague said. Instead, the colleague said, he found out that his brother-in-law had owned a Volvo and that it was always in the shop.

From a strictly logical point of view, the colleague is being irrational. The brother-in-law’s bad Volvo experience is a single data point swamped by tens of thousands of good experiences— it’s an unusual outlier. But we are social creatures. We are easily swayed by first-person stories and vivid accounts of a single experience. Although this is statistically wrong and we should learn to overcome the bias, most of us don’t. Advertisers know this, and this is why we see so many first-person testimonial advertisements on TV. “I lost twenty pounds in two weeks by eating this new yogurt— and it was delicious, too!” Or “I had a headache that wouldn’t go away. I was barking at the dog and snapping at my loved ones. Then I took this new medication and I was back to my normal self.” Our brains focus on vivid, social accounts more than dry, boring, statistical accounts.
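The “swamped” claim is simple arithmetic. With made-up numbers (10,000 survey responses averaging 4.5 out of 5, plus the brother-in-law’s single terrible report), a quick sketch shows how little one outlier moves the mean:

```python
# Made-up numbers showing how little one outlier moves a large-sample mean.
n_reports = 10_000
avg_rating = 4.5        # hypothetical mean reliability rating, out of 5
brother_in_law = 1.0    # one vivid, terrible experience

new_avg = (avg_rating * n_reports + brother_in_law) / (n_reports + 1)
print(f"before: {avg_rating:.4f}  after: {new_avg:.4f}")
# before: 4.5000  after: 4.4997
```

Statistically, the story should move our estimate by a few ten-thousandths of a point; emotionally, it moves us clear across the showroom.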

So not only has knowledge become easier to access than ever before (frictionless), but as it becomes more available our brains must cope with it, and they do so in ways that magnify our pre-existing cognitive biases.

illusions

In Roger Shepard’s version of the famous “Ponzo illusion,” the monster at the top seems larger than the one at the bottom, but a ruler will show that they’re the same size. In the Ebbinghaus illusion below it, the white circle on the left seems larger than the white circle on the right, but they’re the same size. We say that our eyes are playing tricks on us, but in fact, our eyes aren’t playing tricks on us, our brain is. The visual system uses heuristics or shortcuts to piece together an understanding of the world, and it sometimes gets things wrong.

We are prone to cognitive illusions when we make decisions; the same kinds of shortcuts are at play.

The Organized Mind: Thinking Straight in the Age of Information Overload is a wholly fascinating look at our minds.

Brené Brown: The Power of Vulnerability

In this TED talk, Brené Brown, who studies vulnerability, explores how we can live a more meaningful life.

Brown went back to the research and spent years trying to understand what choices whole-hearted people, who live from a deep sense of worthiness, were making. “What are we doing with vulnerability? Why do we struggle with it so much? Am I alone in struggling with vulnerability?” Here is what she learned:

We numb vulnerability — when we’re waiting for the call. It was funny, I sent something out on Twitter and on Facebook that says, “How would you define vulnerability? What makes you feel vulnerable?” And within an hour and a half, I had 150 responses. Because I wanted to know what’s out there. Having to ask my husband for help because I’m sick, and we’re newly married; initiating sex with my husband; initiating sex with my wife; being turned down; asking someone out; waiting for the doctor to call back; getting laid off; laying off people — this is the world we live in. We live in a vulnerable world. And one of the ways we deal with it is we numb vulnerability.

[...]

One of the things that I think we need to think about is why and how we numb. And it doesn’t just have to be addiction. The other thing we do is we make everything that’s uncertain certain. Religion has gone from a belief in faith and mystery to certainty. I’m right, you’re wrong. Shut up. That’s it. Just certain. The more afraid we are, the more vulnerable we are, the more afraid we are. This is what politics looks like today. There’s no discourse anymore. There’s no conversation. There’s just blame. You know how blame is described in the research? A way to discharge pain and discomfort. We perfect. If there’s anyone who wants their life to look like this, it would be me, but it doesn’t work. Because what we do is we take fat from our butts and put it in our cheeks. (Laughter) Which just, I hope in 100 years, people will look back and go, “Wow.”

And we perfect, most dangerously, our children. Let me tell you what we think about children. They’re hardwired for struggle when they get here. And when you hold those perfect little babies in your hand, our job is not to say, “Look at her, she’s perfect. My job is just to keep her perfect — make sure she makes the tennis team by fifth grade and Yale by seventh grade.” That’s not our job. Our job is to look and say, “You know what? You’re imperfect, and you’re wired for struggle, but you are worthy of love and belonging.” That’s our job. Show me a generation of kids raised like that, and we’ll end the problems I think that we see today. We pretend that what we do doesn’t have an effect on people. We do that in our personal lives. We do that corporate — whether it’s a bailout, an oil spill, a recall — we pretend like what we’re doing doesn’t have a huge impact on other people. I would say to companies, this is not our first rodeo, people. We just need you to be authentic and real and say, “We’re sorry. We’ll fix it.”

But there’s another way, and I’ll leave you with this. This is what I have found: to let ourselves be seen, deeply seen, vulnerably seen; to love with our whole hearts, even though there’s no guarantee — and that’s really hard, and I can tell you as a parent, that’s excruciatingly difficult — to practice gratitude and joy in those moments of terror, when we’re wondering, “Can I love you this much? Can I believe in this this passionately? Can I be this fierce about this?” just to be able to stop and, instead of catastrophizing what might happen, to say, “I’m just so grateful, because to feel this vulnerable means I’m alive.”

Still curious? Brown is the author of Daring Greatly: How the Courage to Be Vulnerable Transforms the Way We Live, Love, Parent, and Lead. I think this talk also ties in nicely to True Refuge: Finding Peace and Freedom in Your Own Awakened Heart.

The Power of Noticing: What the Best Leaders See

The Power of Noticing, Max Bazerman

In The Power of Noticing: What the Best Leaders See, Harvard professor Max Bazerman examines how the failure to notice things leads to “poor personal decisions, organizational crises, and societal disasters.” He walks us through each of these, highlighting recent research and how it bears on information we’re prone to ignore. Bazerman presents a blueprint to help us become more aware of critical information that we would otherwise have ignored. It prompts us to ask the questions typically found in hindsight but rarely in foresight: “How could that have happened?” and “Why didn’t I see it coming?”

Even the best of us fail to notice things, including critical and readily available information in our environment, “due to the human tendency to wear blinders that focus us on a limited set of information.” This additional information, however, is essential to success, and Bazerman argues that “in the future it will prove a defining quality of leadership.”

Noticing is a System 2 process.

In his best-selling book from 2011, Thinking, Fast and Slow, Nobel laureate Daniel Kahneman discusses Stanovich and West’s distinction between System 1 and System 2 thinking. System 1 is our intuitive system: it is quick, automatic, effortless, implicit, and emotional. Most of our decisions occur in System 1. By contrast, System 2 thinking is slower and more conscious, effortful, explicit, and logical. My colleague Dolly Chugh of New York University notes that the frantic pace of managerial life requires that executives typically rely on System 1 thinking. Readers of this book doubtless are busy people who depend on System 1 when making many decisions. Unfortunately we are generally more affected by biases that restrict our awareness when we rely on System 1 thinking than when we use System 2 thinking.

Noticing important information in contexts where many people do not is generally a System 2 process.

Logic and other strategic thinking tools, like game theory, are also generally System 2 thinking. They require that we step away from the heat of the moment and think a few steps ahead, imagining how others will respond. This is something that “system 1 intuition typically fails to do adequately.”
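To make “a few steps ahead” concrete, here is a toy backward-induction sketch; the game, the moves, and the payoffs are all invented for illustration:

```python
# Toy backward induction; the game, moves, and payoffs are invented.
# A node is either a numeric payoff (to me) or a dict of moves to subtrees.

def value(node, my_turn):
    """Evaluate a node by first evaluating the replies beneath it."""
    if isinstance(node, (int, float)):   # leaf: final payoff to me
        return node
    replies = [value(child, not my_turn) for child in node.values()]
    return max(replies) if my_turn else min(replies)

game = {
    "aggressive":  {"they_retaliate": -5, "they_fold": 10},
    "cooperative": {"they_cooperate": 6, "they_defect": 3},
}

best_move = max(game, key=lambda move: value(game[move], my_turn=False))
print(best_move)  # 'cooperative'
```

The tempting 10-point payoff under “aggressive” exists only if the opponent obliges; evaluating the tree from the leaves up, the step intuition tends to skip, reveals that a best-responding opponent never will.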

So a lot of what Bazerman spends time on is moving toward System 2 thinking when making important judgments.

When you do so, you will find yourself noticing more pertinent information from your environment than you would have otherwise. Noticing what is not immediately in front of you is often counterintuitive and the province of System 2. Here, then, is the purpose and promise of this book: your broadened perspective as a result of System 2 thinking will guide you toward more effective decisions and fewer disappointments.

Rejecting What’s Available

Often the best decisions require that you look beyond what’s available and reject the presented options. Bazerman didn’t always think this way; he needed some help from his colleague Richard Zeckhauser. At a recent talk, Zeckhauser presented the audience with the “Cholesterol Problem.”

Your doctor has discovered that you have a high cholesterol level, namely 260. She prescribes one of many available statin drugs. She says this will generally drop your cholesterol about 30 percent. There may be side effects. Two months later you return to your doctor. Your cholesterol level is now at 195. Your only negative side effect is sweaty palms, which you experience once or twice a week for one or two hours. Your doctor asks whether you can live with this side effect. You say yes. She tells you to continue on the medicine. What do you say?

Bazerman, who has naturally problematic lipids, has a wide body of knowledge on the subject and isn’t known for his shyness. He went with the statin.

Zeckhauser responded, “Why don’t you try one of the other statins instead?” I immediately realized that he was probably right. Rather than focusing on whether or not to stay on the current statin, broadening the question to include the option of trying other statins makes a great deal of sense. After all, there may well be equally effective statins that don’t cause sweaty palms or any other side effects. My guess is that many patients err by accepting one of two options that a doctor presents to them. It is easy to get stuck on an either/or choice, which I … fell victim to at Zeckhauser’s lecture. I made the mistake of accepting the choice as my colleague presented it. I could have and should have asked what all of the options were. But I didn’t. I too easily accepted the choice presented to me.

The Power of Noticing: What the Best Leaders See opens your eyes to what you’re missing.