
Choosing your Choice Architect(ure)

“Nothing will ever be attempted
if all possible objections must first be overcome.”

— Samuel Johnson

***

In Nudge, Richard Thaler and Cass Sunstein coin the terms ‘Choice Architecture’ and ‘Choice Architect’. For them, if you have the ability to influence the choices other people make, you are a choice architect.

Considering the number of interactions we have every day, it would be quite easy to argue that we are all Choice Architects at some point. The inverse is also true: we are constantly wandering around in someone else’s Choice Architecture.

Let’s take a look at a few of the principles of good choice architecture, so we can get a better idea of when someone is trying to nudge us.

We can then weigh this information when making decisions.

Defaults

Thaler and Sunstein start with a discussion on “defaults” that are commonly offered to us:

For reasons we have discussed, many people will take whatever option requires the least effort, or the path of least resistance. Recall the discussion of inertia, status quo bias, and the ‘yeah, whatever’ heuristic. All these forces imply that if, for a given choice, there is a default option — an option that will obtain if the chooser does nothing — then we can expect a large number of people to end up with that option, whether or not it is good for them. And as we have also stressed, these behavioral tendencies toward doing nothing will be reinforced if the default option comes with some implicit or explicit suggestion that it represents the normal or even the recommended course of action.

When making decisions, people will often take the option that requires the least effort, the path of least resistance. This makes sense: it’s not just a matter of laziness; we also only have so many hours in a day. Unless you feel particularly strongly about it, if putting little to no effort towards something leads you forward (or at least doesn’t noticeably kick you backwards), this is what you are likely to do. Loss aversion plays a role as well. If we feel the consequences of making a poor choice are high, we will simply decide to do nothing.

Inertia is another reason: If the ship is currently sailing forward, it can often take a lot of time and effort just to slightly change course.

You have likely seen many examples of inertia at play in your work environment, and this isn’t necessarily a bad thing.

Sometimes we need that ship to just steadily move forward. The important bit is to realize when this is factoring into your decisions, or more specifically, when this knowledge is being used to nudge you into making specific choices.

Let’s think about some of your monthly recurring bills. While you might not be reading that magazine or going to the gym, you’re still paying for the ability to use that good or service. If you weren’t being auto-renewed monthly, what is the chance that you would put the effort into renewing that subscription or membership? Much lower, right? Publishers and gym owners know this, and they know you don't want to go through the hassle of cancelling either, so they make that difficult, too. (They understand well our tendency to want to travel the path of least resistance and avoid conflict.)

This is also where they will imply that the default option is the recommended course of action. It sounds like this:

“We’re sorry to hear you no longer want the magazine, Mr. Smith. You know, more than half of the Fortune 500 companies have a monthly subscription to magazine X, but we understand if it’s not something you’d like to do at the moment.”

or

“Mr. Smith, we are sorry to hear that you want to cancel your membership at GymX. We understand if you can’t make your health a priority at this point, but we’d love to see you back sometime soon. We see this all the time; these days everyone is so busy. But I’m happy to say we are noticing a shift where people are starting to make time for themselves, especially in your demographic…”

(Just cancel them. You’ll feel better. We promise.)

The Structure of Complex Choices

We live in a world of reviews. Product reviews, corporate reviews, movie reviews… When was the last time you bought a phone or a car before checking the reviews? When was the last time that you hired an employee without checking out their references? 

Thaler and Sunstein call this Collaborative Filtering and explain it as follows:

You use the judgements of other people who share your tastes to filter through the vast number of books or movies available in order to increase the likelihood of picking one you like. Collaborative filtering is an effort to solve a problem of choice architecture. If you know what people like you tend to like, you might well be comfortable in selecting products you don’t know, because people like you tend to like them. For many of us, collaborative filtering is making difficult choices easier.

While collaborative filtering does a great job of making difficult choices easier, we have to remember that companies know we will use this tool and will try to manipulate it. We just have to look at the information critically, compare multiple sources, and take some time to review the reviewers.
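
To make the idea concrete, here is a minimal sketch of collaborative filtering in Python. The people, books, and ratings are invented purely for illustration; real systems (Amazon, Netflix, Google) run far more sophisticated versions of the same idea.

```python
# A tiny collaborative filter: recommend items liked by people whose past
# ratings most resemble yours. All names and ratings are made up.

ratings = {
    "you":   {"Book A": 5, "Book B": 2, "Book C": 4},
    "alice": {"Book A": 5, "Book B": 1, "Book C": 4, "Book D": 5},
    "bob":   {"Book A": 1, "Book B": 5, "Book C": 2, "Book D": 1},
}

def similarity(a, b):
    """Closer to 1 when two people rate the items they share alike."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    avg_gap = sum(abs(a[i] - b[i]) for i in shared) / len(shared)
    return 1 - avg_gap / 4  # ratings run 1..5, so the largest possible gap is 4

def recommend(target, ratings):
    """Score unseen items, weighting each rater by similarity to the target."""
    scores = {}
    for person, theirs in ratings.items():
        if person == target:
            continue
        w = similarity(ratings[target], theirs)
        for item, r in theirs.items():
            if item not in ratings[target]:
                scores[item] = scores.get(item, 0) + w * r
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("you", ratings))  # ['Book D'], driven by alice, who rates like you
```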

These techniques are useful for decisions of a certain scale and complexity: when the alternatives are understood and few enough in number. Once the options multiply past that point, we need additional tools to make the right decision.

One strategy to use is what Amos Tversky (1972) called ‘elimination by aspects.’ Someone using this strategy first decides what aspect is most important (say, commuting distance), establishes a cutoff level (say, no more than a thirty-minute commute), then eliminates all the alternatives that do not come up to this standard. The process is repeated, attribute by attribute (no more than $1,500 per month; at least two bedrooms; dogs permitted), until either a choice is made or the set is narrowed down enough to switch over to a compensatory evaluation of the ‘finalists.’

This is a very useful tool if you have a good idea of which attributes are of most value to you.
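
As a rough illustration of how mechanical this strategy can be, here is a small Python sketch of elimination by aspects applied to the apartment hunt from the quote above. The listings, the cutoffs, and their ordering are all invented assumptions.

```python
# Elimination by aspects (Tversky, 1972), sketched on made-up apartment
# listings. Screen on the most important attribute first, then the next,
# until one finalist remains or the short list is small enough to compare.

apartments = [
    {"name": "Elm St",  "commute_min": 20, "rent": 1400, "bedrooms": 2, "dogs_ok": True},
    {"name": "Oak Ave", "commute_min": 45, "rent": 1200, "bedrooms": 2, "dogs_ok": True},
    {"name": "Main St", "commute_min": 25, "rent": 1600, "bedrooms": 3, "dogs_ok": False},
    {"name": "Pine Rd", "commute_min": 28, "rent": 1450, "bedrooms": 1, "dogs_ok": True},
]

# Aspects in order of importance, each expressed as a pass/fail cutoff.
aspects = [
    lambda a: a["commute_min"] <= 30,  # no more than a thirty-minute commute
    lambda a: a["rent"] <= 1500,       # no more than $1,500 per month
    lambda a: a["bedrooms"] >= 2,      # at least two bedrooms
    lambda a: a["dogs_ok"],            # dogs permitted
]

candidates = apartments
for passes in aspects:
    remaining = [a for a in candidates if passes(a)]
    if not remaining:        # if a cutoff would eliminate everyone, stop and keep the prior finalists
        break
    candidates = remaining
    if len(candidates) == 1:  # a single finalist: the choice is made
        break

print([a["name"] for a in candidates])  # ['Elm St']
```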

When using these techniques, we have to be mindful of the fact that the companies trying to sell us goods have spent a lot of time and money figuring out which attributes are important to us as well.

For example, if you were to shop for an SUV, you would notice a specific set of attributes they all seem to have in common now (engine options, towing options, seating options, storage options). The manufacturers are trying to nudge you not to eliminate them from your list. This forces you to do additional research or, better yet for them, to walk into dealerships, where the salespeople will try to inflate the importance of those attributes (which they do best).

They also give things new names as a means to differentiate themselves and get onto your list. What do you mean, our competitors don’t have FLEXfuel?

Incentives

Incentives are so ubiquitous in our lives that it’s very easy to overlook them. Unfortunately, this can influence us to make poor decisions.

Thaler and Sunstein believe this is tied to how salient the incentive is.

The most important modification that must be made to a standard analysis of incentives is salience. Do the choosers actually notice the incentives they face? In free markets, the answer is usually yes, but in important cases the answer is no.

Consider the example of members of an urban family deciding whether to buy a car. Suppose their choices are to take taxis and public transportation or to spend ten thousand dollars to buy a used car, which they can park on the street in front of their home. The only salient costs of owning this car will be the weekly stops at the gas station, occasional repair bills, and a yearly insurance bill. The opportunity cost of the ten thousand dollars is likely to be neglected. (In other words, once they purchase the car, they tend to forget about the ten thousand dollars and stop treating it as money that could have been spent on something else.) In contrast, every time the family uses a taxi the cost will be in their face, with the meter clicking every few blocks. So behavioral analysis of the incentives of car ownership will predict that people will underweight the opportunity costs of car ownership, and possibly other less salient aspects such as depreciation, and may overweight the very salient costs of using a taxi.

The problems here are relatable and easily solved: if the family above had written down all the numbers for taxis, public transportation, and car ownership, it would have been a lot more difficult for them to undervalue the less salient costs of any of their choices (at least if the highest-value attribute is cost).
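
A back-of-the-envelope tally shows why writing the numbers down helps. Every figure below is a made-up assumption; the point is only that once the purchase price and the fare are on the same page, the less salient costs can no longer hide.

```python
# Hypothetical yearly tally for the family in the quote. All numbers are
# assumptions for illustration, not figures from the book.

years_kept = 5

car_costs = {
    "purchase, spread over 5 years": 10_000 / years_kept,
    "gas (weekly fill-ups)":         40 * 52,
    "repairs (occasional)":          600,
    "insurance (yearly bill)":       1_200,
}

no_car_costs = {
    "taxis (the very salient meter)": 25 * 52,
    "transit passes (monthly)":       80 * 12,
}

print("car:   ", sum(car_costs.values()))     # 5,880 per year under these assumptions
print("no car:", sum(no_car_costs.values()))  # 2,260 per year under these assumptions
```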

***

This isn’t an exhaustive list of all the daily nudges we face, but it’s a good start, and some important, translatable themes emerge:

  • Realize when you are wandering around someone’s choice architecture.
  • Do your homework.
  • Develop strategies to help you make decisions when you are being nudged.

 

Still Interested? Buy, and most importantly read, the whole book. Also, check out our other post on some of the Biases and Blunders covered in Nudge.

Biases and Blunders

Nudge: Improving Decisions About Health, Wealth, and Happiness

You would be hard-pressed to come across a reading list on behavioral economics that doesn’t mention Nudge: Improving Decisions About Health, Wealth, and Happiness by Richard Thaler and Cass Sunstein.

It is a fascinating look at how we can create environments or ‘choice architecture’ to help people make better decisions. But one of the reasons it’s been so influential is because it helps us understand why people sometimes make bad decisions in the first place. If we really want to understand how we can nudge people into making better choices, it’s important to understand why they often make such poor ones.

Let’s take a look at how Thaler and Sunstein explain some of our common mistakes in a chapter aptly called ‘Biases and Blunders.’

Anchoring and Adjustment

Humans have a tendency to put too much emphasis on one piece of information when making decisions. When we overweight one piece of information and make assumptions based on it, we call that an anchor. Say I borrow a 400-page book from a friend and I think to myself: the last book I read was about 300 pages and I read it in 5 days, so I’ll let my friend know I’ll have her book back in 7 days. The problem is, I’ve compared only one factor related to my reading and made a decision without taking into account many other factors that could affect the outcome. For example, is the new book on a topic I will digest at the same rate? Will I have the same time for reading over those 7 days? I have looked at the number of pages, but is the number of words per page similar?

As Thaler and Sunstein explain:

This process is called ‘anchoring and adjustment.' You start with some anchor, the number you know, and adjust in the direction you think is appropriate. So far, so good. The bias occurs because the adjustments are typically insufficient.

Availability Heuristic

This is the tendency of our mind to overweight information that is recent and readily available. What did you think about the last time you read about a plane crash? Did you start thinking about being in a plane crash yourself? Imagine how much it would weigh on your mind if you were set to fly the next day.

We assess the likelihood of risks by asking how readily examples come to mind. If people can easily think of relevant examples, they are far more likely to be frightened and concerned than if they cannot.

Accessibility and salience are closely related to availability, and they are important as well. If you have personally experienced a serious earthquake, you're more likely to believe that an earthquake is likely than if you read about it in a weekly magazine. Thus, vivid and easily imagined causes of death (for example, tornadoes) often receive inflated estimates of probability, and less-vivid causes (for example, asthma attacks) receive low estimates, even if they occur with a far greater frequency (here, by a factor of twenty). Timing counts too: more recent events have a greater impact on our behavior, and on our fears, than earlier ones.

Representativeness Heuristic

Use of the representativeness heuristic can cause serious misperceptions of patterns in everyday life. When events are determined by chance, such as a sequence of coin tosses, people expect the resulting string of heads and tails to be representative of what they think of as random. Unfortunately, people do not have accurate perceptions of what random sequences look like. When they see the outcomes of random processes, they often detect patterns that they think have great meaning but in fact are just due to chance.

It would seem as though we have issues with randomness. Our brains automatically want to see patterns when none may exist. Try a coin-toss experiment on yourself. Simply flip a coin and keep track of whether it comes up heads or tails. At some point you will hit ‘a streak’ of either heads or tails, and you will notice that you experience a sort of cognitive dissonance: you know that a streak is statistically probable at some point, but you can’t help thinking the next toss has to break it, because for some reason in your head it’s not right. That unwillingness to accept randomness, our need for a pattern, often clouds our judgement when making decisions.
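
If you would rather not flip a real coin, a few lines of Python run the same experiment. The number of flips and the streak length below are arbitrary choices, just there to show that long streaks are the norm in genuinely random sequences.

```python
# Simulate coin flips and record the longest run of identical outcomes.

import random

def longest_streak(n_flips):
    flips = [random.choice("HT") for _ in range(n_flips)]
    best = run = 1
    for prev, cur in zip(flips, flips[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

trials = [longest_streak(100) for _ in range(1_000)]
print(sum(t >= 6 for t in trials) / len(trials))  # roughly 0.8: a streak of 6+ in 100 flips is expected
```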

Unrealistic Optimism

We have touched upon optimism bias in the past. Optimism truly is a double-edged sword. On one hand, it is extremely important to be able to look past a bad moment and tell yourself that it will get better. Optimism is one of the great drivers of human progress.

On the other hand, if you never take those rose-coloured glasses off, you will make mistakes and take risks that could have been avoided. When assessing the possible negative outcomes associated with risky behaviour, we often think ‘it won’t happen to me.’ This is a brain trick: we are often insensitive to the base rate.

Unrealistic optimism is a pervasive feature of human life; it characterizes most people in most social categories. When they overestimate their personal immunity from harm, people may fail to take sensible preventive steps. If people are running risks because of unrealistic optimism, they might be able to benefit from a nudge.

Loss Aversion

When they have to give something up, they are hurt more than they are pleased if they acquire the very same thing.

We are familiar with loss aversion in the context described above, but Thaler and Sunstein take the concept a step further and explain how it plays a role in ‘default choices.’ Loss aversion can make us so fearful of making the wrong decision that we don’t make any decision. This explains why so many people settle for default options.

The combination of loss aversion with mindless choosing implies that if an option is designated as the ‘default,' it will attract a large market share. Default options thus act as powerful nudges. In many contexts defaults have some extra nudging power because consumers may feel, rightly or wrongly, that default options come with an implicit endorsement from the default setter, be it the employer, government, or TV scheduler.

Of course, this is not the only reason default options are so popular. “Anchoring,” which we mentioned above, plays a role here: our mind anchors immediately to the default option, especially in territory that is unfamiliar to us.

We also have a tendency towards inertia, given that mental effort is akin to physical effort – thinking hard requires physical resources. If we don't know the difference between two 401(k) plans and they both seem similar, why expend the mental effort to switch away from the default investment option? You may not have that thought consciously; it often happens as a “click, whirr.”

State of Arousal

Our preferred definition requires recognizing that people's state of arousal varies over time. To simplify things we will consider just the two endpoints: hot and cold. When Sally is very hungry and appetizing aromas are emanating from the kitchen, we can say she is in a hot state. When Sally is thinking abstractly on Tuesday about the right number of cashews she should consume before dinner on Saturday, she is in a cold state. We will call something ‘tempting' if we consume more of it when hot than when cold. None of this means that decisions made in a cold state are always better. For example, sometimes we have to be in a hot state to overcome our fears about trying new things. Sometimes dessert really is delicious, and we do best to go for it. Sometimes it is best to fall in love. But it is clear that when we are in a hot state, we can often get into a lot of trouble.

For most of us, however, self-control issues arise because we underestimate the effect of arousal. This is something the behavioral economist George Loewenstein (1996) calls the ‘hot-cold empathy gap.' When in a cold state, we do not appreciate how much our desires and our behavior will be altered when we are ‘under the influence' of arousal. As a result, our behavior reflects a certain naivete about the effects that context can have on choice.

The concept of arousal is analogous to mood. At the risk of stating the obvious, our mood can play a decisive role in our decision making. We all know it, but how many among us truly use that insight to make better decisions?

This is one reason we advocate decision journals when it comes to meaningful decisions (probably no need to log your cashew calculations); a big part of tracking your decisions is noting your mood when you make them. A zillion contextual clues go into your state of arousal, but taking a quick pause to note which state you're in as you make a decision can make a difference over time.

Mood is also affected by chemicals. This one may be familiar to you coffee (or tea) addicts out there. Do you recall the last time you felt terrible or uncertain about a decision when you were tired, only to feel confident and spunky about the same topic after a cup of java?

Or, how about alcohol? There's a reason it's called a “social lubricant” – our decision making changes when we've consumed enough of it.

Lastly, the connection between sleep and mood goes deep. Need we say more?

Peer Pressure

Peer pressure is another tricky nudge that can be either positive or negative. We can be nudged to make better decisions when we think that our peer group is doing the same. If we think our neighbors conserve more energy or recycle more, we start making a better effort to reduce our consumption and recycle. If we think the people around us are eating better and exercising more, we tend to do the same. Information we get from peer groups can also help us make better decisions because of ‘collaborative filtering': the choices of our peer groups help us filter out and narrow down our choices. If friends who share your views and tastes recommend book X, then you may like it as well. (Google, Amazon, and Netflix are built on this principle.)

However, if we are all reading the same book because we constantly see people with it, but none of us actually like it, then we all lose. We run off the mountain with the other lemmings.

Social influences come in two basic categories. The first involves information. If many people do something or think something, their actions and their thoughts convey information about what might be best for you to do or think. The second involves peer pressure. If you care about what other people think about you (perhaps in the mistaken belief that they are paying some attention to what you are doing), then you might go along with the crowd to avoid their wrath or curry their favor.

An important problem here is ‘pluralistic ignorance' – that is, ignorance, on the part of all or most, about what other people think. We may follow a practice or a tradition not because we like it, or even think it defensible, but merely because we think that most other people like it. Many social practices persist for this reason, and a small shock, or nudge, can dislodge them.

How do we beat social influence? It's very difficult, and not always desirable: if you are about to enter a building that a lot of people are running away from, there's a better than good chance you should run away too. But this useful instinct can also lead us astray.

A simple algorithm, when you feel yourself acting out of social proof, is to ask yourself: would I still do this if everyone else were not?

***

For more, check out Nudge.

How Situations Influence Decisions

Michael Mauboussin, the first guest on my podcast, The Knowledge Project, explains how our situations influence our decisions enormously in Think Twice: Harnessing the Power of Counterintuition.

Mistakes born out of situations are difficult to avoid, in part because the influences on us are operating at a subconscious level. “Making good decisions in the face of subconscious pressure,” Mauboussin writes, “requires a very high degree of background knowledge and self-awareness.”

How do you feel when you read the word “treasure”? Do you feel good? What images come to mind? If you are like most people, just ruminating on “treasure” gives you a little lift. Our minds naturally make connections and associate ideas. So if someone introduces a cue to you— a word, a smell, a symbol— your mind often starts down an associative path. And you can be sure the initial cue will color a decision that waits at the path’s end. All this happens outside of your perception.

People around us also influence our decisions, often with good reason. Social influence arises for a couple of reasons. The first is asymmetric information, a fancy phrase meaning someone knows something you don’t. In those cases, imitation makes sense because the information upgrade allows you to make better decisions.

Peer pressure, or the desire to be part of the in-group, is a second source of social influence. For good evolutionary reasons, humans like to be part of a group— a collection of interdependent individuals— and naturally spend a good deal of time assessing who is “in” and who is “out.” Experiments in social psychology have repeatedly confirmed this.

We explain behavior based on an individual's choices and disposition and not the situation. That is, we associate bad behavior with the person and not the situation. Unless, of course, we're talking about ourselves. This is “the fundamental attribution error,” a phrase coined by Lee Ross, a social psychologist at Stanford University.

There are two sides to this sword, as the power of situations can work for good and for evil. “Some of the greatest atrocities known to mankind,” Mauboussin writes, “resulted from putting normal people into bad situations.”

We believe our choices are independent of circumstance; the evidence, however, points in another direction.

***

Some Wine With Your Music?

Consider how something as simple as the music playing in a store influences what wine we purchase.

Imagine strolling down the supermarket aisle and coming upon a display of French and German wines, roughly matched for price and quality. You do some quick comparisons, place a German wine in your cart, and continue shopping. After you check out, a researcher approaches and asks why you bought the German wine. You mention the price, the wine’s dryness, and how you anticipate it will go nicely with a meal you are planning. The researcher then asks whether you noticed the German music playing and whether it had any bearing on your decision. Like most, you would acknowledge hearing the music and avow that it had nothing to do with your selection.

But this isn't a hypothetical, it's an actual study and the results affirm that the environment influences our decisions.

In this test, the researchers placed the French and German wines next to each other, along with small national flags. Over two weeks, the scientists alternated playing French accordion music and German Bierkeller pieces and watched the results. When French music played, French wines represented 77 percent of the sales. When German music played, consumers selected German wines 73 percent of the time. (See the image below.) The music made a huge difference in shaping purchases. But that’s not what the shoppers thought.

While the customers acknowledged that the music made them think of either France or Germany, 86 percent denied the tunes had any influence on their choice.

[Image: wine sales under French vs. German music]

This is an example of priming, which psychologists formally define as “the incidental activation of knowledge structures by the current situational context.”1 Priming happens all the time. For priming to be most effective, it must have a strong connection to our situation's goals.

Another example of how situations influence us is the default. In a fast-moving world of non-stop bits and bytes, the default is the path of least resistance; it's the System 1 option. Moving away from the default is labor-intensive for our brains. Studies have repeatedly shown that most people go with defaults.

This applies to a wide array of choices, from insignificant issues like the ringtone on a new cell phone to consequential issues like financial savings, educational choice, and medical alternatives. Richard Thaler, an economist, and Cass Sunstein, a law professor, call the relationship between choice presentation and the ultimate decision “choice architecture.” They convincingly argue that we can easily nudge people toward a particular decision based solely on how we arrange the choices for them.

One context for decision making is how choices are structured. Knowing that many people opt for the default option, we can influence (for better or worse) large groups of people.

Mauboussin relates a story about a prominent psychologist popular on the speaking circuit that “underscores how underappreciated choice architecture remains.”

When companies call to invite him to speak, he offers them two choices. Either they can pay him his set fee and get a standard presentation, or they can pay him nothing in exchange for the opportunity to work with him on an experiment to improve choice architecture (e.g., redesign a form or Web site). Of course, the psychologist benefits by getting more real-world results on choice architecture, but it seems like a pretty good deal for the company as well, because an improved architecture might translate into financial benefits vastly in excess of his speaking fee. He noted ruefully that so far not one company has taken him up on his experiment offer.

(As a brief aside, I engage in public speaking on a fairly regular basis. I've toyed with similar ideas. Once I even went as far as offering to speak for no pre-set fee, only “value added” as judged by the client. They opted for the fee.)

Another great example of how environments affect behavior is Stanley Milgram's famous experiment on obedience to authority. “Ordinary people, simply doing their jobs, and without any particular hostility on their part, can become agents in a terrible destructive process,” Milgram wrote. The Stanford Prison Experiment is yet another example.

***

Situations are generally more powerful than we think

The key point is that situations are generally more powerful than we think, and we can do things to resist the pull of “unwelcome social influence.”

Mauboussin offers four tips:

1. Be aware of your situation.

You can think of this in two parts. There is the conscious element, where you can create a positive environment for decision making in your own surroundings by focusing on process, keeping stress to an acceptable level, being a thoughtful choice architect, and making sure to diffuse the forces that encourage negative behaviors.

Then there is coping with the subconscious influences. Control over these influences requires awareness of the influence, motivation to deal with it, and the willingness to devote attention to address possible poor decisions. In the real world, satisfying all three control conditions is extremely difficult, but the path starts with awareness.

2. Consider the situation first and the individual second.

This concept, called attributional charity, insists that you evaluate the decisions of others by starting with the situation and then turning to the individuals, not the other way around. While easier for Easterners than Westerners, most of us consistently underestimate the role of the situation in assessing the decisions we see others make. Try not to make the fundamental attribution error.

3. Watch out for the institutional imperative.

Warren Buffett, the celebrated investor and chairman of Berkshire Hathaway, coined the term institutional imperative to explain the tendency of organizations to “mindlessly” imitate what peers are doing. There are typically two underlying drivers of the imperative. First, companies want to be part of the in-group, much as individuals do. So if some companies in an industry are doing mergers, chasing growth, or expanding geographically, others will be tempted to follow. Second are incentives. Executives often reap financial rewards by following the group. When decision makers make money from being part of the crowd, the draw is nearly inescapable.

One example comes from a Financial Times interview with the former chief executive officer of Citigroup Chuck Prince in 2007, before the brunt of the financial crisis. “When the music stops, things will be complicated,” offered Prince, demonstrating that he had some sense of what was to come. “But as long as the music is playing, you’ve got to get up and dance.” The institutional imperative is rarely a good dance partner.

4. Avoid inertia.

Periodically revisit your processes and ask whether they are serving their purpose. Organizations sometimes adopt routines and structures that become crystallized, impeding positive change. Efforts to reform education in the United States, for example, have been met with resistance from teachers and administrators who prefer the status quo.

We like to think that we're better than the situation, that we follow the decision-making process and rationally weigh the facts, consider alternatives, and determine the best course of action. Others are easily influenced; we are not. This is where we're wrong.

Decision making is fundamentally a social exercise, something I cover in my Re:Think Decision Making workshop.

1. “Automaticity of Social Behavior: Direct Effects of Trait Construction and Stereotype Activation on Action”

A Discussion on the Work of Daniel Kahneman

Edge.org asked the likes of Christopher Chabris, Nicholas Epley, Jason Zweig, William Poundstone, Cass Sunstein, Phil Rosenzweig, Richard Thaler & Sendhil Mullainathan, Nassim Nicholas Taleb, Steven Pinker, and Rory Sutherland among others: “How has Kahneman's work influenced your own? What step did it make possible?”

Kahneman's work is summarized in the international best-seller Thinking, Fast and Slow.

Here are some select excerpts that I found interesting.

Christopher Chabris (author of The Invisible Gorilla)

There's an overarching lesson I have learned from the work of Danny Kahneman, Amos Tversky, and their colleagues who collectively pioneered the modern study of judgment and decision-making: Don't trust your intuition.

Jennifer Jacquet

After what I see as years of hard work, experiments of admirable design, lucid writing, and quiet leadership, Kahneman, a man who spent the majority of his career in departments of psychology, earned the highest prize in economics. This was a reminder that some of the best insights into economic behavior could be (and had been) gleaned outside of the discipline.

Jason Zweig (author of Your Money and Your Brain)

… nothing amazed me more about Danny than his ability to detonate what we had just done.

Anyone who has ever collaborated with him tells a version of this story: You go to sleep feeling that Danny and you had done important and incontestably good work that day. You wake up at a normal human hour, grab breakfast, and open your email. To your consternation, you see a string of emails from Danny, beginning around 2:30 a.m. The subject lines commence in worry, turn darker, and end around 5 a.m. expressing complete doubt about the previous day's work.

You send an email asking when he can talk; you assume Danny must be asleep after staying up all night trashing the chapter. Your cellphone rings a few seconds later. “I think I figured out the problem,” says Danny, sounding remarkably chipper. “What do you think of this approach instead?”

The next thing you know, he sends a version so utterly transformed that it is unrecognizable: It begins differently, it ends differently, it incorporates anecdotes and evidence you never would have thought of, it draws on research that you've never heard of. If the earlier version was close to gold, this one is hewn out of something like diamond: The raw materials have all changed, but the same ideas are somehow illuminated with a sharper shift of brilliance.

The first time this happened, I was thunderstruck. How did he do that? How could anybody do that? When I asked Danny how he could start again as if we had never written an earlier draft, he said the words I've never forgotten: “I have no sunk costs.”

William Poundstone (author of Are Your Smart Enough To Work At Google?)

As a writer of nonfiction I'm often in the position of trying to connect the dots—to draw grand conclusions from small samples. Do three events make a trend? Do three quoted sources justify a conclusion? Both are maxims of journalism. I try to keep in mind Kahneman and Tversky's Law of Small Numbers. It warns that small samples aren't nearly so informative, in our uncertain world, as intuition counsels.

Cass R. Sunstein (Author, Why Nudge?)

These ideas are hardly Kahneman’s most well-known, but they are full of implications, and we have only started to understand them.

1. The outrage heuristic. People’s judgments about punishment are a product of outrage, which operates as a shorthand for more complex inquiries that judges and lawyers often think relevant. When people decide about appropriate punishment, they tend to ask a simple question: How outrageous was the underlying conduct? It follows that people are intuitive retributivists, and also that utilitarian thinking will often seem uncongenial and even outrageous.

2. Scaling without a modulus. Remarkably, it turns out that people often agree on how outrageous certain misconduct is (on a scale of 1 to 8), but also remarkably, their monetary judgments are all over the map. The reason is that people do not have a good sense of how to translate their judgments of outrage onto the monetary scale. As Kahneman shows, some work in psychophysics explains the problem: People are asked to “scale without a modulus,” and that is an exceedingly challenging task. The result is uncertainty and unpredictability. These claims have implications for numerous questions in law and policy, including the award of damages for pain and suffering, administrative penalties, and criminal sentences.

3. Rhetorical asymmetry. In our work on jury awards, we found that deliberating juries typically produce monetary awards against corporate defendants that are higher, and indeed much higher, than the median award of the individual jurors before deliberation began. Kahneman’s hypothesis is that in at least a certain category of cases, those who argue for higher awards have a rhetorical advantage over those who argue for lower awards, leading to a rhetorical asymmetry. The basic idea is that in light of social norms, one side, in certain debates, has an inherent advantage – and group judgments will shift accordingly. A similar rhetorical asymmetry can be found in groups of many kinds, in both private and public sectors, and it helps to explain why groups move.

4. Predictably incoherent judgments. We found that when people make moral or legal judgments in isolation, they produce a pattern of outcomes that they would themselves reject, if only they could see that pattern as a whole. A major reason is that human thinking is category-bound. When people see a case in isolation, they spontaneously compare it to other cases that are mainly drawn from the same category of harms. When people are required to compare cases that involve different kinds of harms, judgments that appear sensible when the problems are considered separately often appear incoherent and arbitrary in the broader context. In my view, Kahneman’s idea of predictable incoherence has yet to be adequately appreciated; it bears on both fiscal policy and on regulation.

Phil Rosenzweig

For years, there were (as the old saying has it) two kinds of people: those relatively few of us who were aware of the work of Danny Kahneman and Amos Tversky, and the much more numerous who were not. Happily, the balance is now shifting, and more of the general public has been able to hear directly a voice that is in equal measures wise and modest.

Sendhil Mullainathan (Author of Scarcity: Why Having Too Little Means So Much)

… Kahneman and Tversky's early work opened this door exactly because it was not what most people think it was. Many think of this work as an attack on rationality (often defined in some narrow technical sense). That misconception still exists among many, and it misses the entire point of their exercise. Attacks on rationality had been around well before Kahneman and Tversky—many people recognized that the simplifying assumptions of economics were grossly over-simplifying. Of course humans do not have infinite cognitive abilities. We are also not as strong as gorillas, as fast as cheetahs, and cannot swim like sea lions. But we do not therefore say that there is something wrong with humans. That we have limited cognitive abilities is both true and no more helpful to doing good social science than to acknowledge our weakness as swimmers. Pointing it out did not open any new doors.

Kahneman and Tversky's work did not just attack rationality, it offered a constructive alternative: a better description of how humans think. People, they argued, often use simple rules of thumb to make judgments, which incidentally is a pretty smart thing to do. But this is not the insight that left us one step from doing behavioral economics. The breakthrough idea was that these rules of thumb could be catalogued. And once understood they can be used to predict where people will make systematic errors. Those two words are what made behavioral economics possible.

Nassim Taleb (Author of Antifragile)

Here is an insight Danny K. triggered that changed the course of my work. I figured out a nontrivial problem in randomness and its underestimation a decade ago while reading the following sentence in a paper by Kahneman and Miller from 1986:

A spectator at a weight lifting event, for example, will find it easier to imagine the same athlete lifting a different weight than to keep the achievement constant and vary the athlete's physique.

This idea of varying one side, not the other also applies to mental simulations of future (random) events, when people engage in projections of different counterfactuals. Authors and managers have a tendency to take one variable for fixed, sort-of a numeraire, and perturbate the other, as a default in mental simulations. One side is going to be random, not the other.

It hit me that the mathematical consequence is vastly more severe than it appears. Kahneman and colleagues focused on the bias that variable of choice is not random. But the paper set off in my mind the following realization: now what if we were to go one step beyond and perturbate both? The response would be nonlinear. I had never considered the effect of such nonlinearity earlier nor seen it explicitly made in the literature on risk and counterfactuals. And you never encounter one single random variable in real life; there are many things moving together.

Increasing the number of random variables compounds the number of counterfactuals and causes more extremes—particularly in fat-tailed environments (i.e., Extremistan): imagine perturbating by producing a lot of scenarios and, in one of the scenarios, increasing the weights of the barbell and decreasing the bodyweight of the weightlifter. This compounding would produce an extreme event of sorts. Extreme, or tail events (Black Swans) are therefore more likely to be produced when both variables are random, that is real life. Simple.

Now, in the real world we never face one variable without something else with it. In academic experiments, we do. This sets the serious difference between laboratory (or the casino's “ludic” setup), and the difference between academia and real life. And such difference is, sort of, tractable.

… Say you are the manager of a fertilizer plant. You try to issue various projections of the sales of your product—like the weights in the weightlifter's story. But you also need to keep in mind that there is a second variable to perturbate: what happens to the competition—you do not want them to be lucky, invent better products, or cheaper technologies. So not only you need to predict your fate (with errors) but also that of the competition (also with errors). And the variance from these errors add arithmetically when one focuses on differences.
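
A toy simulation makes the arithmetic in that last sentence visible: perturb only your own forecast, then perturb the competitor's forecast as well, and compare the spread of the difference you actually care about. All of the numbers below are invented for illustration.

```python
# Variance of a difference: with one side fixed, only your error matters;
# with both sides random, the independent errors add. Numbers are made up.

import random

N = 100_000
my_sales     = [100 + random.gauss(0, 10) for _ in range(N)]   # your forecast, with errors
rival_fixed  = 100                                             # competitor held fixed
rival_random = [100 + random.gauss(0, 10) for _ in range(N)]   # competitor also uncertain

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

print(variance([m - rival_fixed for m in my_sales]))              # ~100: only your error matters
print(variance([m - r for m, r in zip(my_sales, rival_random)]))  # ~200: independent errors add
```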

Rory Sutherland

When I met Danny in London in 2009 he diffidently said that the only hope he had for his work was that “it might lead to a better kind of gossip”—where people discuss each other's motivations and behaviour in slightly more intelligent terms. To someone from an industry where a new flavour-variant of toothpaste is presented as being an earth-changing event, this seemed an incredibly modest aspiration for such important work.

However, if this was his aim, he has surely succeeded. When I meet people, I now use what I call “the Kahneman heuristic”. You simply ask people “Have you read Danny Kahneman's book?” If the answer is yes, you know (p>0.95) that the conversation will be more interesting, wide-ranging and open-minded than otherwise.

And it then occurred to me that his aim—for better conversations—was perhaps not modest at all. Multiplied a millionfold it may be very important indeed. In the social sciences, I think it is fair to say, the good ideas are not always influential and the influential ideas are not always good. Kahneman's work is now both good and influential.