Tag: Behavioral Economics

Dan Ariely on How and Why We Cheat

Three years ago, Dan Ariely, a professor of psychology and behavioral economics at Duke, published The (Honest) Truth About Dishonesty: How We Lie to Everyone–Especially Ourselves. I read the book around the time it was released, and I recently revisited it to see how it held up against my initial impressions.

It was even better. In fact, this is one of the most useful books I have ever come across, and my copy is now marked, flagged, and underlined. Let's dig in.

Why We Cheat

We're Cheaters All

Dan is both an astute researcher and a good writer; he knows how to get to the point, and his points matter. His books, which include Predictably Irrational and The Upside of Irrationality, are not filled with fluff. We've mentioned his demonstrations of pluralistic ignorance here before.

In The Honest Truth, Ariely doesn't just explore where cheating comes from; he digs into which situations make us more likely to cheat than others. Those discussions are what make the book eminently practical rather than just a meditation on cheating. It's a how-to guide to our own dishonesty.

Ariely was led down that path because of a friend of his who had worked with Enron:

It was, of course, possible that John and everyone else involved with Enron was deeply corrupt, but I began to think that there may have been a different type of dishonesty at work–one that relates more to wishful blindness and is practiced by people like John, you, and me. I started wondering if the problem of dishonesty goes deeper than just a few bad apples and if this kind of wishful blindness takes place in other companies as well. I also wondered if my friends and I would have behaved similarly if we had been the ones consulting for Enron.

This is a beautiful setup that led him to a lot of interesting conclusions in his years of subsequent research. Here's (some of) what Dan found.

  1. Cheating was standard, but only a little. Ariely and his co-researchers ran the same experiment in many different variations, and with many different topics to investigate. Nearly every time, he found evidence of a standard level of cheating. In other experiments, the outcome was the same. A little cheating was everywhere. People generally did not grab all they could, but only as much as they could justify psychologically.
  2. Increasing the cheating reward or moderately altering the risk of being caught didn't affect the outcomes much. In Ariely's experience, the cheating stayed steady: A little bit of stretching every time.
  3. The more abstracted from the cheating we are, the more we cheat. This was an interesting one–it turns out the less “connected” we feel to our dishonesty, the more we're willing to do it. This ranges from being more willing to cheat to earn tokens exchangeable for real money than to earn actual money, to being more willing to “tap” a golf ball to improve its lie than actually pick it up and move it with our hands.
  4. A nudge not to cheat works better before we cheat than after. In other words, we need to strengthen our morals just before we're tempted to cheat, not after. And even more interesting, when Ariely took his findings to the IRS and other organizations who could benefit from being cheated less, they barely let him in the door! The incentives in organizations are interesting.
  5. We think we're more honest than everyone else. Ariely showed this pretty conclusively by studying golfers and asking them how much they thought others cheated and how much they thought they cheated themselves. It was a rout: They consistently underestimated their own dishonesty versus others'. I wasn't surprised by this finding.
  6. We underestimate how blinded we can become to incentives. In a brilliant chapter called “Blinded by our Motivations,” Ariely discusses how incentives skew our judgment and our moral compass. He shows how pharma reps are masters of this game–and yet we allow it to continue. If we take Ariely seriously, the laws against conflicts of interest need to be stronger.
  7. Related to (6), disclosure does not seem to decrease incentive-caused bias. This reminds me of Charlie Munger's statement, “I think I've been in the top 5% of my age cohort all my life in understanding the power of incentives, and all my life I've underestimated it. Never a year passes that I don't get some surprise that pushes my limit a little farther.” Ariely has discussed incentive-caused bias in teacher evaluation before.
  8. We cheat more when our willpower is depleted. This doesn't come as a total surprise: Ariely found that when we're tired and have exerted a lot of mental or physical energy, especially in resisting other temptations, we tend to increase our cheating. (Or perhaps more accurately, decrease our non-cheating.)
  9. We cheat ourselves, even if we have direct incentive not to. Ariely was able to demonstrate that even with a strong financial incentive to honestly assess our own abilities, we still think we cheat less than we do, and we hurt ourselves in the process.
  10. Related to (9), we can delude ourselves into believing we were honest all along. This goes to show the degree to which we can damage ourselves by our cheating as much as others. Ariely also discusses how good we are at pounding our own conclusions into our brain even if no one else is being persuaded, as Munger has mentioned before.
  11. We cheat more when we believe the world “owes us one.” This section of the book should feel disturbingly familiar to anyone. When we feel like we've been cheated or wronged “over here,” we let the universe make it up to us “over there.” (By cheating, of course.) Think about the last time you got cut off in traffic, stiffed on proper change, and then unloaded on by your boss. Didn't you feel more comfortable reaching for what wasn't yours afterwards? Only fair, right?
  12. Unsurprisingly, cheating has a social contagion aspect. If we see someone who we identify with and whose group we feel we belong to cheating, it makes us (much) more likely to cheat. This has wide-ranging social implications.
  13. Finally, nudging helps us cheat less. If we're made more aware of our moral compass through specific types of reminders and nudges, we can decrease our own cheating. Perhaps most important is to keep ourselves out of situations where we'll be tempted to cheat or act dishonestly, and to take pre-emptive action if it's unavoidable.

There's much more in the book, and we highly recommend you read it for that as well as Dan's general theory on cheating. The final chapter on the steps that old religions have taken to decrease dishonesty among their followers is a fascinating bonus. (Reminded me of Nassim Taleb's retort that heavy critics of religion, like Dawkins, take it too literally and under-appreciate the social value of its rules and customs. It's also been argued that religion has an evolutionary basis.)

Check out the book, and while you're at it, pick up his other two: Predictably Irrational and The Upside of Irrationality.

13 Practical Ideas That Have Helped Me Make Better Decisions

This article is a collaboration between Mark Steed and myself. He did most of the work. Mark was a participant at the last Re:Think Decision Making event as well as a member of the Good Judgment Project. I asked him to put together something on making better predictions. This is the result.

We all face decisions. Sometimes we think hard about a specific decision; other times we decide without thinking at all. If you’ve studied the genre, you’ve probably read Taleb, Tversky, Kahneman, Gladwell, Ariely, Munger, Tetlock, Mauboussin and/or Thaler. These pioneers write a lot about “rationality” and “biases”.

Rationality dictates selecting the best choice among the available options. Cognitive and emotional biases creep in and can prevent us from identifying the “rational” choice. These biases can be baked into our DNA or formed through life experience. The authors mentioned above consider biases extensively, and, lucky for us, their writings are eye-opening and entertaining.

Rather than rehash what brighter minds have discussed, I’ll focus on practical ideas that have helped me make better decisions. I think of this as a list of “lessons learned (so far)” from my work in asset management and as a forecaster for the Good Judgment Project. I’ve held back on submitting this given the breadth and depth of the FS readers, but, rather than expect perfection, I wanted to put something on the table because I suspect many of you have useful ideas that will help move the conversation forward.

1. This is a messy business. Studying decision science can easily motivate self-loathing. There are over one hundred cognitive biases that might prevent us from making calculated and “rational” decisions. What, you can’t build a decision tree with 124 decision nodes, complete with assorted probabilities, in a split second? I asked around, and it turns out not many people can. Since there is no way to eliminate every potential cognitive bias, and I don’t possess the mental faculties of Mr. Spock or C-3PO, I might as well live with the fact that some decisions will be more elegant than others.

2. We live and work in dynamic environments. Dynamic environments adapt; the opposite of a dynamic environment is a static one. Financial markets, geopolitical events, team sports, and the like are dynamic “environments” because the relationships between agents evolve and problems are often unpredictable: changes in one period are conditional on what happened in the previous period. Casinos are more representative of static environments. Not the casinos themselves, necessarily, but the games inside them. If you play roulette, your odds of winning are always the same, and it doesn’t matter what happened on the previous turn.
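
To make the static case concrete, here is a minimal Python sketch (mine, not the article's) that simulates a European roulette wheel and checks that the chance of landing on red is the same whether or not the previous spin was red:

```python
import random

# Minimal sketch of a static environment: a single-zero roulette wheel.
# A bet on red wins with probability 18/37 on every spin, regardless of
# what happened on the previous spin.
def spin_is_red(rng):
    # 18 red, 18 black, and 1 green zero on a European wheel
    return rng.randrange(37) < 18

rng = random.Random(42)
spins = [spin_is_red(rng) for _ in range(200_000)]

overall = sum(spins) / len(spins)
# Win rate conditional on the previous spin having been red
after_red = [cur for prev, cur in zip(spins, spins[1:]) if prev]
conditional = sum(after_red) / len(after_red)

print(f"P(red)              ~ {overall:.3f}")      # ~0.486
print(f"P(red | last = red) ~ {conditional:.3f}")  # ~0.486 as well
```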

3. Good explanatory models are not necessarily good predictive models. Dynamic environments have a habit of desecrating rigid models. While blindly following an elegant model may be ill-advised, strong explanatory models are excellent guideposts when paired with sound judgment and intuition. Just as I’m not comfortable with the autopilot flying a plane without a human in the cockpit, I’m also not comfortable with a human flying a plane without the help of technology. It has been said before: people make models better, and models make people better.

4. Instinct is not always irrational. Rules of thumb, otherwise known as heuristics, can provide better results than more complicated analytical techniques. Gerd Gigerenzer is the thought leader here, and his book Risk Savvy: How to Make Good Decisions is worth reading. Much of the literature disparages heuristics, but he argues that intuition can prove superior because optimization is sometimes mathematically impossible or exposed to sampling error. He often uses the example of Harry Markowitz, who won a Nobel Prize in Economics in 1990 for his work on Modern Portfolio Theory. Markowitz discovered a method for determining the “optimal” mix of assets. However, Markowitz himself did not follow his Nobel prize-winning mean-variance theory but instead used a 1/N heuristic, spreading his dollars equally across N investments. He concluded that his 1/N strategy would perform better than a mean-variance optimization unless the optimization had hundreds of years of data to work with. Our intuition is more likely to be accurate if it is preceded by rigorous analysis and introspection. And simple rules are more effective at communicating winning strategies in complex environments: when coaching a child’s soccer team, it is far easier to teach a few basic principles than to articulate the nuances of every possible situation.
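
As a rough illustration of the 1/N heuristic, here is a short Python sketch; the asset names and returns are invented for the example, and it makes no claim about reproducing Markowitz's actual portfolio:

```python
# Minimal sketch of the 1/N heuristic with made-up assets and returns.
# The optimization step of a mean-variance model is deliberately absent:
# every asset simply gets an equal share.
def one_over_n_weights(assets):
    n = len(assets)
    return {name: 1.0 / n for name in assets}

assets = ["stocks", "bonds", "real_estate", "cash"]   # hypothetical names
weights = one_over_n_weights(assets)
print(weights)  # each asset gets 0.25

# Applying the weights to one period of (made-up) returns:
returns = {"stocks": 0.07, "bonds": 0.02, "real_estate": 0.04, "cash": 0.00}
portfolio_return = sum(weights[a] * returns[a] for a in assets)
print(f"1/N portfolio return: {portfolio_return:.2%}")  # 3.25%
```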

5. Decisions are not evaluated in ways that help us reduce mistakes in the future. We tend to critique only the decisions where the desired outcome was not achieved, while uncritically accepting positive outcomes even when luck, or some other factor, produced the result. At the end of the day I understand that all we care about are results, but good processes are more indicative of future success than good results.

6. Success is ill-defined. In some cases this is relatively straightforward: if the outcome is binary (it either happened or it did not), success is easy to identify. It is more difficult in situations where the outcome can take a range of values, or when individuals differ on what those values should be.

7. We should care a lot more about calibration. Confidence, not just a decision, should be recorded (and to be clear, decisions should be recorded). Next time you have a major decision, ask yourself how confident you are that the desired outcome will be achieved. Are you 50% confident? 90%? Write it down. This helps with calibration. For all decisions in which you are 50% confident, half should be successes. And you should be right nine out of ten times for all decisions in which you are 90% confident. If you are 100% confident, you should never be wrong. If you don’t know anything about a specific subject then you should be no more confident than a coin flip. It’s amazing how we will assign high confidence to an event we know nothing about. Turns out this idea is pretty helpful. Let’s say someone brings an idea to you and you know nothing about it. Your default should be 50/50; you might as well flip a coin. Then you just need to worry about the costs/payouts.
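
A calibration log does not need to be fancy. Here is a minimal sketch, using hypothetical decisions, of how recorded confidence levels can be bucketed against actual outcomes:

```python
from collections import defaultdict

# Hypothetical decision log: (stated confidence, did the desired outcome happen?)
decisions = [
    (0.9, True), (0.9, True), (0.9, False), (0.9, True),
    (0.5, True), (0.5, False), (0.5, False), (0.5, True),
]

# Group outcomes by the confidence level that was written down at the time
buckets = defaultdict(list)
for confidence, succeeded in decisions:
    buckets[confidence].append(succeeded)

# A well-calibrated decision-maker is right 90% of the time when 90% confident
for confidence, outcomes in sorted(buckets.items()):
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"stated {confidence:.0%} confident -> right {hit_rate:.0%} "
          f"of the time ({len(outcomes)} decisions)")
```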

8. Probabilities are one thing, payouts are another. You might feel 50/50 about your chances, but you need to know your payouts if you are right. This is where expected value comes in handy. It’s the probability of being right multiplied by the payout if you are right, plus the probability of being wrong multiplied by the cost: EV = 0.50(x) + 0.50(y). Say someone on your team has an idea for a project and you decide there is a 50% chance that it succeeds; if it does, you double your money, and if it doesn’t, you lose what you invested. If the project requires $10mm, then the expected outcome is 0.50*20 + 0.50*0 = 10, or $10mm. If you repeated this process a number of times, approving only projects with a 2:1 payout and a 50% probability of success, you would likely end up with about the same amount you started with. Binary outcomes that have a 50/50 probability should have a double-or-nothing payout. This is even more helpful given #7 above. If you were tracking this employee’s calibration, you would have a sense of whether their forecasts are accurate. As a team member or manager, you would want to know if a specific employee is 90% confident all the time but only 50% accurate. More importantly, you would want to know if a certain team member is usually right when they express 90% or 100% confidence. Use a Brier score to track colleagues, but provide an environment that encourages discussion and openness.
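
The expected-value arithmetic above, plus the Brier score used to track forecasters, fits in a few lines. This is only a sketch using the $10mm example and a made-up forecasting record:

```python
# Expected value: p * payout_if_right + (1 - p) * payout_if_wrong
def expected_value(p_success, payout_if_right, payout_if_wrong):
    return p_success * payout_if_right + (1 - p_success) * payout_if_wrong

# 50% chance of doubling a $10mm investment, 50% chance of losing it all:
print(expected_value(0.5, 20, 0))  # 10.0 ($mm) -> break-even on average

# Brier score: mean squared error between stated probabilities and outcomes
# (1 = happened, 0 = did not). Lower is better; a constant 50/50 forecast
# scores 0.25 no matter what happens.
def brier_score(forecasts):
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

colleague = [(0.9, 1), (0.9, 0), (0.7, 1), (0.6, 0)]  # hypothetical record
print(f"Brier score: {brier_score(colleague):.2f}")   # ~0.32
```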

9. We really are overconfident. Starting from the assumption that we are probably only 50% accurate is not a bad idea. Phil Tetlock, a professor at UPenn, team leader for the Good Judgment Project, and author of Expert Political Judgment: How Good Is It? How Can We Know?, suggested that political pundits are about 53% accurate in their political forecasts, while CXO Advisory tracks investment gurus and finds they are, in aggregate, about 48% accurate. These are experts making predictions about their core area of expertise. Consider the divorce rate in the U.S., currently around 40%-50%, as additional evidence that sometimes we don’t know as much as we think. Experts are helpful at explaining a specific discipline, but they are less helpful in dynamic environments. If you need something fixed, like a car, a clock or an appliance, experts can be very helpful. Same for tax and accounting advice. It’s not because this stuff is simple; it’s because the environment is static.

10. Improving estimations of probabilities and payouts is about polishing our 1) subject matter expertise and 2) cognitive processing abilities. Learning more about a given subject reduces uncertainty and lets us move beyond the lazy 50/50 forecast. Say you travel to Arizona and get stung by a scorpion. Rather than assume a 50% probability of death, you can do a quick internet search and learn that no one has died from a scorpion sting in Arizona since the 1960s. Overly simplistic, but you get the picture. Second, data needs to be interpreted in a cogent way. Let’s say you work in asset management and one of your portfolio managers has made three investments that returned -5%, -12% and 22%. What can you say about the manager (other than that two of the three investments lost money)? Does the information allow you to claim the portfolio manager is a bad manager? Does it allow you to confidently predict his or her average rate of return? Unless you’ve had some statistics, it might not be entirely clear what conclusions you can reliably draw. What if you flipped a coin three times and came up with tails on two of them? That wouldn’t seem so strange; two-thirds is the same as 66%. If you tossed the coin one hundred times and got 66 tails, that would be a little more interesting. The more observations, the higher our confidence should be. A 95% confidence interval for the portfolio manager’s average return would be a range between roughly -43% and 45%. Is that enough to take action?
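
For the curious, the quoted interval can be reproduced with a small-sample t-interval on the three returns. This is a sketch of that arithmetic, not a recommendation to judge managers on three data points:

```python
import statistics
from math import sqrt

# The three investments, in percent. With only three observations the
# interval is enormous, which is the point.
returns = [-5.0, -12.0, 22.0]

n = len(returns)
mean = statistics.mean(returns)     # ~1.7%
std = statistics.stdev(returns)     # sample standard deviation, ~18
t_crit = 4.303                      # t value for a 95% CI with n - 1 = 2 df

margin = t_crit * std / sqrt(n)
print(f"95% CI for average return: {mean - margin:.0f}% to {mean + margin:.0f}%")
# -> about -43% to 46%, essentially the range cited above
```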

11. Bayesian analysis is more useful than we think. Bayesian thinking helps us update our beliefs given true/false positives and true/false negatives: it gives the probability of a hypothesis given some observed data. For example, what’s the likelihood of X (this new hire will place in the top 10% of the firm) given Y (they graduated from an Ivy League school)? A certain percentage of employees are top performers; some Ivy League grads will be top performers (others not), and some non-Ivy League grads will be top performers (others not). If I’m staring at a random employee trying to guess whether they are a top performer, all I have are the starting odds, and, if only the top 10% qualify, I know my chances are 1 in 10. But I can update those odds if I’m supplied information about their education. Here’s another example: what is the likelihood a project will be successful (X) given that it missed one of its first two milestones (Y)? There are lots of helpful resources online if you want to learn more, but think of it this way (hat tip to Kalid Azad at Better Explained): original odds × evidence adjustment = your new odds. The actual equation is more involved, but that is the intuition behind it. Bayesian analysis has its naysayers. In the examples provided, the prior odds of success are known, or could easily be obtained, but this isn’t always true. Most of the time subjective prior probabilities are required, and this type of tomfoolery is generally discouraged. There are ways around that, but no time to explain them here.
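
Here is a minimal sketch of that update. The 10% base rate comes from the example above; the two conditional probabilities (how often top performers and everyone else hold Ivy League degrees) are assumptions made up purely for illustration:

```python
# Bayes' rule: P(top | Ivy) = P(Ivy | top) * P(top) / P(Ivy)
def posterior(prior, p_evidence_given_true, p_evidence_given_false):
    p_evidence = (p_evidence_given_true * prior
                  + p_evidence_given_false * (1 - prior))
    return p_evidence_given_true * prior / p_evidence

prior = 0.10               # 10% of employees are top performers (base rate)
p_ivy_given_top = 0.30     # assumed: 30% of top performers are Ivy grads
p_ivy_given_not_top = 0.10 # assumed: 10% of everyone else are Ivy grads

updated = posterior(prior, p_ivy_given_top, p_ivy_given_not_top)
print(f"P(top performer | Ivy grad) = {updated:.0%}")
# -> 25%: the evidence improves the 1-in-10 starting odds, but far from certainty
```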

12. A word about crowds. Is there a wisdom of crowds? Some say yes, others say no. My view is that crowds can be very useful if individual members of the crowd are able to vote independently or if the environment is such that there are few repercussions for voicing disagreement. Otherwise, I think signaling effects from seeing how others are “voting” is too much evolutionary force to overcome with sheer rational willpower. Our earliest ancestors ran when the rest of the tribe ran. Not doing so might have resulted in an untimely demise.

13. Analyze your own motives. Jonathan Haidt, author of The Righteous Mind: Why Good People Are Divided by Politics and Religion, is credited with teaching that logic isn’t used to find truth; it’s used to win arguments. Logic may not be the only source of truth (though I have no basis for that claim). Keep this in mind, as it bears on the role of intuition in decision making.

Just a few closing thoughts.

We are pretty hard on ourselves. My process is to make the best decisions I can, realizing not all of them will be optimal. I have a method to track my decisions and to score how accurate I am. Sometimes I use heuristics, but I try to keep those within my circle of competence, as Munger says. I don’t do lists of pros and cons because I feel like I’m just trying to convince myself either way.

If I have to make a big decision in an unfamiliar area, I try to learn as much as I can about the issue on my own and from experts, assess how much randomness could be present, formulate my thesis, look for contradictory information, try to build downside protection (risking as little as possible), and watch for signals that may indicate a likely outcome. Many of my decisions have not worked out, but most of them have. As the world changes, so will my process, and I look forward to that.

Scarcity: Why Having Too Little Means So Much

“The biggest mistake we make about scarcity is we view it as a physical phenomenon. It’s not.”

We're busier than ever. The typical inbox is perpetually swelling with messages awaiting attention. Meetings need to be rescheduled because something came up. Our relationships suffer. We don't spend as much time as we should with those who mean something to us. We have little time for new people; potential friends eventually get the hint and stop proposing ideas for things to do together. Falling behind turns into a vicious cycle.

Does this sound anything like your life?

You have something in common with people who fall behind on their bills, argue Harvard economist Sendhil Mullainathan and Princeton psychologist Eldar Shafir in their book Scarcity: Why Having Too Little Means So Much. The resemblance, they write, is clear.

Missed deadlines are a lot like over-due bills. Double-booked meetings (committing time you do not have) are a lot like bounced checks (spending money you do not have). The busier you are, the greater the need to say no. The more indebted you are, the greater the need to not buy. Plans to escape sound reasonable but prove hard to implement. They require constant vigilance—about what to buy or what to agree to do. When vigilance flags—the slightest temptation in time or in money—you sink deeper.

Some people end up sinking further into debt. Others with more commitments. The resemblance is striking.

We normally think of time management and money management as distinct problems. The consequences of failing are different: bad time management leads to embarrassment or poor job performance; bad money management leads to fees or eviction. The cultural contexts are different: falling behind and missing a deadline means one thing to a busy professional; falling behind and missing a debt payment means something else to an urban low-wage worker.

What's common between these situations? Scarcity. “By scarcity,” they write, “we mean having less than you feel you need.”

And what happens when we feel a sense of scarcity? To show us, Mullainathan and Shafir bring us back to the past. Near the end of World War II, the Allies realized they would need to feed a lot of Europeans on the edge of starvation. The question wasn't where to get the food but, rather, something more technical: what is the best way to start feeding them? Should you begin with normal meals or with small quantities that gradually increase? Researchers at the University of Minnesota undertook an experiment with healthy male volunteers in a controlled environment “where their calories were reduced until they were subsisting on just enough food so as not to permanently harm themselves.” The most surprising findings were psychological. The men became completely focused on food in unexpected ways:

Obsessions developed around cookbooks and menus from local restaurants. Some men could spend hours comparing the prices of fruits and vegetables from one newspaper to the next. Some planned now to go into agriculture. They dreamed of new careers as restaurant owners…. When they went to the movies, only the scenes with food held their interest.

“Scarcity captures the mind,” Mullainathan and Shafir write. Starving people have food on their mind to the point of irrationality. But we all act this way when we experience scarcity. “The mind,” they write, “orients automatically, powerfully, toward unfulfilled needs.”

Scarcity is like oxygen. When you don't need it, you don't notice it. When you do need it, however, it's all you notice.

For the hungry, that need is food. For the busy it might be a project that needs to be finished. For the cash-strapped it might be this month's rent payment; for the lonely, a lack of companionship. Scarcity is more than just the displeasure of having very little. It changes how we think. It imposes itself on our minds.

And when scarcity is taking up your mental cycles and putting your attention on what you lack, you can't attend to other things. How, for instance, can you learn?

(There was) a school in New Haven that was located next to a noisy railroad line. To measure the impact of this noise on academic performance, two researchers noted that only one side of the school faced the tracks, so the students in classrooms on that side were particularly exposed to the noise but were otherwise similar to their fellow students. They found a striking difference between the two sides of the school. Sixth graders on the train side were a full year behind their counterparts on the quieter side. Further evidence came when the city, prompted by this study, installed noise pads. The researchers found this erased the difference: now students on both sides of the building performed at the same level.

Cognitive load matters. Mullainathan and Shafir believe that scarcity imposes a similar mental tax, impairing our ability to perform well and exercise self control.

We are all susceptible to “the planning fallacy,” which means that we're too optimistic about how long it will take to complete a project. Busy people, however, are more vulnerable to this fallacy. Because they are focused on everything they must currently do, they are “more distracted and overwhelmed—a surefire way to misplan.” “The underlying problem,” writes Cass Sunstein in his review for the New York Review of Books, “is that when people tunnel, they focus on their immediate problem; ‘knowing you will be hungry next month does not capture your attention the same way that being hungry today does.’ A behavioral consequence of scarcity is ‘juggling,’ which prevents long-term planning.”

When we have abundance we don't have as much depletion. Wealthy people can weather a shock without turning their lives upside-down. The mental energy needed to prevail may be substantial but it will not create a feeling of scarcity.

Imagine a day at work where your calendar is sprinkled with a few meetings and your to-do list is manageable. You spend the unscheduled time by lingering at lunch or at a meeting or calling a colleague to catch up. Now, imagine another day at work where your calendar is chock-full of meetings. What little free time you have must be sunk into a project that is overdue. In both cases time was physically scarce. You had the same number of hours at work and you had more than enough activities to fill them. Yet in one case you were acutely aware of scarcity, of the finiteness of time; in the other it was a distant reality, if you felt it at all. The feeling of scarcity is distinct from its physical reality.

Mullainathan and Shafir sum up their argument:

In a way, our argument in this book is quite simple. Scarcity captures our attention, and this provides a narrow benefit: we do a better job of managing pressing needs. But more broadly, it costs us: we neglect other concerns, and we become less effective in the rest of life. This argument not only helps explain how scarcity shapes our behaviors; it also produces some surprising results and sheds new light on how we might go about managing our scarcity.

In a way this explains why diets never work.

Scarcity: Why Having Too Little Means So Much goes on to discuss some of the possible ways to mitigate scarcity using defaults and reminders.

Michael Mauboussin’s Behavioral Economics Reading List

Michael Mauboussin is the Head of Global Financial Strategies at Credit Suisse. If you're looking to learn about behavioral economics, he recommends you start with the following books and articles.

Books

1. Thinking, Fast and Slow by Daniel Kahneman
Comment: A sweeping review of the work of the greatest psychologist of the past half century.

2. Judgment in Managerial Decision Making — Seventh Edition by Max H. Bazerman
Comment: A great source for heuristics and biases

3. Expert Political Judgment by Philip Tetlock
Comment: Are you still listening to expert prognosticators? A devastating, empirical study of how bad expert predictions are in complex realms.

4. The Halo Effect by Phil Rosenzweig
Comment: Lots of lessons in 175 pages–you’ll never look at the world the same way after reading this one.

5. The Winner’s Curse by Richard Thaler
Comment: The contents are the foundation of what we call behavioral finance.

Recommended Articles

1. On the Psychology of Prediction by Daniel Kahneman and Amos Tversky (Psychological Review, Vol. 80, No. 4, July 1973, 237-251)
Comment: Kahneman said this was his favorite paper. This explains the inside-outside view.

2. Prospect Theory: An Analysis of Decision Under Risk by Daniel Kahneman and Amos Tversky (Econometrica, Vol. 47, No. 2, March 1979, 263-291)
Comment: A clear and compelling discussion of how behavior varies from what utility theory predicts.

3. A Survey of Behavioral Finance by Nicholas C. Barberis and Richard H. Thaler (in George Constantinides, Milton Harris, and Rene Stulz, eds., Handbook of the Economics of Finance: Volume 1B, Financial Markets and Asset Pricing, Elsevier North Holland, Chapter 18, 1053-1128)
Comment: Exactly as advertised–what you need to know about behavioral finance in one place.

4. Conditions for Intuitive Expertise: A Failure to Disagree by Daniel Kahneman and Gary Klein (American Psychologist, Vol. 64, No. 6, September 2009, 515-526)
Comment: We overestimate the abilities of experts. But they do work in certain settings. This explains when you can trust an expert.

5. Hindsight ≠ Foresight: The Effect of Outcome Knowledge on Judgment Under Uncertainty by Baruch Fischhoff (Journal of Experimental Psychology: Human Perception and Performance, Vol. 1, No. 3, August 1975, 288-299)
Comment: Hindsight bias and creeping determinism. Big problems.

(h/t Simoleon Sense)

Still curious? Check out the Farnam Street Behavioral Economics Reading List.

The Pursuit of Fairness

As James Surowiecki illustrates in an excellent piece in the New Yorker, the pursuit of perfect fairness can cause serious problems in how a system functions.

Surowiecki calls this The Fairness Trap:

…Rationally, then, this standoff should end with a compromise—relaxing some austerity measures, and giving Greece a little more aid and time to reform. And we may still end up there. But the catch is that Europe isn’t arguing just about what the most sensible economic policy is. It’s arguing about what is fair. German voters and politicians think it’s unfair to ask Germany to continue to foot the bill for countries that lived beyond their means and piled up huge debts they can’t repay. They think it’s unfair to expect Germany to make an open-ended commitment to support these countries in the absence of meaningful reform. But Greek voters are equally certain that it’s unfair for them to suffer years of slim government budgets and high unemployment in order to repay foreign banks and richer northern neighbors, which have reaped outsized benefits from closer European integration. The grievances aren’t unreasonable, on either side, but the focus on fairness, by making it harder to reach any kind of agreement at all, could prove disastrous.

The basic problem is that we care so much about fairness that we are often willing to sacrifice economic well-being to enforce it. Behavioral economists have shown that a sizable percentage of people are willing to pay real money to punish people who are taking from a common pot but not contributing to it. Just to insure that shirkers get what they deserve, we are prepared to make ourselves poorer. Similarly, a famous experiment known as the ultimatum game—one person offers another a cut of a sum of money and the second person decides whether or not to accept—shows that people will walk away from free money if they feel that an offer is unfair. Thus, even when there’s a solution that would leave everyone better off, a fixation on fairness can make agreement impossible.

You can see this in the way the U.S. has dealt with the foreclosure crisis. Plenty of economists recommended giving mortgage relief to underwater homeowners, but that has not happened on any meaningful scale, in part because so many voters see it as unfair to those who are still obediently paying their mortgages. Mortgage relief would almost certainly have helped all homeowners, not just underwater ones—by limiting the spillover impact of foreclosures on house prices—but, still, the idea that some people would be getting something for nothing irritated voters.

The fairness problem is exacerbated by the fact that our definition of what counts as fair typically reflects what the economists Linda Babcock and George Loewenstein call a “self-serving bias.” You’d think that the Greeks’ resentment of austerity might be attenuated by the recognition of how much money Germany has already paid and how much damage was done by rampant Greek tax dodging. Or Germans might acknowledge that their devotion to low inflation makes it much harder for struggling economies like Greece to start growing again. Instead, the self-serving bias leads us to define fairness in ways that redound to our benefit, and to discount information that might conflict with our perspective. This effect is even more pronounced when bargainers don’t feel that they are part of the same community—a phenomenon that psychologists call “social distance.” The pervasive rhetoric that frames the conflict in terms of national stereotypes—hardworking, frugal Germans versus frivolous, corrupt Greeks, or tightfisted, imperialistic Germans versus freewheeling, independent Greeks—makes it all the more difficult to reach a reasonable compromise.

From the perspective of society as a whole, concern with fairness has all kinds of benefits: it limits exploitation, promotes meritocracy, and motivates workers. But in a negotiation where neither side can have what it really wants, and where the least bad solution is as good as it gets, worrying too much about fairness can be suicidal. To move Europe away from the brink, voters and politicians on all sides need to stop asking themselves what’s fair and start asking themselves what’s possible.

It is more important to have the right system in place than perfect fairness to the individual. The argument here is one of moral hazard and incentives. If you don't punish Greece, you foster a system where it's ok to default once in a while. This idea will spread to other countries.

In Steven Sample's excellent book, The Contrarian's Guide to Leadership, he talks about the law of a higher good, which he took from Machiavelli's The Prince.

Let me clarify the most fundamental misunderstanding. Machiavelli was not an immoral or even an amoral man; as mentioned earlier, he had a strong set of moral principles. But he was driven by the notion of a higher good: an orderly state in which citizens can move about at will, conduct business, safeguard their families and possessions, and be free of foreign intervention or domination. Anything which could harm this higher good, Machiavelli argued, must be opposed vigorously and ruthlessly. Failure to do so out of either weakness or kindness was condemned by Machiavelli as being contrary to the interests of the state, just as it would be contrary to the interests of a patient for his surgeon to refuse to perform a needed operation out of fear that doing so would inflict pain on the patient.

Still curious? Add The Contrarian's Guide to Leadership and The Prince to your reading list.

Five Book recommendations from Dan Ariely on Behavioural Economics

Dan Ariely, professor of psychology and behavioral economics, says we can all be more aware of our surroundings and our decision-making process. He suggests the following five books:

The Invisible Gorilla

We think we see with our eyes, but the reality is that we largely see with our brains. Our brain is a master at giving us what we expect to see. It’s all about expectation, and when things violate expectation we are just unaware of them. We go around the world with a sense that we pay attention to lots of things. The reality is that we notice much less than we think. And if we notice so much less than we think, what does that mean about our ability to figure out things around us, to learn and improve? It means we have a serious problem. I think this book has done a tremendous job in showing how even in vision, which is such a good system in general, we are poorly tooled to make good decisions.

Mindless Eating

This is one of my favourite books. Its author, Brian Wansink, takes many of these findings about decision-making and shows how they work in the domain of food. Food is tangible, so it helps us understand the principles.

The Person and the Situation

This is an oldie but a goodie. It’s a book that shows how when we make decisions, we think personality plays a big role. “I’m the kind of person who does this, or I’m the kind of person who does that.” The reality is that the environment in which we make decisions determines a lot of what we do. Mindless Eating is also about that – how the food environment affects us. Nudge is also about that – how we can actually design the environment or external influences to make better decisions. But The Person and the Situation was the first book to articulate how we think we are making decisions, when the reality is that the environment around us has a lot to do with it.

Influence

The Cialdini book is very important because it covers a range of ways in which we end up doing things without understanding why we’re doing them. It also shows how much control other people have, at the end of the day, over our actions. Both of these elements are crucial. The book is becoming even more important these days.

Nudge

One of the reasons Nudge is so important is because it’s taking these ideas and applying them to the policy domain. Here are the mistakes we make. Here are the ways marketers are trying to influence us. Here’s the way we might be able to fight back. If policymakers understood these principles, what could they do? The other important thing about the book is that it describes, in detail, small interventions. It’s basically a book about cheap persuasion.

Dan Ariely is the best-selling author of The Upside of Irrationality: The Unexpected Benefits of Defying Logic at Work and at Home and Predictably Irrational, Revised and Expanded Edition: The Hidden Forces That Shape Our Decisions.
