
Choosing your Choice Architect(ure)

“Nothing will ever be attempted
if all possible objections must first be overcome.”

— Samuel Johnson

***

In their book Nudge, Richard Thaler and Cass Sunstein coin the terms ‘Choice Architecture’ and ‘Choice Architect’. For them, if you have the ability to influence the choices other people make, you are a choice architect.

Considering the number of interactions we have every day, it would be easy to argue that we are all Choice Architects at some point. But the inverse is also true: much of the time we are wandering around someone else’s Choice Architecture.

Let’s take a look at a few of the principles of good choice architecture, so we can get a better idea of when someone is trying to nudge us.

We can then weigh this information when making decisions.

Defaults

Thaler and Sunstein start with a discussion on “defaults” that are commonly offered to us:

For reasons we have discussed, many people will take whatever option requires the least effort, or the path of least resistance. Recall the discussion of inertia, status quo bias, and the ‘yeah, whatever’ heuristic. All these forces imply that if, for a given choice, there is a default option — an option that will obtain if the chooser does nothing — then we can expect a large number of people to end up with that option, whether or not it is good for them. And as we have also stressed, these behavioral tendencies toward doing nothing will be reinforced if the default option comes with some implicit or explicit suggestion that it represents the normal or even the recommended course of action.

When making decisions, people will often take the option that requires the least effort, or the path of least resistance. This makes sense: it’s not just laziness; we only have so many hours in a day. Unless you feel particularly strongly about it, if putting little to no effort toward something moves you forward (or at least doesn’t noticeably set you back), that is what you are likely to do. Loss aversion plays a role as well: if we feel the consequences of making a poor choice are high, we may simply decide to do nothing.

Inertia is another reason: If the ship is currently sailing forward, it can often take a lot of time and effort just to slightly change course.

You have likely seen many examples of inertia at play in your work environment, and this isn’t necessarily a bad thing.

Sometimes we need that ship to just steadily move forward. The important bit is to realize when this is factoring into your decisions, or more specifically, when this knowledge is being used to nudge you into making specific choices.

Let’s think about some of your monthly recurring bills. While you might not be reading that magazine or going to the gym, you’re still paying for the ability to use that good or service. If you weren’t being auto-renewed monthly, what is the chance that you would put the effort into renewing that subscription or membership? Much lower, right? Publishers and gym owners know this, and they know you don't want to go through the hassle of cancelling either, so they make that difficult, too. (They understand well our tendency to want to travel the path of least resistance and avoid conflict.)

This is also where they will imply that the default option is the recommended course of action. It sounds like this:

“We’re sorry to hear you no longer want the magazine, Mr. Smith. You know, more than half of the Fortune 500 companies have a monthly subscription to magazine X, but we understand if it’s not something you’d like to do at the moment.”

or

“Mr. Smith we are sorry to hear that you want to cancel your membership at GymX. We understand if you can’t make your health a priority at this point but we’d love to see you back sometime soon. We see this all the time, these days everyone is so busy. But I’m happy to say we are noticing a shift where people are starting to make time for themselves, especially in your demographic…”

(Just cancel them. You’ll feel better. We promise.)

The Structure of Complex Choices

We live in a world of reviews. Product reviews, corporate reviews, movie reviews… When was the last time you bought a phone or a car before checking the reviews? When was the last time that you hired an employee without checking out their references? 

Thaler and Sunstein call this Collaborative Filtering and explain it as follows:

You use the judgements of other people who share your tastes to filter through the vast number of books or movies available in order to increase the likelihood of picking one you like. Collaborative filtering is an effort to solve a problem of choice architecture. If you know what people like you tend to like, you might well be comfortable in selecting products you don’t know, because people like you tend to like them. For many of us, collaborative filtering is making difficult choices easier.

While collaborative filtering does a great job of making difficult choices easier, we have to remember that companies know you will use this tool and will try to manipulate it. We just have to look at the information critically, compare multiple sources, and take some time to review the reviewers.
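To make the mechanism concrete, here is a minimal sketch of the idea behind collaborative filtering: score the things you haven’t tried by the ratings of people whose tastes resemble yours. The names and ratings below are invented purely for illustration.

```python
# A minimal sketch of collaborative filtering: recommend items that people
# with ratings similar to yours enjoyed. All data here is made up.
from math import sqrt

ratings = {
    "you":   {"Book A": 5, "Book B": 2, "Book C": 4},
    "alice": {"Book A": 5, "Book B": 1, "Book C": 5, "Book D": 5},
    "bob":   {"Book A": 1, "Book B": 5, "Book C": 2, "Book D": 1},
}

def similarity(a, b):
    """Cosine similarity over the items both people rated."""
    shared = set(ratings[a]) & set(ratings[b])
    if not shared:
        return 0.0
    dot = sum(ratings[a][x] * ratings[b][x] for x in shared)
    norm_a = sqrt(sum(ratings[a][x] ** 2 for x in shared))
    norm_b = sqrt(sum(ratings[b][x] ** 2 for x in shared))
    return dot / (norm_a * norm_b)

def recommend(me):
    """Score unseen items by the ratings of similar people."""
    scores = {}
    for other in ratings:
        if other == me:
            continue
        sim = similarity(me, other)
        for item, rating in ratings[other].items():
            if item not in ratings[me]:
                scores[item] = scores.get(item, 0.0) + sim * rating
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(recommend("you"))  # "Book D" ranks first, driven by alice's similar taste
```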

These techniques are useful for decisions of a certain scale and complexity: when the alternatives are understood and few enough in number. Once the choice set grows beyond that, we need additional tools to make good decisions.

One strategy to use is what Amos Tversky (1972) called ‘elimination by aspects.’ Someone using this strategy first decides what aspect is most important (say, commuting distance), establishes a cutoff level (say, no more than a thirty-minute commute), then eliminates all the alternatives that do not come up to this standard. The process is repeated, attribute by attribute (no more than $1,500 per month; at least two bedrooms; dogs permitted), until either a choice is made or the set is narrowed down enough to switch over to a compensatory evaluation of the ‘finalists.’

This is a very useful tool if you have a good idea of which attributes are of most value to you.
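As a rough sketch of how this plays out, here is the apartment example from the passage above worked through in a few lines of Python. The listings and cutoffs are invented for illustration.

```python
# 'Elimination by aspects': filter the alternatives attribute by attribute,
# most important first, until few enough remain to compare directly.
# The apartments below are invented purely for illustration.

apartments = [
    {"name": "Elm St",  "commute_min": 25, "rent": 1400, "bedrooms": 2, "dogs_ok": True},
    {"name": "Oak Ave", "commute_min": 45, "rent": 1200, "bedrooms": 3, "dogs_ok": True},
    {"name": "Pine Ct", "commute_min": 20, "rent": 1600, "bedrooms": 2, "dogs_ok": False},
    {"name": "Lake Dr", "commute_min": 28, "rent": 1450, "bedrooms": 2, "dogs_ok": True},
]

# Cutoffs, ordered from the most important aspect to the least.
cutoffs = [
    lambda a: a["commute_min"] <= 30,  # no more than a thirty-minute commute
    lambda a: a["rent"] <= 1500,       # no more than $1,500 per month
    lambda a: a["bedrooms"] >= 2,      # at least two bedrooms
    lambda a: a["dogs_ok"],            # dogs permitted
]

candidates = apartments
for passes in cutoffs:
    candidates = [a for a in candidates if passes(a)]
    if len(candidates) <= 2:           # small enough to weigh the 'finalists' directly
        break

print([a["name"] for a in candidates])  # ['Elm St', 'Lake Dr']
```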

When using these techniques, we have to be mindful that the companies trying to sell us goods have spent a lot of time and money figuring out which attributes are important to us as well.

For example, if you were to shop for an SUV, you would notice a specific set of attributes they all seem to have in common (engine options, towing options, seating options, storage options). Manufacturers are trying to nudge you not to eliminate them from your list. This forces you to do additional research or, better yet for them, to walk into dealerships, where salespeople will try to inflate the importance of those attributes (which they do best).

They also give features new names to differentiate themselves and get onto your list. What do you mean, our competitors don’t have FLEXfuel?

Incentives

Incentives are so ubiquitous in our lives that it’s very easy to overlook them. Unfortunately, this can influence us to make poor decisions.

Thaler and Sunstein believe this is tied to how salient the incentive is.

The most important modification that must be made to a standard analysis of incentives is salience. Do the choosers actually notice the incentives they face? In free markets, the answer is usually yes, but in important cases the answer is no.

Consider the example of members of an urban family deciding whether to buy a car. Suppose their choices are to take taxis and public transportation or to spend ten thousand dollars to buy a used car, which they can park on the street in front of their home. The only salient costs of owning this car will be the weekly stops at the gas station, occasional repair bills, and a yearly insurance bill. The opportunity cost of the ten thousand dollars is likely to be neglected. (In other words, once they purchase the car, they tend to forget about the ten thousand dollars and stop treating it as money that could have been spent on something else.) In contrast, every time the family uses a taxi the cost will be in their face, with the meter clicking every few blocks. So behavioral analysis of the incentives of car ownership will predict that people will underweight the opportunity costs of car ownership, and possibly other less salient aspects such as depreciation, and may overweight the very salient costs of using a taxi.

The problems here are relatable and easily addressed: if the family above had written down all the numbers for taxis, public transportation, and car ownership, it would have been much harder for them to neglect the less salient costs of any of their choices (at least if cost is the attribute they value most).
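For instance, a quick back-of-the-envelope tally along those lines might look like the sketch below. Every figure is invented; the point is only that writing the numbers down makes the forgettable costs as visible as the metered ones.

```python
# All figures below are made up for illustration; plug in your own numbers.
years = 5

# Owning the used car
purchase  = 10_000            # the easily forgotten up-front cost
gas       = 40 * 52 * years   # weekly fill-ups
repairs   = 600 * years       # occasional repair bills
insurance = 1_200 * years     # yearly premium
resale    = 3_000             # what the car might still be worth afterwards
car_total = purchase + gas + repairs + insurance - resale

# Taxis plus public transportation
taxis   = 250 * 12 * years    # the very salient metered rides
transit = 80 * 12 * years     # monthly pass
no_car_total = taxis + transit

print(f"Car over {years} years:    ${car_total:,}")     # $26,400
print(f"No car over {years} years: ${no_car_total:,}")  # $19,800
```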

***

This isn’t an exhaustive list of all the daily nudges we face, but it’s a good start, and some important, translatable themes emerge:

  • Realize when you are wandering around someone’s choice architecture.
  • Do your homework.
  • Develop strategies to help you make decisions when you are being nudged.

 

Still Interested? Buy, and most importantly read, the whole book. Also, check out our other post on some of the Biases and Blunders covered in Nudge.

13 Practical Ideas That Have Helped Me Make Better Decisions

This article is a collaboration between Mark Steed and myself. He did most of the work. Mark was a participant at the last Re:Think Decision Making event as well as a member of the Good Judgment Project. I asked him to put together something on making better predictions. This is the result.

We all face decisions. Sometimes we think hard about a specific decision; other times, we decide without thinking. If you’ve studied the genre, you’ve probably read Taleb, Tversky, Kahneman, Gladwell, Ariely, Munger, Tetlock, Mauboussin, and/or Thaler. These pioneers write a lot about “rationality” and “biases”.

Rationality dictates selecting the best option among however many are available. Biases, cognitive or emotional, creep in and can prevent us from identifying the “rational” choice. These biases can exist in our DNA or can be formed through life experiences. The authors mentioned above consider biases extensively, and, lucky for us, their writing is eye-opening and entertaining.

Rather than rehash what brighter minds have discussed, I’ll focus on practical ideas that have helped me make better decisions. I think of this as a list of “lessons learned (so far)” from my work in asset management and as a forecaster for the Good Judgment Project. I’ve held back on submitting this given the breadth and depth of the FS readers, but, rather than expect perfection, I wanted to put something on the table because I suspect many of you have useful ideas that will help move the conversation forward.

1. This is a messy business. Studying decision science can easily motivate self-loathing. There are over one hundred cognitive biases that might prevent us from making calculated and “rational” decisions. What, you can’t create a decision tree with 124 decision nodes, complete with assorted probabilities, in a split second? I asked around, and it turns out not many people can. Since there is no way to eliminate all the potential cognitive biases, and I don’t possess the mental faculties of Mr. Spock or C-3PO, I might as well live with the fact that some decisions will be more elegant than others.

2. We live and work in dynamic environments. Dynamic environments adapt. The opposite of a dynamic environment is a static one. Financial markets, geopolitical events, team sports, etc. are examples of dynamic “environments” because relationships between agents evolve and problems are often unpredictable; changes in one period are conditional on what happened in the previous period. Casinos are more representative of static environments; not the casinos themselves, necessarily, but the games inside them. If you play roulette, your odds of winning are always the same, and it doesn’t matter what happened on the previous spin.

3. Good explanatory models are not necessarily good predictive models. Dynamic environments have a habit of desecrating rigid models. While blindly following an elegant model may be ill-advised, strong explanatory models are excellent guideposts when paired with sound judgment and intuition. Just as I’m not comfortable with the autopilot flying a plane without a human in the cockpit, I’m also not comfortable with a human flying a plane without the help of technology. It has been said before: people make models better, and models make people better.

4. Instinct is not always irrational. Rules of thumb, otherwise known as heuristics, can provide better results than more complicated analytical techniques. Gerd Gigerenzer is the thought leader here, and his book Risk Savvy: How to Make Good Decisions is worth reading. Much of the literature disparages heuristics, but he asserts that intuition can prove superior because optimization is sometimes mathematically impossible or exposed to sampling error. He often uses the example of Harry Markowitz, who won the Nobel Prize in Economics in 1990 for his work on Modern Portfolio Theory. Markowitz discovered a method for determining the “optimal” mix of assets. However, Markowitz himself did not follow his Nobel prize-winning mean-variance theory; instead he used a 1/N heuristic, spreading his dollars equally across N investments. He concluded that the 1/N strategy would perform better than mean-variance optimization unless the optimization model had 500 years of data to compete with. Our intuition is more likely to be accurate if it is preceded by rigorous analysis and introspection. And simple rules are more effective at communicating winning strategies in complex environments: when coaching a child’s soccer team, it is far easier to teach a few basic principles than to articulate the nuances of every possible situation.
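As a rough illustration (not Gigerenzer’s study or Markowitz’s actual method), the sketch below contrasts 1/N weights with a simplified mean-variance tilt computed from estimated means and covariances. The return series is random placeholder data; the point is that the “optimal” weights lean on noisy estimates while 1/N does not.

```python
# 1/N versus a simplified mean-variance weighting (w proportional to
# inv(cov) @ expected_returns). The returns are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(0.01, 0.05, size=(60, 4))  # 60 months, 4 assets (made up)

n = returns.shape[1]
equal_weights = np.full(n, 1 / n)               # the 1/N heuristic

mu = returns.mean(axis=0)                       # estimated expected returns
cov = np.cov(returns, rowvar=False)             # estimated covariance matrix
raw = np.linalg.solve(cov, mu)                  # unconstrained mean-variance tilt
mv_weights = raw / raw.sum()                    # normalized to sum to one

print("1/N weights:          ", np.round(equal_weights, 2))
print("Mean-variance weights:", np.round(mv_weights, 2))
```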

5. Decisions are not evaluated in ways that help us reduce mistakes in the future. Our tendency is to critique only the decisions where the desired outcome was not achieved, while uncritically accepting positive outcomes even if luck, or some other factor, produced the desired result. At the end of the day, I understand that all we care about is results, but a good process is more indicative of future success than a good result.

6. Success is ill-defined. In some cases this is relatively straightforward: if the outcome is binary (it either happened or it didn’t), success is easy to identify. It is more difficult when the outcome can take a range of values, or when individuals differ on what those values should be.

7. We should care a lot more about calibration. Confidence, not just a decision, should be recorded (and to be clear, decisions should be recorded). Next time you have a major decision, ask yourself how confident you are that the desired outcome will be achieved. Are you 50% confident? 90%? Write it down. This helps with calibration. For all decisions in which you are 50% confident, half should be successes. And you should be right nine out of ten times for all decisions in which you are 90% confident. If you are 100% confident, you should never be wrong. If you don’t know anything about a specific subject then you should be no more confident than a coin flip. It’s amazing how we will assign high confidence to an event we know nothing about. Turns out this idea is pretty helpful. Let’s say someone brings an idea to you and you know nothing about it. Your default should be 50/50; you might as well flip a coin. Then you just need to worry about the costs/payouts.
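A simple way to act on this is to keep a running log. The sketch below (with invented entries) records each decision alongside its stated confidence, then reports how often each confidence level actually panned out.

```python
# A minimal calibration log: (decision, stated confidence, did it succeed?)
# The entries are invented for illustration.
from collections import defaultdict

log = [
    ("hire candidate A",   0.9, True),
    ("launch feature X",   0.9, False),
    ("vendor Y delivers",  0.5, True),
    ("project Z on time",  0.5, False),
    ("renew contract Q",   0.9, True),
]

buckets = defaultdict(list)
for _, confidence, outcome in log:
    buckets[confidence].append(outcome)

for confidence in sorted(buckets):
    outcomes = buckets[confidence]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"Said {confidence:.0%} confident -> right {hit_rate:.0%} "
          f"of the time ({len(outcomes)} decisions)")
```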

8. Probabilities are one thing, payouts are another. You might feel 50/50 about your chances, but you need to know your payout if you are right. This is where expected value comes in handy: the probability of being right multiplied by the payout if you are right, plus the probability of being wrong multiplied by the cost, i.e. E = 0.50(x) + 0.50(y) in the 50/50 case. Say someone on your team has an idea for a project, and you decide there is a 50% chance it succeeds; if it does, you double your money, and if it doesn’t, you lose what you invested. If the project required $10mm, then the expected outcome is 0.50 × $20mm + 0.50 × $0 = $10mm. If you repeated this process a number of times, approving only projects with a 2:1 payout and a 50% probability of success, you would likely end up with about the same amount you started with. Binary outcomes with a 50/50 probability should have a double-or-nothing payout. This is even more helpful given #7 above. If you were tracking this employee’s calibration, you would have a sense of whether their forecasts are accurate. As a team member or manager, you would want to know if a specific employee is 90% confident all the time but only 50% accurate. More importantly, you would want to know if a certain team member is usually right when they express 90% or 100% confidence. Use a Brier score to track colleagues, but provide an environment that encourages discussion and openness.
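Here is the expected value arithmetic from that example, plus a minimal Brier score calculation of the kind you could use to track a colleague’s forecasts. The project figures mirror the example; the forecast records are invented.

```python
def expected_value(p_success, payoff_if_right, payoff_if_wrong):
    return p_success * payoff_if_right + (1 - p_success) * payoff_if_wrong

# 50% chance of turning $10mm into $20mm, 50% chance of losing it all:
print(expected_value(0.5, 20, 0))  # 10.0 -> in expectation you break even

def brier_score(forecasts):
    """Mean squared error between stated probabilities and outcomes (1 or 0).
    Lower is better; always saying 50% scores 0.25."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# A colleague who says 90% but is right only three times out of five:
overconfident = [(0.9, 1), (0.9, 0), (0.9, 1), (0.9, 0), (0.9, 1)]
print(round(brier_score(overconfident), 2))  # 0.33
```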

9. We really are overconfident. Starting from the assumption that we are probably only 50% accurate is not a bad idea. Phil Tetlock, a professor at UPenn, team leader for the Good Judgment Project, and author of Expert Political Judgment: How Good Is It? How Can We Know?, suggests political pundits are about 53% accurate in their political forecasts, while CXO Advisory tracks investment gurus and finds they are, in aggregate, about 48% accurate. These are experts making predictions about their core area of expertise. Consider the divorce rate in the U.S., currently around 40-50%, as additional evidence that sometimes we don’t know as much as we think. Experts are helpful at explaining a specific discipline, but they are less helpful in dynamic environments. If you need something fixed, like a car, a clock, or an appliance, experts can be very helpful. The same goes for tax and accounting advice. It’s not because this stuff is simple; it’s because the environment is static.

10. Improving estimates of probabilities and payouts is about polishing our 1) subject matter expertise and 2) cognitive processing abilities. Learning more about a given subject reduces uncertainty and lets us move away from the lazy 50/50 forecast. Say you travel to Arizona and get stung by a scorpion. Rather than assume a 50% probability of death, you can do a quick internet search and learn that no one has died from a scorpion sting in Arizona since the 1960s. Overly simplistic, but you get the picture. Second, data needs to be interpreted in a cogent way. Let’s say you work in asset management and one of your portfolio managers has made three investments that returned -5%, -12%, and 22%. What can you say about the manager (other than that two of the three investments lost money)? Does the information allow you to claim the portfolio manager is a bad manager? Does it allow you to confidently predict his or her average rate of return? Unless you’ve had some statistics, it might not be entirely clear what conclusions you can draw. What if you flipped a coin three times and two came up tails? That wouldn’t seem so strange; two out of three is about 66%. If you tossed the coin one hundred times and got 66 tails, that would be a little more interesting. The more observations, the higher our confidence can be. A 95% confidence interval for the portfolio manager’s average return would run from roughly -43% to 45%. Is that enough to take action?
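The interval quoted above can be reproduced with a standard t-based confidence interval on the three observations, assuming that is the calculation intended; a sketch using scipy:

```python
# A 95% t-interval for the manager's average return from just three
# observations. With so little data the interval is enormous, which is the point.
import numpy as np
from scipy import stats

returns = np.array([-5.0, -12.0, 22.0])  # the three investments, in percent

mean = returns.mean()
sem = stats.sem(returns)  # standard error of the mean
low, high = stats.t.interval(0.95, df=len(returns) - 1, loc=mean, scale=sem)

print(f"mean = {mean:.1f}%, 95% CI = ({low:.0f}%, {high:.0f}%)")
# roughly (-43%, 46%): far too wide to say much about skill
```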

11. Bayesian analysis is more useful than we think. Bayesian thinking helps us update our beliefs given true/false positives and true/false negatives: it gives the probability of a hypothesis given some observed data. For example, what’s the likelihood of X (this new hire will place in the top 10% of the firm) given Y (they graduated from an Ivy League school)? A certain percentage of employees are top performers, some Ivy League grads will be top performers (others not), and some non-Ivy League grads will be top performers (others not). If I’m staring at a random employee trying to guess whether they are a top performer, all I have are the starting odds, and, if only the top 10% qualify, I know my chances are 1 in 10. But I can update my odds if I’m given information about their education. Here’s another example: what is the likelihood a project will be successful (X) given that it missed one of its first two milestones (Y)? There are lots of helpful resources online if you want to learn more, but think of it this way (hat tip to Kalid Azad at Better Explained): original odds × evidence adjustment = your new odds. The actual equation is more complicated, but that is the intuition behind it. Bayesian analysis has its naysayers. In the examples provided, the prior odds of success are known, or could easily be obtained, but this isn’t always true. Most of the time, subjective prior probabilities are required, and this type of tomfoolery is generally discouraged. There are ways around that, but no time to explain them here.
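To show the mechanics of that update, here is the Ivy League example in odds form. Every rate below is an assumption invented purely to illustrate the arithmetic.

```python
# Odds form of Bayes' rule: posterior odds = prior odds * likelihood ratio.
# All of the rates here are assumptions made up for illustration.

p_top = 0.10                 # prior: 10% of employees are top performers
p_ivy_given_top = 0.30       # assumed: 30% of top performers are Ivy grads
p_ivy_given_not_top = 0.10   # assumed: 10% of everyone else are Ivy grads

prior_odds = p_top / (1 - p_top)                          # 1 to 9
likelihood_ratio = p_ivy_given_top / p_ivy_given_not_top  # the 'evidence adjustment'
posterior_odds = prior_odds * likelihood_ratio            # the 'new odds'

p_top_given_ivy = posterior_odds / (1 + posterior_odds)
print(f"P(top performer | Ivy grad) = {p_top_given_ivy:.0%}")  # 25%
```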

12. A word about crowds. Is there a wisdom of crowds? Some say yes, others say no. My view is that crowds can be very useful if individual members are able to vote independently, or if the environment is such that there are few repercussions for voicing disagreement. Otherwise, I think the signaling effect of seeing how others are “voting” is too much evolutionary force to overcome with sheer rational willpower. Our earliest ancestors ran when the rest of the tribe ran; not doing so might have resulted in an untimely demise.

13. Analyze your own motives. Jonathan Haidt, author of The Righteous Mind: Why Good People Are Divided by Politics and Religion, is credited with teaching that logic isn’t used to find truth; it’s used to win arguments. Logic may not be the only source of truth (and I have no basis for that claim). Keep this in mind, as it bears on the role of intuition in decision making.

Just a few closing thoughts.

We are pretty hard on ourselves. My process is to make the best decisions I can, realizing not all of them will be optimal. I have a method to track my decisions and to score how accurate I am. Sometimes I use heuristics, but I try to keep those within my circle of competence, as Munger says. I don’t do lists of pros and cons because I feel like I’m just trying to convince myself either way.

If I have to make a big decision, in an unfamiliar area, I try to learn as much as I can about the issue on my own and from experts, assess how much randomness could be present, formulate my thesis, look for contradictory information, try and build downside protection (risking as little as possible) and watch for signals that may indicate a likely outcome. Many of my decisions have not worked out, but most of them have. As the world changes, so will my process, and I look forward to that.

Have something to say? Become a member: join the slack conversation and chat with Mark directly.