
The Difference Between Seeing and Observing

The Art of Observation
In A Scandal in Bohemia, Sherlock Holmes teaches Watson the difference between seeing and observing:

“When I hear you give your reasons,” I remarked, “the thing always appears to me to be so ridiculously simple that I could easily do it myself, though at each successive instance of your reasoning, I am baffled until you explain your process. And yet I believe that my eyes are as good as yours.”

“Quite so,” he answered, lighting a cigarette, and throwing himself down into an armchair. “You see, but you do not observe. The distinction is clear. For example, you have frequently seen the steps which lead up from the hall to this room.”

“Frequently.”

“How often?”

“Well, some hundreds of times.”

“Then how many are there?”

“How many? I don’t know.”

“Quite so! You have not observed. And yet you have seen. That is just my point. Now, I know that there are seventeen steps, because I have both seen and observed.”

The difference between seeing and observing is fundamental to many aspects of life. Indeed, we can learn a lot from how Sherlock Holmes thinks. Even Nassim Taleb has weighed in on noticing, with Noise and Signal.

In the video below, Harvard Business School Professor Max Bazerman, author of The Power of Noticing: What the Best Leaders See, discusses how important it is not just to be able to focus, but to be a good noticer as well. What he's really talking about is observation.

A number of years ago I had an opportunity to notice and I failed to do so and it’s been an obsession with me ever since. On March 10, 2005 I was hired by the U.S. Department of Justice in a landmark case that they were fighting against the tobacco industry. I was hired as a remedy witness. That is, I was hired to provide recommendations to the court about what the penalty would be if, in fact, the Department of Justice succeeded in its trial against the tobacco industry. I had spent a couple hundred hours working for the Department of Justice including submitting my written direct testimony which had been submitted to the court.

I was scheduled to be on the stand on May 4, where the tobacco industry attorneys would be asking me a series of questions. On April 30, a number of days before my May 4 testimony, I was in Washington D.C. to meet with the Department of Justice attorneys to prepare for my time on the stand. When the day started, the Department of Justice attorney that I had been working with said to me, “Professor Bazerman.” This occurred long after he had learned to call me Max. He said, “Professor Bazerman, the Department of Justice requests that you amend your testimony to note that the testimony would not be relevant if any of the following four conditions existed.”

He then read to me four legal conditions that I didn’t understand. When he was done talking I said to him, “Why would you ask me to amend my testimony when you know that I didn’t understand what you just said to me?” And his response was: because if you don’t agree, there’s a significant chance that senior leadership in the Department of Justice will remove you from the case before you are on the stand on May 4. To which I said, “Okay, I don’t agree to those changes.” And his response was, “Good. Let’s continue with your preparation.” I was jarred by the fact that something very strange had occurred. But I was overwhelmed in life. I was trying to help this case and I didn’t quite know what had occurred. But to this day I’m critical of the fact that I took no action. I did appear at trial on May 4, and the trial ended in early June.

But on June 17 I woke up in a hotel room in London. I was working with another client at the time. And I woke up early at 5:00 a.m. and I opened up The New York Times web edition and I read a story about Matt Myers, the president of Tobacco Free Kids, who had come forward to the media with evidence about how Robert McCallum, the number two official in the Department of Justice, was involved in attempting to get him to change his testimony. And I then read basically the same account that I had experienced back on April 30. Matt Myers had the insight to know that he should do something with this information about what had occurred in terms of the attempt to tamper with this testimony. And at that point it was straightforward to come forward to the media to speak to congressional representatives about what happened. And my own role received media attention as well.

But to this day I’m still struck by the fact that I didn’t come forward on April 30 when, in the back of my mind I knew something had occurred. The reason I tell you this story is because I think a lot of our failure to notice happens when we’re busy. It happens when we don’t know exactly what’s happening. But I think it’s our job as executives, as leaders, as professionals to act when we’re pretty sure that there’s something wrong. It’s our job to notice and to not simply be passive when we can’t quite figure out the evidence. It’s our job to take steps to figure out what’s going on and to act on critical information.

***

Follow your curiosity and learn about why your decision making environment matters.

The Power of Noticing: What the Best Leaders See


In The Power of Noticing: What the Best Leaders See, Harvard professor Max Bazerman examines how the failure to notice things leads to “poor personal decisions, organizational crises, and societal disasters.” He walks us through each of these, highlighting recent research and how it affects our awareness of information we're prone to ignore. Bazerman presents a blueprint to help us become more aware of critical information that we would otherwise have ignored. It prompts us to ask the questions typically found in hindsight but rarely in foresight: “How could that have happened?” and “Why didn't I see it coming?”

Even the best of us fail to notice things, even critical and readily available information in our environment, “due to the human tendency to wear blinders that focus us on a limited set of information.” This additional information, however, is essential to success, and Bazerman argues that “in the future it will prove a defining quality of leadership.”

Noticing is a System 2 process.

In his best-selling book from 2011, Thinking, Fast and Slow, Nobel laureate Daniel Kahneman discusses Stanovich and West's distinction between System 1 and System 2 thinking. System 1 is our intuitive system: it is quick, automatic, effortless, implicit, and emotional. Most of our decisions occur in System 1. By contrast, System 2 thinking is slower and more conscious, effortful, explicit, and logical. My colleague Dolly Chugh of New York University notes that the frantic pace of managerial life requires that executives typically rely on System 1 thinking. Readers of this book doubtless are busy people who depend on System 1 when making many decisions. Unfortunately we are generally more affected by biases that restrict our awareness when we rely on System 1 thinking than when we use System 2 thinking.

Noticing important information in contexts where many people do not is generally a System 2 process.

Logic and other strategic thinking tools, like game theory, are also generally System 2 thinking. They require that we step away from the heat of the moment and think a few steps ahead, imagining how others will respond. This is something that “System 1 intuition typically fails to do adequately.”

So a lot of what Bazerman spends time on is moving toward System 2 thinking when making important judgments.

When you do so, you will find yourself noticing more pertinent information from your environment than you would have otherwise. Noticing what is not immediately in front of you is often counterintuitive and the province of System 2. Here, then, is the purpose and promise of this book: your broadened perspective as a result of System 2 thinking will guide you toward more effective decisions and fewer disappointments.

Rejecting What's Available

Often the best decisions require that you look beyond what's available and reject the presented options. Bazerman didn't always think this way; he needed some help from his colleague Richard Zeckhauser. At a recent talk, Zeckhauser presented the audience with the “Cholesterol Problem.”

Your doctor has discovered that you have a high cholesterol level, namely 260. She prescribes one of many available statin drugs. She says this will generally drop your cholesterol about 30 percent. There may be side effects. Two months later you return to your doctor. Your cholesterol level is now at 195. Your only negative side effect is sweaty palms, which you experience once or twice a week for one or two hours. Your doctor asks whether you can live with this side effect. You say yes. She tells you to continue on the medicine. What do you say?

Bazerman, who has naturally problematic lipids and a wide body of knowledge on the subject, isn't known for his shyness. He went with the statin.

Zeckhauser responded, “Why don't you try one of the other statins instead?” I immediately realized that he was probably right. Rather than focusing on whether or not to stay on the current statin, broadening the question to include the option of trying other statins makes a great deal of sense. After all, there may well be equally effective statins that don't cause sweaty palms or any other side effects. My guess is that many patients err by accepting one of two options that a doctor presents to them. It is easy to get stuck on an either/or choice, which I … fell victim to at Zeckhauser's lecture. I made the mistake of accepting the choice as my colleague presented it. I could have and should have asked what all of the options were. But I didn't. I too easily accepted the choice presented to me.

The Power of Noticing: What the Best Leaders See opens your eyes to what you're missing.

Max Bazerman: Books for Leaders

Max Bazerman, the Jesse Isidor Straus Professor of Business Administration at Harvard Business School, recommends 7 reads for leaders.

Max Bazerman, the author of the best book on general decision making that I've ever read, Judgment in Managerial Decision Making, came out with 7 book recommendations.

I hadn't heard of two of these, which I picked up.

1. Thinking, Fast and Slow by Daniel Kahneman

I think we've all heard of this one. Bazerman says:

The development of decision research is the most pronounced influence of the social sciences on professional education and societal change that we have witnessed in the last half century. Kahneman is the greatest social scientist of our time, and Thinking, Fast and Slow provides an integrated history of the fields of behavioral decision research and behavioral economics, the role of our two different systems for processing information (System 1 vs. System 2), and the wonderful story of Kahneman’s relationship with Amos Tversky (Tversky would have shared Kahneman’s Nobel Prize had he not passed away at an early age).

2. Nudge: Improving Decisions About Health, Wealth and Happiness by Richard Thaler & Cass Sunstein

This is another one I think most of you have heard of but it's a classic. I once used this book as the foundation to make the case to a management team for hiring a group of behavioural psychologists. Along with Thinking, Fast and Slow it is part of the ultimate behavioural economics reading list.

Nudge takes the study of how humans depart from rational decision making and turns this work into a prescriptive strategy for action. Over the last 40 years, we have learned a great deal about the systematic and predictable ways in which the human mind departs from rational action. Yet, we have observed dozens of studies that show the limits of trying to debias the human mind. Nudge highlights that we do not need to debias humans, we simply need to understand humans, and create decision architectures with a realistic understanding of the human to guide humans to wise decisions. Nudge has emerged as the bible of behavioral insight teams that are transforming the ways countries help to devise wise policies.

3. The Big Short: Inside the Doomsday Machine by Michael Lewis

Lewis is an amazing writer, with the talent to capture amazing features of how humans have the capacity to overcome common limitations. Moneyball (which would have been on the list, but I imposed a one-book-per-author limit) was a fascinating look at how overcoming common human limits allowed baseball leaders to develop unique and effective leadership strategies. In The Big Short, Lewis shows how people can notice, even when most of us are failing to do so. Lewis shows that it was possible to notice vast problems with our economy by 2007, and tells the amazing account of those who did.

4. Eyewitness to Power: The Essence of Leadership, Nixon to Clinton by David Gergen

This one looks fascinating.

David Gergen is an amazingly insightful intellect about so many things, including the nature of Presidential leadership. His writing is wonderful, and his ability to pull out the nuggets of effective leadership in his closing chapter is a lasting contribution. You will learn about four Presidents that have escaped you in the past, and in the process, learn some insights about leadership in your organization.

5. Moral Tribes: Emotion, Reason, and the Gap Between Us and Them by Joshua Greene

This book has been recommended to me by so many smart people that there must be something to it.

Joshua Greene is a wonderful mix of insightful philosopher, careful psychologist, and keen observer of human morality. If you have ever been confronted with the famous “trolley problem”, and want to learn more, Moral Tribes is the place to go. Whether you are a philosopher looking for a new path, a psychologist looking for insight from a new direction, or simply a human who wants to understand your own morality, this book is terrific.

6. Happy Money: The Science of Smarter Spending by Elizabeth Dunn & Michael Norton

For decades, the study of consumer behavior has been dominated by the question of how marketers can understand consumers to sell their products and services. Dunn and Norton use contemporary social science to provide insight into what consumers can do to make themselves, rather than marketers, happy.

7. The Art and Science of Negotiation by Howard Raiffa

The Art and Science of Negotiation is where it all began from an intellectual standpoint, where Raiffa provides insight into how to think systematically in a world where you cannot count on the other side to do so.

Daniel Kahneman’s Favorite Approach For Making Better Decisions


Bob Sutton's book, Scaling Up Excellence: Getting to More Without Settling for Less, contains an interesting section towards the end on looking back from the future, which talks about “a mind trick that goads and guides people to act on what they know and, in turn, amplifies their odds of success.”

We build on Nobel winner Daniel Kahneman's favorite approach for making better decisions. This may sound weird, but it's a form of imaginary time travel.

It's called the premortem. And, while it may be Kahneman's favorite, he didn't come up with it. A fellow by the name of Gary Klein invented the premortem technique.

A premortem works something like this. When you're on the verge of making a decision, not just any decision but a big decision, you call a meeting. At the meeting you ask each member of your team to imagine that it's a year later.

Split them into two groups. Have one group imagine that the effort was an unmitigated disaster. Have the other pretend it was a roaring success. Ask each member to work independently and generate reasons, or better yet, write a story, about why the success or failure occurred. Instruct them to be as detailed as possible, and, as Klein emphasizes, to identify causes that they wouldn’t usually mention “for fear of being impolite.” Next, have each person in the “failure” group read their list or story aloud, and record and collate the reasons. Repeat this process with the “success” group. Finally, use the reasons from both groups to strengthen your … plan. If you uncover overwhelming and impassable roadblocks, then go back to the drawing board.

Premortems encourage people to use “prospective hindsight,” or, more accurately, to talk in “future perfect tense.” Instead of thinking, “we will devote the next six months to implementing a new HR software initiative,” for example, we travel to the future and think “we have devoted six months to implementing a new HR software package.”

You imagine that a concrete success or failure has occurred and look “back from the future” to tell a story about the causes.

Pretending that a success or failure has already occurred—and looking back and inventing the details of why it happened—seems almost absurdly simple. Yet renowned scholars including Kahneman, Klein, and Karl Weick supply compelling logic and evidence that this approach generates better decisions, predictions, and plans. Their work suggests several reasons why. …

1. This approach helps people overcome blind spots

As … upcoming events become more distant, people develop more grandiose and vague plans and overlook the nitty-gritty daily details required to achieve their long-term goals.

2. This approach helps people bridge short-term and long-term thinking

Weick argues that this shift is effective, in part, because it is far easier to imagine the detailed causes of a single outcome than to imagine multiple outcomes and try to explain why each may have occurred. Beyond that, analyzing a single event as if it has already occurred rather than pretending it might occur makes it seem more concrete and likely to actually happen, which motivates people to devote more attention to explaining it.

3. Looking back dampens excessive optimism

As Kahneman and other researchers show, most people overestimate the chances that good things will happen to them and underestimate the odds that they will face failures, delays, and setbacks. Kahneman adds that “in general, organizations really don't like pessimists” and that when naysayers raise risks and drawbacks, they are viewed as “almost disloyal.”

Max Bazerman, a Harvard professor, believes that we're less prone to irrational optimism when we predict the fate of projects that are not our own. For example, when it comes to friends' home renovation projects, most people estimate that costs will run 25 to 50 percent over budget. When it comes to our own projects, however, we predict they will be “completed on time and near the project costs.”

4. A premortem challenges the illusion of consensus

Most of the time, not everyone on a team agrees with the course of action. Even when there is enough cognitive diversity in the room, people still keep their mouths shut, because those in power tend to reward people who agree with them while punishing those who have the courage to speak up with a dissenting view.

The resulting corrosive conformity is evident when people don't raise private doubts, known risks, and inconvenient facts. In contrast, as Klein explains, a premortem can create a competition where members feel accountable for raising obstacles that others haven't. “The whole dynamic changes from trying to avoid anything that might disrupt harmony to trying to surface potential problems.”

Mental Model: Bias from Insensitivity to Sample Size

The widespread misunderstanding of randomness causes a lot of problems.

Today we're going to explore a concept behind a lot of that misjudgment: the bias from insensitivity to sample size, or, if you prefer, the law of small numbers.

* * *

If I measured one person, who happened to be 6 feet tall, and then told you that everyone in the whole world was 6 feet tall, you’d intuitively realize this is a mistake. You’d say you can’t measure only one person and then draw such a conclusion. To do that you’d need a much larger sample.

And, of course, you'd be right.

While simple, this example is a key building block to our understanding of how insensitivity to sample size can lead us astray.

As Stuart Sutherland writes in Irrationality:

Before drawing conclusions from information about a limited number of events (a sample) selected from a much larger number of events (the population) it is important to understand something about the statistics of samples.

In Thinking, Fast and Slow, Daniel Kahneman writes, “A random event, by definition, does not lend itself to explanation, but collections of random events do behave in a highly regular fashion.” Kahneman continues: “extreme outcomes (both high and low) are more likely to be found in small than in large samples. This explanation is not causal.”

We all intuitively know that “the results of larger samples deserve more trust than smaller samples, and even people who are innocent of statistical knowledge have heard about this law of large numbers.”

The law of large numbers says that as the sample size grows, results should converge to a stable frequency. So, if we’re flipping coins and measuring the proportion of times that we get heads, we’d expect it to approach 50% after some large sample size of, say, 100, but not necessarily after 2 or 4 flips.
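A minimal simulation, not from Kahneman's book, makes the convergence visible. The fair-coin probability of 0.5 and the sample sizes are just illustrative choices:

```python
import random

# Proportion of heads at several sample sizes. With a fair coin the
# proportion wanders widely for small n and settles near 0.5 as n
# grows -- the law of large numbers at work.
random.seed(42)

for n in [2, 4, 100, 10_000]:
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"n = {n:>6}: proportion of heads = {heads / n:.3f}")
```

Run it a few times without the seed and you'll see the small samples swing between 0.0 and 1.0 while the large ones barely move.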

In our minds, we often fail to account for the accuracy and uncertainty that come with a given sample size.

While we all understand it intuitively, it’s hard for us to realize in the moment of processing and decision making that larger samples are better representations than smaller samples.

We understand the difference between a sample size of 6 and 6,000,000 fairly well but we don't, intuitively, understand the difference between 200 and 3,000.

* * *

This bias comes in many forms.

In a telephone poll of 300 seniors, 60% support the president.

If you had to summarize the message of this sentence in exactly three words, what would they be? Almost certainly you would choose “elderly support president.” These words provide the gist of the story. The omitted details of the poll, that it was done on the phone with a sample of 300, are of no interest in themselves; they provide background information that attracts little attention. Of course, if the sample was extreme, say 6 people, you’d question it. Unless you’re fully mathematically equipped, however, you’ll intuitively judge the sample size and may not react differently to a sample of, say, 150 and one of 3,000. That, in a nutshell, is exactly the meaning of the statement that “people are not adequately sensitive to sample size.”
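To put rough numbers on why 150 and 3,000 should feel different, here is a back-of-the-envelope sketch using the standard error of a proportion; the 60% support figure comes from the example above, and the 1.96 multiplier is the usual 95% confidence convention:

```python
import math

# Approximate 95% margin of error for a poll reporting 60% support.
# Standard error of a proportion: sqrt(p * (1 - p) / n).
p = 0.60
for n in [6, 150, 300, 3000]:
    margin = 1.96 * math.sqrt(p * (1 - p) / n)
    print(f"n = {n:>4}: 60% +/- {margin * 100:.1f} points")
```

With 6 respondents the margin is roughly ±39 points; with 150 it is about ±8; with 3,000 it shrinks to under ±2. The story reads the same in every case, which is exactly the trap.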

Part of the problem is that we focus on the story over the reliability, or robustness, of the results.

System 1 thinking, that is, our intuition, is “not prone to doubt. It suppresses ambiguity and spontaneously constructs stories that are as coherent as possible. Unless the message is immediately negated, the associations that it evokes will spread as if the message were true.”

Considering sample size, unless it’s extreme, is not a part of our intuition.

Kahneman writes:

The exaggerated faith in small samples is only one example of a more general illusion – we pay more attention to the content of messages than to information about their reliability, and as a result end up with a view of the world around us that is simpler and more coherent than the data justify. Jumping to conclusions is a safer sport in the world of our imagination than it is in reality.

* * *

In engineering, for example, we can encounter this in the evaluation of precedent.

Steven Vick, writing in Degrees of Belief: Subjective Probability and Engineering Judgment, writes:

If something has worked before, the presumption is that it will work again without fail. That is, the probability of future success conditional on past success is taken as 1.0. Accordingly, a structure that has survived an earthquake would be assumed capable of surviving another of the same magnitude and distance, with the underlying presumption being that the operative causal factors must be the same. But seismic ground motions are quite variable in their frequency content, attenuation characteristics, and many other factors, so that a precedent for a single earthquake represents a very small sample size.

Bayesian reasoning tells us that a single success, absent other information, raises the likelihood of survival in the future.

In a way this is related to robustness. The more you’ve had to handle while still surviving, the more robust you are.
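Vick doesn't give a formula here, but one classical way to quantify how little a single success proves is Laplace's rule of succession: starting from a uniform prior, after s successes in n trials the probability that the next trial succeeds is (s + 1) / (n + 2). A sketch:

```python
# Laplace's rule of succession: with a uniform prior, after s successes
# in n trials the probability the next trial succeeds is (s+1)/(n+2).
def rule_of_succession(successes: int, trials: int) -> float:
    return (successes + 1) / (trials + 2)

print(rule_of_succession(1, 1))    # one earthquake survived -> ~0.67
print(rule_of_succession(10, 10))  # ten survived -> ~0.92
```

One survived earthquake moves the estimate to about 0.67, a long way from the presumed 1.0.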

Let’s look at some other examples.

* * *

Hospital

Daniel Kahneman and Amos Tversky demonstrated our insensitivity to sample size with the following question:

A certain town is served by two hospitals. In the larger hospital about 45 babies are born each day, and in the smaller hospital about 15 babies are born each day. As you know, about 50% of all babies are boys. However, the exact percentage varies from day to day. Sometimes it may be higher than 50%, sometimes lower. For a period of 1 year, each hospital recorded the days on which more than 60% of the babies born were boys. Which hospital do you think recorded more such days?

  1. The larger hospital
  2. The smaller hospital
  3. About the same (that is, within 5% of each other)

Most people incorrectly choose 3. The correct answer is, however, 2.

In Judgment in Managerial Decision Making, Max Bazerman explains:

Most individuals choose 3, expecting the two hospitals to record a similar number of days on which 60 percent or more of the babies born are boys. People seem to have some basic idea of how unusual it is to have 60 percent of a random event occurring in a specific direction. However, statistics tells us that we are much more likely to observe 60 percent of male babies in a smaller sample than in a larger sample. This effect is easy to understand. Think about which is more likely: getting more than 60 percent heads in three flips of a coin or getting more than 60 percent heads in 3,000 flips.
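A quick simulation, my sketch rather than anything from the book, shows the effect directly; the birth rates match the numbers in the question:

```python
import random

# Simulate a year of births at each hospital and count the days on
# which more than 60% of the babies born were boys.
random.seed(1)

def days_over_60_percent(births_per_day: int, days: int = 365) -> int:
    count = 0
    for _ in range(days):
        boys = sum(random.random() < 0.5 for _ in range(births_per_day))
        if boys / births_per_day > 0.60:
            count += 1
    return count

print("Large hospital (45/day):", days_over_60_percent(45))
print("Small hospital (15/day):", days_over_60_percent(15))
```

In a typical run the small hospital records such days about twice as often, around 55 a year versus roughly 25.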

* * *

Another interesting example comes from Poker.

Over short periods of time luck is more important than skill. The more luck contributes to the outcome, the larger the sample you’ll need to distinguish between someone’s skill and pure chance.

David Einhorn explains.

People ask me “Is poker luck?” and “Is investing luck?”

The answer is, not at all. But sample sizes matter. On any given day a good investor or a good poker player can lose money. Any stock investment can turn out to be a loser no matter how large the edge appears. Same for a poker hand. One poker tournament isn’t very different from a coin-flipping contest and neither is six months of investment results.

On that basis luck plays a role. But over time – over thousands of hands against a variety of players and over hundreds of investments in a variety of market environments – skill wins out.

As the number of hands played increases, skill plays a larger and larger role and luck plays less of a role.

* * *

But this goes way beyond hospitals and poker. Baseball is another good example. Over a long season, odds are that the best teams will rise to the top. In the short term, anything can happen. If you look at the standings 10 games into the season, odds are they will not be representative of where things will land after the full 162-game season. In the short term, luck plays too much of a role.

In Moneyball, Michael Lewis writes “In a five-game series, the worst team in baseball will beat the best about 15% of the time.”
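Lewis doesn't give the per-game numbers, but the arithmetic checks out under a simple assumption: if the worst team wins each individual game against the best about 30 percent of the time (an illustrative figure, not from the book), a best-of-five upset lands near 15 percent. A sketch:

```python
from math import comb

# Probability the underdog wins a best-of-five series, assuming it wins
# each game independently with probability p. Summing over a fixed five
# games gives the same answer as modeling the series ending early.
def series_win_prob(p: float, games: int = 5) -> float:
    need = games // 2 + 1  # three wins takes the series
    return sum(comb(games, k) * p**k * (1 - p)**(games - k)
               for k in range(need, games + 1))

print(f"{series_win_prob(0.30):.2f}")  # ~0.16, close to Lewis's 15%
```

Stretch the contest to a 161-game majority (swap in games=161) and the underdog's chance of finishing ahead all but vanishes, which is the whole point about sample size.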

* * *

If you promote people or work with colleagues, you’ll also want to keep this bias in mind.

If you assume that performance at work is some combination of skill and luck, you can easily see that sample size is relevant to the reliability of performance.

Performance sampling works like anything else: the bigger the sample size, the bigger the reduction in uncertainty and the more likely you are to make good decisions.

This has been studied by one of my favorite thinkers, James March. He calls it the false record effect.

He writes:

False Record Effect. A group of managers of identical (moderate) ability will show considerable variation in their performance records in the short run. Some will be found at one end of the distribution and will be viewed as outstanding; others will be at the other end and will be viewed as ineffective. The longer a manager stays in a job, the less the probable difference between the observed record of performance and actual ability. Time on the job increases the expected sample of observations, reduces expected sampling error, and thus reduces the chance that the manager (of moderate ability) will either be promoted or exit.

Hero Effect. Within a group of managers of varying abilities, the faster the rate of promotion, the less likely it is to be justified. Performance records are produced by a combination of underlying ability and sampling variation. Managers who have good records are more likely to have high ability than managers who have poor records, but the reliability of the differentiation is small when records are short.
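A small simulation, my sketch rather than March's, makes the false record effect concrete: give every manager identical ability and short records still manufacture apparent stars and apparent duds:

```python
import random

# 100 managers of identical ability: every project succeeds with
# probability 0.5 for everyone. Short records spread widely, so some
# managers look outstanding and others ineffective purely by chance.
random.seed(7)

def success_rates(n_managers: int, n_projects: int) -> list[float]:
    return [sum(random.random() < 0.5 for _ in range(n_projects)) / n_projects
            for _ in range(n_managers)]

for n_projects in [10, 1000]:
    rates = success_rates(100, n_projects)
    print(f"{n_projects:>4} projects: best {max(rates):.2f}, worst {min(rates):.2f}")
```

With 10 projects each, the “best” manager typically posts a 90 percent success rate and the “worst” around 10 percent, despite identical ability; with 1,000 projects the records converge toward 0.5.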

(I realize promotions are a lot more complicated than I’m letting on. Some jobs, for example, are more difficult than others. It gets messy quickly and that's part of the problem. Often when things get messy we turn off our brains and concoct the simplest explanation we can. Simple but wrong. I’m only pointing out that sample size is one input into the decision. I’m by no means advocating an “experience is best” approach, as that comes with a host of other problems.)

* * *

This bias is also used against you in advertising.

The next time you see a commercial that says “4 out of 5 doctors recommend …,” remember that these results are meaningless without knowing the sample size. Odds are pretty good that the sample size is 5.

* * *

Large sample sizes are not a panacea. Things change. Systems evolve, and faith in past results can be unfounded as well.

The key, at all times, is to think.

This bias leads to a whole slew of things, such as:
– underestimating risk
– overestimating risk
– undue confidence in trends/patterns
– undue confidence in the lack of side effects/problems

The bias from insensitivity to sample size is part of the Farnam Street latticework of mental models.

Making Smart Choices: 8 Keys to Making Effective Decisions


Making decisions is a fundamental life skill. Expecting to make perfect decisions all of the time is unreasonable. When even an ounce of luck is involved, good decisions can have bad outcomes. So our goal should be to raise the odds of making a good decision. The best way to do that is to use a good decision-making process.

Smart Choices: A Practical Guide to Making Better Decisions contains an interesting decision-making framework: PrOACT.

We have found that even the most complex decision can be analyzed and resolved by considering a set of eight elements. The first five—Problem, Objectives, Alternatives, Consequences, and Tradeoffs—constitute the core of our approach and are applicable to virtually any decision. The acronym for these—PrOACT—serves as a reminder that the best approach to decision situations is a proactive one. … The three remaining elements—uncertainty, risk tolerance, and linked decisions—help clarify decisions in volatile or evolving environments.

This framework can help you make better decisions. Of course, sometimes good decisions go wrong. A good decision, however, increases the odds of success.

There are eight keys to effective decision making.

Work on the right decision problem. … The way you frame your decision at the outset can make all the difference. To choose well, you need to state your decision problems carefully, acknowledging their complexity and avoiding unwarranted assumptions and option-limiting prejudices. …

Specify your objectives. … A decision is a means to an end. Ask yourself what you most want to accomplish and which of your interests, values, concerns, fears, and aspirations are most relevant to achieving your goal. … Decisions with multiple objectives cannot be resolved by focusing on any one objective.

Create imaginative alternatives. … Remember: your decision can be no better than your best alternative. …

Understand the consequences. … Assessing frankly the consequences of each alternative will help you to identify those that best meet your objectives—all your objectives. …

Grapple with your tradeoffs. Because objectives frequently conflict with one another, you'll need to strike a balance. Some of this must sometimes be sacrificed in favor of some of that. …

Clarify your uncertainties. What could happen in the future and how likely is it that it will? …

Think hard about your risk tolerance. When decisions involve uncertainties, the desired consequence may not be the one that actually results. A much-deliberated bone marrow transplant may or may not halt cancer. …

Consider linked decisions. What you decide today could influence your choices tomorrow, and your goals for tomorrow should influence your choices today. Thus many important decisions are linked over time. …

Harvard Professor Max Bazerman, who has written extensively on human misjudgment, suggests something very similar to this approach in his book Judgment in Managerial Decision Making when he explains the anatomy of decisions. Before we can fully understand judgment, we have to identify the components of the decision-making process that require it. Here are the six steps that Bazerman argues you should take, either implicitly or explicitly, when applying a rational decision-making process.

1. Define the problem. (M)anagers often act without a thorough understanding of the problem to be solved, leading them to solve the wrong problem. Accurate judgment is required to identify and define the problem. Managers often err by (a) defining the problem in terms of a proposed solution, (b) missing a bigger problem, or (c) diagnosing the problem in terms of its symptoms. Your goal should be to solve the problem, not just eliminate its temporary symptoms.

2. Identify the criteria. Most decisions require you to accomplish more than one objective. When buying a car, you may want to maximize fuel economy, minimize cost, maximize comfort, and so on. The rational decision maker will identify all relevant criteria in the decision-making process.

3. Weight the criteria. Different criteria will vary in importance to a decision maker. Rational decision makers will know the relative value they place on each of the criteria identified. The value may be specified in dollars, points, or whatever scoring system makes sense.

4. Generate alternatives. The fourth step in the decision-making process requires identification of possible courses of action. Decision makers often spend an inappropriate amount of search time seeking alternatives, thus creating a barrier to effective decision making. An optimal search continues only until the cost of the search outweighs the value of added information.

5. Rate each alternative on each criterion. How well will each of the alternative solutions achieve each of the defined criteria? This is often the most difficult stage of the decision-making process, as it typically requires us to forecast future events. The rational decision maker carefully assesses the potential consequences on each of the identified criteria of selecting each of the alternative solutions.

6. Compute the optimal decision. Ideally, after all of the first five steps have been completed, the process of computing the optimal decision consists of (a) multiplying the ratings in step 5 by the weight of each criterion, (b) adding up the weighted ratings across all of the criteria for each alternative, and (c) choosing the solution with the highest sum of weighted ratings.
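Steps 5 and 6 compress nicely into a few lines of code. The criteria, weights, and ratings below are invented for illustration; they echo the car-buying example from step 2:

```python
# Rate each alternative on each criterion, multiply by the criterion
# weights, and choose the highest weighted sum. All numbers invented.
weights = {"fuel economy": 0.5, "cost": 0.3, "comfort": 0.2}

ratings = {  # each alternative scored 1-10 on each criterion
    "Car A": {"fuel economy": 9, "cost": 4, "comfort": 6},
    "Car B": {"fuel economy": 6, "cost": 8, "comfort": 7},
}

scores = {car: sum(weights[c] * r for c, r in rating.items())
          for car, rating in ratings.items()}

print(scores)                              # {'Car A': 6.9, 'Car B': 6.8}
print("Optimal choice:", max(scores, key=scores.get))
```

The near-tie is instructive: small changes in the weights flip the answer, which is why step 3 deserves real thought.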

Rational decision frameworks, such as those suggested above, are a great starting place. On top of that, we need to consider our psychological biases. And keep a decision journal.
