Tag: Halo Effect

Why First Impressions Don’t Matter Much For Experiences

A recent article in the WSJ, “Hidden Ways Hotels Court Guests Faster”, focused on how hotels are trying to dazzle guests with first impressions.

Jeremy McCarthy, a hotel executive, argues this is why “upon arriving to a luxury hotel, you are often greeted in the lobby by a friendly face, an offer to assist with your luggage, and sometimes a welcome beverage or a refreshing chilled towel to help wipe away the stress of travel.”

Research, however, seems to show that, while we remember people by first impressions, we don't really remember experiences the same way. With experiences, we seem to remember the peak moments and how they end. McCarthy writes:

An example of the research that supports this “peak-end” theory is the work on colonoscopy patients done by psychologist Daniel Kahneman. Kahneman found that after a painful colonoscopy treatment, patients would forget about the overall duration of the pain they experienced and would instead remember their experience based on the peak moments of pain and on how it ended.

A patient whose colonoscopy lasted an agonizing 25 minutes, for example (Patient B), would rate the experience better and would happily come back a year later for his follow-up appointment, as long as the treatment ended with less pain. Another patient (Patient A), who only had around 8 minutes of total pain, wouldn’t come back next year because he remembers the pain of how the experience ended.
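The arithmetic behind the peak-end rule is easy to sketch. Here is a minimal illustration with made-up pain ratings (my own hypothetical numbers, not Kahneman's data): remembered pain is approximated as the average of the worst moment and the final moment, while total duration is ignored.

```python
def peak_end_score(pain):
    """Remembered pain under the peak-end rule: the average of the
    single worst moment and the final moment. Duration is ignored."""
    return (max(pain) + pain[-1]) / 2

# Hypothetical per-minute pain ratings (0 = none, 10 = worst).
patient_a = [2, 5, 8, 8]              # short procedure, ends at peak pain
patient_b = [2, 5, 8, 8, 6, 4, 2, 1]  # longer procedure, tapers off

print(sum(patient_a), peak_end_score(patient_a))  # total 23, remembered 8.0
print(sum(patient_b), peak_end_score(patient_b))  # total 36, remembered 4.5
```

Even though Patient B endured more total pain (36 vs. 23), the gentler ending leaves a milder memory (4.5 vs. 8.0), which is why B is the one who comes back next year.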

The implications of this are pretty clear. If you run a hotel, for example, you want to focus more on the departure than the arrival.

I'm left with more questions about this research than answers, so if you know of any good books/blogs/articles on this please pass them along.

Poaching Stars is a Terrible Idea to Improve Performance

In an effort to improve performance we often turn to the simple answer of trying to hire a star from another organization. This sounds like a great idea, is hard to argue with, and offers the promise of an instant performance boost.

In practice, most of the benefits turn out to be illusory.

The question is why?

One reason is that we think of the person as an isolated system when in reality they are not. The surrounding team, culture, and environment can amplify their success.

In his wonderful book, Think Twice: Harnessing the Power of Counterintuition, Michael Mauboussin explains:

A star’s performance relied to some degree on the people, structure, and norms around him—the system. Analyzing results requires sorting the relative contributions of the individual versus the system, something we are not particularly good at. When we err, we tend to overstate the role of the individual.

This mistake is consequential because organizations routinely pay big bucks to lure high performers, only to be sorely disappointed. In one study, a trio of professors from Harvard Business School tracked more than one thousand acclaimed equity analysts over a decade and monitored how their performance changed as they switched firms. Their dour conclusion, “When a company hires a star, the star’s performance plunges, there is a sharp decline in the functioning of the group or team the person works with, and the company’s market value falls.” The hiring organization is let down because it failed to consider systems-based advantages that the prior employer supplied, including firm reputation and resources. Employers also underestimate the relationships that supported previous success, the quality of the other employees, and a familiarity with past processes.

What's happening is a common mistake: we're focusing on an isolated part of a complex adaptive system without understanding how that part contributes to the overall system dynamics.

For more information, read the Harvard Business Review article The Risky Business of Hiring Stars and check out The Right Number of Stars for a Team.

Brand Attachment Through Emotion

There is a ton of psychology at work in Apple's new virtual assistant, Siri, and if it succeeds, switching phones will become much harder.

This:

The Siri group, one of the largest software teams at Apple, fine-tuned Siri's responses in an attempt to forge an emotional tie with its customers. To that end, Siri regularly uses a customer's nickname in responses, as well as those of other important people and places in his or her life.

And this:

For Siri to be really effective, it has to learn a great deal about the user. If it knows where you work and where you live and what kind of places you like to go, it can really start to tailor itself as it becomes an expert on you. This requires a great deal of trust in the institution collecting this data. Siri didn’t have this, but Apple has earned a very high level of trust from its customers.

Read what you've been missing. Subscribe to Farnam Street via Email, RSS, or Twitter.


Two Questions Everyone Asks Themselves When They Meet You

People everywhere differentiate each other by liking (warmth, trustworthiness) and by respecting (competence, efficiency).

Essentially they ask themselves: (1) Is this person warm? and (2) Is this person competent?

The “warmth dimension captures traits that are related to perceived intent, including friendliness, helpfulness, sincerity, trustworthiness and morality, whereas the competence dimension reflects traits that are related to perceived ability, including intelligence, skill, creativity and efficacy.”

“In sum, although both dimensions are fundamental to social perception, warmth judgments seem to be primary, which reflects the importance of assessing other people’s intentions before determining their ability to carry out those intentions.”

Like all perception, social perception reflects evolutionary pressures. In encounters with conspecifics, social animals must determine, immediately, whether the ‘other’ is friend or foe (i.e. intends good or ill) and, then, whether the ‘other’ has the ability to enact those intentions. New data confirm these two universal dimensions of social cognition: warmth and competence. Promoting survival, these dimensions provide fundamental social structural answers about competition and status. People perceived as warm and competent elicit uniformly positive emotions and behavior, whereas those perceived as lacking warmth and competence elicit uniform negativity. People classified as high on one dimension and low on the other elicit predictable, ambivalent affective and behavioral reactions. These universal dimensions explain both interpersonal and intergroup social cognition.



Trust the Evidence, Not Your Instincts

In most workplaces a failure to consider sound evidence inflicts unnecessary damage on “employee well-being and group performance.” But Jeffrey Pfeffer and Robert Sutton argue, in the New York Times, that it doesn't have to be that way:

Consider the issue of incentive pay. Many people believe that paying for performance will work in virtually any organization, so it is used again and again to solve problems — even where evidence shows it is ineffective.

Recently, New York City decided to end a teacher bonus program after three years and $56 million. As The New York Times reported in July, a study found that the effort to link incentive pay to student performance “had no positive effect on either student performance or teachers’ attitudes.”

But that bad news could have been predicted long before spending all that time and money. After all, the failure of similar efforts to improve school performance has been documented for decades.

Here is another example: Research has shown that stable membership is a hallmark of effective work teams. People with more experience, working together, typically communicate and coordinate more effectively.

Although this effect is seen in studies of everything from product development teams to airplane cockpit crews, managers often can’t resist the temptation to rotate people in and out to minimize costs and make scheduling easier.

For example, the National Transportation Safety Board once found that 73 percent of the safety incidents reported on commercial aircraft occur on the first day a new crew flies together.

Taking a look at what works requires re-thinking widely held beliefs:

When Google examined what employees valued most in a manager, technical expertise ranked last among eight qualities. Deemed more crucial were attributes like staying even-keeled, asking good questions, taking time to meet with people and caring about employees’ careers and lives.

Google found that managers who did these things led top-performing teams and had the happiest employees and least turnover. So Google is making many changes in how it selects and coaches managers, devoting particular effort to improving its worst managers.

A word to the wise: pointing out to your boss that the evidence says they are likely to be wrong is not a good career move. You'll have to come up with something more creative. And the evidence says performance won't get you promoted.

Jeffrey Pfeffer and Robert Sutton are professors at Stanford and authors of “Hard Facts, Dangerous Half-Truths, and Total Nonsense: Profiting from Evidence-Based Management.”

A Simple Checklist to Improve Decisions

We owe thanks to the publishing industry. Their ability to take a concept and fill an entire category with a shotgun approach is the reason that more people are talking about biases.

Unfortunately, talk alone will not eliminate them, but it is possible to take steps to counteract them. Reducing biases can make a huge difference in the quality of any decision, and it is easier than you think.

In a recent article for Harvard Business Review, Daniel Kahneman (and others) describe a simple way to detect bias and minimize its effects in the most common type of decisions people make: determining whether to accept, reject, or pass on a recommendation.

The Munger two-step process for making decisions is a more complete framework, but Kahneman's approach is a good way to help reduce biases in our decision-making.

If you're short on time here is a simple checklist that will get you started on the path towards improving your decisions:

Preliminary Questions: Ask yourself

1. Check for Self-interested Biases

  • Is there any reason to suspect the team making the recommendation of errors motivated by self-interest?
  • Review the proposal with extra care, especially for overoptimism.

2. Check for the Affect Heuristic

  • Has the team fallen in love with its proposal?
  • Rigorously apply all the quality controls on the checklist.

3. Check for Groupthink

  • Were there dissenting opinions within the team?
  • Were they explored adequately?
  • Solicit dissenting views, discreetly if necessary.

Challenge Questions: Ask the recommenders

4. Check for Saliency Bias

  • Could the diagnosis be overly influenced by an analogy to a memorable success?
  • Ask for more analogies, and rigorously analyze their similarity to the current situation.

5. Check for Confirmation Bias

  • Are credible alternatives included along with the recommendation?
  • Request additional options.

6. Check for Availability Bias

  • If you had to make this decision again in a year’s time, what information would you want, and can you get more of it now?
  • Use checklists of the data needed for each kind of decision.

7. Check for Anchoring Bias

  • Do you know where the numbers came from? Can there be…
    • …unsubstantiated numbers?
    • …extrapolation from history?
    • …a motivation to use a certain anchor?
  • Reanchor with figures generated by other models or benchmarks, and request new analysis.

8. Check for Halo Effect

  • Is the team assuming that a person, organization, or approach that is successful in one area will be just as successful in another?
  • Eliminate false inferences, and ask the team to seek additional comparable examples.

9. Check for Sunk-Cost Fallacy, Endowment Effect

  • Are the recommenders overly attached to a history of past decisions?
  • Consider the issue as if you were a new CEO.

Evaluation Questions: Ask about the proposal

10. Check for Overconfidence, Planning Fallacy, Optimistic Biases, Competitor Neglect

  • Is the base case overly optimistic?
  • Have the team build a case taking an outside view; use war games.

11. Check for Disaster Neglect

  • Is the worst case bad enough?
  • Have the team conduct a premortem: Imagine that the worst has happened, and develop a story about the causes.

12. Check for Loss Aversion

  • Is the recommending team overly cautious?
  • Realign incentives to share responsibility for the risk or to remove risk.
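The value of a checklist like this is that every recommendation gets walked through the same steps. As a rough sketch (my own structuring, not from the HBR article), the checks can be treated as data, with each entry pairing a question with its follow-up action:

```python
# A minimal sketch of the bias checklist as (bias, question, action) rows.
# Names and wording are condensed from the list above; the structure is
# my own illustration, not the authors' format.
CHECKLIST = [
    ("Self-interested bias", "Any reason to suspect motivated errors?",
     "Review the proposal with extra care for overoptimism."),
    ("Affect heuristic", "Has the team fallen in love with its proposal?",
     "Rigorously apply all quality controls on the checklist."),
    ("Groupthink", "Were dissenting opinions explored adequately?",
     "Solicit dissenting views, discreetly if necessary."),
    # ...the remaining nine checks follow the same three-field shape.
]

def review(concerns):
    """Given {bias_name: True/False} flags for 'is this a concern?',
    return the follow-up actions the decision maker should take."""
    return [action for bias, _question, action in CHECKLIST
            if concerns.get(bias)]

print(review({"Groupthink": True}))
# -> ['Solicit dissenting views, discreetly if necessary.']
```

Encoding the checks this way makes the review repeatable: the questions are asked in the same order every time, and each flagged concern maps to a concrete next step rather than a vague worry.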

If you're looking to dramatically improve your decision-making, here is a great list of books to get started:

Nudge: Improving Decisions About Health, Wealth, and Happiness by Richard H. Thaler and Cass R. Sunstein

Think Twice: Harnessing the Power of Counterintuition by Michael J. Mauboussin

Think Again: Why Good Leaders Make Bad Decisions and How to Keep It from Happening to You by Sydney Finkelstein, Jo Whitehead, and Andrew Campbell

Predictably Irrational: The Hidden Forces That Shape Our Decisions by Dan Ariely

Thinking, Fast and Slow by Daniel Kahneman

Judgment and Managerial Decision Making by Max Bazerman