Mental Model: Multiplicative Systems

Let’s run through a little elementary arithmetic. Try to do it in your head: What’s 1,506,789 x 9,809 x 5.56 x 0?

Hopefully you didn’t have to whip out the old TI-84 to solve that one. It’s a zero.

This leads us to a mental model called Multiplicative Systems, and understanding it gets you to the heart of a lot of issues.

The Weakest Link in the Chain

Suppose you were trying to become the best basketball player in the world. You’ve got the following things going for you:

1. God-given talent. You’re 6’9″, quick, skillful, can leap out of the building, and have been the best player in a competitive city since you can remember.

2. Support. You live in a city that reveres basketball and you’re raised by parents who care about your goals.

3. A proven track record. You were the player of the year in a very competitive Division 1 college conference.

4. A clear path forward. You’re selected as the second overall pick in the NBA Draft by the Boston Celtics.

Sounds like you have a shot, right? As good a shot as anyone could have. What odds would you give this person of becoming one of the better players in the world? Pretty high?

Let’s add one more piece of information:

5. You’ve developed a cocaine habit.

What are your odds now?

This little exercise isn’t an academic one; it’s the sad case of Leonard “Len” Bias, a young basketball prodigy who died of a cocaine overdose after being selected to play in the NBA for the Boston Celtics in 1986. Many call Bias the best basketball player who never played professionally.

What the story of Len Bias illustrates so well is the truth that anything times zero must still be zero, no matter how large the string of numbers preceding it. In some facets of life, all of your hard work, dedication to improvement, and good fortune may still be worth nothing if there is a weak link in the chain.

Something all engineers learn very early on is that a system is no stronger than its weakest component. Take, for example, the case of a nuclear power plant. We have a very good understanding of how to make the nuclear power plant quite safe, nearly indestructible, which it must be considering the magnitude of a failure.

But in reality, what is the weakest link in the chain for most nuclear power plants? The human beings running them. We’re part of the system! And since we’ve yet to perfect the human being, we have yet to perfect the nuclear power plant. How could it be otherwise?

An additive system does not work this way. In an additive system, each component adds to the others to create the final outcome. Going back to our arithmetic, let’s say our equation was additive rather than multiplicative: 1,506,789 plus 9,809 plus 5.56 plus 0. The answer is 1,516,603.56 — still a pretty big number!
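
To see the contrast at a glance, here is a tiny Python sketch of the two systems, using just the numbers from the example above:

    from math import prod

    components = [1_506_789, 9_809, 5.56, 0]

    print(sum(components))   # 1516603.56 -- the additive system barely notices the zero
    print(prod(components))  # 0.0        -- one zero wipes out the multiplicative system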

Think of an additive system as something like a great Thanksgiving dinner. You’ve got a great turkey, some whipped potatoes, a mass of stuffing, and a lump of homemade cranberry sauce, and you’re hanging with your family. Awesome!

Let’s say the potatoes get burnt in the oven, and they’re inedible. Problem? Sure, but dinner still works out just fine. Someone shows up with a pie for dessert? Great! But it won’t change the dinner all that much.

The interaction of the parts makes the dinner range from good to great. Take some parts away or add new ones in, and you get a different outcome, but not a binary, win/lose one. The meal still happens. Additive systems and multiplicative systems react differently when components are added or taken away.

Most businesses, for example, operate in multiplicative systems, but too often they act as if they were additive: Ever notice how a business will pile one feature after another onto its product but fail at basic customer service, so you leave, never to return? That’s a business that thinks it’s in an additive system when it really needs to resolve the big fat zero in the middle of the equation instead of adding more stuff.

***

Financial systems are, of course, multiplicative. General Motors, founded in 1908 by William Durant and C.S. Mott, came to dominate the American car market to the tune of 50% market share through a series of brilliant innovations and management practices, and was for many years the dominant and most admired corporation in America. Even today, after more than a century of competition, no American carmaker produces more automobiles than General Motors.

And yet, the original shareholders of GM ended up with a zero in 2008 as the company went into bankruptcy after years of financial mismanagement. It didn’t matter that they had several generations of leadership: all of it comes to naught in a multiplicative system.

***

On a smaller scale, take the case of a young corporate climber who feels they just can’t get ahead. They seem to have all their ducks in a row: great resume, great background, great experience…the problem is that they suck at dealing with other people and treat others like stepping stones. That’s a zero that can negate all of the big numbers preceding it. The rest doesn’t matter.

And so we arrive at the “must be true” conclusion that understanding when you’re in an additive system versus a multiplicative system, and which components need absolute reliability for the system to work, is a critical model to have in your head. Multiplicative thinking is a model related to the greater idea of systems thinking, another mental model well worth acquiring.

***

Multiplicative Systems is another Farnam Street Mental Model.  


What’s So Significant About Significance?


One of my favorite studies of all time took the 50 most common ingredients from a cookbook and searched the literature for a connection to cancer: 72% had a study linking them to increased or decreased risk of cancer. (Here’s the link for the interested.)

Meta-analyses (studies examining multiple studies) quashed the effect pretty seriously, but how many of those single studies were probably reported on in multiple media outlets, permanently causing changes in readers’ dietary habits? (We know from studying juries that people are often unable to “forget” things that are subsequently proven false or misleading — misleading data is sticky.)

The phrase “statistically significant” is one of the more unfortunately misleading ones of our time. The word significant in the statistical sense — meaning distinguishable from random chance — does not carry the same meaning in common parlance, in which we mean distinguishable from something that does not matter. We’ll get to what that means.

Confusing the two gets at the heart of a lot of misleading headlines and it’s worth a brief look into why they don’t mean the same thing, so you can stop being scared that everything you eat or do is giving you cancer.

***

The term statistical significance is used to denote when an effect is found to be extremely unlikely to have occurred by chance. In order to make that determination, we have to propose a null hypothesis to be rejected. Let’s say we propose that eating an apple a day reduces the incidence of colon cancer. The “null hypothesis” here would be that eating an apple a day does nothing to the incidence of colon cancer — that we’d be equally likely to get colon cancer if we ate that daily apple.

When we analyze the data of our study, we’re technically not looking to say “Eating an apple a day prevents colon cancer” — that’s a bit of a misconception. What we’re actually doing is an inversion: we want the data to provide us with sufficient weight to reject the idea that apples have no effect on colon cancer.

And even when that happens, it’s not an all-or-nothing determination. What we’re actually saying is “It would be extremely unlikely for the data we have, which shows a daily apple reduces colon cancer by 50%, to have popped up by chance. Not impossible, but very unlikely.” The world does not quite allow us to have absolute conviction.

How unlikely? The currently accepted standard in many fields is 5% — there is a less than 5% chance the data would come up this way randomly. That immediately tells you that something like 1 in 20 “significant” results could be a random fluke, but alas, that is where we’re at. (The problems with the 5% p-value threshold, and with the associated practice of p-hacking, have been the subject of some intense debate, but we won’t deal with that here.)

We’ll get to why “significance can be insignificant,” and why that’s so important, in a moment. But let’s make sure we’re fully on board with the importance of sorting chance events from real ones with another illustration, this one outlined by Jordan Ellenberg in his wonderful book How Not to Be Wrong. Pay close attention:

Suppose we’re in null hypothesis land, where the chance of death is exactly the same (say, 10%) for the fifty patients who got your drug and the fifty who got [a] placebo. But that doesn’t mean that five of the drug patients die and five of the placebo patients die. In fact, the chance that exactly five of the drug patients die is about 18.5%; not very likely, just as it’s not very likely that a long series of coin tosses would yield precisely as many heads as tails. In the same way, it’s not very likely that exactly the same number of drug patients and placebo patients expire during the course of the trial. I computed:

13.3% chance equally many drug and placebo patients die
43.3% chance fewer placebo patients than drug patients die
43.3% chance fewer drug patients than placebo patients die

Seeing better results among the drug patients than the placebo patients says very little, since this isn’t at all unlikely, even under the null hypothesis that your drug doesn’t work.

But things are different if the drug patients do a lot better. Suppose five of the placebo patients die during the trial, but none of the drug patients do. If the null hypothesis is right, both classes of patients should have a 90% chance of survival. But in that case, it’s highly unlikely that all fifty of the drug patients would survive. The first of the drug patients has a 90% chance; now the chance that not only the first but also the second patient survives is 90% of that 90%, or 81%–and if you want the third patient to survive as well, the chance of that happening is only 90% of that 81%, or 72.9%. Each new patient whose survival you stipulate shaves a little off the chances, and by the end of the process, where you’re asking about the probability that all fifty will survive, the slice of probability that remains is pretty slim:

(0.9) x (0.9) x (0.9) x … fifty times! … x (0.9) x (0.9) = 0.00515 …

Under the null hypothesis, there’s only one chance in two hundred of getting results this good. That’s much more compelling. If I claim I can make the sun come up with my mind, and it does, you shouldn’t be impressed by my powers; but if I claim I can make the sun not come up, and it doesn’t, then I’ve demonstrated an outcome very unlikely under the null hypothesis, and you’d best take notice.
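
If you’d like to check Ellenberg’s numbers yourself, here is a minimal sketch (my illustration, not the book’s; it assumes the scipy library is available):

    from scipy.stats import binom

    n, p_death = 50, 0.10  # fifty patients per group, 10% chance of death under the null

    # Chance that exactly five of the fifty drug patients die
    print(binom.pmf(5, n, p_death))  # ~0.185

    # Chance the drug and placebo groups suffer equally many deaths
    tie = sum(binom.pmf(k, n, p_death) ** 2 for k in range(n + 1))
    print(tie)             # ~0.133
    print((1 - tie) / 2)   # ~0.433 each way, by symmetry

    # Chance that all fifty drug patients survive: 0.9 raised to the 50th power
    print(binom.pmf(0, n, p_death))  # ~0.00515, about 1 in 200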

So you see, all this null hypothesis stuff is pretty important because what you want to know is if an effect is really “showing up” or if it just popped up by chance.

A final illustration should make it clear:

Imagine you were flipping coins with a particular strategy of getting more heads, and after 30 flips you had 18 heads and 12 tails. Would you call it a miracle? Probably not — you’d realize immediately that it’s perfectly possible for an 18/12 ratio to happen by chance. You wouldn’t write an article in U.S. News and World Report proclaiming you’d figured out coin flipping.

Now let’s say instead you flipped the coin 30,000 times and got 18,000 heads and 12,000 tails… well, then your case for statistical significance would be pretty tight. It would be next to impossible to get that result by chance — your strategy must have something to it. The null hypothesis of “My coin flipping technique is no better than the usual one” would be easy to reject! (The p-value here would be orders of magnitude less than 5%, by the way.)
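
Here is the same intuition as a quick sketch, again assuming scipy; these are one-sided p-values for a fair coin:

    from scipy.stats import binom

    # Chance a fair coin shows at least 18 heads in 30 flips
    print(binom.sf(17, 30, 0.5))          # ~0.18 -- happens by luck all the time

    # Chance a fair coin shows at least 18,000 heads in 30,000 flips
    print(binom.sf(17_999, 30_000, 0.5))  # ~1e-262 -- effectively never by chance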

That’s what this whole business is about.

***

Now that we’ve got this idea down, we come to the big question that statistical significance cannot answer: Even if the result is distinguishable from chance, does it actually matter?

Statistical significance cannot tell you whether the result is worth paying attention to — even if you get the p-value down to a minuscule number, increasing your confidence that what you saw was not due to chance. 

In How Not to Be Wrong, Ellenberg provides a perfect example:

A 1995 study published in a British journal indicated that a new birth control pill doubled the risk of venous thrombosis (a potentially deadly blood clot) in its users. Predictably, 1.5 million British women freaked out, and some meaningfully large percentage of them stopped taking the pill. In 1996, 26,000 more babies were born than the previous year, and there were 13,600 more abortions. Whoops!

So what, right? Lots of mothers’ lives were saved, right?

Not really. The initial probability of a woman getting a venous thrombosis on any old birth control pill was about 1 in 7,000, or roughly 0.014%. That means that the “Killer Pill,” even if it was indeed doubling thrombosis risk, only increased that risk to 2 in 7,000, or roughly 0.029%! Is that worth rearranging your life for? Probably not.
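
The back-of-the-envelope arithmetic, using the 1-in-7,000 baseline quoted above:

    baseline = 1 / 7_000    # absolute risk of thrombosis on the old pill
    relative_risk = 2.0     # the headline: "the risk has doubled"

    new_risk = baseline * relative_risk
    print(f"{baseline:.4%} -> {new_risk:.4%}")              # 0.0143% -> 0.0286%
    print(f"absolute increase: {new_risk - baseline:.4%}")  # ~0.014 percentage points

Doubling a tiny number still leaves a tiny number.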

Ellenberg makes the excellent point that, at least in the case of health, the null hypothesis is unlikely to be right in most cases! The body is a complex system — of course what we put in it affects how it functions in some direction or another. It’s unlikely to be absolute zero.

But numerical and scale-based thinking, indispensable for anyone looking to not be a sucker, tells us that we must distinguish between small and meaningless effects (like the connection between almost all individual foods and cancer so far) and real ones (like the connection between smoking and lung cancer).

And now we arrive at the problem of “significance” — even if an effect is really happening, it still may not matter!  We must learn to be wary of “relative” statistics (i.e., “the risk has doubled”), and look to favor “absolute” statistics, which tell us whether the thing is worth worrying about at all.

So we have two important ideas:

A. Just like coin flips, many results are perfectly possible by chance. We use the concept of “statistical significance” to figure out how likely it is that the effect we’re seeing is real and not just a random illusion, like seeing 18 heads in 30 coin tosses.

B. Even if it is really happening, it still may be unimportant – an effect so insignificant in real terms that it’s not worth our attention.

These effects should combine to raise our level of skepticism when hearing about groundbreaking new studies! (A third and equally important problem is the fact that correlation is not causation, a common problem in many fields of science including nutritional epidemiology. Just because x is associated with y does not mean that x is causing y.)

Tread carefully and keep your thinking cap on.

***

Still Interested? Read Ellenberg’s great book to get your head working correctly, and check out our posts on Bayesian thinking, another very useful statistical tool, and learn a little about how we distinguish science from pseudoscience.

The Many Ways Our Memory Fails Us (Part 1)

(Purchase a copy of the entire 3-part series in one sexy PDF for $3.99)

***

Recently, we discussed some of the net advantages of our faulty, but incredibly useful, memory system. Thanks to Harvard’s brilliant memory-focused psychologist Daniel Schacter, we know not to be too harsh in judging its flaws. The system we’ve been endowed with, on the whole, works at its intended purpose, and a different one might not be a better one.

It isn’t optimal though, and since we’ve given it a “fair shake”, it is worth discussing where the errors actually lie, so we can work to improve them, or at least be aware of them.

In his fascinating book, Schacter lays out seven broad areas in which our memory regularly fails us. Let’s take a look at them so we can better understand ourselves and others, and maybe come up with a few optimal solutions. Perhaps the most important lesson will be that we must expect our memory to be periodically faulty, and take that into account in advance.

We’re going to cover a lot of ground, so this one will be a multi-parter. Let’s dig in.

Transience

The first regular memory error is called transience. This is one we’re all quite familiar with, but sometimes forget to account for: The forgetting that occurs with the passage of time. Much of our memory is indeed transient — things we don’t regularly need to recall or use get lost with time.

Schacter gives an example of the phenomenon:

On October 3, 1995, the most sensational criminal trial of our time reached a stunning conclusion: a jury acquitted O.J. Simpson of murder. Word of the not-guilty verdict spread quickly, nearly everyone reacted with either outrage or jubilation, and many people could talk about little else for weeks or days afterward. The Simpson verdict seemed like just the sort of momentous event that most of us would always remember vividly: how we reacted to it, and where we were when we heard the news.

Can you recall how you found out that Simpson had been acquitted? Chances are that you don’t remember, or that what you remember is wrong. Several days after the verdict, a group of California undergraduates provided researchers with detailed accounts of how they learned about the jury’s decision. When the researchers probed students’ memories again fifteen months later, only half recalled accurately how they found out about the decision. When asked again nearly three years after the verdict, less than 30 percent of students’ recollections were accurate; nearly half were dotted with major errors.

Soon after something happens, particularly something meaningful or impactful, we have a pretty accurate recollection of it. But the accuracy of that recollection declines on a curve over time — quickly at first, then slowing down. We go from remembering specifics to remembering the gist of what happened. (Again, on average — some detail is often left intact.) As the Simpson trial example shows, even in the case of a very memorable event, transience is high. Less memorable events are forgotten almost entirely.

What we typically do later on is fill in specific details of a specific event with what typically would happen in that situation. Schacter explains:

Try to answer in detail the following three questions: What do you do during a typical day at work? What did you do yesterday? And what did you do on that day one week earlier? When twelve employees in the engineering division of a large office-product manufacturer answered these questions, there was a dramatic difference in what they recalled from yesterday and a week earlier. The employees recalled fewer activities from a week ago than yesterday, and the ones they did recall from a week earlier tended to be part of a “typical” day. Atypical activities — departures from the daily script — were remembered much more frequently after a day than after a week. Memory after a day was close to a verbatim record of specific events; memory after a week was closer to a generic description of what usually happens.

So when we need to recall a memory, we tend to reconstruct as best as we can, starting with whatever “gist” is left over in our brains, and filling in the details by (often incorrectly) assuming that particular event was a lot like others. Generally, this is a correct assumption. There’s no reason to remember exactly what you ate last Thanksgiving, so turkey is a pretty reliable bet. Occasionally, though, transience gets us in trouble, as anyone who’s forgotten a name they should have remembered can attest.

How do we help solve the issue of transience?

Obviously, one easy solution, if it’s something we wish to remember specifically, and in an unaltered form, is to record it as specifically as possible and as soon as possible. That is the optimal solution, for time begins acting immediately to make our memories vague.

Another idea is visual imagery. The idea of using visual mnemonics is popular in the memory-improvement game; in other words, associating parts of a hoped-for memory with highly vivid imagery (an elephant squashing a clown!), which can be easily recalled later. Greek orators were famous for the technique.

The problem is that almost no one uses this on a day-to-day basis, because it’s very cognitively demanding. You must go through the process of making interesting and evocative associations every time you want to remember something — there’s no “general memory improvement” going on, which is what people are really interested in: a way for all future memories to be more effectively encoded.

Another approach — associating and tying something you wish to remember with something else you already know to increase its availability later on — is also useful, but as with visual imagery, must be used each and every time.

In fact, so far as we can tell, the only “general memory improver” available to us is to create better habits of association — attaching vivid stories, images, and connections to things — the very habits we talk about frequently when we discuss the mental model approach. It won’t happen automatically.

Absent-Mindedness

The second memory failure is closely related to transience, but a little different in practice. Whereas transience entails remembering something that then fades, absent-mindedness is a process whereby the information is never properly encoded, or is simply overlooked at the point of recall.

Failed encoding explains phenomena like regularly misplacing our keys or glasses: The problem is not that the information faded, it’s that it never made it from our working memory into our long-term memory. This often happens because we are distracted or otherwise not paying attention at the moment of encoding (e.g., when we take our glasses off).

Interestingly enough, although divided attention can prevent us from retaining particulars, we still may encode some basic familiarity: 

Familiarity entails a more primitive sense of knowing that something has happened previously, without dredging up particular details. In [a] restaurant, for example, you might have noticed at a nearby table someone you are certain you have met previously despite failing to recall such specifics as the person’s name or how you know her. Laboratory studies indicate that dividing attention during encoding has a drastic effect on subsequent recollection, and has little or no effect on familiarity.

This phenomenon probably happens because divided attention prevents us from elaborating on the particulars that are necessary for subsequent recollection, but allows us to record some rudimentary information that later gives rise to a sense of familiarity.

Schacter also points out something that older people might take solace in: aging produces a cognitive effect similar to divided attention. The brain’s declining cognitive resources mirror the “split attention” problem, which is why older people find themselves misplacing their keys or checkbook more and more often.

A related phenomenon to this poor encoding problem is change-blindness — failing to see differences in objects or scenes unfolding over time. Similar to the “slowly boiling frog” issue most of us are familiar with, change-blindness causes us to fail to see subtle change. It is closely related to the famous Invisible Gorilla problem (strictly, a demonstration of inattentional blindness), made vivid by Daniel Simons and Christopher Chabris.

In fact, in another experiment, Simons was able to show that even in a real-life conversation, he could swap out one man for another in many instances without the conversational partner even noticing! Magicians and con-men regularly use this to fool and astonish.

What’s happening is shallow encoding — similar to the transience problem, we often encode only a superficial level of information related to what’s happening in front of our face, even when talking to a real person. Thus, subtly changing details are not registered because they were never encoded in the first place! (Sherlock Holmes made a career of countering this natural tendency by being super-observant.)

Generally, this is fine. As a whole, the system serves us well. But the instances where it doesn’t can get us into trouble.

***

This brings up the problem of absent-mindedness in what psychologists call prospective memory — remembering something you need to do in the future. We’re all familiar with situations when we forget to do something we clearly “told ourselves” we needed to remember.

The typical antidote is using cues to help us remember: An event-based prospective memory goes like this: “When you see Harry today, tell him to call me.” A time-based prospective memory goes like this: “At 11PM, take the cookies out of the oven.”

It doesn’t always work, though. Time-based prospective memory is the worst of all: We’re not consistently good at remembering that “11PM = cookies” because other stuff will also be happening at 11PM! A time-based cue is insufficient.

For the same reason, an event-based cue will also fail to work if we’re not careful:

Consider the first event-based prospective memory. Frank has asked you to tell Harry to call him, but you have forgotten to do so. You indeed saw Harry in the office, but instead of remembering Frank’s message you were reminded of the bet you and Harry made concerning last night’s college basketball championship, gloating for several minutes over your victory before settling down to work.

“Harry” carries many associations other than “Tell him something for Frank.” Thus, we’re not guaranteed to recall it in the moment.

This knowledge allows us to construct an optimal solution to the prospective memory problem: Specific, distinctive cues that call to mind the exact action needed, at the time it is needed. All elements must be in place for the optimal solution.

Post-it notes with explicit directions put in an optimal place (somewhere a post-it note would not usually be found) tend to work well. A specific reminder on your phone that pops up exactly when needed will work.  As Schacter puts it, “The point is to transfer as many details as possible from working memory to written reminders.” Be specific, make it stand out, make it timely. Hoping for a spontaneous reminder to work means that, some percentage of the time, we will certainly commit an absent-minded error. It’s just the way our minds work.

***

Let’s pause there for now. In our next post on memory, we’ll cover the sins of Blocking and Misattribution, and some potential solutions. We recommend re-reading Part 1 at that time, and then Parts 1 and 2 when Part 3 comes out. One reliable fact about memory is that repeated exposure is nearly always a good idea. In the meantime, try checking out the book in its entirety, if you want to read ahead.

Mental Model: Bias from Envy and Jealousy

“It is not greed that drives the world,
but envy.”
— Warren Buffett

***

It is a fact of life that we are not equal. Not biologically, not culturally.

Some inequities come from flawed governing systems, but most are simply due to luck — the vagaries of life. Some of us are born healthier, prettier and smarter than others, some will encounter opportunities to become extremely wealthy, and some will simply be born at the right place and time. Just as many will be born without such good fortune.

It’s difficult to imagine that these differences will not matter in our everyday interactions. When you consider how natural the differences among us are, you quickly realize how powerful and frequent an influence the bias from envy and jealousy can be on world affairs. It’s built deeply into the human condition, from birth.

The concept of jealousy is as old as modern humanity itself and has permeated our culture worldwide. We are advised not to brag too much, for it can evoke feelings of envy and jealousy in others. Christianity, Hinduism, Islam and other religions all have at least one cautionary tale about the destructive consequences of being captivated by these emotions as well as the dangers of being the one who is envied.

The Stoics knew all about envy and warned against its consequences constantly. Seneca described a wise man as one who is “Content with his lot, whatever it be, without wishing for what he has not …”

Tales of envy extend as far back as ancient times.

In one of the oldest recorded myths, the tall, slender and handsome Egyptian God Osiris marries his beautiful sister and brings civilization and prosperity to Egypt and the world. However, Osiris also has an ugly younger brother Seth who hates him. Seth envies Osiris for his attractiveness, power and success. Seth’s wife becomes so attracted to Osiris that she tricks him into sleeping with her and bears Osiris’s child. Unable to deal with his envy and jealousy, Seth traps and kills Osiris.

Even though the myth is several thousand years old, the problems caused by envy and jealousy can be just as real and destructive today. Avoiding these feelings, and avoiding becoming a victim of them, is an important reason to examine them a little closer.

The Two Types of Envy

It has been said that there are two types of envy – a good type and a bad type.

The first type is the feeling of inferiority that motivates a person to improve herself. This bias exerts its influence by framing the success of others as a learning opportunity for ourselves. Think about watching an inspirational movie, or reading a book about an inspirational figure, someone who you feel dwarfs your own capabilities and accomplishments. Frequently, our envy leads us to imitate that hero in a quest for self-improvement.

The other type, though, is malicious envy, which motivates the envious to take good things away from others. For Aristotle, the evil of malicious envy lay in its desire to lessen the good in the world and to experience joy at another’s misfortune (also called Schadenfreude, which, mysteriously enough, has no equivalent word in English).

To the malicious envier, ridding oneself of envy requires taking away from the other: the beautiful car or house should be stolen or damaged, the virtuous person corrupted or killed, the beautiful face ruined or covered. The malicious envier believes that those things should be his rather than theirs. He, after all, deserves them more.

Lord Chesterfield once said: “People hate those who make them feel their own inferiority.”

This can be true. While the envied need not cause the deprivation of the other, the envier may still experience anger or resentment, a sense of unfairness, which may lead to feelings of hate. The relation between envy and hate is pretty close, if you observe the world closely.


“Envy is pain at the good fortune of others.
We envy those who are near us in time, place, age or reputation.”
— Aristotle

There are two basic determinants of the type of envy that will be experienced: the degree of belief that one has been treated unfairly and the belief that one’s disadvantage is one’s own fault.

Common sense, and data presented by Peter Salovey in his book The Psychology of Jealousy and Envy, suggest that the second determinant, one’s belief in one’s own shortcomings, plants the motivation for improvement.

The belief that one has been treated unfairly has the opposite effect – it results in feelings of anger and resentment. You become a negatively coiled spring.

Social Comparison

“Injustice is relatively easy to bear, it is justice that hurts.”
— Henry Louis Mencken

At the heart of envy is social comparison, which is a powerful influence on our self-concept. Think about it: much of our self-definition comes from comparison with others. We can’t define ourselves as great singers if there is no one around who sings worse than we do. Qualities like intelligence, beauty and skill are relative, and thus when we compare poorly with our peers, our self-esteem suffers. Judith Rich Harris argues that this is a core part of our personality development.

A wounded self-esteem is a good first step towards envy. We experience envy when the quality we feel inferior about threatens our self-concept. We may not even be aware that we are lacking a particular quality, but the object of our envy heightens our awareness of our deprivation.

Think about it this way: Do you feel envy when you see a great javelin thrower at the Olympics? Probably not, because, for most of us, success at javelin throwing isn’t a core part of our self-concept. But let’s say you were a competitive javelin thrower — might you feel a little envy if you saw someone much better than you competing at the Olympics?

Thus, envy of others is always a reflection of something we feel about ourselves. We’re not rich enough, or smart enough, or beautiful enough; we don’t have enough possessions, enough attention, enough success.

Jealousy

If there is a way to define jealousy in its narrowest scope, it is probably the feeling of anxious insecurity that follows a perceived threat to a relationship which provides important attention. Perceiving such a threat makes a person feel insecure about the status of the relationship, and thus also about the aspects of the self that are sustained by it.

We do not become jealous when our partners die or move across the country or quit the relationship without getting into a new one. What is always true for jealousy, unlike envy, is that it involves a triangle of relationships between the self, the partner and the rival. Therefore, when talking about jealousy it is important to realize its key characteristic — the threat of losing something to someone else.

One hypothesis is that at the heart of feelings of jealousy lies our need to feel needed. This need exists because relationships define certain aspects of who we are.

We like to think of ourselves as sexually attractive, funny or otherwise worthy persons. However, when there is no one to be funny with or no one who is attracted to us, our self-definition of funny or attractive dissipates. We need the others not only to reaffirm these aspects, but also to create them. Humans are deeply social creatures — how we feel about ourselves has to do with our interactions with others. There are no personality traits in a vacuum.


Jealousy is not limited to romantic relationships — jealousy exists between siblings, co-workers and even friends. Yet there is a reason why we get so caught up in romantic jealousy.

Someone stealing away our chess partner’s time does not threaten our self-concept nearly as much as someone stealing away our girlfriend’s time. We have a biological need for a romantic other that eclipses our need for a chess partner; the feelings of deprivation are intensified.

Children can also experience intense jealousy. The most important relationship for a child is that with his parents, which is why sibling rivalry can sometimes be so fierce. Naturally, as the child reaches adolescence and develops serious friendships and romantic relationships, the sibling jealousy tends to dissipate.

***

There are several notable commonalities and differences between jealousy and envy.

With both envy and jealousy, we experience a loss of self-esteem stemming from social comparison. In the case of envy the loss comes from our self-appraisal, whereas in the case of jealousy it comes from the appraisal by others. Therefore, when experiencing jealousy we are often left wondering what it is that the other person finds in our rival, whereas in the case of envy we know exactly what we’re missing.

Another common quality of both envy and jealousy is how extreme they can be in modern life. Take, for instance, the ongoing debates about employee and CEO salaries. In many workplaces, this has resulted in salary non-disclosure policies written into job contracts. Still, when compensation data is leaked and some perceived inequality is discovered, there is often outrage.

These effects are observed in workplaces as diverse as universities, investment banks, corporations and law firms. In order to keep the air clear of envy and jealousy, numerous firms and government agencies have gone as far as opting for the same base compensation per seniority level, regardless of employee contribution. Berkshire Hathaway chooses not to disclose the compensation of most of its top people for fear of creating organization-wide envy, and CEO Warren Buffett credits his ridiculously low salary with keeping envy of his success to a minimum.

***

An important question remains: How should we deal with envy at a personal level?

There are three ways to overcome envy.

The first is to focus on the differences between you and the other person, rather than the similarities. Examine the situation — you are not as alike as you think you are.

The second is more difficult, but we certainly find we can do it in other contexts, as discussed above. Shape your malicious envy into a drive to improve and learn. Every person we envy has some virtue we can learn from.

Thirdly, envy can be avoided (over time) with simple repetitive denial. Don’t let yourself become envious of others’ deserved success. Stop the feeling in its tracks, if you can. As for undeserved success, remind yourself that the world is not a fair place that owes us what we wish. Remind yourself that the success of others does not reflect on you and does not take away from you. Envy is a somewhat childish emotion, one that hurts us as we get older.

Jealousy is more difficult to overcome since we are not fully in control of the needs or perceptions of others. There is little we can do to make others appreciate us when they don’t, but we can learn to accept that we will not always be everything for everyone. Just as with envy – jealousy can be an important trigger for increased self-awareness as well as positive change. After all, a little jealousy can be a good thing, for we may be reminded to appreciate what we have.

The sins of jealousy and envy drive huge swaths of human behavior, but if we work to understand them and see them in ourselves and others, they don’t have to drive ours.

***

“Man will do many things to get himself loved;
he will do all things to get himself envied.”
— Mark Twain

Envy is a great cause of human suffering. Charlie Munger points out why: “Envy is a really stupid sin because it’s the only one you could never possibly have any fun at. There’s a lot of pain and no fun. Why would you want to get on that trolley?”

Why indeed.

While envy and jealousy are powerful drivers of behavior on their own, they become more powerful when mixed with ego, greed, and fear. These emotions can weigh the mind down and dramatically diminish our ability to think with our full mental capacity. But they can also hone our focus; these emotions have motivated people to do great things.

And in case you’re wondering how you can avoid being the source of envy for others, Aristotle had an answer: “The best way to avoid envy is to deserve the success you get.”

***

Still Interested? Check up on some other mental models and biases.


Daniel Pink on Incentives and the Two Types of Motivation

Motivation is a tricky, multifaceted thing. How do we motivate people to become the best they can be? How do we motivate ourselves? Sometimes when we are running towards a goal we suddenly lose steam and peter out before we cross the finish line. Why do we lose our motivation partway to achieving our goal?

Dan Pink wrote an excellent book on motivation called Drive: The Surprising Truth About What Motivates Us. We’ve talked about the book before but it’s worth going into a bit more detail.

When Pink discusses motivation he breaks it into two specific types: extrinsic and intrinsic.

Extrinsic motivation is driven by external forces such as money or praise. Intrinsic motivation comes from within and can be as simple as the joy one feels after accomplishing a challenging task. Pink also describes two distinctly different types of tasks: algorithmic and heuristic. An algorithmic task is one in which you follow a set of instructions down a defined path that leads to a single conclusion. A heuristic task has no instructions or defined path; one must be creative and experiment with possibilities to complete the task.

As you can see, the two types of motivation and the two types of task are quite different.

Let’s look at how they play against each other depending on what type of reward is offered.

Baseline Rewards

Money was once thought to be the best way to motivate an employee. If you wanted someone to stay with your company or to perform better you simply had to offer financial incentives. However, the issue of money as a motivator has become moot in many sectors. If you are a skilled worker you will quite easily be able to find a job in your desired salary range. Pink puts it succinctly:

Of course the starting point for any discussion of motivation in the workplace is a simple fact of life: People have to earn a living. Salary, contract payments, some benefits, a few perks are what I call “baseline rewards.” If someone’s baseline rewards aren’t adequate or equitable, her focus will be on the unfairness of her situation and the anxiety of her circumstance. You’ll get neither the predictability of extrinsic motivation nor the weirdness of intrinsic motivation. You’ll get very little motivation at all. The best use of money as a motivator is to pay people enough to take the issue of money off the table.

Once the baseline rewards have been sorted, we are often offered other ‘carrots and sticks’ to nudge our behavior. Many of these rewards will actually achieve the opposite of what was intended.

‘If, then’ Rewards

‘If, then’ rewards are promises to deliver something to an individual once they complete a specific task: if you hit your sales goals this month, then I will give you a bonus. There are inherent dangers with ‘if, then’ rewards. They tend to prompt a short-term surge in motivation but actually dampen it over the long term. The very fact of offering a reward for some form of effort sends the message that the work is, well, work. This can have a large negative impact on intrinsic motivation. Additionally, rewards by their very nature narrow our focus; we tend to ignore everything but the finish line. This is fine for algorithmic tasks but hurts us with heuristic tasks.

Amabile and others have found that extrinsic rewards can be effective for algorithmic tasks – those that depend on following an existing formula to its logical conclusion. But for more right-brain undertakings – those that demand flexible problem-solving, inventiveness, or conceptual understanding – contingent rewards can be dangerous. Rewarded subjects often have a harder time seeing the periphery and crafting original solutions.

Goals

When we use goals to motivate us how does that affect how we think and behave?

Like all extrinsic motivators, goals narrow our focus. That’s one reason they can be effective; they concentrate the mind. But as we’ve seen, a narrowed focus exacts a cost. For complex or conceptual tasks, offering a reward can blinker the wide-ranging thinking necessary to come up with an innovative solution. Likewise, when an extrinsic goal is paramount – particularly a short-term, measurable one whose achievement delivers a big payoff – its presence can restrict our view of the broader dimensions of our behavior. As the cadre of business school professors write, ‘Substantial evidence demonstrates that in addition to motivating constructive effort, goal setting can induce unethical behavior.’

The examples are legion, the researchers note. Sears imposes a sales quota on its auto repair staff – and workers respond by overcharging customers and completing unnecessary repairs. Enron sets lofty revenue goals – and the race to meet them by any means possible catalyzes the company’s collapse. Ford is so intent on producing a certain car at a certain weight at a certain price by a certain date that it omits safety checks and unleashes the dangerous Ford Pinto.

The problem with making extrinsic reward the only destination that matters is that some people will choose the quickest route there, even if it means taking the low road.

Indeed, most of the scandals and misbehavior that have seemed endemic to modern life involve shortcuts. Executives game their quarterly earnings so they can snag a performance bonus. Secondary school counselors doctor student transcripts so their seniors can get into college. Athletes inject themselves with steroids to post better numbers and trigger lucrative performance bonuses.

Contrast that approach with behavior sparked by intrinsic motivation. When the reward is the activity itself – deepening learning, delighting customers, doing one’s best – there are no shortcuts. The only route to the destination is the high road. In some sense, it’s impossible to act unethically because the person who’s disadvantaged isn’t a competitor but yourself.

“Most of the scandals and misbehavior that have seemed endemic to modern life involve shortcuts.” Click To Tweet

These same pressures that may nudge you towards unethical actions can also push you to make more risky decisions. The drive towards the goal can convince you to make decisions that in any other situation you would likely never consider. (See more about the dangers of goals.)

It’s not only the person being motivated with the reward who is hurt here. The person trying to encourage a certain type of behavior also falls into a trap, forced to course-correct in a way that often leaves them worse off than if they had never offered the reward in the first place.

The Russian economist Anton Suvorov has constructed an elaborate econometric model to demonstrate this effect, configured around what’s called ‘principal-agent theory.’ Think of the principal as the motivator – the employer, the teacher, the parent. Think of the agent as the motivatee – the employee, the student, the child. A principal essentially tries to get the agent to do what the principal wants, while the agent balances his own interests with whatever the principal is offering. Using a blizzard of complicated equations that test a variety of scenarios between principal and agent, Suvorov has reached conclusions that make intuitive sense to any parent who’s tried to get her kids to empty the garbage.

By offering a reward, a principal signals to the agent that the task is undesirable. (If the task were desirable, the agent wouldn’t need a prod.) But that initial signal, and the reward that goes with it, forces the principal onto a path that’s difficult to leave. Offer too small a reward and the agent won’t comply. But offer a reward that’s enticing enough to get the agent to act the first time, and the principal ‘is doomed to give it again in the second.’ There’s no going back. Pay your son to take out the trash – and you’ve pretty much guaranteed the kid will never do it again for free. What’s more, once the initial money buzz tapers off, you’ll likely have to increase the payment to continue compliance.

Even if you are able to trigger the better behavior, it will often disappear once the incentives are removed.

In environments where extrinsic rewards are most salient, many people work only to the point that triggers the reward – and no further. So if students get a prize for reading three books, many won’t pick up a fourth, let alone embark on a lifetime of reading – just as executives who hit their quarterly numbers often won’t boost earnings a penny more, let alone contemplate the long-term health of their company. Likewise, several studies show that paying people to exercise, stop smoking, or take their medicines produces terrific results at first – but the healthy behavior disappears once the incentives are removed.

When Do Rewards Work?

Rewards can work for routine (algorithmic) tasks that require little creativity.

For routine tasks, which aren’t very interesting and don’t demand much creative thinking, rewards can provide a small motivational booster shot without the harmful side effects. In some ways, that’s just common sense. As Edward Deci, Richard Ryan, and Richard Koestner explain, ‘Rewards do not undermine people’s intrinsic motivation for dull tasks because there is little or no intrinsic motivation to be undermined.’

You will increase your chances for success when rewarding routine tasks using these three practices:

  1. Offer a rationale for why the task is necessary.
  2. Acknowledge that the task is boring.
  3. Allow people to complete the task their own way (think autonomy not control).

Any extrinsic reward should be unexpected and offered only once the task is complete. In many ways this is common sense, as it is the opposite of an ‘if, then’ reward, allowing you to avoid its many failings (focus isn’t solely on the prize, motivation won’t wane if the reward isn’t present during the task, etc.). However, one word of caution: be careful if these rewards become expected, because at that point they are no different from ‘if, then’ rewards.


Every Number Tells a Story

“We depend on numbers to make sense of the world,
and have done so ever since we started to count.”

— Alex Bellos

From The Grapes of Math by Alex Bellos

The earliest symbols used for numbers go back about 5,000 years to Sumer (modern-day Iraq). The Sumerians didn’t look far for names: ges, the word for one, also meant man; min, the word for two, also meant woman. At first, numbers served practical purposes, mostly things like counting sheep and determining taxes.

“Yet numbers also revealed abstract patterns,” writes Alex Bellos in his fascinating book The Grapes of Math: How Life Reflects Numbers and Numbers Reflect Life, which, he continues, “made them objects of deep contemplation. Perhaps the earliest mathematical discovery was that numbers come in two types, even and odd: those that can be halved cleanly, such as 2, 4 and 6, and those that cannot, such as 1, 3 and 5.”

Pythagoras and Number Gender

Pythagoras, the Greek teacher who lived in the sixth century BC and is most famous for his theorem about triangles, agreed with the Sumerians on number gender. He believed that odd numbers were masculine and even ones feminine. This is where it gets interesting: why did he think that? Because, he believed, a resistance to splitting in two embodied strength; the ability to be divided in two was, in his eyes, a weakness. He believed odd numbers were master over even. Christianity echoes the gender theory: God created Adam first and Eve second, and it was Eve who sinned.

Large Numbers

Numbers originally accounted for practical and countable things, such as sheep and teeth. Things get interesting as quantities increase, because we don’t use numbers in the same way.

We approximate using a “round number” as a place mark. It is easier and more convenient. When I say, for example, that there were a hundred people at the market, I don’t mean that there were exactly one hundred people there. … Big numbers are understood approximately, small ones precisely, and these two systems interact uneasily. It is clearly nonsensical to say that next year the universe will be “13.7 billion and one” years old. It will remain 13.7 billion years old for the rest of our lives.

Round numbers usually end in zero.

The word round is used because a round number represents the completion of a full counting cycle, not because zero is a circle. There are ten digits in our number system, so any combination of cycles will always be divisible by ten. Because we are so used to using round numbers for big numbers, when we encounter a big number that is nonround — say, 754,156,293 — it feels discrepant.

Manoj Thomas, a psychologist at Cornell University, argues that we are uneasy with large, non-round numbers, which causes us to see them as smaller than they are. This carries practical implications when, say, selling a house. “We tend to think that small numbers are more precise,” he says, “so when we see a big number that is precise we instinctively assume it is less than it is.” If he’s right, the result is that we will pay more when a high price is precise and non-round. Indeed, his experiments seem to agree. In one, respondents viewed pictures of several houses with sale prices, some round and some larger but non-round (e.g., $490,000 versus $492,332). On average, subjects judged the precise, higher price to be the lower one. As Bellos concludes on large numbers, “if you want to make money, don’t end the price with a zero.”

Number Influence When Shopping

One of the ways to make a number seem more precise is by subtracting 1.

When we read a number, we are more influenced by the leftmost digit than by the rightmost, since that is the order in which we read, and process, them. The number 799 feels significantly less than 800 because we see the former as 7-something and the latter as 8-something, whereas 798 feels pretty much like 799. Since the nineteenth century, shopkeepers have taken advantage of this trick by choosing prices ending in a 9, to give the impression that a product is cheaper than it is. Surveys show that anything between a third and two-thirds of all retail prices now end in a 9.

Of course, we think that other people fall for this, and surely not us, but that is not the case. Studies like this continue to be replicated over and over. Dropping a price one cent, say from $8 to $7.99, influences decisions dramatically.

Not only are prices ending in 9 harder to recall for price comparisons, we’ve also been conditioned to believe they are discounted and cheap. The practical implication is that if you’re a high-end brand or selling an exclusive service, you want to avoid the bargain signal. You don’t want a therapist who charges $99.99, any more than you want a high-end restaurant to list menu prices ending in $.99.

In fact, most of the time, it’s best to avoid the $ altogether. Our response to this stimulus is pain.

The “$” reminds us of the pain of paying. Another clever menu strategy is to show the prices immediately after the description of each dish, rather than listing them in a column, since listing prices facilitates price comparison. You want to encourage diners to order what they want, whatever the price, rather than reminding them which dish is most expensive.

These are neither the only nor the most subtle ways that numbers influence us. Displaying absurdly expensive items first creates an artificial benchmark: the real estate agent who shows you a house way above your price range first is setting that benchmark deliberately.

The $100,000 car in the showroom and the $10,000 pair of shoes in the shop window are there not because the manager thinks they will sell, but as decoys to make the also-expensive $50,000 car and $5,000 shoes look cheap. Supermarkets use similar strategies. We are surprisingly susceptible to number cues when it comes to making decisions, and not just when shopping.

We can all be swayed by irrelevant random numbers, which is why it’s important to use a two-step framework when making decisions.

Numbers and Time

Time has always been counted.

We carved notches on sticks and daubed splotches on rocks to mark the passing of days. Our first calendars were tied to astronomical phenomena, such as the new moon, which meant that the number of days in each calendar cycle varied, in the case of the new moon between 29 and 30 days, since the exact length of a lunar cycle is 29.53 days. In the middle of the first millennium BCE, however, the Jews introduced a new system. They decreed that the Sabbath come every seven days ad infinitum, irrespective of planetary positions. The continuous seven-day cycle was a significant step forward for humanity. It emancipated us from consistent compliance with Nature, placing numerical regularity at the heart of religious practice and social organization, and since then the seven-day week has become the world’s longest-running uninterrupted calendrical tradition.

Why seven days in the week?

Seven was already the most mystical of numbers by the time the Jews declared that God took six days to make the world, and rested the day after. Earlier peoples had also used seven-day periods in their calendars, although never repeated in an endless loop. The most commonly accepted explanation for the predominance of seven in religious contexts is that the ancients observed seven planets in the sky: the Sun, the Moon, Venus, Mercury, Mars, Jupiter and Saturn. Indeed, the names Saturday, Sunday and Monday come from the planets, although the association of planets with days dates from Hellenic times, centuries after the seven-day week had been introduced.

The Egyptians used the human head to represent 7, which offers “another possible reason for the number’s symbolic importance.”

There are seven orifices in the head: the ears, eyes, nostrils and mouth. Human physiology provides other explanations too. Six days might be the optimal length of time to work before you need a day’s rest, or seven might be the most appropriate number for our working memory: the number of things the average person can hold in his or her head simultaneously is seven, plus or minus two.

Bellos isn’t convinced. He thinks seven is special, not for the reasons mentioned above, but rather because of arithmetic.

Seven is unique among the first ten numbers because it is the only number that cannot be multiplied or divided within the group. When 1, 2, 3, 4 and 5 are doubled the answer is less than or equal to ten. The numbers 6, 8 and 10 can be halved and 9 is divisible by three. Of the numbers we can count on our fingers, only 7 stands alone: it neither produces nor is produced. Of course the number feels special. It is!
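
You can verify the claim by brute force. Here is a small Python sketch, where “produces” means multiplying by another group member stays within 1 to 10, and “is produced” means the number factors nontrivially within the group (being halved or divided by three is just the mirror image of being produced by multiplication):

    group = set(range(1, 11))

    for n in sorted(group):
        produces = any(n * k in group for k in group if k > 1)  # doubling or more stays in 1..10
        produced = any(a * b == n for a in group for b in group
                       if a > 1 and b > 1)                      # n factors nontrivially in the group
        if not produces and not produced:
            print(n)  # prints only 7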

Favorite Numbers and Number Personalities

When people are asked to think of a digit off the top of their head, they are most likely to think of 7. When choosing a number below 20, the most probable response is 17. We’ll come back to that in a second. But for now let’s talk about the meaning of numbers.

Numbers express quantities, and we ascribe qualities to them. Here are the results from a simple survey that paints a “coherent picture of number personalities.”

From The Grapes of Math by Alex Bellos

Interestingly, Bellos writes, “the association of one with male characteristics, and two with female ones, also remains deeply ingrained.”

When asked to pick favorite numbers, we follow clear patterns, as shown below in a heat map, in which the numbers from 1 to 100 are represented by squares. Bellos explains:

(The top row of each grid contains the numbers 1 to 10, the second row the numbers 11 to 20, and so on.) The numbers marked with black squares represent those that are “most liked” (the top twenty in the rankings), the white squares are the “least liked” (the bottom twenty) and the squares in shades of gray are the numbers ranked in between.

From The Grapes of Math by Alex Bellos

The heat map shows conspicuous patches of order. Black squares are mostly positioned at the top of the grid, showing on average that low numbers are liked best. The left-sloping diagonal through the center reveals that two-digit numbers where both digits are the same are also attractive. We like patterns. Most strikingly, however, four white columns display the unpopularity of numbers ending in 1, 3, 7 and 9.

Numbers are a part of our lives. We see them everywhere. They influence us, they guide us, and they help us solve problems. And yet, as The Grapes of Math: How Life Reflects Numbers and Numbers Reflect Life shows us, their history and patterns can also be a source of wonder.
