Tag: Cognitive Dissonance

The Psychology of Risk and Reward

An excerpt from Ashvin Chhabra's The Aspirational Investor: Taming the Markets to Achieve Your Life's Goals that I think you'd enjoy.

Most of us have a healthy understanding of risk in the short term.

When crossing the street, for example, you would no doubt speed up to avoid an oncoming car that suddenly rounds the corner.

Humans are wired to survive: it’s a basic instinct that takes command almost instantly, enabling our brains to resolve ambiguity quickly so that we can take decisive action in the face of a threat.

The impulse to resolve ambiguity manifests itself in many ways and in many contexts, even those less fraught with danger. Glance at the (above) picture for no more than a couple of seconds. What do you see?

Some observers perceive the profile of a young woman with flowing hair, an elegant dress, and a bonnet. Others see the image of a woman stooped in old age with a wart on her large nose. Still others—in the gifted minority—are able to see both of the images simultaneously.

What is interesting about this illusion is that our brains instantly decide what image we are looking at, based on our first glance. If your initial glance was toward the vertical profile on the left-hand side, you were all but destined to see the image of the elegant young woman: it was just a matter of your brain interpreting every line in the picture according to the mental image you had already formed, even though each line can be interpreted in two different ways. Conversely, if your first glance fell on the central dark horizontal line that emphasizes the mouth and chin, your brain quickly formed an image of the older woman.

Regardless of your interpretation, your brain wasn’t confused. It simply decided what the picture was and filled in the missing pieces. Your brain resolved ambiguity and extracted order from conflicting information.

What does this have to do with decision making? Every bit of information can be interpreted differently according to our perspective. Ashvin Chhabra directs us to investing; I suggest you reframe this in the context of decision making in general.

Every trade has a seller and a buyer: your state of mind is paramount. If you are in a risk-averse mental framework, then you are likely to interpret a further fall in stocks as additional confirmation of your sell bias. If instead your framework is positive, you will interpret the same event as a buying opportunity.

The challenge of investing is compounded by the fact that our brains, which excel at resolving ambiguity in the face of a threat, are less well equipped to navigate the long term intelligently. Since none of us can predict the future, successful investing requires planning and discipline.

Unfortunately, when reason is in apparent conflict with our instincts—about markets or a “hot stock,” for example—it is our instincts that typically prevail. Our “reptilian brain” wins out over our “rational brain,” as it so often does in other facets of our lives. And as we have seen, investors trade too frequently, and often at the wrong time.

One way our brains resolve conflicting information is to seek out safety in numbers. In the animal kingdom, this is called “moving with the herd,” and it serves a very important purpose: helping to ensure survival. Just as a buffalo will try to stay with the herd in order to minimize its individual vulnerability to predators, we tend to feel safer and more confident investing alongside equally bullish investors in a rising market, and we tend to sell when everyone around us is doing the same. Even the so-called smart money falls prey to a herd mentality: one study, aptly titled “Thy Neighbor’s Portfolio,” found that professional mutual fund managers were more likely to buy or sell a particular stock if other managers in the same city were also buying or selling.

This comfort is costly. The surge in buying activity and the resulting bullish sentiment is self-reinforcing, propelling markets to react even faster. That leads to overvaluation and the inevitable crash when sentiment reverses. As we shall see, such booms and busts are characteristic of all financial markets, regardless of size, location, or even the era in which they exist.

Even though the role of instinct and human emotions in driving speculative bubbles has been well documented in popular books, newspapers, and magazines for hundreds of years, these factors were virtually ignored in conventional financial and economic models until the 1970s.

This is especially surprising given that, in 1952, a young PhD student from the University of Chicago, Harry Markowitz, published two very important papers. The first, entitled “Portfolio Selection,” published in the Journal of Finance, led to the creation of what we call modern portfolio theory, together with the widespread adoption of its important ideas such as asset allocation and diversification. It earned Harry Markowitz a Nobel Prize in Economics.

The second paper, entitled “The Utility of Wealth” and published in the prestigious Journal of Political Economy, was about the propensity of people to hold insurance (safety) and to buy lottery tickets at the same time. It delved deeper into the psychological aspects of investing but was largely forgotten for decades.

The field of behavioral finance really came into its own through the pioneering work of two academic psychologists, Amos Tversky and Daniel Kahneman, who challenged conventional wisdom about how people make decisions involving risk. Their work garnered Kahneman the Nobel Prize in Economics in 2002. Behavioral finance and neuroeconomics are relatively new fields of study that seek to identify and understand human behavior and decision making with regard to choices involving trade-offs between risk and reward. Of particular interest are the human biases that prevent individuals from making fully rational financial decisions in the face of uncertainty.

As behavioral economists have documented, our propensity for herd behavior is just the tip of the iceberg. Kahneman and Tversky, for example, showed that people who were asked to choose between a certain loss and a gamble, in which they could either lose more money or break even, would tend to choose to double down (that is, to gamble to avoid the prospect of a certain loss), a behavior the authors called “loss aversion.” Building on this work, Hersh Shefrin and Meir Statman, professors at Santa Clara University's Leavey School of Business, have linked the propensity for loss aversion to investors’ tendency to hold losing investments too long and to sell winners too soon. They called this bias the disposition effect.

The lengthy list of behaviorally driven market effects often converges in an investor’s tale of woe. Overconfidence causes investors to hold concentrated portfolios and to trade excessively, behaviors that can destroy wealth. The illusion of control causes investors to overestimate the probability of success and underestimate risk because of familiarity—for example, causing investors to hold too much employer stock in their 401(k) plans, resulting in under-diversification. Cognitive dissonance causes us to ignore evidence that is contrary to our opinions, leading to myopic investing behavior. And the representativeness bias leads investors to assess risk and return based on superficial characteristics—for example, by assuming that shares of companies that make products you like are good investments.

Several other key behavioral biases come into play in the realm of investing. Framing can cause investors to make a decision based on how the question is worded and the choices presented. Anchoring often leads investors to unconsciously create a reference point, say for securities prices, and then adjust decisions or expectations with respect to that anchor. This bias might impede your ability to sell a losing stock, for example, in the false hope that you can earn your money back. Similarly, the endowment bias might lead you to overvalue a stock that you own and thus hold on to the position too long. And regret aversion may lead you to avoid taking a tough action for fear that it will turn out badly. This can lead to decision paralysis in the wake of a market crash, even though, statistically, it is a good buying opportunity.

Behavioral finance has generated plenty of debate. Some observers have hailed the field as revolutionary; others bemoan the discipline’s seeming lack of a transcendent, unifying theory. This much is clear: behavioral finance treats biases as mistakes that, in academic parlance, prevent investors from thinking “rationally” and cause them to hold “suboptimal” portfolios.

But is that really true? In investing, as in life, the answer is more complex than it appears. Effective decision making requires us to balance our “reptilian brain,” which governs instinctive thinking, with our “rational brain,” which is responsible for strategic thinking. Instinct must integrate with experience.

Put another way, behavioral biases are nothing more than a series of complex trade-offs between risk and reward. When the stock market is taking off, for example, a failure to rebalance by selling winners is considered a mistake. The same goes for a failure to add to a position in a plummeting market. That’s because conventional finance theory assumes markets to be inherently stable, or “mean-reverting,” so most deviations from the historical rate of return are viewed as fluctuations that will revert to the mean, or self-correct, over time.

But what if a precipitous market drop is slicing into your peace of mind, affecting your sleep, your relationships, and your professional life? What if that assumption about markets reverting to the mean doesn’t hold true and you cannot afford to hold on for an extended period of time? In both cases, it might just be “rational” to sell and accept your losses precisely when investment theory says you should be buying. A concentrated bet might also make sense, if you possess the skill or knowledge to exploit an opportunity that others might not see, even if it flies in the face of conventional diversification principles.

Of course, the time to create decision rules for extreme market scenarios and concentrated bets is when you are building your investment strategy, not in the middle of a market crisis or at the moment a high-risk, high-reward opportunity from a former business partner lands on your desk and gives you an adrenaline jolt. A disciplined process for managing risk in relation to a clear set of goals will enable you to use the insights offered by behavioral finance to your advantage, rather than fall prey to the common pitfalls. This is one of the central insights of the Wealth Allocation Framework. But before we can put these insights to practical use, we need to understand the true nature of financial markets.

Nassim Taleb: How to Not Be a Sucker From the Past

"History is useful for the thrill of knowing the past, and for the narrative (indeed), provided it remains a harmless narrative." — Nassim Taleb
“History is useful for the thrill of knowing the past, and for the narrative (indeed), provided it remains a harmless narrative.” — Nassim Taleb

The fact that new information about the past keeps emerging means that we have an incomplete road map of history. There is a necessary fallibility to it … if you will.

In The Black Swan, Nassim Taleb writes:

History is useful for the thrill of knowing the past, and for the narrative (indeed), provided it remains a harmless narrative. One should learn under severe caution. History is certainly not a place to theorize or derive general knowledge, nor is it meant to help in the future, without some caution. We can get negative confirmation from history, which is invaluable, but we get plenty of illusions of knowledge along with it.

While I don't entirely hold Taleb's view, I think it's worth reflecting on. As a friend put it to me recently, “when people are looking into the rear view mirror of the past, they can take facts and like a string of pearls draw lines of causal relationships that facilitate their argument while ignoring disconfirming facts that detract from their central argument or point of view.”

Taleb advises us to adopt the empirical skeptic approach of Menodotus, which was to “know history without theorizing from it,” and not to draw any grand theoretical or scientific claims from it.

We can learn from history, but our desire for causality can easily lead us down a dangerous rabbit hole when new facts come to light that contradict what we held to be true. In trying to reduce the cognitive dissonance, our confirmation bias leads us to reinterpret past events in a way that fits our current beliefs.

History is not stagnant — we only know what we know currently and what we do know is subject to change. The accepted beliefs about how events played out may change in light of new information and then the new accepted beliefs may change over time as well.

Genevieve Bell on the Value of Humanities in an Executive Role

Genevieve Bell is perhaps the most powerful and influential social scientist in the tech industry. Speaking to Christian Madsbjerg in an excerpt from The Moment of Clarity: Using the Human Sciences to Solve Your Toughest Business Problems, she says something quite profound about executive management and cognitive dissonance.

I’ve been really struck by what it takes to be an executive at a company like Intel. Increasingly, much like in my own training in the social sciences, it requires holding these multiple competing realities in one’s head at the same time. An executive has to be able to hold the reality of what the company needs to be now with what it needs to be ten years from now, and these concepts are often at odds with one another. You also have to hold the realities of different markets in your head that have completely different formulations of success. In the US, you have to think about miles per gallon and environmentally sensitive processes, and in China you just have to go really fast. For Intel executives from a culture of engineering, this is really hard. They are taught to think that dissonance should be resolved in the design: “There is one answer, and we have to get it right.”

And a few sentences later she explained how we lose track of what's important to our customers. “Moore’s Law stated that semiconductors were going to get smaller,” Bell explained, “but it didn’t tell us anything about what people were going to do with them or why a consumer should be interested. It started to become increasingly clear to all of us that consumers just didn’t care about the same things that we cared about. They weren’t necessarily engaged in our narrative.”

While not a silver bullet, one thing I see more and more through my engagements with companies is the value of the humanities. The humanities offer a different perspective on the same problems, often with different (and better) results, because they bring a better sense of people and their behaviours. When you're solving difficult problems, you want cognitive diversity.

Impressions Are Schematically Determined

Mary Douglas's book, Purity and Danger, is an inquiry into the nature of dirt and cleanliness across different cultures around the world.

The following passage describes how we construct impressions, why it's hard to move away from these impressions, and their seductive potential.

When we encounter something new we try to fit it into our existing categories, “ignoring or distorting” those “uncomfortable facts” that do not fit into our pre-established schema so as not to disturb our assumptions.

It seems that whatever we perceive is organized into patterns for which we, the perceivers, are largely responsible. Perceiving is not a matter of passively allowing an organ—say of sight or hearing—to receive a ready-made impression from without, like a palette receiving a spot of paint. Recognizing and remembering are not matters of stirring up old images of past impressions. It is generally agreed that all our impressions are schematically determined from the start. As perceivers we select from all the stimuli falling on our senses only those which interest us, and our interests are governed by a pattern-making tendency, sometimes called a schema. In a chaos of shifting impressions, each of us constructs a stable world in which objects have recognizable shapes, are located in depth, and have permanence. In perceiving we are building, taking some cues and rejecting others. The most acceptable cues are those which fit most easily into the pattern that is being built up. Ambiguous ones tend to be treated as if they harmonized with the rest of the pattern. Discordant ones tend to be rejected. If they are accepted, the structure of assumptions has to be modified. As learning proceeds, objects are named. Their names then affect the way they are perceived next time: Once labelled they are more speedily slotted into the pigeon-holes in future.

As time goes on and experiences pile up, we make a greater and greater investment in our system of labels. So a conservative bias is built in. It gives us confidence. At any time we may have to modify our structure of assumptions to accommodate new experience, but the more consistent experience is with the past, the more confidence we can have in our assumptions.

Uncomfortable facts which refuse to be fitted in, we find ourselves ignoring or distorting so that they do not disturb these established assumptions. By and large anything we take note of is preselected and organized in the very act of perceiving. We share with other animals a kind of filtering mechanism which at first only lets in sensations we know how to use.

Granted that disorder spoils pattern, it also provides the material of pattern. Order implies restriction; from all possible materials, a limited selection has been made and from all possible relations a limited set has been used. So disorder by implication is unlimited, no pattern has been realized in it, but its potential for patterning is indefinite. This is why, though we seek to create order, we do not simply condemn disorder. We recognize that it is destructive to existing patterns; also that it has potentiality. It symbolizes both danger and power.

Sartre writes of the lapidary hardness of the anti-Semite:

How can anyone choose to reason falsely? It is simply the old yearning for impermeability … there are people who are attracted by the permanence of stone. They would like to be solid and impenetrable, they do not want change: for who knows what change might bring? … It is as if their own existence were perpetually in suspense. But they want to exist in all ways at once, and all in one instant … they want to adopt a mode of life in which reasoning and the quest for truth play only a subordinate part, in which nothing is sought except what has already been found, in which one never becomes anything but what one already was.

“Powers are attributed to any structure of ideas.” Our inclination is to think that our categories of understanding are real. “This yearning for rigidity is in all of us,” she continues:

It is part of our human condition to long for hard lines and clear concepts. When we have them we have to either face the fact that some realities elude them, or else blind ourselves to the inadequacy of the concepts.

The Half-life of Facts

Facts change all the time. Smoking has gone from doctor recommended to deadly. We used to think the Earth was the center of the universe and that Pluto was a planet. For decades we were convinced that the brontosaurus was a real dinosaur.

Knowledge, like milk, has an expiry date. That's the key message behind Samuel Arbesman's excellent new book The Half-life of Facts: Why Everything We Know Has an Expiration Date.

We're bombarded with studies that seemingly prove this or that. Caffeine is good for you one day and bad for you the next. What we think we know and understand about the world is constantly changing. Nothing is immune. While big ideas are overturned infrequently, little ideas churn regularly.

As scientific knowledge grows, we end up rethinking old knowledge. Arbesman calls this “a churning of knowledge.” But understanding that facts change (and how they change) helps us cope in a world of constant uncertainty. We can never be too sure of what we know.

In introducing this idea, Arbesman writes:

Knowledge is like radioactivity. If you look at a single atom of uranium, whether it’s going to decay — breaking down and unleashing its energy — is highly unpredictable. It might decay in the next second, or you might have to sit and stare at it for thousands, or perhaps even millions, of years before it breaks apart.

But when you take a chunk of uranium, itself made up of trillions upon trillions of atoms, suddenly the unpredictable becomes predictable. We know how uranium atoms work in the aggregate. As a group of atoms, uranium is highly regular. When we combine particles together, a rule of probability known as the law of large numbers takes over, and even the behavior of a tiny piece of uranium becomes understandable. If we are patient enough, half of a chunk of uranium will break down in 704 million years, like clock-work. This number — 704 million years — is a measurable amount of time, and it is known as the half-life of uranium.

It turns out that facts, when viewed as a large body of knowledge, are just as predictable. Facts, in the aggregate, have half-lives: We can measure the amount of time for half of a subject’s knowledge to be overturned. There is science that explores the rates at which new facts are created, new technologies developed, and even how facts spread. How knowledge changes can be understood scientifically.

This is a powerful idea. We don’t have to be at sea in a world of changing knowledge. Instead, we can understand how facts grow and change in the aggregate, just like radioactive materials. This book is a guide to the startling notion that our knowledge — even what each of us has in our head — changes in understandable and systematic ways.
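
To make the radioactive analogy concrete, here is a minimal sketch (my own illustration, not Arbesman's) of the decay law being described: if a body of knowledge decays with a half-life T, the fraction still standing after t years is 0.5^(t/T). The function name and numbers are purely illustrative.

```python
def fraction_remaining(t_years: float, half_life_years: float) -> float:
    """Fraction of an initial body of facts (or chunk of uranium) still intact
    after t_years, assuming simple exponential decay with the given half-life."""
    return 0.5 ** (t_years / half_life_years)

# Uranium, with a half-life of roughly 704 million years (per the excerpt above):
print(fraction_remaining(704e6, 704e6))      # 0.5  -> half the chunk has broken down
print(fraction_remaining(2 * 704e6, 704e6))  # 0.25 -> a quarter remains after two half-lives
```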

Why does this happen? Why does knowledge churn? In Zen and the Art of Motorcycle Maintenance, Robert Pirsig writes:

If all hypotheses cannot be tested, then the results of any experiment are inconclusive and the entire scientific method falls short of its goal of establishing proven knowledge.

About this Einstein had said, “Evolution has shown that at any given moment out of all conceivable constructions a single one has always proved itself absolutely superior to the rest,” and let it go at that.

… But there it was, the whole history of science, a clear story of continuously new and changing explanations of old facts. The time spans of permanence seemed completely random, he could see no order in them. Some scientific truths seemed to last for centuries, others for less than a year. Scientific truth was not dogma, good for eternity, but a temporal quantitative entity that could be studied like anything else.

A few pages later, Pirsig continues:

The purpose of scientific method is to select a single truth from among many hypothetical truths. That, more than anything else, is what science is all about. But historically science has done exactly the opposite. Through multiplication upon multiplication of facts, information, theories and hypotheses, it is science itself that is leading mankind from single absolute truths to multiple, indeterminate, relative ones.

With that, let's dig into how this looks. Arbesman offers an example:

A few years ago a team of scientists at a hospital in Paris decided to actually measure this (churning of knowledge). They decided to look at fields that they specialized in: cirrhosis and hepatitis, two areas that focus on liver diseases. They took nearly five hundred articles in these fields from more than fifty years and gave them to a battery of experts to examine.

Each expert was charged with saying whether the paper was factual, out-of-date, or disproved, according to more recent findings. Through doing this they were able to create a simple chart (see below) that showed the amount of factual content that had persisted over the previous decades. They found something striking: a clear decay in the number of papers that were still valid.

Furthermore, they got a clear measurement of the half-life of facts in these fields by looking at where the curve crosses 50 percent on this chart: 45 years. Essentially, information is like radioactive material: Medical knowledge about cirrhosis or hepatitis takes about forty-five years for half of it to be disproven or become out-of-date.

[Chart: the half-life of facts, showing the decay in the truth of knowledge over time]
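
Reading the half-life off a chart like that amounts to finding where the survival curve crosses 50 percent, or, equivalently, fitting an exponential decay to the points. Here is a rough sketch of that calculation; the survival fractions below are made up purely to mimic the roughly 45-year figure, not taken from the study.

```python
import numpy as np

# Hypothetical fractions of papers still considered valid, by years since publication.
years = np.array([0, 10, 20, 30, 40, 50])
still_valid = np.array([1.00, 0.85, 0.74, 0.63, 0.54, 0.46])  # illustrative numbers only

# Fit an exponential decay (a straight line in log space) and convert it to a half-life.
decay_rate = -np.polyfit(years, np.log(still_valid), 1)[0]
half_life = np.log(2) / decay_rate
print(f"estimated half-life: {half_life:.0f} years")  # ~45 years with these numbers
```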

Old knowledge, however, isn't a waste. It's not like we have to start from scratch. “Rather,” writes Arbesman, “the accumulation of knowledge can then lead us to a fuller and more accurate picture of the world around us.”

Isaac Asimov, in a wonderful essay (“The Relativity of Wrong”), uses the Earth's curvature to help explain this:

When people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together.

When our knowledge in a field is immature, discoveries come easily and often explain the main ideas. “But there are uncountably more discoveries, although far rarer, in the tail of this distribution of discovery. As we delve deeper, whether it's into discovering the diversity of life in the oceans or the shape of the earth, we begin to truly understand the world around us.”

So what we're really dealing with is the long tail of discovery. Our search for what's way out at the end of that tail, while it might not be as important or as Earth-shattering as the blockbuster discoveries, can be just as exciting and surprising. Each new little piece can teach us something about what we thought was possible in the world and help us to asymptotically approach a more complete understanding of our surroundings.

In an interview with The Economist, Arbesman was asked which scientific fields decay the slowest and fastest, and what causes that difference.

Well it depends, because these rates tend to change over time. For example, when medicine transitioned from an art to a science, its half-life was much more rapid than it is now. That said, medicine still has a very short half-life; in fact it is one of the areas where knowledge changes the fastest. One of the slowest is mathematics, because when you prove something in mathematics it is pretty much a settled matter unless someone finds an error in one of your proofs.

One thing we have seen is that the social sciences have a much faster rate of decay than the physical sciences, because in the social sciences there is a lot more “noise” at the experimental level. For instance, in physics, if you want to understand the arc of a parabola, you shoot a cannon 100 times and see where the cannonballs land. And when you do that, you are likely to find a really nice cluster around a single location. But if you are making measurements that have to do with people, things are a lot messier, because people respond to a lot of different things, and that means the effect sizes are going to be smaller.
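
The cannonball-versus-people contrast is easy to see in a toy simulation (my own, with made-up noise levels): given the same hundred measurements, the low-noise physical quantity is pinned down far more precisely than the noisy behavioral one.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100  # measurements per "field"

# Physics: 100 cannonball landing distances, tightly clustered around a true value of 500 m.
physics = rng.normal(loc=500.0, scale=2.0, size=n)

# Social science: 100 measurements of the same underlying value, but far noisier.
social = rng.normal(loc=500.0, scale=80.0, size=n)

for name, data in [("physics", physics), ("social science", social)]:
    stderr = data.std(ddof=1) / np.sqrt(n)
    print(f"{name:15s} estimate: {data.mean():6.1f} +/- {stderr:.1f}")
```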

Arbesman concludes his Economist interview:

I want to show people how knowledge changes. But at the same time I want to say, now that you know how knowledge changes, you have to be on guard, so you are not shocked when your children (are) coming home to tell you that dinosaurs have feathers. You have to look things up more often and recognise that most of the stuff you learned when you were younger is not at the cutting edge. We are coming a lot closer to a true understanding of the world; we know a lot more about the universe than we did even just a few decades ago. It is not the case that just because knowledge is constantly being overturned we do not know anything. But too often, we fail to acknowledge change.

Some fields are starting to recognise this. Medicine, for example, has got really good at encouraging its practitioners to stay current. A lot of medical students are taught that everything they learn is going to be obsolete soon after they graduate. There is even a website called “up to date” that constantly updates medical textbooks. In that sense we could all stand to learn from medicine; we constantly have to make an effort to explore the world anew—even if that means just looking at Wikipedia more often. And I am not just talking about dinosaurs and outer space. You see this same phenomenon with knowledge about nutrition or childcare—the stuff that has to do with how we live our lives.

Even when we find new information that contradicts what we thought we knew, we're likely to be slow to change our minds. “A prevailing theory or paradigm is not overthrown by the accumulation of contrary evidence,” writes Richard Zeckhauser, “but rather by a new paradigm that, for whatever reasons, begins to be accepted by scientists.”

In this view, scientific scholars are subject to status quo persistence. Far from being objective decoders of the empirical evidence, scientists have decided preferences about the scientific beliefs they hold. From a psychological perspective, this preference for beliefs can be seen as a reaction to the tensions caused by cognitive dissonance.

A lot of scientific advancement happens only when the old guard dies off. Many years ago Max Planck offered this insight: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.”

While we have the best intentions and our minds change slowly, a lot of what we think we know is actually just temporary knowledge to be updated in the future by more complete knowledge. I think this is why Nassim Taleb argues that we should read Seneca and not worry about someone like Jonah Lehrer bringing us sexy narratives of the latest discoveries. It turns out most of these discoveries are based on very little data and, while they may add to our cumulative knowledge, they are not likely to be around in 10 years.

The Half-life of Facts is a good read that helps put what we think we understand about the world into perspective.

Follow your curiosity and read my interview with the author. Knowing that knowledge has a half-life isn't enough; we can use this to help us determine what to read.

Cognitive Dissonance and Change Blindness

“Their judgment was based more on wishful thinking than on a sound calculation of probabilities; for the usual thing among men is that when they want something they will, without any reflection, leave that to hope, while they will employ the full force of reason in rejecting what they find unpalatable.”
— Thucydides, History of the Peloponnesian War

From Stalking the Black Swan: Research and Decision Making in a World of Extreme Volatility

When new information conflicts with our preexisting hypotheses, we have a problem that needs to be resolved. Cognitive dissonance refers to the state of tension that occurs when a person holds two ideas, beliefs, attitudes, or opinions that are psychologically inconsistent. This conflict manifests itself as a state of mental tension or dissonance, the intensity of which is visible in magnetic resonance imaging studies of the brain. The theory was developed in 1957 by Leon Festinger, who observed in a series of experiments that people would change their attitudes to make them more consistent with actions they had just taken. In popular usage, cognitive dissonance refers to the tendency to ignore information that conflicts with preexisting views, to rationalize certain behaviors to make them seem more consistent with self-image, or to change attitudes to make them consistent with actions already taken. In some cases, it is the equivalent of telling ourselves “little white lies,” but in other cases it no doubt contributes to logical errors like the “confirmation trap,” where people deliberately search for data to confirm existing views rather than challenge them.

Two major sources of cognitive dissonance are self-image (when the image we hold of ourselves is threatened) and commitment (when we've said something, we don't want to be criticized for changing our minds).

“Cognitive dissonance,” writes Ken Posner, “may manifest itself in a phenomenon known as change blindness. According to behavioral researchers”:

change blindness is a situation where people fail to notice change because it takes place slowly and incrementally. It is also called the “boiling frog syndrome,” referring to the folk wisdom that if you throw a frog in boiling water it will jump out, but if you put it into cold water that is gradually heated, the frog will never notice the change. Most of the studies in this area focus on difficulties in perceiving change visually, but researchers think there is a parallel to decision making.

“Change blindness,” Posner continues, “happens when we filter out the implications of new information rather than assigning them even partial weight in our thinking.”
