Tag: Status Quo Bias

Choosing your Choice Architect(ure)

“Nothing will ever be attempted
if all possible objections must first be overcome.”

— Samuel Johnson

***

In the book Nudge, Richard Thaler and Cass Sunstein coin the terms ‘Choice Architecture’ and ‘Choice Architect.’ For them, anyone with the ability to influence the choices other people make is a choice architect.

Considering the number of interactions we have every day, it would be quite easy to argue that we are all Choice Architects at some point. The inverse is also true: we are constantly wandering around inside someone else’s Choice Architecture.

Let’s take a look at a few of the principles of good choice architecture, so we can get a better idea of when someone is trying to nudge us.

We can then weigh this information when making decisions.

Defaults

Thaler and Sunstein start with a discussion on “defaults” that are commonly offered to us:

For reasons we have discussed, many people will take whatever option requires the least effort, or the path of least resistance. Recall the discussion of inertia, status quo bias, and the ‘yeah, whatever’ heuristic. All these forces imply that if, for a given choice, there is a default option — an option that will obtain if the chooser does nothing — then we can expect a large number of people to end up with that option, whether or not it is good for them. And as we have also stressed, these behavioral tendencies toward doing nothing will be reinforced if the default option comes with some implicit or explicit suggestion that it represents the normal or even the recommended course of action.

When making decisions, people will often take the option that requires the least effort, or the path of least resistance. This makes sense: it’s not just laziness; we only have so many hours in a day. Unless you feel particularly strongly about something, if putting little to no effort toward it moves you forward (or at least doesn’t noticeably kick you backwards), that is what you are likely to do. Loss aversion plays a role as well: if we feel the consequences of making a poor choice are high, we will simply decide to do nothing.

Inertia is another reason: If the ship is currently sailing forward, it can often take a lot of time and effort just to slightly change course.

You have likely seen many examples of inertia at play in your work environment, and this isn’t necessarily a bad thing.

Sometimes we need that ship to just steadily move forward. The important bit is to realize when this is factoring into your decisions, or more specifically, when this knowledge is being used to nudge you into making specific choices.

Let’s think about some of your monthly recurring bills. While you might not be reading that magazine or going to the gym, you’re still paying for the ability to use that good or service. If you weren’t being auto-renewed monthly, what is the chance that you would put the effort into renewing that subscription or membership? Much lower, right? Publishers and gym owners know this, and they know you don't want to go through the hassle of cancelling either, so they make that difficult, too. (They understand well our tendency to want to travel the path of least resistance and avoid conflict.)

This is also where they will imply that the default option is the recommended course of action. It sounds like this:

“We’re sorry to hear you no longer want the magazine, Mr. Smith. You know, more than half of the Fortune 500 companies have a monthly subscription to magazine X, but we understand if it’s not something you’d like to do at the moment.”

or

“Mr. Smith we are sorry to hear that you want to cancel your membership at GymX. We understand if you can’t make your health a priority at this point but we’d love to see you back sometime soon. We see this all the time, these days everyone is so busy. But I’m happy to say we are noticing a shift where people are starting to make time for themselves, especially in your demographic…”

(Just cancel them. You’ll feel better. We promise.)

The Structure of Complex Choices

We live in a world of reviews. Product reviews, corporate reviews, movie reviews… When was the last time you bought a phone or a car before checking the reviews? When was the last time that you hired an employee without checking out their references? 

Thaler and Sunstein call this Collaborative Filtering and explain it as follows:

You use the judgements of other people who share your tastes to filter through the vast number of books or movies available in order to increase the likelihood of picking one you like. Collaborative filtering is an effort to solve a problem of choice architecture. If you know what people like you tend to like, you might well be comfortable in selecting products you don’t know, because people like you tend to like them. For many of us, collaborative filtering is making difficult choices easier.

While collaborative filtering does a great job of making difficult choices easier, we have to remember that companies know we use this tool and will try to manipulate it. We just have to look at the information critically, compare multiple sources, and take some time to review the reviewers.
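To make the mechanism concrete, here’s a minimal sketch of collaborative filtering in Python. The users, titles, and ratings are invented, and real systems use far richer data and similarity measures; this just shows the core idea of filtering through the judgments of people whose tastes match yours.

    # A toy collaborative filter: recommend what the people whose past ratings
    # most resemble yours already like. Users, titles, and scores are invented.
    ratings = {
        "you":   {"Movie A": 5, "Movie B": 1, "Movie C": 4},
        "alice": {"Movie A": 5, "Movie B": 2, "Movie C": 5, "Movie D": 5},
        "bob":   {"Movie A": 1, "Movie B": 5, "Movie C": 2, "Movie D": 1},
    }

    def similarity(u: dict, v: dict) -> float:
        """Higher is more alike: negative mean absolute gap over co-rated items."""
        shared = set(u) & set(v)
        if not shared:
            return float("-inf")
        return -sum(abs(u[i] - v[i]) for i in shared) / len(shared)

    me = ratings["you"]
    # Rank everyone else by how closely their tastes match yours...
    peers = sorted((name for name in ratings if name != "you"),
                   key=lambda name: similarity(me, ratings[name]), reverse=True)
    # ...then suggest the closest peer's top-rated title you haven't seen.
    unseen = {t: r for t, r in ratings[peers[0]].items() if t not in me}
    print(max(unseen, key=unseen.get))  # -> Movie D

Since alice’s tastes track yours and bob’s don’t, the filter suggests the movie alice loved that you haven’t rated.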

These techniques are useful for decisions of a certain scale and complexity: when the alternatives are well understood and few enough in number. Once the choice set grows beyond that, we need additional tools to make a good decision.

One strategy to use is what Amos Tversky (1972) called ‘elimination by aspects.’ Someone using this strategy first decides what aspect is most important (say, commuting distance), establishes a cutoff level (say, no more than a thirty-minute commute), then eliminates all the alternatives that do not come up to this standard. The process is repeated, attribute by attribute (no more than $1,500 per month; at least two bedrooms; dogs permitted), until either a choice is made or the set is narrowed down enough to switch over to a compensatory evaluation of the ‘finalists.’

This is a very useful tool if you have a good idea of which attributes are of most value to you.
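As a sketch, here is Tversky’s procedure in Python, using the apartment-hunting cutoffs from the quote. The listings themselves are invented; only the screening logic matters.

    # Elimination by aspects (Tversky, 1972): screen alternatives attribute by
    # attribute, most important first, until a set of "finalists" remains.
    apartments = [
        {"name": "A", "commute_min": 20, "rent": 1400, "bedrooms": 2, "dogs_ok": True},
        {"name": "B", "commute_min": 45, "rent": 1200, "bedrooms": 3, "dogs_ok": True},
        {"name": "C", "commute_min": 25, "rent": 1600, "bedrooms": 2, "dogs_ok": False},
        {"name": "D", "commute_min": 15, "rent": 1450, "bedrooms": 1, "dogs_ok": True},
    ]

    # Cutoffs in decreasing order of importance, mirroring the quote's example.
    aspects = [
        ("commute of 30 minutes or less", lambda a: a["commute_min"] <= 30),
        ("no more than $1,500 per month", lambda a: a["rent"] <= 1500),
        ("at least two bedrooms",         lambda a: a["bedrooms"] >= 2),
        ("dogs permitted",                lambda a: a["dogs_ok"]),
    ]

    finalists = apartments
    for label, passes in aspects:
        surviving = [a for a in finalists if passes(a)]
        if not surviving:  # never eliminate everyone; relax the cutoff instead
            break
        finalists = surviving
        print(label, "->", [a["name"] for a in finalists])
    # commute of 30 minutes or less -> ['A', 'C', 'D']
    # no more than $1,500 per month -> ['A', 'D']
    # at least two bedrooms -> ['A']
    # dogs permitted -> ['A']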

When using these techniques, we have to be mindful of the fact that the companies trying to sell us goods have spent a lot of time and money figuring out which attributes are important to us as well.

For example, if you were to shop for an SUV, you would notice that they all seem to share a common set of attributes now (engine options, towing options, seating options, storage options). The manufacturers are trying to nudge you not to eliminate them from your list. This forces you to do deeper research or, better yet (for them), to walk into dealerships, where salespeople will try to inflate the importance of those attributes (which they do best).

They also give features new names to differentiate themselves and get onto your list. What do you mean, our competitors don’t have FLEXfuel?

Incentives

Incentives are so ubiquitous in our lives that it’s very easy to overlook them. Unfortunately, overlooking them can lead us to make poor decisions.

Thaler and Sunstein believe this is tied to how salient the incentive is.

The most important modification that must be made to a standard analysis of incentives is salience. Do the choosers actually notice the incentives they face? In free markets, the answer is usually yes, but in important cases the answer is no.

Consider the example of members of an urban family deciding whether to buy a car. Suppose their choices are to take taxis and public transportation or to spend ten thousand dollars to buy a used car, which they can park on the street in front of their home. The only salient costs of owning this car will be the weekly stops at the gas station, occasional repair bills, and a yearly insurance bill. The opportunity cost of the ten thousand dollars is likely to be neglected. (In other words, once they purchase the car, they tend to forget about the ten thousand dollars and stop treating it as money that could have been spent on something else.) In contrast, every time the family uses a taxi the cost will be in their face, with the meter clicking every few blocks. So behavioral analysis of the incentives of car ownership will predict that people will underweight the opportunity costs of car ownership, and possibly other less salient aspects such as depreciation, and may overweight the very salient costs of using a taxi.

The problems here are relatable and easily solved: if the family above had written down all the numbers for taxis, public transportation, and car ownership, it would have been much harder for them to undervalue the less salient costs of any of their choices (at least if the attribute they value most is cost).
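A back-of-the-envelope tally might look like the sketch below. Every figure is invented for illustration; the point is that writing the numbers down puts the neglected opportunity cost of the purchase right next to the salient taxi meter.

    # Annualized costs, all figures invented. The purchase price and the
    # forgone interest are the non-salient items the family tends to neglect.
    YEARS_OF_OWNERSHIP = 5

    car_per_year = {
        "purchase, spread over 5 years": 10_000 / YEARS_OF_OWNERSHIP,
        "gas": 1_500,
        "repairs": 800,
        "insurance": 1_200,
        "forgone interest on the $10,000": 300,
    }
    taxi_transit_per_year = {
        "taxis": 2_600,
        "transit passes": 1_100,
    }

    for label, costs in (("own a used car", car_per_year),
                         ("taxis + transit", taxi_transit_per_year)):
        print(f"{label}: ${sum(costs.values()):,.0f} per year")
    # own a used car: $5,800 per year
    # taxis + transit: $3,700 per year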

***

This isn’t an exhaustive list of all the daily nudges we face, but it’s a good start, and some important, translatable themes emerge:

  • Realize when you are wandering around someone’s choice architecture.
  • Do your homework.
  • Develop strategies to help you make decisions when you are being nudged.

Still Interested? Buy, and most importantly read, the whole book. Also, check out our other post on some of the Biases and Blunders covered in Nudge.

Antigone: Better Decisions Through Literature


I recently picked up Sophocles’s Antigone. Sophocles wrote more than 100 plays in his lifetime, but only seven complete tragedies remain.

In Antigone, Polynices, son of Oedipus, goes to war with his brother Eteocles, the ruler of Thebes, for control of the city. The two kill each other, and their uncle, Creon, assumes control of the city.

Creon regards Polynices as a traitor. Accordingly, he denies his body a decent burial. He warns that anyone ignoring this edict shall be put to death.

Creon's position is understandable. He's trying to establish order, punish a traitor, and gain political authority. Yet he proceeds in ignorance, in the sense that he does not see the possible outcomes that may arise from his edict.

Antigone is the sister of Polynices and Eteocles. Clearly upset, she defies Creon’s edict and gives her brother a proper burial. Antigone is convinced that Creon is wrong: to her, he is defying the authority of the gods and overstepping his bounds.

Antigone is arrested and confesses. Creon orders her death: she is to be sealed in a cave, entombed alive.

Tiresias, the blind prophet, warns Creon. “Think, thou dost walk on fortune's razor-edge.” He predicts that if Creon doesn’t change his mind and permit the burial of Polynices, the gods will curse Thebes. Disaster, of course, will follow.

Creon recognizes the error of his ways. He orders Antigone freed and Polynices given a proper burial.

Alas, this wouldn't quite be a tragedy if things worked out so neatly.

Antigone has already hanged herself. Her fiancé, who also happens to be Creon’s son, blames his father for her death. He tries to kill his father but fails, then turns the sword on himself. Creon’s wife, Eurydice, hears of her son’s death and commits suicide.

So what exactly can we learn from all of this?

Creon is reluctant to change the status quo.

While he may not have foreseen Antigone's reaction or its consequences as warned by the prophet, he refuses, until it is too late, to change his mind. He's powerful. He's the ruler. He needs to be seen as decisive and he likely views changing his mind as a loss of status rather than a gain of compassion. Yet it is more complicated than this.

“Should Creon change his stance and lose authority and influence, he would have committed an error of commission, weighted more heavily, ceteris paribus, than doing nothing and having bad things happen,” explain Devjani Roy and Richard Zeckhauser in their paper Ignorance: Lessons from the Laboratory of Literature.

If you're curious, I'd recommend you give Antigone a read. It's short, only 50 pages or so.

The Half-life of Facts

Facts change all the time. Smoking has gone from doctor-recommended to deadly. We used to think the Earth was the center of the universe and that Pluto was a planet. For decades we were convinced that the brontosaurus was a real dinosaur.

Knowledge, like milk, has an expiry date. That's the key message behind Samuel Arbesman's excellent new book The Half-life of Facts: Why Everything We Know Has an Expiration Date.

We're bombarded with studies that seemingly prove this or that. Caffeine is good for you one day and bad for you the next. What we think we know and understand about the world is constantly changing. Nothing is immune. While big ideas are overturned infrequently, little ideas churn regularly.

As scientific knowledge grows, we end up rethinking old knowledge. Arbesman calls this “a churning of knowledge.” But understanding that facts change (and how they change) helps us cope in a world of constant uncertainty. We can never be too sure of what we know.

In introducing this idea, Arbesman writes:

Knowledge is like radioactivity. If you look at a single atom of uranium, whether it’s going to decay — breaking down and unleashing its energy — is highly unpredictable. It might decay in the next second, or you might have to sit and stare at it for thousands, or perhaps even millions, of years before it breaks apart.

But when you take a chunk of uranium, itself made up of trillions upon trillions of atoms, suddenly the unpredictable becomes predictable. We know how uranium atoms work in the aggregate. As a group of atoms, uranium is highly regular. When we combine particles together, a rule of probability known as the law of large numbers takes over, and even the behavior of a tiny piece of uranium becomes understandable. If we are patient enough, half of a chunk of uranium will break down in 704 million years, like clock-work. This number — 704 million years — is a measurable amount of time, and it is known as the half-life of uranium.

It turns out that facts, when viewed as a large body of knowledge, are just as predictable. Facts, in the aggregate, have half-lives: We can measure the amount of time for half of a subject’s knowledge to be overturned. There is science that explores the rates at which new facts are created, new technologies developed, and even how facts spread. How knowledge changes can be understood scientifically.

This is a powerful idea. We don’t have to be at sea in a world of changing knowledge. Instead, we can understand how facts grow and change in the aggregate, just like radioactive materials. This book is a guide to the startling notion that our knowledge — even what each of us has in our head — changes in understandable and systematic ways.

Why does this happen? Why does knowledge churn? In Zen and the Art of Motorcycle Maintenance, Robert Pirsig writes:

If all hypotheses cannot be tested, then the results of any experiment are inconclusive and the entire scientific method falls short of its goal of establishing proven knowledge.

About this Einstein had said, “Evolution has shown that at any given moment out of all conceivable constructions a single one has always proved itself absolutely superior to the rest,” and let it go at that.

… But there it was, the whole history of science, a clear story of continuously new and changing explanations of old facts. The time spans of permanence seemed completely random, he could see no order in them. Some scientific truths seemed to last for centuries, others for less than a year. Scientific truth was not dogma, good for eternity, but a temporal quantitative entity that could be studied like anything else.

A few pages later, Pirsig continues:

The purpose of scientific method is to select a single truth from among many hypothetical truths. That, more than anything else, is what science is all about. But historically science has done exactly the opposite. Through multiplication upon multiplication of facts, information, theories and hypotheses, it is science itself that is leading mankind from single absolute truths to multiple, indeterminate, relative ones.

With that, let’s dig into what this looks like. Arbesman offers an example:

A few years ago a team of scientists at a hospital in Paris decided to actually measure this (churning of knowledge). They decided to look at fields that they specialized in: cirrhosis and hepatitis, two areas that focus on liver diseases. They took nearly five hundred articles in these fields from more than fifty years and gave them to a battery of experts to examine.

Each expert was charged with saying whether the paper was factual, out-of-date, or disproved, according to more recent findings. Through doing this they were able to create a simple chart (see below) that showed the amount of factual content that had persisted over the previous decades. They found something striking: a clear decay in the number of papers that were still valid.

Furthermore, they got a clear measurement of the half-life of facts in these fields by looking at where the curve crosses 50 percent on this chart: 45 years. Essentially, information is like radioactive material: Medical knowledge about cirrhosis or hepatitis takes about forty-five years for half of it to be disproven or become out-of-date.
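The arithmetic behind that claim is ordinary exponential decay, the same formula used for radioactive material. Here is a minimal sketch, using the study’s 45-year figure:

    # Exponential decay applied to a body of knowledge, per the analogy above.
    # The 45-year half-life is the cirrhosis/hepatitis figure from the study.
    def fraction_still_valid(years: float, half_life: float = 45.0) -> float:
        """Share of a field's findings still considered valid after `years`."""
        return 0.5 ** (years / half_life)

    for t in (10, 45, 90):
        print(f"after {t} years: {fraction_still_valid(t):.0%} still valid")
    # after 10 years: 86% still valid
    # after 45 years: 50% still valid
    # after 90 years: 25% still valid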

[Figure: decay over time in the share of papers still considered valid]

Old knowledge, however, isn't a waste. It's not like we have to start from scratch. “Rather,” writes Arbesman, “the accumulation of knowledge can then lead us to a fuller and more accurate picture of the world around us.”

Isaac Asimov, in a wonderful essay, uses the Earth's curvature to help explain this:

When people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together.

When our knowledge in a field is immature, discoveries come easily and often explain the main ideas. “But there are uncountably more discoveries, although far rarer, in the tail of this distribution of discovery. As we delve deeper, whether it’s into discovering the diversity of life in the oceans or the shape of the earth, we begin to truly understand the world around us.”

So what we’re really dealing with is the long tail of discovery. Our search for what’s way out at the end of that tail, while it might not be as important or as Earth-shattering as the blockbuster discoveries, can be just as exciting and surprising. Each new little piece can teach us something about what we thought was possible in the world and help us to asymptotically approach a more complete understanding of our surroundings.

In an interview with The Economist, Arbesman was asked which scientific fields decay the slowest and fastest, and what causes that difference.

Well it depends, because these rates tend to change over time. For example, when medicine transitioned from an art to a science, its half-life was much more rapid than it is now. That said, medicine still has a very short half-life; in fact it is one of the areas where knowledge changes the fastest. One of the slowest is mathematics, because when you prove something in mathematics it is pretty much a settled matter unless someone finds an error in one of your proofs.

One thing we have seen is that the social sciences have a much faster rate of decay than the physical sciences, because in the social sciences there is a lot more “noise” at the experimental level. For instance, in physics, if you want to understand the arc of a parabola, you shoot a cannon 100 times and see where the cannonballs land. And when you do that, you are likely to find a really nice cluster around a single location. But if you are making measurements that have to do with people, things are a lot messier, because people respond to a lot of different things, and that means the effect sizes are going to be smaller.

Arbesman concludes his Economist interview:

I want to show people how knowledge changes. But at the same time I want to say, now that you know how knowledge changes, you have to be on guard, so you are not shocked when your children (are) coming home to tell you that dinosaurs have feathers. You have to look things up more often and recognise that most of the stuff you learned when you were younger is not at the cutting edge. We are coming a lot closer to a true understanding of the world; we know a lot more about the universe than we did even just a few decades ago. It is not the case that just because knowledge is constantly being overturned we do not know anything. But too often, we fail to acknowledge change.

Some fields are starting to recognise this. Medicine, for example, has got really good at encouraging its practitioners to stay current. A lot of medical students are taught that everything they learn is going to be obsolete soon after they graduate. There is even a website called “up to date” that constantly updates medical textbooks. In that sense we could all stand to learn from medicine; we constantly have to make an effort to explore the world anew—even if that means just looking at Wikipedia more often. And I am not just talking about dinosaurs and outer space. You see this same phenomenon with knowledge about nutrition or childcare—the stuff that has to do with how we live our lives.

Even when we find new information that contradicts what we thought we knew, we're likely to be slow to change our minds. “A prevailing theory or paradigm is not overthrown by the accumulation of contrary evidence,” writes Richard Zeckhauser, “but rather by a new paradigm that, for whatever reasons, begins to be accepted by scientists.”

In this view, scientific scholars are subject to status quo persistence. Far from being objective decoders of the empirical evidence, scientists have decided preferences about the scientific beliefs they hold. From a psychological perspective, this preference for beliefs can be seen as a reaction to the tensions caused by cognitive dissonance.

A lot of scientific advancement happens only when the old guard dies off. Many years ago Max Planck offered this insight: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.”

Even with the best intentions, our minds change slowly, and a lot of what we think we know is really just temporary knowledge to be updated in the future by more complete knowledge. I think this is why Nassim Taleb argues that we should read Seneca and not worry about someone like Jonah Lehrer bringing us sexy narratives of the latest discoveries. It turns out most of these discoveries are based on very little data and, while they may add to our cumulative knowledge, they are not likely to be around in 10 years.

The Half-life of Facts is a good read that helps put what we think we understand about the world into perspective.

Follow your curiosity and read my interview with the author. Knowing that knowledge has a half-life isn’t enough; we can use that knowledge to help determine what to read.

Blindness to the Benefits of Ambiguity

“Decision makers,” write Stefan Trautmann and Richard Zeckhauser in their paper Blindness to the Benefits of Ambiguity, “often prove to be blind to the learning opportunities offered by ambiguous probabilities. Such decision makers violate rational decision making and forgo significant expected payoffs.”

Trautmann and Zeckhauser argue that we often don’t recognize the benefits in commonly occurring ambiguous situations. In part this is because we often treat repeated decisions involving ambiguity as one-shot decisions. In doing so, we ignore the opportunity for learning when we encounter ambiguity in decisions that offer repeat choices.

To put this in context, the authors offer the following example:

A patient is prescribed a drug for high cholesterol. It is successful, lowering her total cholesterol from 230 to 190, and her only side effect is a mild case of sweaty palms. The physician is likely to keep the patient on this drug as long as her cholesterol stays low. Yet, there are many medications for treating cholesterol. Another might lower her cholesterol even more effectively or impose no side effects. Trying an alternative would seem to make sense, since the patient is likely to be on a cholesterol medication for the rest of her life.

In situations of ambiguity with repeated choices we often gravitate towards the first decision that offers a positive payoff. Once we've found a positive payoff we're likely to stick with that decision when given the opportunity to make the same choice again rather than experiment in an attempt to optimize payoffs. We ignore the opportunity for learning and favor the status quo. Another way to think of this is uncertainty avoidance (or ambiguity aversion).

Few individuals recognize that ambiguity offers the opportunity for learning. If a choice situation is to be repeated, ambiguity brings benefits, since one can change one’s choice if one learns the ambiguous choice is superior.

“We observe,” they offer, “that people's lack of a clear understanding of learning under ambiguity leads them to adopt non-Bayesian rules.”
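Here’s a toy simulation of that idea (mine, not the paper’s): a known option pays off half the time, while an ambiguous option secretly pays off more often, which you can only discover by sampling it. All the probabilities are invented.

    import random

    random.seed(0)  # reproducible toy run

    # The "status quo" option pays off 50% of the time; the ambiguous option
    # secretly pays off 70% of the time, learnable only by trying it.
    STATUS_QUO_P, AMBIGUOUS_P, ROUNDS = 0.5, 0.7, 10_000

    def run(explore_rate: float) -> float:
        wins = amb_trials = amb_wins = 0
        for _ in range(ROUNDS):
            # Current estimate of the ambiguous option (Laplace-smoothed).
            estimate = (amb_wins + 1) / (amb_trials + 2)
            if random.random() < explore_rate or estimate > STATUS_QUO_P:
                won = random.random() < AMBIGUOUS_P  # sample the ambiguous option
                amb_trials += 1
                amb_wins += won
            else:
                won = random.random() < STATUS_QUO_P  # stick with the status quo
            wins += won
        return wins / ROUNDS

    print(f"never explore:           {run(0.0):.3f}")  # ~0.50, stuck at the status quo
    print(f"explore 10% of the time: {run(0.1):.3f}")  # ~0.68, learns and switches

Treating the ambiguous option as a one-shot gamble leaves you at the status quo forever; even a little deliberate sampling lets you learn that it is superior and switch.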

Another example of how this manifests itself in the real world:

In the summer of 2010, the consensus estimate is that there are five applicants for every job opening, yet major employers who expect to hire significant numbers of workers once the economy turns up are sitting by the sidelines and having current workers do overtime. The favorability of the hiring situation is unprecedented in recent years. Thus, it would seem to make sense to hire a few workers, see how they perform relative to the norm. If the finding is much better, suggesting that the ability to select in a very tough labor market and among five applicants is a big advantage, then hire many more. This situation, where the payoff to the first-round decision is highly ambiguous, but perhaps well worthwhile once learning is taken into account, is a real world exemplar of the laboratory situations investigated in this paper.

According to Tolstoi, happy families are all alike, while every unhappy family is unhappy in its own way. A similar observation seems to hold true for situations involving ambiguity: There is only one way to capitalize correctly on learning opportunities under ambiguity, but there are many ways to violate reasonable learning strategies.

From an evolutionary perspective, why would learning avoidance persist if the benefits from learning are large?

Psychological findings suggest that negative experiences are crucial to learning, while good experiences have virtually no pedagogic power. In the current setting, ambiguous options would need to be sampled repeatedly in order to obtain sufficient information on whether to switch from the status quo. Both bad and good outcomes would be experienced along the way, but only good ones could trigger switching. Bad outcomes would also weigh much more heavily, leading people to require too much positive evidence before shifting to ambiguous options. In individual decision situations, losses often weigh 2 to 3 times as much as gains.

In addition, if one does not know what returns would have come from an ambiguous alternative, one cannot feel remorse from not having chosen it. Blame from others also plays an important role. In principal-agent relationships, bad outcomes often lead to criticism, and possibly legal consequences because of responsibility and accountability. Therefore, agents, such as financial advisors or medical practitioners may experience an even higher asymmetry from bad and good payoffs. Most people, for that reason, have had many fewer positive learning experiences with ambiguity than rational sampling would provide.
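A stylized calculation shows the force of that asymmetry (the 2-to-3x weighting is from the quote; the payoffs are invented): an ambiguous option that pays +$10 or -$10 with equal odds has an expected value of $0, but with losses felt at 2.5 times the weight of gains it is experienced as 0.5 × 10 - 0.5 × 2.5 × 10 = -7.5. A fair gamble feels like a loser, so the ambiguous option rarely gets sampled often enough to reveal whether it is actually superior.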

It might be a good idea to try a new brand the next time you're at the store rather than just making the same choice over and over. Who knows, you might discover you like it better.

The Default Choice, So Hard to Resist

The Web offers choice and competition that is only one click away. But in practice, the power of defaults often matters most.

This article in the NYT flags some interesting points on technological defaults and privacy.

The default values built into product designs can be particularly potent in the infinitely malleable medium of software, and on the Internet, where a software product or service can be constantly fine-tuned.

“Computing allows you to slice and dice choices in so many ways,” says Ben Shneiderman, a computer scientist at the University of Maryland. “Those design choices also shape our social, cultural and economic choices in ways most people don’t appreciate or understand.”

Default design choices play a central role in the debate over the privacy issues raised by marketers’ tracking of online consumer behavior. The Federal Trade Commission is considering what rules should limit how much online personal information marketers can collect, hold and pass along to other marketers — and whether those rules should be government regulations or self-regulatory guidelines.

Privacy advocates want tighter curbs on gathering online behavioral data, and want marketers to have to ask consumers to collect and share their information, presumably in exchange for discount offers or extra services. Advertisers want a fairly free hand to track online behavior, and to cut back only if consumers choose to opt out.

Defaults are part of a rich field of study that explores “decision architecture” — how a choice is presented or framed. If you want to learn more, read the 2008 book “Nudge,” by Richard H. Thaler and Cass R. Sunstein.

Thomas Kuhn: The Structure of Scientific Revolutions

“The decision to reject one paradigm is always simultaneously the decision to accept another, and the judgment leading to that decision involves the comparison of both paradigms with nature and with each other.”


The progress of science is commonly perceived as a continuous, incremental advance, where new discoveries add to the existing body of scientific knowledge. This view of scientific progress, however, is challenged by the physicist and philosopher of science Thomas Kuhn in his book The Structure of Scientific Revolutions. Kuhn argues that the history of science tells a different story, one where science proceeds through a series of revolutions interrupting normal incremental progress.

“A prevailing theory or paradigm is not overthrown by the accumulation of contrary evidence,” Richard Zeckhauser wrote, “but rather by a new paradigm that, for whatever reasons, begins to be accepted by scientists.”

Between scientific revolutions, old ideas and beliefs persist. These form the barriers of resistance to alternative explanations.

Zeckhauser continues: “In this view, scientific scholars are subject to status quo persistence. Far from being objective decoders of the empirical evidence, scientists have decided preferences about the scientific beliefs they hold. From a psychological perspective, this preference for beliefs can be seen as a reaction to the tensions caused by cognitive dissonance.”

***

Gary Taubes posted an excellent blog post discussing how paradigm shifts come about in science. He wrote:

…as Kuhn explained in The Structure of Scientific Revolutions, his seminal thesis on paradigm shifts, the people who invariably do manage to shift scientific paradigms are “either very young or very new to the field whose paradigm they change… for obviously these are the men [or women, of course] who, being little committed by prior practice to the traditional rules of normal science, are particularly likely to see that those rules no longer define a playable game and to conceive another set that can replace them.”

So when a shift does happen, it’s almost invariably the case that an outsider or a newcomer, at least, is going to be the one who pulls it off. This is one thing that makes this endeavor of figuring out who’s right or what’s right such a tricky one. Insiders are highly unlikely to shift a paradigm and history tells us they won’t do it. And if outsiders or newcomers take on the task, they not only suffer from the charge that they lack credentials and so credibility, but their work de facto implies that they know something that the insiders don’t – hence, the idiocy implication.

…This leads to a second major problem with making these assessments – who’s right or what’s right. As Kuhn explained, shifting a paradigm includes not just providing a solution to the outstanding problems in the field, but a rethinking of the questions that are asked, the observations that are considered and how those observations are interpreted, and even the technologies that are used to answer the questions. In fact, often the problems that the new paradigm solves, the questions it answers, are not the problems and the questions that practitioners living in the old paradigm would have recognized as useful.

“Paradigms provide scientists not only with a map but also with some of the direction essential for map-making,” wrote Kuhn. “In learning a paradigm the scientist acquires theory, methods, and standards together, usually in an inextricable mixture. Therefore, when paradigms change, there are usually significant shifts in the criteria determining the legitimacy both of problems and of proposed solutions.”

As a result, Kuhn said, researchers on different sides of conflicting paradigms can barely discuss their differences in any meaningful way: “They will inevitably talk through each other when debating the relative merits of their respective paradigms. In the partially circular arguments that regularly result, each paradigm will be shown to satisfy more or less the criteria that it dictates for itself and to fall short of a few of those dictated by its opponent.”

But Taubes' explanation wasn't enough to satisfy my curiosity.

***

The Structure of Scientific Revolutions

To learn more on how paradigm shifts happen, I purchased Kuhn's book, The Structure of Scientific Revolutions, and started to investigate.

Kuhn writes:

“The decision to reject one paradigm is always simultaneously the decision to accept another, and the judgment leading to that decision involves the comparison of both paradigms with nature and with each other.”

Anomalies are not all bad.

Yet any scientist who pauses to examine and refute every anomaly will seldom get any work done.

…during the sixty years after Newton's original computation, the predicted motion of the moon's perigee remained only half of that observed. As Europe's best mathematical physicists continued to wrestle unsuccessfully with the well-known discrepancy, there were occasional proposals for a modification of Newton's inverse square law. But no one took these proposals very seriously, and in practice this patience with a major anomaly proved justified. Clairaut in 1750 was able to show that only the mathematics of the application had been wrong and that Newtonian theory could stand as before. … persistent and recognized anomaly does not always induce crisis. … It follows that if an anomaly is to evoke crisis, it must usually be more than just an anomaly.

So what makes an anomaly worth the effort of investigation?

To that question Kuhn responds, “there is probably no fully general answer.” Einstein knew how to sift the essential from the non-essential better than most.

When the anomaly comes to be recognized as more than just another puzzle of science, the transition, or revolution, has begun.

The anomaly itself now comes to be more generally recognized as such by the profession. More and more attention is devoted to it by more and more of the field's most eminent men. If it still continues to resist, as it usually does not, many of them may come to view its resolution as the subject matter of their discipline. …

Early attacks on the anomaly will have followed the paradigm rules closely. As time passes and scrutiny increases, more of the attacks will start to diverge from the existing paradigm. It is “through this proliferation of divergent articulations,” Kuhn argues, that “the rules of normal science become increasingly blurred.

Though there still is a paradigm, few practitioners prove to be entirely agreed about what it is. Even formally standard solutions of solved problems are called into question.”

Einstein described this transition, which is the structure of scientific revolutions, best: “It was as if the ground had been pulled out from under one, with no firm foundation to be seen anywhere, upon which one could have built.”

All scientific crises begin with the blurring of a paradigm.

In this respect research during crisis very much resembles research during the pre-paradigm period, except that in the former the locus of difference is both smaller and more clearly defined. And all crises close in one of three ways. Sometimes normal science ultimately proves able to handle the crisis-provoking problem despite the despair of those who have seen it as the end of an existing paradigm. On other occasions the problem resists even apparently radical new approaches. Then scientists may conclude that no solution will be forthcoming in the present state of their field. The problem is labelled and set aside for a future generation with more developed tools. Or, finally, the case that will most concern us here, a crisis may end with the emergence of a new candidate for paradigm and with the ensuing battle over its acceptance.

But this isn't easy.

The transition from a paradigm in crisis to a new one from which a new tradition of normal science can emerge is far from a cumulative process, one achieved by an articulation or extension of the old paradigm. Rather it is a reconstruction of the field from new fundamentals, a reconstruction that changes some of the field's most elementary theoretical generalizations as well as many of its paradigm methods and applications.

Who solves these problems? Do the men and women who have invested a large portion of their lives in a field or theory suddenly confront evidence and change their minds? Sadly, no.

Almost always the men who achieve these fundamental inventions of a new paradigm have been either very young, or very new to the field whose paradigm they change. And perhaps that point need not have been made explicit, for obviously these are men who, being little committed by prior practice to the traditional rules of normal science, are particularly likely to see that those rules no longer define a playable game and to conceive another set that can replace them.

And

Therefore, when paradigms change, there are usually significant shifts in the criteria determining the legitimacy both of problems and of proposed solutions.

That observation returns us to the point from which this section began, for it provides our first explicit indication of why the choice between competing paradigms regularly raises questions that cannot be resolved by the criteria of normal science. To the extent, as significant as it is incomplete, that two scientific schools disagree about what is a problem and what is a solution, they will inevitably talk through each other when debating the relative merits of their respective paradigms. In the partially circular arguments that regularly result, each paradigm will be shown to satisfy more or less the criteria that it dictates for itself and to fall short of a few of those dictated by its opponent. There are other reasons, too, for the incompleteness of logical contact that consistently characterizes paradigm debates. For example, since no paradigm ever solves all the problems it defines and since no two paradigms leave all the same problems unsolved, paradigm debates always involve the question: Which problems is it more significant to have solved? Like the issue of competing standards, that question of values can be answered only in terms of criteria that lie outside of normal science altogether.

Many years ago Max Planck offered this insight: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.”

If you're interested in learning more about how paradigm shifts happen, read The Structure of Scientific Revolutions.
