The Two Types of Knowledge: The Max Planck/Chauffeur Test

Charlie Munger, the billionaire business partner of Warren Buffett, frequently tells the story below to illustrate how to distinguish real knowledge from pretend knowledge.

At the 2007 commencement of the USC Law School, Munger explained it this way:

I frequently tell the apocryphal story about how Max Planck, after he won the Nobel Prize, went around Germany giving the same standard lecture on the new quantum mechanics.

Over time, his chauffeur memorized the lecture and said, “Would you mind, Professor Planck, because it's so boring to stay in our routine, if I gave the lecture in Munich and you just sat in front wearing my chauffeur's hat?” Planck said, “Why not?” And the chauffeur got up and gave this long lecture on quantum mechanics. After which a physics professor stood up and asked a perfectly ghastly question. The speaker said, “Well I'm surprised that in an advanced city like Munich I get such an elementary question. I'm going to ask my chauffeur to reply.”

The point of the story is not the quick-wittedness of the protagonist, but rather — to echo Richard Feynman — it's about making a distinction between knowing the name of something and knowing something.

Two Kinds of Knowledge

In this world we have two kinds of knowledge. One is Planck knowledge, the people who really know. They’ve paid the dues, they have the aptitude. And then we’ve got chauffeur knowledge. They've learned the talk. They may have a big head of hair, they may have fine timbre in the voice, they’ll make a hell of an impression.

But in the end, all they have is chauffeur knowledge. I think I’ve just described practically every politician in the United States.

And you are going to have the problem in your life of getting the responsibility into the people with the Planck knowledge and away from the people with the chauffeur knowledge.

And there are huge forces working against you. My generation has failed you a bit… but you wouldn’t like it to be too easy now would you?

Real knowledge comes when people do the work. This is so important that Elon Musk tries to tease it out in interviews.

On the other hand, we have the people who don't do the work — they pretend. While they've learned to put on a good show, they lack understanding. They can't answer questions that don't rely on memorization. They can't explain things without using jargon or vague terms. They have no idea how things interact. They can't predict consequences.

The problem is that it's difficult to separate the two.

One way to tease out the difference between Planck and chauffeur knowledge is to ask them why.

In The Art of Thinking Clearly, Rolf Dobelli offers some commentary on distinguishing fake from real knowledge:

With journalists, it is more difficult. Some have acquired true knowledge. Often they are veteran reporters who have specialized for years in a clearly defined area. They make a serious effort to understand the complexity of a subject and to communicate it. They tend to write long articles that highlight a variety of cases and exceptions. The majority of journalists, however, fall into the category of chauffeur. They conjure up articles off the tops of their heads or, rather, from Google searches. Their texts are one-sided, short, and—often as compensation for their patchy knowledge—snarky and self-satisfied in tone.

The same superficiality is present in business. The larger a company, the more the CEO is expected to possess “star quality.” Dedication, solemnity, and reliability are undervalued, at least at the top. Too often shareholders and business journalists seem to believe that showmanship will deliver better results, which is obviously not the case.

One way to guard against this is to understand your circle of competence.

Dobelli concludes with some advice worth taking to heart.

Be on the lookout for chauffeur knowledge. Do not confuse the company spokesperson, the ringmaster, the newscaster, the schmoozer, the verbiage vendor, or the cliché generator with those who possess true knowledge. How do you recognize the difference? There is a clear indicator: True experts recognize the limits of what they know and what they do not know. If they find themselves outside their circle of competence, they keep quiet or simply say, “I don’t know.” This they utter unapologetically, even with a certain pride. From chauffeurs, we hear every line except this.

***

If you liked this, you'll love these other Farnam Street articles:

Circle of Competence — Knowing your Circle of Competence helps intelligent people like Charlie Munger and Warren Buffett stay out of trouble.

Learn Anything Faster with the Feynman Technique — The Feynman Technique helps you learn anything faster by quickly identifying gaps in your understanding. It's also a versatile thinking tool.

The Half-life of Facts

Facts change all the time. Smoking has gone from doctor recommended to deadly. We used to think the Earth was the center of the universe and that Pluto was a planet. For decades we were convinced that the brontosaurus was a real dinosaur.

Knowledge, like milk, has an expiry date. That's the key message behind Samuel Arbesman's excellent new book The Half-life of Facts: Why Everything We Know Has an Expiration Date.

We're bombarded with studies that seemingly prove this or that. Caffeine is good for you one day and bad for you the next. What we think we know and understand about the world is constantly changing. Nothing is immune. While big ideas are overturned infrequently, little ideas churn regularly.

As scientific knowledge grows, we end up rethinking old knowledge. Arbesman calls this “a churning of knowledge.” But understanding that facts change (and how they change) helps us cope in a world of constant uncertainty. We can never be too sure of what we know.

In introducing this idea, Arbesman writes:

Knowledge is like radioactivity. If you look at a single atom of uranium, whether it’s going to decay — breaking down and unleashing its energy — is highly unpredictable. It might decay in the next second, or you might have to sit and stare at it for thousands, or perhaps even millions, of years before it breaks apart.

But when you take a chunk of uranium, itself made up of trillions upon trillions of atoms, suddenly the unpredictable becomes predictable. We know how uranium atoms work in the aggregate. As a group of atoms, uranium is highly regular. When we combine particles together, a rule of probability known as the law of large numbers takes over, and even the behavior of a tiny piece of uranium becomes understandable. If we are patient enough, half of a chunk of uranium will break down in 704 million years, like clockwork. This number — 704 million years — is a measurable amount of time, and it is known as the half-life of uranium.

It turns out that facts, when viewed as a large body of knowledge, are just as predictable. Facts, in the aggregate, have half-lives: We can measure the amount of time for half of a subject’s knowledge to be overturned. There is science that explores the rates at which new facts are created, new technologies developed, and even how facts spread. How knowledge changes can be understood scientifically.

This is a powerful idea. We don’t have to be at sea in a world of changing knowledge. Instead, we can understand how facts grow and change in the aggregate, just like radioactive materials. This book is a guide to the startling notion that our knowledge — even what each of us has in our head — changes in understandable and systematic ways.
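Arbesman's uranium analogy can be sketched numerically. The simulation below (my own illustration, not from the book) gives each atom an independent chance of decaying at every step; any single atom's fate is unpredictable, yet after one half-life almost exactly half of a large sample remains, which is the law of large numbers at work:

```python
import random

def simulate_decay(n_atoms, half_life_steps, n_steps, seed=42):
    """Each atom independently survives a step with probability
    0.5 ** (1 / half_life_steps), so after half_life_steps steps
    roughly half of a large sample should remain."""
    rng = random.Random(seed)
    survive_p = 0.5 ** (1 / half_life_steps)
    alive = n_atoms
    for _ in range(n_steps):
        # Count how many of the currently alive atoms survive this step.
        alive = sum(1 for _ in range(alive) if rng.random() < survive_p)
    return alive

# One atom is unpredictable; 50,000 in aggregate are highly regular.
remaining = simulate_decay(n_atoms=50_000, half_life_steps=20, n_steps=20)
print(remaining / 50_000)  # close to 0.5
```

With 50,000 atoms, the fraction remaining after one half-life lands within a percent or so of 0.5 on essentially every run; with a single atom, the same code returns either 0 or 1.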

Why does this happen? Why does knowledge churn? In Zen and the Art of Motorcycle Maintenance, Robert Pirsig writes:

If all hypotheses cannot be tested, then the results of any experiment are inconclusive and the entire scientific method falls short of its goal of establishing proven knowledge.

About this Einstein had said, “Evolution has shown that at any given moment out of all conceivable constructions a single one has always proved itself absolutely superior to the rest,” and let it go at that.

… But there it was, the whole history of science, a clear story of continuously new and changing explanations of old facts. The time spans of permanence seemed completely random, he could see no order in them. Some scientific truths seemed to last for centuries, others for less than a year. Scientific truth was not dogma, good for eternity, but a temporal quantitative entity that could be studied like anything else.

A few pages later, Pirsig continues:

The purpose of scientific method is to select a single truth from among many hypothetical truths. That, more than anything else, is what science is all about. But historically science has done exactly the opposite. Through multiplication upon multiplication of facts, information, theories and hypotheses, it is science itself that is leading mankind from single absolute truths to multiple, indeterminate, relative ones.

With that, let's dig into how this looks. Arbesman offers an example:

A few years ago a team of scientists at a hospital in Paris decided to actually measure this (churning of knowledge). They decided to look at fields that they specialized in: cirrhosis and hepatitis, two areas that focus on liver diseases. They took nearly five hundred articles in these fields from more than fifty years and gave them to a battery of experts to examine.

Each expert was charged with saying whether the paper was factual, out-of-date, or disproved, according to more recent findings. Through doing this they were able to create a simple chart (see below) that showed the amount of factual content that had persisted over the previous decades. They found something striking: a clear decay in the number of papers that were still valid.

Furthermore, they got a clear measurement of the half-life of facts in these fields by looking at where the curve crosses 50 percent on this chart: 45 years. Essentially, information is like radioactive material: Medical knowledge about cirrhosis or hepatitis takes about forty-five years for half of it to be disproven or become out-of-date.

[Chart: the half-life of facts, showing the decay in the truth of knowledge over time]
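The 45-year figure plugs straight into the standard exponential-decay formula, fraction remaining = 0.5^(t / half-life). A minimal sketch (the function name is mine, not Arbesman's):

```python
def fraction_still_valid(years, half_life=45.0):
    """Exponential decay of a body of knowledge:
    the fraction of facts still valid after `years`,
    given the field's half-life (45 years for the
    cirrhosis/hepatitis literature in Arbesman's example)."""
    return 0.5 ** (years / half_life)

print(fraction_still_valid(45))            # 0.5: half the literature overturned
print(round(fraction_still_valid(10), 3))  # ~0.857: decay is gradual, not sudden
```

The same function describes uranium if you pass `half_life=704_000_000`; only the time scale changes, which is exactly the point of the analogy.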

Old knowledge, however, isn't a waste. It's not like we have to start from scratch. “Rather,” writes Arbesman, “the accumulation of knowledge can then lead us to a fuller and more accurate picture of the world around us.”

Isaac Asimov, in a wonderful essay, uses the Earth's curvature to help explain this:

When people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together.

When our knowledge in a field is immature, discoveries come easily and often explain the main ideas. “But there are uncountably more discoveries, although far rarer, in the tail of this distribution of discovery. As we delve deeper, whether it's into discovering the diversity of life in the oceans or the shape of the earth, we begin to truly understand the world around us.”

So what we're really dealing with is the long tail of discovery. Our search for what's way out at the end of that tail, while it might not be as important or as Earth-shattering as the blockbuster discoveries, can be just as exciting and surprising. Each new little piece can teach us something about what we thought was possible in the world and help us to asymptotically approach a more complete understanding of our surroundings.

In an interview with the Economist, Arbesman was asked which scientific fields decay the slowest and fastest, and what causes that difference.

Well it depends, because these rates tend to change over time. For example, when medicine transitioned from an art to a science, its half-life was much more rapid than it is now. That said, medicine still has a very short half-life; in fact it is one of the areas where knowledge changes the fastest. One of the slowest is mathematics, because when you prove something in mathematics it is pretty much a settled matter unless someone finds an error in one of your proofs.

One thing we have seen is that the social sciences have a much faster rate of decay than the physical sciences, because in the social sciences there is a lot more “noise” at the experimental level. For instance, in physics, if you want to understand the arc of a parabola, you shoot a cannon 100 times and see where the cannonballs land. And when you do that, you are likely to find a really nice cluster around a single location. But if you are making measurements that have to do with people, things are a lot messier, because people respond to a lot of different things, and that means the effect sizes are going to be smaller.

Arbesman concludes his Economist interview:

I want to show people how knowledge changes. But at the same time I want to say, now that you know how knowledge changes, you have to be on guard, so you are not shocked when your children (are) coming home to tell you that dinosaurs have feathers. You have to look things up more often and recognise that most of the stuff you learned when you were younger is not at the cutting edge. We are coming a lot closer to a true understanding of the world; we know a lot more about the universe than we did even just a few decades ago. It is not the case that just because knowledge is constantly being overturned we do not know anything. But too often, we fail to acknowledge change.

Some fields are starting to recognise this. Medicine, for example, has got really good at encouraging its practitioners to stay current. A lot of medical students are taught that everything they learn is going to be obsolete soon after they graduate. There is even a website called “up to date” that constantly updates medical textbooks. In that sense we could all stand to learn from medicine; we constantly have to make an effort to explore the world anew—even if that means just looking at Wikipedia more often. And I am not just talking about dinosaurs and outer space. You see this same phenomenon with knowledge about nutrition or childcare—the stuff that has to do with how we live our lives.

Even when we find new information that contradicts what we thought we knew, we're likely to be slow to change our minds. “A prevailing theory or paradigm is not overthrown by the accumulation of contrary evidence,” writes Richard Zeckhauser, “but rather by a new paradigm that, for whatever reasons, begins to be accepted by scientists.”

In this view, scientific scholars are subject to status quo persistence. Far from being objective decoders of the empirical evidence, scientists have decided preferences about the scientific beliefs they hold. From a psychological perspective, this preference for beliefs can be seen as a reaction to the tensions caused by cognitive dissonance.

A lot of scientific advancement happens only when the old guard dies off. Many years ago Max Planck offered this insight: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.”

While we have the best intentions and our minds change slowly, a lot of what we think we know is actually just temporary knowledge to be updated in the future by more complete knowledge. I think this is why Nassim Taleb argues that we should read Seneca and not worry about someone like Jonah Lehrer bringing us sexy narratives of the latest discoveries. It turns out most of these discoveries are based on very little data and, while they may add to our cumulative knowledge, they are not likely to be around in 10 years.

The Half-life of Facts is a good read that helps put what we think we understand about the world into perspective.

Follow your curiosity and read my interview with the author. Knowing that knowledge has a half-life isn't enough; we can use this to help us determine what to read.

The Human Mind has a Shut-Off Device

This passage is from Ryan Holiday in Trust Me, I'm Lying:

Once the mind has accepted a plausible explanation for something, it becomes a framework for all the information that is perceived after it. We’re drawn, subconsciously, to fit and contort all the subsequent knowledge we receive into our framework, whether it fits or not. Psychologists call this “cognitive rigidity”. The facts that built an original premise are gone, but the conclusion remains—the general feeling of our opinion floats over the collapsed foundation that established it.

Information overload, “busyness,” speed, and emotion all exacerbate this phenomenon. They make it even harder to update our beliefs or remain open-minded.

Reminds me of this quote from Charlie Munger:

[W]hat I'm saying here is that the human mind is a lot like the human egg, and the human egg has a shut-off device. When one sperm gets in, it shuts down so the next one can't get in. The human mind has a big tendency of the same sort. And here again, it doesn't just catch ordinary mortals; it catches the deans of physics. According to Max Planck, the really innovative, important new physics was never really accepted by the old guard. Instead a new guard came along that was less brain-blocked by its previous conclusions. And if Max Planck's crowd had this consistency and commitment tendency that kept their old conclusions intact in spite of disconfirming evidence, you can imagine what the crowd that you and I are part of behaves like.

If we get most of our plausible explanations from headlines — that is, newspapers, tweets, Facebook — we're in trouble. Conclusions based not on well-reasoned arguments, deep fluency, or facts but on headlines are the most troublesome. Map that to how hard it is to update our beliefs and you can start to see the structural problem.