
What’s So Significant About Significance?


One of my favorite studies of all time took the 50 most common ingredients from a cookbook and searched the literature for a connection to cancer: 72% had a study linking them to increased or decreased risk of cancer. (Here's the link for the interested.)

Meta-analyses (studies examining multiple studies) quashed the effect pretty seriously, but how many of those single studies were probably reported on in multiple media outlets, permanently causing changes in readers' dietary habits? (We know from studying juries that people are often unable to “forget” things that are subsequently proven false or misleading — misleading data is sticky.)

The phrase “statistically significant” is one of the more unfortunately misleading ones of our time. The word significant in the statistical sense — meaning distinguishable from random chance — does not carry the same meaning in common parlance, in which we mean distinguishable from something that does not matter. (We'll get to what that means.)

Confusing the two gets at the heart of a lot of misleading headlines, and it's worth a brief look into why they don't mean the same thing, so you can stop being scared that everything you eat or do is giving you cancer.

***

The term statistical significance is used to denote when an effect is found to be extremely unlikely to have occurred by chance. In order to make that determination, we have to propose a null hypothesis to be rejected. Let's say we propose that eating an apple a day reduces the incidence of colon cancer. The “null hypothesis” here would be that eating an apple a day does nothing to the incidence of colon cancer — that we'd be equally likely to get colon cancer if we ate that daily apple.

When we analyze the data of our study, we're technically not looking to say “Eating an apple a day prevents colon cancer” — that's a bit of a misconception. What we're actually doing is an inversion: we want the data to provide us with sufficient weight to reject the idea that apples have no effect on colon cancer.

And even when that happens, it's not an all-or-nothing determination. What we're actually saying is “It would be extremely unlikely for the data we have, which shows a daily apple reduces colon cancer by 50%, to have popped up by chance. Not impossible, but very unlikely.” The world does not quite allow us to have absolute conviction.

How unlikely? The currently accepted standard in many fields is 5% — there is a less than 5% chance the data would come up this way randomly. That immediately tells you that something like 1 out of every 20 “significant” results could be nothing more than a fluke of chance, but alas, that is where we're at. (The problem with the 5% p-value, along with the associated problem of p-hacking, has been subject to some intense debate, but we won't deal with that here.)
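To make that concrete, here's a minimal simulation sketch (my own illustration in Python, using NumPy and SciPy, not something from the studies above). If we run a pile of studies in which the null hypothesis is true every single time, roughly 1 in 20 of them will still clear the 5% bar by chance, which goes a long way toward explaining how most of a cookbook's ingredients can end up “linked” to cancer.

```python
import numpy as np
from scipy import stats

# Hypothetical setup: 1,000 "studies," each comparing two groups of 50
# drawn from the SAME distribution, so the null hypothesis is true every time.
rng = np.random.default_rng(42)
n_studies, n_per_group = 1_000, 50

false_positives = 0
for _ in range(n_studies):
    control = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    treatment = rng.normal(loc=0.0, scale=1.0, size=n_per_group)  # no real effect
    _, p_value = stats.ttest_ind(control, treatment)
    if p_value < 0.05:  # the conventional significance threshold
        false_positives += 1

# Expect roughly 50 "significant" results, even though every effect is pure noise.
print(f"{false_positives} of {n_studies} null studies came out 'significant'")
```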

We'll get to why “significance can be insignificant,” and why that's so important, in a moment. But let's make sure we're fully on board with the importance of sorting chance events from real ones with another illustration, this one outlined by Jordan Ellenberg in his wonderful book How Not to Be Wrong. Pay close attention:

Suppose we're in null hypothesis land, where the chance of death is exactly the same (say, 10%) for the fifty patients who got your drug and the fifty who got [a] placebo. But that doesn't mean that five of the drug patients die and five of the placebo patients die. In fact, the chance that exactly five of the drug patients die is about 18.5%; not very likely, just as it's not very likely that a long series of coin tosses would yield precisely as many heads as tails. In the same way, it's not very likely that exactly the same number of drug patients and placebo patients expire during the course of the trial. I computed:

13.3% chance equally many drug and placebo patients die
43.3% chance fewer placebo patients than drug patients die
43.3% chance fewer drug patients than placebo patients die

Seeing better results among the drug patients than the placebo patients says very little, since this isn't at all unlikely, even under the null hypothesis that your drug doesn't work.

But things are different if the drug patients do a lot better. Suppose five of the placebo patients die during the trial, but none of the drug patients do. If the null hypothesis is right, both classes of patients should have a 90% chance of survival. But in that case, it's highly unlikely that all fifty of the drug patients would survive. The first of the drug patients has a 90% chance; now the chance that not only the first but also the second patient survives is 90% of that 90%, or 81%–and if you want the third patient to survive as well, the chance of that happening is only 90% of that 81%, or 72.9%. Each new patient whose survival you stipulate shaves a little off the chances, and by the end of the process, where you're asking about the probability that all fifty will survive, the slice of probability that remains is pretty slim:

(0.9) x (0.9) x (0.9) x … fifty times! … x (0.9) x (0.9) = 0.00515 …

Under the null hypothesis, there's only one chance in two hundred of getting results this good. That's much more compelling. If I claim I can make the sun come up with my mind, and it does, you shouldn't be impressed by my powers; but if I claim I can make the sun not come up, and it doesn't, then I've demonstrated an outcome very unlikely under the null hypothesis, and you'd best take notice.

So you see, all this null hypothesis stuff is pretty important, because what you want to know is whether an effect is really “showing up” or whether it just popped up by chance.
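If you want to check Ellenberg's arithmetic for yourself, a few lines of Python reproduce it (a sketch using SciPy's binomial distribution; the scenario and figures are the ones from the quote above):

```python
from scipy.stats import binom

n, p = 50, 0.10  # 50 patients per arm, 10% chance of death under the null

# Chance that exactly 5 of the 50 drug patients die (about 18.5%)
print(binom.pmf(5, n, p))  # ≈ 0.185

# Chance that the two arms see exactly the same number of deaths (about 13.3%)
pmf = binom.pmf(range(n + 1), n, p)
print(sum(pk * pk for pk in pmf))  # ≈ 0.133

# Chance that none of the 50 drug patients die: 0.9 ** 50, about 1 in 200
print(0.9 ** 50)  # ≈ 0.00515
```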

A final illustration should make it clear:

Imagine you were flipping coins with a particular strategy for getting more heads, and after 30 flips you had 18 heads and 12 tails. Would you call it a miracle? Probably not — you'd realize immediately that it's perfectly possible for an 18/12 ratio to happen by chance. You wouldn't write an article in U.S. News and World Report proclaiming you'd figured out coin flipping.

Now let's say instead you flipped the coin 30,000 times and got 18,000 heads and 12,000 tails… well, then your case for statistical significance would be pretty tight. It would be nearly impossible to get that result by chance — your strategy must have something to it. The null hypothesis of “My coin flipping technique is no better than the usual one” would be easy to reject! (The p-value here would be orders of magnitude less than 5%, by the way.)
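Here's what that looks like in numbers: a quick sketch of my own in Python with SciPy, computing the one-sided p-value (the chance a fair coin would give you at least that many heads) for both versions of the experiment.

```python
from scipy.stats import binom

def p_value_at_least(heads: int, flips: int) -> float:
    """One-sided p-value: chance a fair coin gives at least this many heads."""
    # binom.sf(k, n, p) is P(X > k), so pass heads - 1 to include "heads" itself.
    return binom.sf(heads - 1, flips, 0.5)

print(p_value_at_least(18, 30))          # ≈ 0.18 (entirely plausible by chance)
print(p_value_at_least(18_000, 30_000))  # effectively zero (may underflow to 0.0)
```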

That's what this whole business is about.

***

Now that we've got this idea down, we come to the big question that statistical significance cannot answer: Even if the result is distinguishable from chance, does it actually matter?

Statistical significance cannot tell you whether the result is worth paying attention to — even if you get the p-value down to a minuscule number, increasing your confidence that what you saw was not due to chance. 

In How Not to be Wrong, Ellenberg provides a perfect example:

A 1995 study published in a British journal indicated that a new birth control pill doubled the risk of venous thrombosis (a potentially deadly blood clot) in its users. Predictably, 1.5 million British women freaked out, and some meaningfully large percentage of them stopped taking the pill. In 1996, 26,000 more babies were born than the previous year and there were 13,600 more abortions. Whoops!

So what, right? Lots of mothers' lives were saved, right?

Not really. The initial probability of a woman getting a venous thrombosis with any old birth control pill was about 1 in 7,000, or about 0.01%. That means that the “Killer Pill,” even if it was indeed increasing “thrombosis risk,” only increased that risk to 2 in 7,000, or about 0.02%! Is that worth rearranging your life for? Probably not.
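The gap between a scary “doubled risk” and a tiny absolute change is just arithmetic. A short sketch (my own numbers, built on the 1-in-7,000 figure above) makes it plain:

```python
# Absolute vs. relative risk, using the rough figures from the passage above.
baseline_risk = 1 / 7_000  # ≈ 0.014% chance of venous thrombosis
new_risk = 2 / 7_000       # the "doubled" risk reported in 1995

relative_increase = new_risk / baseline_risk  # 2.0x, which sounds terrifying
absolute_increase = new_risk - baseline_risk  # ≈ 0.014 percentage points

print(f"Relative risk: {relative_increase:.1f}x")                             # 2.0x
print(f"Absolute increase: {absolute_increase:.4%}")                          # ≈ 0.0143%
print(f"Extra cases per million users: {absolute_increase * 1_000_000:.0f}")  # ≈ 143
```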

Ellenberg makes the excellent point that, at least in the case of health, the null hypothesis is unlikely to be right in most cases! The body is a complex system — of course what we put in it affects how it functions in some direction or another. It's unlikely to be absolute zero.

But numerical and scale-based thinking, indispensable for anyone looking to not be a sucker, tells us that we must distinguish between small and meaningless effects (like the connection between almost all individual foods and cancer so far) and real ones (like the connection between smoking and lung cancer).

And now we arrive at the problem of “significance” — even if an effect is really happening, it still may not matter! We must learn to be wary of “relative” statistics (e.g., “the risk has doubled”) and look to favor “absolute” statistics, which tell us whether the thing is worth worrying about at all.

So we have two important ideas:

A. Just like coin flips, many results are perfectly possible by chance. We use the concept of “statistical significance” to figure out how likely it is that the effect we're seeing is real and not just a random illusion, like seeing 18 heads in 30 coin tosses.

B. Even if it is really happening, it still may be unimportant – an effect so insignificant in real terms that it's not worth our attention.

These effects should combine to raise our level of skepticism when hearing about groundbreaking new studies! (A third and equally important problem is the fact that correlation is not causation, a common problem in many fields of science including nutritional epidemiology. Just because x is associated with y does not mean that x is causing y.)

Tread carefully and keep your thinking cap on.

***

Still Interested? Read Ellenberg's great book to get your head working correctly, and check out our posts on Bayesian thinking, another very useful statistical tool, and learn a little about how we distinguish science from pseudoscience.

Daniel Dennett’s Most Useful Critical Thinking Tools

We recently discussed some wonderful mental tools from the great Richard Feynman. Let's get some more good ones from another giant, Daniel Dennett.

Dennett is one of the great thinkers in the world; he's been at the forefront of cognitive science and evolutionary science for over 50 years, trying to figure out how the mind works and why we believe the things we believe. He's written a number of amazing books on evolution, religion, consciousness, and free will. (He's also subject to some extreme criticism due to his atheist bent, as with Dawkins.)

His most recent book is the wise and insightful Intuition Pumps and Other Tools for Critical Thinking, where he lays out a series of short essays (some very short — less than a page) with mental shortcuts, tools, analogies, and metaphors for thinking about a variety of topics, mostly those topics he is best known for.

Some people don't like the disconnected nature of the book, but that's precisely its usefulness: Like what we do here at Farnam Street, Dennett is simply trying to add tools to your toolkit. You are free to, in the words of Bruce Lee, “Absorb what is useful, discard what is useless and add what is specifically your own.”

***

The book opens with 12 of Dennett's best “tools for critical thinking” — a bag of mental tricks to improve your ability to engage critically and rationally with the world.

Let's go through a few of the best ones. You'll be familiar with some and unfamiliar with others, agree with some and not with others. But if you adopt Bruce Lee's advice, you should come away with something new and useful.

Making mistakes

Mistakes are not just opportunities for learning; they are, in an important sense, the only opportunity for learning or making something truly new. Before there can be learning, there must be learners. There are only two non-miraculous ways for learners to come into existence: they must either evolve or be designed and built by learners that evolved. Biological evolution proceeds by a grand, inexorable process of trial and error–and without the errors the trials wouldn't accomplish anything. As Gore Vidal once said, “It is not enough to succeed. Others must fail.”

[…]

The chief trick to making good mistakes is not to hide them–especially not from yourself. Instead of turning away in denial when you make a mistake, you should become a connoisseur of your own mistakes, turning them over in your mind as if they were works of art, which in a way they are. The fundamental reaction to any mistake ought to be this: “Well, I won't do that again!”

Reductio ad absurdum

The crowbar of rational inquiry, the great lever that enforces consistency, is reductio ad absurdum–literally, reduction (of the argument) to absurdity. You take the assertion or conjecture at issue and see if you can pry any contradictions (or just preposterous implications) out of it. If you can, that proposition has to be discarded or sent back to the shop for retooling. We do this all the time without bothering to display the underlying logic: “If that's a bear, then bears have antlers!” or “He won't get here in time for supper unless he can fly like Superman.”

Rapoport's Rules

Just how charitable are you supposed to be when criticizing the views of an opponent? […] The best antidote I know for [the] tendency to caricature one's opponent is a list of rules promulgated by the social psychologist and game theorist Anatol Rapoport (creator of the winning Tit-for-Tat strategy in Robert Axelrod's legendary prisoner's dilemma tournament).

How to compose a successful critical commentary:

1. You should attempt to re-express your target's position so clearly, vividly, and fairly that your target says, “Thanks, I wish I'd thought of putting it that way.”
2. You should list any points of agreement (especially if they are not matters of general or widespread agreement).
3. You should mention anything that you have learned from your target.
4. Only then are you permitted to say so much as a word of rebuttal or criticism.

Sturgeon's Law

The science-fiction writer Ted Sturgeon, speaking at the World Science Fiction Convention in Philadelphia in September 1953, said,

When people talk about the mystery novel, they mention The Maltese Falcon and The Big Sleep. When they talk about the western, they say there's The Way West and Shane. But when they talk about science fiction, they call it “that Buck Rogers stuff,” and they say “ninety percent of science fiction is crud.” Well, they're right. Ninety percent of science fiction is crud. But then ninety percent of everything is crud, and it's the ten percent that isn't crud that's important, and the ten percent of science fiction that isn't crud is as good as or better than anything being written anywhere.

This advice is often ignored by ideologues intent on destroying the reputation of analytic philosophy, evolutionary psychology, sociology, cultural anthropology, macroeconomics, plastic surgery, improvisational theater, television sitcoms, philosophical theology, massage therapy, you name it. Let's stipulate at the outset that there is a great deal of deplorable, stupid, second-rate stuff out there, of all sorts.

Occam's Razor

Attributed to William of Ockham (or Occam), the fourteenth-century logician and philosopher, this thinking tool is actually a much older rule of thumb. A Latin name for it is lex parsimoniae, the law of parsimony. It is usually put into English as the maxim “Do not multiply entities beyond necessity.” The idea is straightforward: Don't concoct a complicated, extravagant theory if you've got a simpler one (containing fewer ingredients, fewer entities) that handles the phenomenon just as well. If exposure to extremely cold air can account for all the symptoms of frostbite, don't postulate unobserved “snow germs” or “arctic microbes.” Kepler's laws explain the orbits of the planets; we have no need to hypothesize pilots guiding the planets from control panels hidden under the surface.

Occam's Broom

The molecular biologist Sidney Brenner recently invented a delicious play on Occam's Razor, introducing the new term Occam's Broom, to describe the process in which inconvenient facts are whisked under the rug by intellectually dishonest champions of one theory or another. This is our first boom crutch, an anti-thinking tool, and you should keep your eyes peeled for it. The practice is particularly insidious when used by propagandists who direct their efforts at the lay public, because like Sherlock Holmes' famous clue about the dog that didn't bark in the night, the absence of a fact that has been swept off the scene by Occam's Broom is unnoticeable except by experts. 

Jootsing

…It is even harder to achieve what Doug Hofstadter calls jootsing, which stands for “jumping out of the system.” This is an important tactic not just in science and philosophy, but also in the arts. Creativity, that ardently sought but only rarely found virtue, often is a heretofore unimagined violation of the rules of the system from which it springs. It might be the system of classical harmony in music, the rules for meter and rhyme in sonnets (or limericks, even), or the canons of good taste or good form in some genre of art. Or it might be the assumptions and principles of some theory or research program. Being creative is not just a matter of casting about for something novel–anybody can do that, since novelty can be found in any random juxtaposition of stuff–but of making the novelty jump out of some system, a system that has become somewhat established, for good reasons.

When an artistic tradition reaches the point where literally “anything goes,” those who want to be creative have a problem: there are no fixed rules to rebel against, no complacent expectations to shatter, nothing to subvert, no background against which to create something that is both surprising and yet meaningful. It helps to know the tradition if you want to subvert it. That's why so few dabblers or novices succeed in coming up with anything truly creative.

Rathering (Anti-thinking tool)

Rathering is a way of sliding you swiftly and gently past a false dichotomy. The general form of a rathering is “It is not the case that blahblahblah, as orthodoxy would have you believe; it is rather that suchandsuchandsuch–which is radically different.” Some ratherings are just fine; you really must choose between the two alternatives on offer; in these cases, you are not being offered a false, but rather a genuine, inescapable dichotomy. But some ratherings are little more than sleight of hand, due to the fact that the word “rather” implies–without argument–that there is an important incompatibility between the claims flanking it.

The “Surely” Operator

When you're reading or skimming argumentative essays, especially by philosophers, here is a quick trick that may save you much time and effort, especially in this age of simple searching by computer: look for “surely” in the document, and check each occurrence. Not always, not even most of the time, but often the word “surely” is as good as a blinking light in locating a weak point in the argument…. Why? Because it marks the very edge of what the author is actually sure about and hopes readers will also be sure about. (If the author were really sure all the readers would agree, it wouldn't be worth mentioning.)

The Deepity

A “deepity” is a proposition that seems both important and true–and profound–but that achieves this effect by being ambiguous. On one reading it is manifestly false, but it would be earth-shaking if it were true; on the other reading it is true but trivial. The unwary listener picks up on the glimmer of truth from the second reading, and the devastating importance from the first reading, and thinks, Wow! That's a deepity.

Here is an example. (Better sit down: this is heavy stuff.)

Love is just a word.

[…]

Richard Dawkins recently alerted me to a fine deepity by Rowan Williams, the Archbishop of Canterbury, who described his faith as a

silent waiting on the truth, pure sitting and breathing in the presence of a question mark.

***

Still Interested? Check out Dennett's book for a lot more of these interesting tools for critical thinking, many non-intuitive. I guarantee you'll generate food for thought as you go along. Also, try checking out 11 Rules for Critical Thinking and learn how to be Eager to be Wrong.

Atul Gawande and the Mistrust of Science

Continuing on with Commencement Season, Atul Gawande gave an address to the students of Cal Tech last Friday, delivering a message to future scientists, but one that applies equally to all of us as thinkers:

“Even more than what you think, how you think matters.”

Gawande addresses the current growing mistrust of “scientific authority” — the thought that because science creaks along one mistake at a time, it isn't to be trusted. The misunderstanding of what scientific thinking is and how it works is at the root of much problematic ideology, and it's up to those who do understand it to promote its virtues.

It's important to realize that scientists, singular, are as fallible as the rest of us. Thinking otherwise only sets you up for a disappointment. The point of science is the collective, the forward advance of the hive, not the bee. It's sort of a sausage-making factory when seen up close, but when you pull back the view, it looks like a beautifully humming engine, steadily giving us more and more information about ourselves and the world around us. Science is, above all, a method of thought. A way of figuring out what's true and what we're just fooling ourselves about.

So explains Gawande:

Few working scientists can give a ground-up explanation of the phenomenon they study; they rely on information and techniques borrowed from other scientists. Knowledge and the virtues of the scientific orientation live far more in the community than the individual. When we talk of a “scientific community,” we are pointing to something critical: that advanced science is a social enterprise, characterized by an intricate division of cognitive labor. Individual scientists, no less than the quacks, can be famously bull-headed, overly enamored of pet theories, dismissive of new evidence, and heedless of their fallibility. (Hence Max Planck’s observation that science advances one funeral at a time.) But as a community endeavor, it is beautifully self-correcting.

Beautifully organized, however, it is not. Seen up close, the scientific community—with its muddled peer-review process, badly written journal articles, subtly contemptuous letters to the editor, overtly contemptuous subreddit threads, and pompous pronouncements of the academy— looks like a rickety vehicle for getting to truth. Yet the hive mind swarms ever forward. It now advances knowledge in almost every realm of existence—even the humanities, where neuroscience and computerization are shaping understanding of everything from free will to how art and literature have evolved over time.

He echoes Steven Pinker in the thought that science, traditionally left to the realm of discovering “physical” reality, is now making great inroads into what might have previously been considered philosophy, by exploring why and how our minds work the way they do. This can only be accomplished by deep critical thinking across a broad range of disciplines, and by the dual attack of specialists uncovering highly specific nuggets and great synthesizers able to suss out meaning from the big pile of facts.

The whole speech is worth a read and reflection, but Gawande's conclusion is particularly poignant for an educated individual in a Republic:

The mistake, then, is to believe that the educational credentials you get today give you any special authority on truth. What you have gained is far more important: an understanding of what real truth-seeking looks like. It is the effort not of a single person but of a group of people—the bigger the better—pursuing ideas with curiosity, inquisitiveness, openness, and discipline. As scientists, in other words.

Even more than what you think, how you think matters. The stakes for understanding this could not be higher than they are today, because we are not just battling for what it means to be scientists. We are battling for what it means to be citizens.

Still Interested? Read the rest, and read a few other of this year's commencements by Nassim Taleb and Gary Taubes. Or read about E.O. Wilson, the great Harvard biologist, and what he thought it took to become a great scientist. (Hint: The same stuff it takes for anyone to become a great critical thinker.)

Eager to Be Wrong

“You know what Kipling said? Treat those two impostors just the same — success and failure. Of course, there’s going to be some failure in making the correct decisions. Nobody bats a thousand. I think it’s important to review your past stupidities so you are less likely to repeat them, but I’m not gnashing my teeth over it or suffering or enduring it. I regard it as perfectly normal to fail and make bad decisions. I think the tragedy in life is to be so timid that you don’t play hard enough so you have some reverses.”
— Charlie Munger

***

When was the last time you said to yourself I hope I’m wrong and really meant it?

Have you ever really meant it?

Here's the thing: In our search for truth we must realize, thinking along two tracks, that we're frequently led to wrong solutions by the workings of our natural apparatus. Uncertainty is a mentally demanding and, in a certain way, physically demanding process. The brain uses a lot of energy when it has to process conflicting information. To see this for yourself, try reading up on something contentious like the abortion debate, but with a completely open mind to either side (if you can). Pay attention as your brain starts twisting itself into a very uncomfortable state while you explore completely opposing sides of an argument.

This mental pain is called cognitive dissonance and it's really not that much fun. Charlie Munger calls the process of resolving this dissonance the doubt-avoidance tendency – the tendency to resolve conflicting information as quickly as possible in order to return to physical and mental comfort. To get back to your happy zone.

Combine this tendency to resolve doubt with the well-known first conclusion bias (something Francis Bacon knew about long ago), and the logical conclusion is that we land on a lot of wrong answers and stay there because it’s easier.

Let that sink in. We don’t stay there because we’re correct, but because it’s physically easier. It's a form of laziness.

Don’t believe me? Spend a single day asking yourself this simple question: Do I know this for sure, or have I simply landed on a comfortable spot?

You’ll be surprised how many things you do and believe just because it’s easy. You might not even know how you landed there. Don’t feel bad about it — it’s as natural as breathing. You were wired that way at birth.

But there is a way to attack this problem.

Munger has a dictum that he won’t allow himself to hold an opinion unless he knows the other side of the argument better than that side does. Such an unforgiving approach means that he’s not often wrong. (It sometimes takes many years to show, but posterity has rarely shown him to be way off.) It’s a tough, wise, and correct solution.

It's still hard, though, and doesn't solve the energy expenditure problem. What can we tell ourselves to encourage that kind of work? The answer would be well-known to Darwin: Train yourself to be eager to be wrong.

Right to be Wrong

The advice isn't simply to be open to being wrong, which you’ve probably been told to do your whole life. That’s nice, and correct in theory, but frequently turns into empty words on a page. Simply being open to being wrong allows you to keep the window cracked when confronted with disconfirming evidence — to say Well, I was open to it! and keep on with your old conclusion.

Eagerness implies something more. Eager implies that you actively hope there is real, true, disconfirming information proving you wrong. It implies you’d be more than glad to find it. It implies that you might even go looking for it. And most importantly, it implies that when you do find yourself in error, you don’t need to feel bad about it. You feel great about it! Imagine how much of the world this unlocks for you.

Why be so eager to prove yourself wrong? Well, do you want to be comfortable or find the truth? Do you want to say you understand the world or do you want to actually understand it? If you’re a truth seeker, you want reality the way it is, so you can live in harmony with it.

Feynman wanted reality. Darwin wanted reality. Einstein wanted reality. Even when they didn’t like it. The way to stand on the shoulders of giants is to start the day by telling yourself I can't wait to correct my bad ideas, because then I’ll be one step closer to reality. 

*** 

Post-script: Make sure you apply this advice to things that matter. As stated above, resolving uncertainty takes great energy. Don’t waste that energy on deciding whether Nike or Reebok sneakers are better. They’re both fine. Pick the ones that feel comfortable and move on. Save your deep introspection for the stuff that matters.

Peter Thiel on the End of Hubris and the Lessons from the Internet Bubble of the Late 90s

Madness is rare in individuals—but in groups, parties, nations and ages it is the rule.
— Friedrich Nietzsche

The best interview question — what important truth do very few people agree with you on? — is tough to answer. Just think about it for a second.

In his book Zero to One, Peter Thiel argues that it might be easier to start with what everyone seems to agree on and go until you disagree.

If you can identify a delusional popular belief, you can find what lies hidden behind it: the contrarian truth.

Consider the proposition that companies should make money for their shareholders and not lose it. This seems self-evident, but it wasn't so obvious to many in the late 90s. Remember back then? No loss was too big. (In my interview with Sanjay Bakshi he suggested that to some extent this still exists today.)

Making money? That was old school. In the late 1990s it was all about the new economy. Eyeballs first, profits later.

Conventional beliefs only ever come to appear arbitrary and wrong in retrospect; whenever one collapses, we call the old belief a bubble. But the distortions caused by bubbles don’t disappear when they pop. The internet craze of the ’90s was the biggest bubble since the crash of 1929, and the lessons learned afterward define and distort almost all thinking about technology today. The first step to thinking clearly is to question what we think we know about the past.


There's really no need to rehash the 1990s in this article. You can google it. Or you can read the summary in chapter two of Zero to One.

Where things get interesting, at least in the thinking context, is in the lessons we drew from the late 90s. Thiel says the following were the lessons most commonly learned:

The entrepreneurs who stuck with Silicon Valley learned four big lessons from the dot-com crash that still guide business thinking today:

1. Make incremental advances. Grand visions inflated the bubble, so they should not be indulged. Anyone who claims to be able to do something great is suspect, and anyone who wants to change the world should be more humble. Small, incremental steps are the only safe path forward.

2. Stay lean and flexible. All companies must be “lean,” which is code for “unplanned.” You should not know what your business will do; planning is arrogant and inflexible. Instead you should try things out, “iterate,” and treat entrepreneurship as agnostic experimentation.

3. Improve on the competition. Don’t try to create a new market prematurely. The only way to know you have a real business is to start with an already existing customer, so you should build your company by improving on recognizable products already offered by successful competitors.

4. Focus on product, not sales. If your product requires advertising or salespeople to sell it, it’s not good enough: technology is primarily about product development, not distribution. Bubble-era advertising was obviously wasteful, so the only sustainable growth is viral growth.

These lessons, Thiel argues, are now dogma in the startup world. Ignore them at your peril and risk near certain failure. In fact, many private companies I've worked with have adopted the same view. Governments too are attempting to replicate these ‘facts' — they have become conventional wisdom.

And yet … the opposites are probably just as true, if not more so.

1. It is better to risk boldness than triviality.
2. A bad plan is better than no plan.
3. Competitive markets destroy profits.
4. Sales matters just as much as product.

Such is the world of messy social science — hard and fast rules are difficult to come by, and frequently, good ideas lose value as they gain popularity. (This is the “everyone on their tip-toes at a parade” idea.) Just as importantly, what starts as a good hand tends to be overplayed by man-with-a-hammer types.

And so the lessons that have been culled from the tech crash are not necessarily wrong; they are just context-dependent. It is hard to generalize with them.


According to Thiel, we must learn to use our brains as well as our emotions:

We still need new technology, and we may even need some 1999-style hubris and exuberance to get it. To build the next generation of companies, we must abandon the dogmas created after the crash. That doesn’t mean the opposite ideas are automatically true: you can’t escape the madness of crowds by dogmatically rejecting them. Instead ask yourself: how much of what you know about business is shaped by mistaken reactions to past mistakes? The most contrarian thing of all is not to oppose the crowd but to think for yourself.

In a nutshell, when everyone learns the same lessons, applying them to the point of religious devotion, there can be opportunity in the opposite. If everyone is thinking the same thing, no one is really thinking.

As Alfred Sloan, the heroic former CEO of General Motors, once put it:

(Image: Alfred Sloan quote.)

Ray Dalio: Open-Mindedness And The Power of Not Knowing

Ray Dalio, founder of the investment firm Bridgewater Associates, is held up as a prime example of what a learning organization looks like in the best book I've ever read on learning, Learn or Die: Using Science to Build a Leading-Edge Learning Organization. He comes to us again with this bit of unconventional wisdom.

First, the context …

To make money in the markets, you have to think independently and be humble. You have to be an independent thinker because you can’t make money agreeing with the consensus view, which is already embedded in the price. Yet whenever you’re betting against the consensus there’s a significant probability you’re going to be wrong, so you have to be humble.

Early in my career I learned this lesson the hard way — through some very painful bad bets. The biggest of these mistakes occurred in 1981–’82, when I became convinced that the U.S. economy was about to fall into a depression. My research had led me to believe that, with the Federal Reserve’s tight money policy and lots of debt outstanding, there would be a global wave of debt defaults, and if the Fed tried to handle it by printing money, inflation would accelerate. I was so certain that a depression was coming that I proclaimed it in newspaper columns, on TV, even in testimony to Congress. When Mexico defaulted on its debt in August 1982, I was sure I was right. Boy, was I wrong. What I’d considered improbable was exactly what happened: Fed chairman Paul Volcker’s move to lower interest rates and make money and credit available helped jump-start a bull market in stocks and the U.S. economy’s greatest ever noninflationary growth period

What's important isn't that he was wrong; it's what the experience taught him and how he implemented those lessons at Bridgewater.

This episode taught me the importance of always fearing being wrong, no matter how confident I am that I’m right. As a result, I began seeking out the smartest people I could find who disagreed with me so that I could understand their reasoning. Only after I fully grasped their points of view could I decide to reject or accept them. By doing this again and again over the years, not only have I increased my chances of being right, but I have also learned a huge amount.

There’s an art to this process of seeking out thoughtful disagreement. People who are successful at it realize that there is always some probability they might be wrong and that it’s worth the effort to consider what others are saying — not simply the others’ conclusions, but the reasoning behind them — to be assured that they aren’t making a mistake themselves. They approach disagreement with curiosity, not antagonism, and are what I call “open-minded and assertive at the same time.” This means that they possess the ability to calmly take in what other people are thinking rather than block it out, and to clearly lay out the reasons why they haven’t reached the same conclusion. They are able to listen carefully and objectively to the reasoning behind differing opinions.

When most people hear me describe this approach, they typically say, “No problem, I’m open-minded!” But what they really mean is that they’re open to being wrong. True open-mindedness is an entirely different mind-set. It is a process of being intensely worried about being wrong and asking questions instead of defending a position. It demands that you get over your ego-driven desire to have whatever answer you happen to have in your head be right. Instead, you need to actively question all of your opinions and seek out the reasoning behind alternative points of view.

Still curious? Check out my lengthy interview with Ed Hess.
