Tag: Steven Pinker

Scientific Concepts We All Ought To Know

John Brockman's online scientific roundtable Edge.org does something fantastic every year: It asks all of its contributors (hundreds of them) to answer one meaningful question. Questions like What Have You Changed Your Mind About? and What is Your Dangerous Idea?

This year's was particularly awesome for our purposes: What Scientific Term or Concept Ought To Be More Widely Known?

The answers give us a window into over 200 brilliant minds, with the simple filtering mechanism that there's something they know that we should probably know, too. We wanted to highlight a few of our favorites for you.

***

From Steven Pinker, a very interesting thought on the Second Law of Thermodynamics (entropy). This reminded me of the central thesis of The Origin of Wealth by Eric Beinhocker. (We'll cover that book in more depth in the future; we've referenced his work in the past.)


The Second Law of Thermodynamics states that in an isolated system (one that is not taking in energy), entropy never decreases. (The First Law is that energy is conserved; the Third, that a temperature of absolute zero is unreachable.) Closed systems inexorably become less structured, less organized, less able to accomplish interesting and useful outcomes, until they slide into an equilibrium of gray, tepid, homogeneous monotony and stay there.

In its original formulation the Second Law referred to the process in which usable energy in the form of a difference in temperature between two bodies is dissipated as heat flows from the warmer to the cooler body. Once it was appreciated that heat is not an invisible fluid but the motion of molecules, a more general, statistical version of the Second Law took shape. Now order could be characterized in terms of the set of all microscopically distinct states of a system: Of all these states, the ones that we find useful make up a tiny sliver of the possibilities, while the disorderly or useless states make up the vast majority. It follows that any perturbation of the system, whether it is a random jiggling of its parts or a whack from the outside, will, by the laws of probability, nudge the system toward disorder or uselessness. If you walk away from a sand castle, it won’t be there tomorrow, because as the wind, waves, seagulls, and small children push the grains of sand around, they’re more likely to arrange them into one of the vast number of configurations that don’t look like a castle than into the tiny few that do.

The Second Law of Thermodynamics is acknowledged in everyday life, in sayings such as “Ashes to ashes,” “Things fall apart,” “Rust never sleeps,” “Shit happens,” “You can’t unscramble an egg,” “What can go wrong will go wrong,” and (from the Texas lawmaker Sam Rayburn), “Any jackass can kick down a barn, but it takes a carpenter to build one.”

Scientists appreciate that the Second Law is far more than an explanation for everyday nuisances; it is a foundation of our understanding of the universe and our place in it. In 1915 the physicist Arthur Eddington wrote:

[…]

Why the awe for the Second Law? The Second Law defines the ultimate purpose of life, mind, and human striving: to deploy energy and information to fight back the tide of entropy and carve out refuges of beneficial order. An underappreciation of the inherent tendency toward disorder, and a failure to appreciate the precious niches of order we carve out, are a major source of human folly.

To start with, the Second Law implies that misfortune may be no one’s fault. The biggest breakthrough of the scientific revolution was to nullify the intuition that the universe is saturated with purpose: that everything happens for a reason. In this primitive understanding, when bad things happen—accidents, disease, famine—someone or something must have wanted them to happen. This in turn impels people to find a defendant, demon, scapegoat, or witch to punish. Galileo and Newton replaced this cosmic morality play with a clockwork universe in which events are caused by conditions in the present, not goals for the future. The Second Law deepens that discovery: Not only does the universe not care about our desires, but in the natural course of events it will appear to thwart them, because there are so many more ways for things to go wrong than to go right. Houses burn down, ships sink, battles are lost for the want of a horseshoe nail.

Poverty, too, needs no explanation. In a world governed by entropy and evolution, it is the default state of humankind. Matter does not just arrange itself into shelter or clothing, and living things do everything they can not to become our food. What needs to be explained is wealth. Yet most discussions of poverty consist of arguments about whom to blame for it.

More generally, an underappreciation of the Second Law lures people into seeing every unsolved social problem as a sign that their country is being driven off a cliff. It’s in the very nature of the universe that life has problems. But it’s better to figure out how to solve them—to apply information and energy to expand our refuge of beneficial order—than to start a conflagration and hope for the best.
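To make Pinker's "tiny sliver" point concrete, here's a minimal counting sketch (our illustration, not his): take a toy system of 100 particles, each of which can sit in the left or right half of a box, and compare how many of the 2^100 equally likely microstates look "ordered" (nearly all particles crowded on one side) versus simply "mixed."

```python
from math import comb

N = 100               # toy system: 100 particles, each in the left or right half of a box
total = 2 ** N        # every microscopically distinct arrangement, all equally likely

# "Ordered" macrostate: at least 90 of the 100 particles crowded into the left half.
ordered = sum(comb(N, k) for k in range(90, N + 1))

# "Mixed" macrostates: a roughly even split (45 to 55 particles on the left).
mixed = sum(comb(N, k) for k in range(45, 56))

print(f"ordered fraction: {ordered / total:.1e}")   # on the order of 1e-17
print(f"mixed fraction:   {mixed / total:.2f}")     # roughly 0.73
```

A random jiggle is overwhelmingly likely to land the system in a mixed state. That is the statistical version of the Second Law in miniature: disorder wins by sheer force of numbers.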

Richard Nisbett (a social psychologist) has a great one — a concept we've hit on before but one that is still underappreciated by most people: the Fundamental Attribution Error.

Modern scientific psychology insists that explanation of the behavior of humans always requires reference to the situation the person is in. The failure to do so sufficiently is known as the Fundamental Attribution Error. In Milgram’s famous obedience experiment, two-thirds of his subjects proved willing to deliver a great deal of electric shock to a pleasant-faced middle-aged man, well beyond the point where he became silent after begging them to stop on account of his heart condition. When I teach about this experiment to undergraduates, I’m quite sure I’ve never convinced a single one that their best friend might have delivered that amount of shock to the kindly gentleman, let alone that they themselves might have done so. They are protected by their armor of virtue from such wicked behavior. No amount of explanation about the power of the unique situation into which Milgram’s subject was placed is sufficient to convince them that their armor could have been breached.

My students, and everyone else in Western society, are confident that people behave honestly because they have the virtue of honesty, conscientiously because they have the virtue of conscientiousness. (In general, non-Westerners are less susceptible to the fundamental attribution error, lacking as they do sufficient knowledge of Aristotle!) People are believed to behave in an open and friendly way because they have the trait of extroversion, in an aggressive way because they have the trait of hostility. When they observe a single instance of honest or extroverted behavior they are confident that, in a different situation, the person would behave in a similarly honest or extroverted way.

In actual fact, when large numbers of people are observed in a wide range of situations, the correlation for trait-related behavior runs about .20 or less. People think the correlation is around .80. In reality, seeing Carlos behave more honestly than Bill in a given situation increases the likelihood that he will behave more honestly in another situation from the chance level of 50 percent to the vicinity of 55-57. People think that if Carlos behaves more honestly than Bill in one situation the likelihood that he will behave more honestly than Bill in another situation is 80 percent!

How could we be so hopelessly miscalibrated? There are many reasons, but one of the most important is that we don’t normally get trait-related information in a form that facilitates comparison and calculation. I observe Carlos in one situation when he might display honesty or the lack of it, and then not in another for perhaps a few weeks or months. I observe Bill in a different situation tapping honesty and then not in another for many months.

This implies that if people received behavioral data in such a form that many people are observed over the same time course in a given fixed situation, our calibration might be better. And indeed it is. People are quite well calibrated for abilities of various kinds, especially sports. The likelihood that Bill will score more points than Carlos in one basketball game given that he did in another is about 67 percent—and people think it’s about 67 percent.

Our susceptibility to the fundamental attribution error—overestimating the role of traits and underestimating the importance of situations—has implications for everything from how to select employees to how to teach moral behavior.
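Nisbett's calibration numbers line up with a simple statistical model. Here's a rough simulation sketch (ours, not his analysis), under the assumption that the Carlos-versus-Bill behavioral difference across two situations is bivariate normal with the stated correlation; the "consistency" probabilities then land almost exactly where he says they do.

```python
import numpy as np

rng = np.random.default_rng(0)

def consistency(rho, n=1_000_000):
    """P(Carlos beats Bill in situation 2, given that he beat Bill in situation 1),
    under the modeling assumption that the difference score across the two
    situations is bivariate normal with correlation rho."""
    cov = [[1.0, rho], [rho, 1.0]]
    d = rng.multivariate_normal([0.0, 0.0], cov, size=n)  # difference scores in the two situations
    beat_first = d[:, 0] > 0
    return (d[beat_first, 1] > 0).mean()

print(f"rho = 0.20 -> {consistency(0.20):.2f}")  # ~0.56: Nisbett's 55-57 percent
print(f"rho = 0.80 -> {consistency(0.80):.2f}")  # ~0.80: the figure people intuitively assume
print(f"rho = 0.50 -> {consistency(0.50):.2f}")  # ~0.67: a correlation near .5 reproduces the sports figure
```

In other words, the gap between a .20 world and the .80 world we imagine is exactly the gap between 56 percent and 80 percent consistency.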

Cesar Hidalgo, author of what looks like an awesome book, Why Information Grows, wrote about Criticality, a concept central to understanding complex systems:

In physics we say a system is in a critical state when it is ripe for a phase transition. Consider water turning into ice, or a cloud that is pregnant with rain. Both of these are examples of physical systems in a critical state.

The dynamics of criticality, however, are not very intuitive. Consider the abruptness of freezing water. For an outside observer, there is no difference between cold water and water that is just about to freeze. This is because water that is just about to freeze is still liquid. Yet, microscopically, cold water and water that is about to freeze are not the same.

When close to freezing, water is populated by gazillions of tiny ice crystals, crystals that are so small that water remains liquid. But this is water in a critical state, a state in which any additional freezing will result in these crystals touching each other, generating the solid mesh we know as ice. Yet, the ice crystals that formed during the transition are infinitesimal. They are just the last straw. So, freezing cannot be considered the result of these last crystals. They only represent the instability needed to trigger the transition; the real cause of the transition is the criticality of the state.

But why should anyone outside statistical physics care about criticality?

The reason is that history is full of individual narratives that maybe should be interpreted in terms of critical phenomena.

Did Rosa Parks start the civil rights movement? Or was the movement already running in the minds of those who had been promised equality and were instead handed discrimination? Was the collapse of Lehman Brothers an essential trigger for the Great Recession? Or was the financial system so critical that any disturbance could have made the trick?

As humans, we love individual narratives. We evolved to learn from stories and communicate almost exclusively in terms of them. But as Richard Feynman said repeatedly: The imagination of nature is often larger than that of man. So, maybe our obsession with individual narratives is nothing but a reflection of our limited imagination. Going forward we need to remember that systems often make individuals irrelevant. Just like none of your cells can claim to control your body, society also works in systemic ways.

So, the next time the house of cards collapses, remember to focus on why we were building a house of cards in the first place, instead of focusing on whether the last card was the queen of diamonds or a two of clubs.
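Criticality is easiest to feel in a toy model. Below is a minimal sketch (our illustration, not Hidalgo's) of two-dimensional site percolation: occupy each cell of a grid independently with probability p, then ask whether the occupied cells form a connected path from top to bottom. The spanning probability stays near zero, then shoots up abruptly around p ≈ 0.59; near that threshold the system is critical, and a tiny nudge in p, like the last few ice crystals, flips the whole grid.

```python
import random
from collections import deque

def spans(grid):
    """True if occupied cells connect the top row to the bottom row (4-neighbour connectivity)."""
    n = len(grid)
    seen = set((0, c) for c in range(n) if grid[0][c])
    queue = deque(seen)
    while queue:
        r, c = queue.popleft()
        if r == n - 1:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < n and 0 <= nc < n and grid[nr][nc] and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False

def spanning_probability(p, n=40, trials=200):
    """Fraction of random n-by-n grids (cells occupied with probability p) that span."""
    hits = 0
    for _ in range(trials):
        grid = [[random.random() < p for _ in range(n)] for _ in range(n)]
        hits += spans(grid)
    return hits / trials

random.seed(1)
for p in (0.50, 0.55, 0.59, 0.63, 0.70):
    print(f"p = {p:.2f} -> spanning probability ~ {spanning_probability(p):.2f}")
```

The output climbs from near zero to near one over a narrow band of p, which is the signature of a phase transition: the last few occupied cells are "just the last straw."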

The psychologist Adam Alter has another good one on a concept we all naturally miss from time to time, due to the structure of our minds: the Law of Small Numbers.

In 1832, a Prussian military analyst named Carl von Clausewitz explained that “three quarters of the factors on which action in war is based are wrapped in a fog of . . . uncertainty.” The best military commanders seemed to see through this “fog of war,” predicting how their opponents would behave on the basis of limited information. Sometimes, though, even the wisest generals made mistakes, divining a signal through the fog when no such signal existed. Often, their mistake was endorsing the law of small numbers—too readily concluding that the patterns they saw in a small sample of information would also hold for a much larger sample.

Both the Allies and Axis powers fell prey to the law of small numbers during World War II. In June 1944, Germany flew several raids on London. War experts plotted the position of each bomb as it fell, and noticed one cluster near Regent’s Park, and another along the banks of the Thames. This clustering concerned them, because it implied that the German military had designed a new bomb that was more accurate than any existing bomb. In fact, the Luftwaffe was dropping bombs randomly, aiming generally at the heart of London but not at any particular location over others. What the experts had seen were clusters that occur naturally through random processes—misleading noise masquerading as a useful signal.

That same month, German commanders made a similar mistake. Anticipating the raid later known as D-Day, they assumed the Allies would attack—but they weren’t sure precisely when. Combing old military records, a weather expert named Karl Sonntag noticed that the Allies had never launched a major attack when there was even a small chance of bad weather. Late May and much of June were forecast to be cloudy and rainy, which “acted like a tranquilizer all along the chain of German command,” according to Irish journalist Cornelius Ryan. “The various headquarters were quite confident that there would be no attack in the immediate future. . . . In each case conditions had varied, but meteorologists had noted that the Allies had never attempted a landing unless the prospects of favorable weather were almost certain.” The German command was mistaken, and on Tuesday, June 6, the Allied forces launched a devastating attack amidst strong winds and rain.

The British and German forces erred because they had taken a small sample of data too seriously: The British forces had mistaken the natural clustering that comes from relatively small samples of random data for a useful signal, while the German forces had mistaken an illusory pattern from a limited set of data for evidence of an ongoing, stable military policy. To illustrate their error, imagine a fair coin tossed three times. You’ll have a one-in-four chance of turning up a string of three heads or tails, which, if you make too much of that small sample, might lead you to conclude that the coin is biased to reveal one particular outcome all or almost all of the time. If you continue to toss the fair coin, say, a thousand times, you’re far more likely to turn up a distribution that approaches five hundred heads and five hundred tails. As the sample grows, your chance of turning up an unbroken string shrinks rapidly (to roughly one-in-sixteen after five tosses; one-in-five-hundred after ten tosses; and one-in-five-hundred-thousand after twenty tosses). A string is far better evidence of bias after twenty tosses than it is after three tosses—but if you succumb to the law of small numbers, you might draw sweeping conclusions from even tiny samples of data, just as the British and Germans did about their opponents’ tactics in World War II.

Of course, the law of small numbers applies to more than military tactics. It explains the rise of stereotypes (concluding that all people with a particular trait behave the same way); the dangers of relying on a single interview when deciding among job or college applicants (concluding that interview performance is a reliable guide to job or college performance at large); and the tendency to see short-term patterns in financial stock charts when in fact short-term stock movements almost never follow predictable patterns. The solution is to pay attention not just to the pattern of data, but also to how much data you have. Small samples aren’t just limited in value; they can be counterproductive because the stories they tell are often misleading.
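The coin-toss arithmetic in the excerpt is easy to verify, and a quick simulation shows just how often small samples lie. This is a minimal sketch of our own, not Alter's; the 80 percent "looks biased" cutoff is an arbitrary choice for illustration.

```python
import random

def prob_unbroken(n):
    """Exact probability that n fair tosses come up all heads or all tails."""
    return 2 / 2 ** n  # = 2 ** (1 - n)

for n in (3, 5, 10, 20):
    print(f"{n:2d} tosses: 1 in {1 / prob_unbroken(n):,.0f}")  # 1 in 4, 16, 512, 524,288

# How often does a perfectly fair coin *look* heavily biased
# (at least 80% heads or 80% tails) in samples of different sizes?
random.seed(0)
def looks_biased(sample_size, trials=20_000):
    count = 0
    for _ in range(trials):
        heads = sum(random.random() < 0.5 for _ in range(sample_size))
        if heads >= 0.8 * sample_size or heads <= 0.2 * sample_size:
            count += 1
    return count / trials

for size in (5, 20, 100):
    print(f"sample of {size:3d}: looks 'biased' {looks_biased(size):.1%} of the time")
```

With five tosses a fair coin "looks biased" more than a third of the time; with a hundred, essentially never. That is the whole law of small numbers in three lines of output.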

There are many, many more worth reading. Here's a great chance to build your multidisciplinary skill-set.

John Gray: Is Human Progress an Illusion?

“Straw Dogs is an attack on the unthinking beliefs of thinking people.”
— John Gray

***

We like to think that the tide of history is an inexorable march from barbarity to civilization, with humans “progressing” from one stage to the next through a gradual process of enlightenment. Modern humanists like Steven Pinker argue forcefully for this way of thinking.

But is this really so? Is this reality?

One of the leading challengers to that type of thinking has been the English writer and philosopher John Gray, the idiosyncratic author of books like Straw Dogs: Thoughts on Humans and Other Animals, The Soul of the Marionette, and The Silence of Animals.

To Gray, the concept of “progress” is closer to an illusion, or worse, a delusion of the modern age. Civilization is not a permanent state of being, but something that can quickly recede during a time of stress.

He outlines his basic idea in a foreword to Straw Dogs:

Straw Dogs is an attack on the unthinking beliefs of thinking people. Today, liberal humanism has the pervasive power that was once possessed by revealed religion. Humanists like to think they have a rational view of the world; but their core belief in progress is a superstition, further from the truth about the human animal than any of the world's religions.

Outside of science, progress is simply a myth. In some readers of Straw Dogs this observation seems to have produced a moral panic. Surely, they ask, no one can question the central article of faith of liberal societies? Without it, will we not despair? Like trembling Victorians terrified of losing their faith, these humanists cling to the moth-eaten brocade of progressive hope. Today religious believers are more free-thinking. Driven to the margins of a culture in which science claims authority over all of human knowledge, they have had to cultivate a capacity for doubt. In contrast, secular believers — held fast by the conventional wisdom of the time — are in the grip of unexamined dogmas.

And what, pray tell, are those dogmas? They are numerous, but the central one must be that the human march of science and technology creates good for the world. Gray's not so sure: He sees science and technology as magnifying humanity, “warts and all.”

Our tools allow us to go to the Moon, but also to murder each other with great alacrity. They have no morality attached to them.

In science, the growth of knowledge is cumulative. But human life as a whole is not a cumulative activity; what is gained in one generation may be lost in the next. In science, knowledge is an unmixed good; in ethics and politics it is bad as well as good. Science increases human power — and magnifies the flaws in human nature. It enables us to live longer and have higher living standards than in the past. At the same time it allows us to wreak destruction — on each other and the Earth — on a larger scale than ever before.

The idea of progress rests on the belief that the growth of knowledge and the advance of the species go together—if not now, then in the long run. The biblical myth of the Fall of Man contains the forbidden truth. Knowledge does not make us free. It leaves us as we have always been, prey to every kind of folly. The same truth is found in Greek myth. The punishment of Prometheus, chained to a rock for stealing fire from the gods, was not unjust.

Gray has a fairly heretical view of technology itself, pointing out that no one really controls its development or use, making humanity as a group closer to subjects than masters. Technology is both a giver of good and an ongoing source of tragedy, because it is used by fallible human beings.

Those who ignore the destructive potential of future technologies can do so only because they ignore history. Pogroms are as old as Christendom; but without railways, the telegraph and poison gas there could have been no Holocaust. There have always been tyrannies; but without modern means of transport and communication, Stalin and Mao could not have built their gulags. Humanity's worst crimes were made possible only by modern technology.

There is a deeper reason why “humanity” will never control technology. Technology is not something that humankind can control. It is an event that has befallen the world.

Once a technology enters human life — whether it be fire, the wheel, the automobile, radio, television, or the internet — it changes it in ways we can never fully understand.

[…]

Nothing is more commonplace than to lament that moral progress has failed to keep pace with scientific knowledge. If only we were more intelligent and more moral, we could use technology only for benign ends. The fault is not in our tools, we say, but in ourselves.

In one sense this is true. Technical progress leaves only one problem unsolved: the frailty of human nature. Unfortunately that problem is insoluble.

This reminds one of Garrett Hardin's idea that no system, however technically advanced, can be flawless because the human being at the center of it will always be fallible. (Our technologies, after all, are geared around our needs.) Even if we create technologies that “don't need us” — we are still fallible creators.

Gray's real problem with the idea of moral, technical, and scientific progress is that, even if it were real, it would be unending. In the modern conception of the world, unlike the ancient past where everything was seen as cyclical, growth has no natural stopping point. It's just an infinite path to the heavens. This manifests itself in our constant disdain for idleness.

Nothing is more alien to the present age than idleness. If we think of resting from our labours, it is only in order to return to them.

In thinking so highly of work we are aberrant. Few other cultures have ever done so. For nearly all of history and all prehistory, work was an indignity.

Among Christians, only Protestants have ever believed that work smacks of salvation; the work and prayer of medieval Christendom were interspersed with festivals. The ancient Greeks sought salvation in philosophy, the Indians in meditation, the Chinese in poetry and the love of nature. The pygmies of the African rainforests — now nearly extinct — work only to meet the needs of the day, and spend most of their lives idling.

Progress condemns idleness. The work needed to deliver humanity is vast. Indeed it is limitless, since as one plateau of achievement is reached another looms up. Of course this is only a mirage; but the worst of progress is not that it is an illusion. It is that it is endless.

Gray then goes on to compare our ideas of progress to Sisyphus forever pushing the boulder up the mountain.

He's an interesting thinker, Gray. In all of his works, though he certainly takes issue with our current modes of liberal progressive thought and is certainly not a religious man, one finds only hints of a “better” worldview being proposed. One is never sure if he even believes in “better”.

The closest thing to advice comes from the conclusion to his book The Silence of Animals. What is the point of life if not progress? Simply to see. Simply to be human. To contemplate. We must deal with human life the way we always have.

Godless contemplation is a more radical and transient condition: a temporary respite from the all-too-human world, with nothing particular in mind. In most traditions the life of contemplation promises redemption from being human: in Christianity, the end of tragedy and a glimpse of the divine comedy; in Jeffers's pantheism, the obliteration of the self in an ecstatic unity. Godless mysticism cannot escape the finality of tragedy, or make beauty eternal. It does not dissolve inner conflict into the false quietude of any oceanic calm. All it offers is mere being.

There is no redemption from being human. But no redemption is needed.

In the end, reading Gray is a good way to challenge yourself; to think about the world in a different way, and to examine your dogmas. Even the most cherished one of all.

Using Multidisciplinary Thinking to Approach Problems in a Complex World

Complex outcomes in human systems are a tough nut to crack when it comes to deciding what's really true. Any phenomenon we might try to explain will have a host of competing theories, many of them seemingly plausible.

So how do we know what to go with?

One idea is to take a cue from the best. One of the most successful “explainers” of human behavior has been the cognitive psychologist Steven Pinker. His books have been massively influential, in part because they combine scientific rigor, explanatory power, and plainly excellent writing.

What's unique about Pinker is the range of sources he draws on. His book The Better Angels of Our Nature, a cogitation on the decline in relative violence in recent human history, draws on ideas from evolutionary psychology, forensic anthropology, statistics, social history, criminology, and a host of other fields. Pinker, like Vaclav Smil and Jared Diamond, is the opposite of the man with a hammer, ranging over much material to come to his conclusions.

In fact, when asked about the progress of social science as an explanatory arena over time, Pinker credited this cross-disciplinary focus:

Because of the unification with the sciences, there are more genuinely explanatory theories, and there’s a sense of progress, with more non-obvious things being discovered that have profound implications.

But, even better, Pinker gives us an outline of how a multidisciplinary thinker should approach problems in a complex world.

***

Here's the issue at stake: When we're viewing a complex phenomenon—say, the decline in certain forms of violence in human history—it can be hard to come up with a rigorous explanation. We can't just set up repeated lab experiments and vary the conditions of human history to see what pops out, as we can with physics or chemistry.

So out of necessity, we must approach the problem in a different way.

In the interview referenced above, Pinker gives a wonderful example of how to do it. Note how he carefully “cross-checks” from a variety of data sources, developing a 3D view of the landscape he's trying to assess:

Pinker: Absolutely, I think most philosophers of science would say that all scientific generalizations are probabilistic rather than logically certain, more so for the social sciences because the systems you are studying are more complex than, say, molecules, and because there are fewer opportunities to intervene experimentally and to control every variable. But the existence of the social sciences, including psychology, to the extent that they have discovered anything, shows that, despite the uncontrollability of human behavior, you can make some progress: you can do your best to control the nuisance variables that are not literally in your control; you can have analogues in a laboratory that simulate what you’re interested in and impose an experimental manipulation.

You can be clever about squeezing the last drop of causal information out of a correlational data set, and you can use converging evidence, the qualitative narratives of traditional history in combination with quantitative data sets and regression analyses that try to find patterns in them. But I also go to traditional historical narratives, partly as a sanity check. If you’re just manipulating numbers, you never know whether you’ve wandered into some preposterous conclusion by taking numbers too seriously that couldn’t possibly reflect reality. Also, it’s the narrative history that provides hypotheses that can then be tested. Very often a historian comes up with some plausible causal story, and that gives the social scientists something to do in squeezing a story out of the numbers.

Warburton: I wonder if you’ve got an example of just that, where you’ve combined the history and the social science?

Pinker: One example is the hypothesis that the Humanitarian Revolution during the Enlightenment, that is, the abolition of slavery, torture, cruel punishments, religious persecution, and so on, was a product of an expansion of empathy, which in turn was fueled by literacy and the consumption of novels and journalistic accounts. People read what life was like in other times and places, and then applied their sense of empathy more broadly, which gave them second thoughts about whether it’s a good idea to disembowel someone as a form of criminal punishment. So that’s a historical hypothesis. Lynn Hunt, a historian at the University of California–Berkeley, proposed it, and there are some psychological studies that show that, indeed, if people read a first-person account by someone unlike them, they will become more sympathetic to that individual, and also to the category of people that that individual represents.

So now we have a bit of experimental psychology supporting the historical qualitative narrative. And, in addition, one can go to economic historians and see that, indeed, there was first a massive increase in the economic efficiency of manufacturing a book, then there was a massive increase in the number of books published, and finally there was a massive increase in the rate of literacy. So you’ve got a story that has at least three vertices: the historian’s hypothesis; the economic historians identifying exogenous variables that changed prior to the phenomenon we’re trying to explain, so the putative cause occurs before the putative effect; and then you have the experimental manipulation in a laboratory, showing that the intervening link is indeed plausible.

Pinker is saying: Look, we can't just rely on “plausible narratives” generated by folks like historians. There are too many possibilities that could be correct.

Nor can we rely purely on correlations (i.e., the rise in literacy statistically tracking the decline in violence) — they don't necessarily offer us a causative explanation. (Does the rise in literacy cause less violence, or is it vice versa? Or, does a third factor cause both?)
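The third-factor worry in that parenthetical is easy to demonstrate. Here's a small sketch (a made-up toy model, not Pinker's data) in which a single hypothetical confounder, call it "modernization," drives literacy up and violence down; the two end up strongly correlated even though neither causes the other.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical confounder that independently raises literacy and lowers violence.
modernization = rng.normal(size=n)
literacy = 0.8 * modernization + rng.normal(scale=0.6, size=n)
violence = -0.8 * modernization + rng.normal(scale=0.6, size=n)

# The raw correlation looks like "literacy reduces violence"...
print(f"corr(literacy, violence) = {np.corrcoef(literacy, violence)[0, 1]:.2f}")  # about -0.64

# ...yet by construction the only causal driver here is the third factor, which is
# exactly why correlational data alone can't settle the question.
```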

However, if we layer in some other known facts from areas we can experiment on — say, psychology or cognitive neuroscience — we can sometimes establish the causal link we need or, at worst, a better hypothesis of reality.

In this case, it would be the finding from psychology that certain forms of literacy do indeed increase empathy (for logical reasons).

Does this method give us absolute proof? No. However, it does allow us to propose and then test, re-test, alter, and strengthen or ultimately reject a hypothesis. (In other words, rigorous thinking.)

We can't stop here, though. We have to take time to examine competing hypotheses — there may be a better fit. The interviewer continues, asking Pinker about this methodology:

Warburton: And so you conclude that the de-centering that occurs through novel-reading and first-person accounts probably did have a causal impact on the willingness of people to be violent to their peers?

Pinker: That’s right. And, of course, one has to rule out alternative hypotheses. One of them could be the growth of affluence: perhaps it’s simply a question of how pleasant your life is. If you live a longer and healthier and more enjoyable life, maybe you place a higher value on life in general, and, by extension, the lives of others. That would be an alternative hypothesis to the idea that there was an expansion of empathy fueled by greater literacy. But that can be ruled out by data from economic historians that show there was little increase in affluence during the time of the Humanitarian Revolution. The increase in affluence really came later, in the 19th century, with the advent of the Industrial Revolution.

***

Let's review the process that Pinker has laid out, one that we might think about emulating as we examine the causes of complex phenomena in human systems:

  1. We observe an interesting phenomenon in need of explanation, one we feel capable of exploring.
  2. We propose and examine competing hypotheses that would explain the phenomenon (set up in a falsifiable way, in harmony with the divide between science and pseudoscience laid out for us by the great Karl Popper).
  3. We examine a cross-section of: empirical data relating to the phenomenon; sensible qualitative inference (from multiple fields/disciplines, the more fundamental the better); and, finally, “demonstrable” aspects of nature we are nearly certain about, arising from controlled experiment or other rigorous sources of knowledge ranging from engineering to biology to cognitive neuroscience.

What we end up with is not necessarily a bulletproof explanation, but it's probably the best we can do if we think carefully. A good cross-disciplinary examination, with quantitative and qualitative sources coming into equal play and a good dose of judgment, can be far more rigorous than the gut-instinct or plausible-nonsense stories that many of us lazily spout.

A Word of Caution

Although Pinker's “multiple vertices” approach to problem solving in complex domains can be powerful, we always have to be on guard for phenomena that we simply cannot explain at our current level of competence: We must have a “too hard” pile when competing explanations come out “too close to call” or we otherwise feel we're outside of our circle of competence. Always tread carefully and be sure to follow Darwin's Golden Rule: Contrary facts are more important than confirming ones. Be ready to change your mind, like Darwin, when the facts don't go your way.

***

Still Interested? For some more Pinker goodness check out our prior posts on his work, or check out a few of his books like How the Mind Works or The Blank Slate: The Modern Denial of Human Nature.


Spring 2016 Reading List — More Curated Recommendations For a Curious Mind

We hear a lot from people who want to read more. That's a great sentiment. But it won't actually happen until you decide what you're going to do less of. We all get 24 hours a day and 7 days a week. It's up to you how you'll spend that time.

For those who want to spend it reading, we've come across a lot of great books so far this year. Here are seven recommendations across a variety of topics. Some are newer, some are older — true knowledge has no expiration date.

1. The Evolution of Everything

Matt Ridley is a longtime favorite. Originally a PhD zoologist, Ridley went on to write great books like The Red Queen and The Rational Optimist, and wrote for The Economist for a while. This book makes the argument for how trial-and-error style evolution occurs across a wide range of phenomena. I don't know that I agree with all of it, but he's a great thinker and a lot of people will really enjoy the book.

2. A Powerful Mind: The Self-Education of George Washington

What a cool book idea by Adrienne Harrison. There are a zillion biographies of GW out there, with Chernow's getting a lot of praise recently. But Harrison zeroes in on Washington's autodidactic nature. Why did he read so much? How did he educate himself? Any self-motivated learner is probably going to enjoy this. We'll certainly cover it here at some point.

3. The Tiger

A Ryan Holiday recommendation, The Tiger is the story of a man-eating tiger in Siberia. Like, not that long ago. Pretty damn scary, but John Vaillant is an amazing writer who not only tells the tale of the tiger-hunt, but weaves in Russian history, natural science, the relationship between man and predator over time, and a variety of other topics in a natural and interesting way. Can't wait to read his other stuff. I read this in two flights.

4. The Sense of Style

This is such a great book on better writing, by the incomparable Steven Pinker. We have a post about it here, but it's worth re-recommending. If you're trying to understand great syntax in a non-dry and practical way — Pinker is careful to show that great writing can take many forms but generally shares a few underlying principles — this is your book. He weaves in some cognitive science, which must be a first for a style guide.

5. Creativity, Inc.: Overcoming the Unseen Forces That Stand in the Way of True Inspiration

I really loved this book. It's written by Ed Catmull, who along with John Lasseter built the modern Pixar, which is now part of Disney. Catmull talks about the creative process at Pixar and how their movies go from a kernel of an idea to a beautiful and moving finished product. (Hint: It takes a long time.) Pixar is one of the more brilliant modern companies, and Bob Iger's decision to buy it when he was named CEO of Disney ten years ago was a masterful stroke. I suspect Catmull and Lasseter are hugely responsible for the resurgence of Disney animation.

6. The Song Machine

This is a tough recommendation because it simultaneously fascinates and horrifies me. The book is about the development of modern, glossy pop music. I suspect anyone with an interest in music will want to see how these hits get made, with some people reading out of morbid curiosity and some because they want to learn more about the music they actually listen to. Pursue at your peril. I pulled out my old '90s rock music to soothe myself.

7. Plato at the Googleplex

Does philosophy still matter? Rebecca Goldstein, who is a modern analytical philosopher, goes after this topic in a pretty interesting way by exploring what it'd be like if Plato were interacting with the modern world. Very quirky subject matter and approach, but I actually appreciated that. There's a lot of cookie-cutter writing going on and Goldstein breaks out as she explores a timeless topic. Probably best reserved for those actually interested in philosophy, but even if you're not, it might stretch your brain a bit.

Bonus Bestseller

Alexander Hamilton

Farnam Street related travel has brought me to quite a few airports recently. I make a habit of checking out the airport bookstores because bookstores are awesome. Recently, I noticed that Chernow's biography of Hamilton was suddenly sitting amongst the bestsellers. Chernow's books are amazing, but airport bestsellers? It wasn't until I realized that Hamilton's life had been turned into a smash-hit Broadway musical, based on the book, that everything clicked. In any case, if you want to learn about an amazing American life and also be “part of the conversation,” check out Hamilton.

Can We Reason Our Way to a Better Morality?

In a 2012 TED talk, NYU professor Rebecca Goldstein, author of Plato at the Googleplex, sat down with her husband, Harvard professor Steven Pinker, for an interesting (and polarizing) conversation: Does pure reason eventually lead us to a better morality?

Goldstein argues yes; all progress is necessarily reason-based, and this should give us hope. Arguing against that fact would, indeed, require reasoning! Pinker, author of the controversial but well-received The Better Angels of our Nature, takes the devil's advocate position (though clearly for rhetorical effect). Perhaps reason is overrated? Perhaps morality is indeed a matter of the heart?

The animated (literally) conversation is below. Here's an interesting excerpt, and if you make it to the end, they also speculate on the things that we do today which may eventually be judged harshly by history.

Rebecca: Well, you didn't mention what might be one of our most effective better angels: reason. Reason has muscle. It's reason that provides the push to widen that circle of empathy. Every one of the humanitarian developments that you mentioned originated with thinkers who gave reasons for why some practice was indefensible. They demonstrated that the way people treated some particular group of others was logically inconsistent with the way they insisted on being treated themselves.

Steven: Are you saying that reason can actually change people's minds? Don't people just stick with whatever conviction serves their interests or conforms to the culture that they grew up in?

Rebecca: Here's a fascinating fact about us: Contradictions bother us, at least when we're forced to confront them, which is just another way of saying that we are susceptible to reason. And if you look at the history of moral progress, you can trace a direct pathway from reasoned arguments to changes in the way that we actually feel. Time and again, a thinker would lay out an argument as to why some practice was indefensible, irrational, inconsistent with values already held. Their essay would go viral, get translated into many languages, get debated at pubs and coffee houses and salons, and at dinner parties, and influence leaders, legislators, popular opinion. Eventually their conclusions get absorbed into the common sense of decency, erasing the tracks of the original argument that had gotten us there. Few of us today feel any need to put forth a rigorous philosophical argument as to why slavery is wrong or public hangings or beating children. By now, these things just feel wrong. But just those arguments had to be made, and they were, in centuries past.

 

Still Interested? Check out Pinker on how to educate yourself properly and how to improve your professional writing.

Towards a Greater Synthesis: Steven Pinker on How to Apply Science to the Humanities

The fundamental idea behind Farnam Street is to learn to think across disciplines and synthesize, using ideas in combination to solve problems in novel ways.

An easy example would be to take a fundamental idea of psychology like the concept of a near-miss (deprival super-reaction) and use it to help explain the success of a gambling enterprise. Or, similarly, using the idea of the endowment effect to help explain why lotteries are a lot more successful if you allow people to choose their own numbers. Sometimes we take ideas from hard science, like the idea of runaway feedback (think of a nuclear reaction gaining steam), to explain why small problems can become large problems or small advantages can become large ones.
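As a tiny illustration of that last model, here's a sketch (our toy numbers, nothing more) contrasting a small advantage that feeds back on itself with one that merely adds up; the 5 percent reinvestment rate is arbitrary.

```python
additive = 100.0      # gains arrive at a fixed rate: no feedback
compounding = 100.0   # gains are proportional to current size: positive feedback

for year in range(1, 31):
    additive += 5.0
    compounding *= 1.05
    if year % 10 == 0:
        print(f"year {year:2d}: additive = {additive:6.1f}, compounding = {compounding:6.1f}")
# The gap between the two widens every year: runaway feedback in miniature.
```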

This kind of reductionism and synthesis helps one understand the world at a fundamental level and solve new problems.

We’re sometimes asked about untapped ways that this thinking can be applied. In hearing this, it occasionally seems that people fall into the trap of believing all of the great cross-disciplinary thinking has been done. Or maybe even that all of the great thinking has been done, period.


Harvard psychologist Steven Pinker is here to say we have a long way to go.

We’ve written before about Pinker’s ideas on a broad education and on writing, but he's also got a great essay on Edge.org called Writing in the 21st Century wherein he addresses some of the central concepts of his book on writing — The Sense of Style. While the book's ideas are wonderful, later in the article he moves to a more general point useful for our purposes: Systematic application of the “harder” sciences to the humanities is a huge untapped source of knowledge.

He provides some examples that are fascinating in their potential:

This combination of science and letters is emblematic of what I hope to be a larger trend we spoke of earlier, namely the application of science, particularly psychology and cognitive science, to the traditional domains of humanities. There's no aspect of human communication and cultural creation that can't benefit from a greater application of psychology and the other sciences of mind. We would have an exciting addition to literary studies, for example, if literary critics knew more about linguistics. Poetry analysts could apply phonology (the study of sound structure) and the cognitive psychology of metaphor. An analysis of plot in fiction could benefit from a greater understanding of the conflicts and confluences of ultimate interests in human social relationships. The genre of biography would be deepened by an understanding of the nature of human memory, particularly autobiographical memory. How much of the memory of our childhood is confabulated? Memory scientists have a lot to say about that. How much do we polish our image of ourselves in describing ourselves to others, and more importantly, recollecting our own histories? Do we edit our memories in an Orwellian manner to make ourselves more coherent in retrospect? Syntax and semantics are relevant as well. How does a writer use the tense system of English to convey a sense of immediacy or historical distance?

In music the sciences of auditory and speech perception have much to contribute to understanding how musicians accomplish their effects. The visual arts could revive an old method of analysis going back to Ernst Gombrich and Rudolf Arnheim in collaboration with the psychologist Richard Gregory. Indeed, even the art itself in the 1920s was influenced by psychology, thanks in part to Gertrude Stein, who as an undergraduate student of William James did a wonderful thesis on divided attention, and then went to Paris and brought the psychology of perception to the attention of artists like Picasso and Braque. Gestalt psychology may have influenced Paul Klee and the expressionists. Since then we have lost that wonderful synergy between the science of visual perception and the creation of visual art.

Going beyond the arts, the social sciences, such as political science could benefit from a greater understanding of human moral and social instincts, such as the psychology of dominance, the psychology of revenge and forgiveness, and the psychology of gratitude and social competition. All of them are relevant, for example, to international negotiations. We talk about one country being friendly to another or allying or competing, but countries themselves don't have feelings. It's the elites and leaders who do, and a lot of international politics is driven by the psychology of its leaders.

In this short section alone, Pinker suggests, realistically, that we can apply:

  • Linguistics to literature
  • Phonology and psychology to poetry
  • The biology of groups to understand fiction
  • The biology of memory to understand biography
  • Semantics to understand historical writing
  • Psychology and biology to understand art and music
  • Psychology and biology to understand politics

Turns out, there’s a huge amount of thinking left to be done. Effectively, Pinker is asking us to imitate the scientist Linus Pauling, who sought to systematically understand chemistry by using the next most fundamental discipline, physics, an approach that led to great breakthroughs and a consilience of knowledge between the two fields that is now taken for granted in modern science.

Towards a Greater Synthesis

Even if we're not trying to make great scientific advances, think about how we could apply this idea to all of our lives. Fields like basic mathematics, statistics, biology, physics, and psychology provide deep insight into the “higher level” functions of humanity like law, medicine, politics, business, and social groups. Or, as Munger has put it, “When you get down to it, you’ll find worldly wisdom looks pretty darn academic.” And it isn’t as hard as it sounds: We don’t need to understand the deep math of relativity to grasp the idea that two observers can see the same event in a different way depending on perspective. The rest of the world's models are similar, although having some mathematical fluency is necessary.

Pinker, like Munger, doesn’t stop there. He also believes in what Munger calls the ethos of hard science, which is a way of rigorously considering the problems of the practical world.

Even beyond applying the findings of psychology and cognitive science and social and affective neuroscience, it's the mindset of science that ought to be exported to cultural and intellectual life as a whole. That consists in increased skepticism and scrutiny about factual conventional wisdom: How much of what you think is true really is true if you go to the numbers? For me this has been a salient issue in analyzing violence, because the conventional wisdom is that we're living in extraordinarily violent times.

But if you take into account the psychology of risk perception, as pioneered by Daniel Kahneman, Amos Tversky, Paul Slovic, Gerd Gigerenzer, and others, you realize that the conventional wisdom is systematically distorted by the source of our information about the world, namely the news. News is about the stuff that happens; it's not about the stuff that doesn't happen. Human risk perception is affected by memorable examples, according to Tversky and Kahneman's availability heuristic. No matter what the rate of violence is objectively, there are always enough examples to fill the news. And since our perception of risk is influenced by memorable examples, we'll always think we're living in violent times. It's only when you apply the scientific mindset to world events, to political science and history, and try to count how many people are killed now as opposed to ten years ago, a hundred years ago, or a thousand years ago that you get an accurate picture about the state of the world and the direction that it's going, which is largely downward. That conclusion only came from applying an empirical mindset to the traditional subject matter of history and political science.

Nassim Taleb has been on a similar hunt for a long time (although, amusingly, he doesn’t like Pinker’s book on violence at all). The question is relatively straightforward: How do we know what we know? Traditionally, what we know has simply been based on what we can see, something now called the availability bias. In other words, because we see our grandmother live to 95 years old while eating carrots every day, we think carrots prevent cancer. (A conflation of correlation and causation.)

But Pinker and Taleb call for a higher standard called empiricism, which requires pushing beyond anecdote into an accumulation of sound data to support a theory, with disconfirming examples weighted as heavily as confirming ones. This shift from anecdote to empiricism led humanity to make some of its greatest leaps of understanding, yet we’re still falling into the trap regularly, an outcome which itself can be explained by evolutionary biology and modern psychology. (Hint: It’s in the deep structure of our minds to extrapolate.)

Learning to Ask Why

Pinker continues with a claim that Munger would dearly appreciate: The search for explanations is how we push into new ideas. The deeper we push, the better we understand.

The other aspect of the scientific mindset that ought to be exported to the rest of intellectual life is the search for explanations. That is, not to just say that history is one damn thing after another, that stuff happens, and there's nothing we can do to explain why, but to relate phenomena to more basic or general phenomena … and to try to explain those phenomena with still more basic phenomena. We've repeatedly seen that happen in the sciences, where, for example, biological phenomena were explained in part at the level of molecules, which were explained by chemistry, which was explained by physics.

There's no reason that this process of explanation can't continue. Biology gives us a grasp of the brain, and human nature is a product of the organization of the brain, and societies unfold as they do because they consist of brains interacting with other brains and negotiating arrangements to coordinate their behavior, and so on.

This idea certainly takes heat. The biologist E.O. Wilson calls it Consilience, and has gone as far as saying that all human knowledge can eventually be reduced to extreme fundamentals like mathematics and particle physics. (Leading to something like The Atomic Explanation of the Civil War.)

Whether or not you take it to such an extreme depends on your boldness and your confidence in the mental acuity of human beings. But even if you think Wilson is crazy, you can still learn deeply from the more fundamental knowledge in the world. This push to reduce things to their simplest explanations (but not simpler) is how we array all new knowledge and experience on a latticework of mental models.

For example, instead of taking Warren Buffett’s dictum that markets are irrational on its face, try to understand why. What about human nature and the dynamics of human groups leads to that outcome? What about biology itself leads to human nature? And so on. You’ll eventually hit a wall; that’s a certainty. But the further you push, the more fundamentally you understand the world. Elon Musk calls this first principles thinking and credits it with helping him do things in engineering and business that almost everyone considered impossible.

***

From there, Pinker concludes with a thought that hits near and dear to our hearts:

There is no “conflict between the sciences and humanities,” or at least there shouldn't be. There should be no turf battle as to who gets to speak about what matters. What matters are ideas. We should seek the ideas that give us the deepest, richest, best-informed understanding of the human condition, regardless of which people or what discipline originates them. That has to include the sciences, but it can't come only from the sciences. The focus should be on ideas, not on people, disciplines, or academic traditions.


Still Interested?
Start building your mental models and read some more Pinker for more goodness.