Farnam Street helps you make better decisions, innovate, and avoid stupidity.
With over 350,000 monthly readers and more than 88,000 subscribers to our popular weekly digest, we've become an online intellectual hub.
Harvard’s cognitive psychology giant Steven Pinker has had no shortage of big, interesting topics to write about so far.
Starting in 1994 with his first book aimed at popular audiences, The Language Instinct, Pinker has discussed not only the origins of language, but the nature of human beings, the nature of our minds, the nature of human violence, and a host of related topics.
His most recent book, The Sense of Style, narrows in on how to write well, but continues to showcase his brilliant synthesizing mind. It’s a 21st-century version of Strunk & White, a book aimed at helping us understand why our writing often sucks, and how we might make it suck a little less.
His deep background in linguistics and cognitive psychology allows him to discuss language and writing more deeply than your average style guide; it’s also funny as hell in parts, which can’t be said of nearly any style guide.
In the third chapter, Pinker addresses the familiar problem of academese, legalese, professionalese…all the eses that make one want to throw a book, paper, or article in the trash rather than finish it. What causes them? Is it because we seek to obfuscate, as is commonly thought? Sometimes yes — especially when the author is trying to sell the reader something, be it a product or an idea.
But Pinker’s not convinced that concealment is driving most of our frustration with professional writing:
I have long been skeptical of the bamboozlement theory, because in my experience it does not ring true. I know many scholars who have nothing to hide and no need to impress. They do groundbreaking work on important subjects, reason well about clear ideas, and are honest, down-to-earth people, the kind you’d enjoy having a beer with. Still, their writing stinks.
So, if it’s not that we’re trying to mislead, what’s the problem?
Pinker first calls attention to the Curse of Knowledge — the inability to put ourselves in the shoes of a less informed reader.
The curse of knowledge is the single best explanation I know of why good people write bad prose. It simply doesn’t occur to the writer that her readers don’t know what she knows — that they haven’t mastered the patois of her guild, can’t divine the missing steps that seem too obvious to mention, have no way to visualize a scene that to her is as clear as day. And so she doesn’t bother to explain the jargon, or spell out the logic, or supply the necessary detail.
The first, simple, way this manifests itself is one we all encounter too frequently: Over-Abbreviation. It’s when we’re told to look up the date of the SALT conference for MLA sourcing on the HELMET system after our STEM meeting. (I only made one of those up.) Pinker’s easy way out is to recommend we always spell out acronyms the first time we use them, unless we’re absolutely sure readers will know what they mean. (And still maybe even then.)
The second obvious manifestation is our overuse of technical terms which the reader may or may not have encountered before. A simple fix is to add a few words of exposition the first time you use the term, as in “Arabidopsis, a flowering mustard plant.” Don’t assume the reader knows all of your jargon.
In addition, the use of examples is so powerful that we might call them a necessary component of persuasive writing. If I give you a long rhetorical argument in favor of some action or another without anchoring it on a concrete example, it’s as if I haven’t explained it at all. Something like: “Reading a source of information that contradicts your existing beliefs is a useful practice, as in the case of a Democrat spending time reading Op-Eds written by Republicans.” The example makes the point far stronger.
Another deeper part of the problem is a little less obvious but a lot more interesting than you might think. Pinker ascribes a big source of messy writing to a mental process called chunking, in which we package groups of concepts into ever further abstraction in order to save space in our brain. Here’s a great example of chunking:
As children we see one person hand a cookie to another, and we remember it as an act of giving. One person gives another one a cookie in exchange for a banana; we chunk the two acts of giving together and think of the sequence as trading. Person 1 trades a banana to Person 2 for a shiny piece of metal, because he knows he can trade it to Person 3 for a cookie; we think of it as selling. Lots of people buying and selling make up a market. Activity aggregated over many markets gets chunked into the economy. The economy can now be thought of as an entity which responds to action by central banks; we call that monetary policy. One kind of monetary policy, which involves the central bank buying private assets, is chunked as quantitative easing.
As we read and learn, we master a vast number of these abstractions, and each becomes a mental unit which we can bring to mind in an instant and share with others by uttering its name.
Chunking is an amazing and useful component of higher intelligence, but it gets us in trouble when we write because we assume our readers’ chunks are just like our own. They’re not.
A second issue is something he terms functional fixity. This compounds the problem induced by chunking:
Sometimes wording is maddeningly opaque without being composed of technical terminology from a private clique. Even among cognitive scientists, a “poststimulus event” is not a standard way to refer to a tap on the arm. A financial customer might be reasonably familiar with the world of investments and still have to puzzle over what a company brochure means by “capital charges and rights.” A computer-savvy user trying to maintain his Web site might be mystified by instructions on the maintenance page which refer to “nodes,” “content type” and “attachments.” And heaven help the sleepy traveler trying to set the alarm clock in his hotel room who must interpret “alarm function” and “second display mode.”
Why do writers invent such confusing terminology? I believe the answer lies in another way in which expertise can make our thoughts more idiosyncratic and thus harder to share: as we become familiar with something, we think about it more in terms of the use we put it to and less in terms of what it looks like and what it is made of. This transition, another staple of the cognitive psychology curriculum, is called functional fixity (sometimes functional fixedness).
The opposite of functional fixity would be familiar to anyone who has bought their dog or cat a toy only to be puzzled to see them playing with the packaging it came in. The animal hasn’t fixated on the function of the objects; to him, an object is just an object. The toy and the packaging are not categorized as “toy” and “thing the toy came in” the way they are for us. In this case, we have functional fixity and they do not.
And so Pinker continues:
Now, if you combine functional fixity with chunking, and stir in the curse that hides each one from our awareness, you get an explanation of why specialists use so much idiosyncratic terminology, together with abstractions, metaconcepts, and zombie nouns. They are not trying to bamboozle us, that’s just the way they think.
In a similar way, writers stop thinking — and thus stop writing — about tangible objects and instead refer to them by the role those objects play in their daily travails. Recall the example from chapter 2 in which a psychologist showed people sentences, followed by the label TRUE or FALSE. He explained what he did as “the subsequent presentation of an assessment word,” referring to the [true/false] label as an “assessment word” because that’s why he put it there — so that the participants in the experiment could assess whether it applied to the preceding sentence. Unfortunately, he left it up to us to figure out what an “assessment word” is, while saving no characters and being less rather than more scientifically precise.
In the same way, a tap on the wrist became a “stimulus” and a [subsequent] tap on the elbow became a “post-stimulus event,” because the writer cared about the fact that one event came after the other and no longer cared about the fact that the events were taps on the arm.
As we get deeper into our expertise, we substitute abstract, technical fluff for the concrete, useful, everyday imagery that would bring something to the mind’s eye of a lay reader. We use metaconcepts like levels, issues, contexts, frameworks, and perspectives instead of describing the actual thing in plain language. (Thus does a book become a “tangible thinking framework.”)
How do we solve the problem, then? Pinker partially defuses the obvious solution — remembering the reader over your shoulder while you write — because he feels it doesn’t always work. Even when we’re made aware that we need to simplify and clarify for our audience, we find it hard to regress our minds to a time when our professional knowledge was more primitive.
Pinker’s prescription has a few parts, which he goes on to detail.
“Ants and bees can also work together in huge numbers, but they do so in a very rigid manner and only with close relatives. Wolves and chimpanzees cooperate far more flexibly than ants, but they can do so only with small numbers of other individuals that they know intimately. Sapiens can cooperate in extremely flexible ways with countless numbers of strangers. That’s why Sapiens rule the world, whereas ants eat our leftovers and chimps are locked up in zoos and research laboratories.” —Yuval Noah Harari, Sapiens
Yuval Noah Harari’s Sapiens is one of those uniquely breathtaking books that comes along very rarely. It’s broad, yet scientific. It’s written for a popular audience but never feels dumbed down. It’s new and fresh, but is not based on any brand new primary research. Near and dear to our heart, Sapiens is pure synthesis.
An immediate influence that comes to mind is Jared Diamond, author of Guns, Germs, and Steel, The Third Chimpanzee, and other broad-yet-scientific works with vast synthesis and explanatory power. And of course, Harari, a history professor at the Hebrew University of Jerusalem, has noted that key influence and what it means to how he works:
(Harari) credits author Jared Diamond with encouraging him to take a much broader view—his Guns, Germs and Steel was an enormous influence. Harari says: “It made me realise that you can ask the biggest questions about history and try to give them scientific answers. But in order to do so, you have to give up the most cherished tools of historians. I was taught that if you’re going to study something, you must understand it deeply and be familiar with primary sources. But if you write a history of the whole world you can’t do this. That’s the trade-off.”
With this working model in mind, Harari sought to understand the history of humankind’s domination of the earth and its development of complex modern societies. His synthesis draws on evolutionary theory, forensic anthropology, genetics, and the basic tools of the historian to generate a new conception of our past: Man’s success was due to his ability to create and sustain grand, collaborative myths.
Harari uses a smart trick to make his narrative more palatable and sensible: he uses the term Sapiens to refer to human beings. With this bit of depersonalization, Harari can go on to make some extremely bold statements about the history of humanity. We’re just another animal, Homo sapiens, and our history can be described just like that of any other species. Our successes, failures, and flaws are all part of the makeup of Sapiens. (This biological approach to history is one we’ve looked at before with the work of Will and Ariel Durant.)
Sapiens was, of course, just one of many animals on the savannah if we go back about 100,000 years.
There were humans long before there was history. Animals much like modern humans first appeared about 2.5 million years ago. But for countless generations they did not stand out from the myriad other organisms with which they shared their habitats….
These archaic humans loved, played, formed close friendships and competed for status and power, but so did chimpanzees, baboons, and elephants. There was nothing special about humans. Nobody, least of all humans themselves, had any inkling their descendants would one day walk on the moon, split the atom, fathom the genetic code and write history books. The most important thing to know about prehistoric humans is that they were insignificant animals with no more impact on their environment than gorillas, fireflies or jellyfish.
We like to think we have been a privileged species right from the start; that through a divine spark, we had the ability to dominate our environment and the lesser mammals we co-habitated with. But that was not so, at least not at first. We were simply another smart, social ape trying to survive in the wild. We had cousins: Homo neanderthalensis, Homo erectus, Homo rudolfensis…all considered human and with similar traits. If chimps and bonobos were our second cousins, these were our first cousins.
Eventually things changed. About 70,000 or so years ago, our DNA showed a mutation (Harari claims we’re not sure why — I don’t know the research well enough to disagree) which allowed us to make a leap that no other species, human or otherwise, was able to make: cooperating flexibly in large groups with a unique and complex language. Harari calls this the “Cognitive Revolution.”
What was the Sapiens’ secret of success? How did we manage to settle so rapidly in so many distant and ecologically different habitats? How did we push all other human species into oblivion? Why couldn’t even the strong, brainy, cold-proof Neanderthals survive our onslaught? The debate continues to rage. The most likely answer is the very thing that makes the debate possible: Homo sapiens conquered the world thanks above all to its unique language.
Our newfound language had many attributes that couldn’t be found in our cousins’ languages, or in any other languages from ants to whales.
Firstly, we could give detailed explanations of events that had transpired. I saw a large lion in the forest three days back, with three companions, near the closest tree to the left bank of the river and I think, but am not totally sure, they were hunting us. Why don’t we ask for help from a neighboring tribe so we don’t all end up as lion meat?
Secondly, and maybe more importantly, we could also gossip about each other. I noticed Frank and Steve have not contributed to the hunt in about three weeks. They are not holding up their end of the bargain, and I don’t think we should include them in distributing the proceeds of our next major slaughter. Hey, does this headdress make me look fat?
As important as both of these abilities were to the development of Sapiens, neither is Harari’s central insight. Steven Pinker has written about the language instinct and where it got us over time, as have others.
Harari’s insight is that the above are not the most important reasons why our “uniquely supple” language gave us a massive, exponential, survival advantage: It was because we could talk about things that were not real.
As far as we know, only Sapiens can talk about entire kinds of entities that they have never seen, touched, or smelled. Legends, myths, gods, and religions appeared for the first time with the Cognitive Revolution. Many animals and human species could previously say, ‘Careful! A lion!’ Thanks to the Cognitive Revolution, Homo sapiens acquired the ability to say, ‘The lion is the guardian spirit of our tribe.’ This ability to speak about fictions is the most unique feature of Sapiens language…You could never convince a monkey to give you a banana by promising him limitless bananas after death in monkey heaven.
This is the core of Harari’s provocative thesis: It is our collected fictions that define us. Predictably, he mentions religion as one of the important fictions. But other fictions are just as important: the limited liability corporation, the nation-state, the concept of human “rights” deliverable at birth, the concept of money itself. All of these inventions allow us to do the thing that other species cannot do: Cooperate effectively and flexibly in large groups.
Ants and bees can also work together in huge numbers, but they do so in a very rigid manner and only with close relatives. Wolves and chimpanzees cooperate far more flexibly than ants, but they can do so only with small numbers of other individuals that they know intimately. Sapiens can cooperate in extremely flexible ways with countless numbers of strangers. That’s why Sapiens rule the world, whereas ants eat our leftovers and chimps are locked up in zoos and research laboratories.
Our success is intimately linked to scale, which we have discussed before. In many systems and in all species but ours, as far as we know, there are hard limits to the number of individuals that can cooperate in groups in a flexible way. (Ants can cooperate in great numbers with their relatives, but only based on simple algorithms. Munger has mentioned in The Psychology of Human Misjudgment that ants’ rules are so simplistic that if a group of ants start walking in a circle, their “follow-the-leader” algorithm can cause them to literally march until their collective death.)
Sapiens diverged when it discovered an ability to generate a collective myth, and there was almost no limit to the number of cooperating, believing individuals who could belong to a belief-group. And thus we see extremely different results in human culture than in whale culture, or dolphin culture, or bonobo culture. It’s a lollapalooza result from a combination of critical elements.
Any large-scale human cooperation — whether a modern state, a medieval church, an ancient city, or an archaic tribe — is rooted in common myths that exist only in people’s collective imagination. Churches are rooted in common religious myths. Two Catholics who have never met can nevertheless go together on crusade or pool funds to build a hospital because they both believe God was incarnated in human flesh and allowed Himself to be crucified to redeem our sins. States are rooted in common national myths. Two Serbs who have never met might risk their lives to save one another because both believe in the existence of the Serbian nation, the Serbian homeland and the Serbian flag. Judicial systems are rooted in common legal myths. Two lawyers who have never met can nevertheless combine efforts to defend a complete stranger because they both believe in the existence of laws, justice, human rights, and money paid out in fees.
Harari is quick to point out that these aren’t lies. We truly believe them, and we believe in them as a collective. They have literal truth in the sense that if I trust that you believe in money as much as I do, we can use it as an exchange of value. But just as you can’t get a chimpanzee to forgo a banana today for infinite bananas in heaven, you also can’t get him to accept 3 apples today with the idea that if he invests them in a chimp business wisely, he’ll get 6 bananas from it in five years, no matter how many compound interest tables you show him. This type of collaborative and complex fiction is uniquely human, and capitalism is as much of a collective myth as religion.
Of course, this leads to a fascinating result of human culture: If we collectively decide to alter the myths, we can alter population behavior dramatically and quickly. We can decide slavery, one of the oldest institutions in human history, is no longer acceptable. We can declare monarchy an outdated form of governance. We can decide females should have the right to as much power as men, reversing the pattern of history. (Of course, we can also decide all Sapiens must worship the same religious text and devote ourselves to slaughtering the resisters.)
There is no parallel I’m aware of in other species for these quick, large-scale shifts. General behavior patterns in dogs or fish or ants change due to a change in environment or broad genetic evolution over a period of time. Lions will never sign a Declaration of Lion Rights and decide to banish the idea of an alpha male lion; their hierarchies are rigid.
But humans can collectively change the narrative in a period of a few years and begin acting very differently, with the same DNA and the same set of physical environments. And thus, says Harari: “The Cognitive Revolution is accordingly the point when history declared its independence from biology.” These ever shifting alliances, beliefs, myths, and ultimately, cultures, define what we call human history.
For now we will leave it here, but a thorough reading of Sapiens is recommended to understand where Professor Harari takes this idea, from the earliest humans to the fate of our descendants.
John Pollack is a former Presidential Speechwriter. If anyone knows the power of words to move people to action, shape arguments, and persuade, it is he.
In Shortcut: How Analogies Reveal Connections, Spark Innovation, and Sell Our Greatest Ideas, he explores the powerful role of analogy in persuasion and creativity.
Analogy, Pollack argues, is among the most powerful and least appreciated tools of persuasion we have.
While they often operate unnoticed, analogies aren’t accidents, they’re arguments—arguments that, like icebergs, conceal most of their mass and power beneath the surface. In arguments, whoever has the best argument wins.
But analogies do more than just persuade others — they also play a role in innovation and decision making.
From the bloody Chicago slaughterhouse that inspired Henry Ford’s first moving assembly line, to the “domino theory” that led America into the Vietnam War, to the “bicycle for the mind” that Steve Jobs envisioned as a Macintosh computer, analogies have played a dynamic role in shaping the world around us.
Despite their importance, many people have only a vague sense of what an analogy actually is.
In broad terms, an analogy is simply a comparison that asserts a parallel—explicit or implicit—between two distinct things, based on the perception of a shared property or relation. In everyday use, analogies actually appear in many forms. Some of these include metaphors, similes, political slogans, legal arguments, marketing taglines, mathematical formulas, biblical parables, logos, TV ads, euphemisms, proverbs, fables and sports clichés.
Because they are so well disguised, they play a bigger role than we consciously realize. Not only do analogies effectively make arguments, they also trigger emotions. And emotions make it hard to make rational decisions.
While we take analogies for granted, the ideas they convey are notably complex.
All day every day, in fact, we make or evaluate one analogy after the other, because some comparisons are the only practical way to sort a flood of incoming data, place it within the context of our experience, and make decisions accordingly.
Remember the powerful metaphor that arguments are war. It shapes a wide variety of expressions like “your claims are indefensible,” “attacking the weak points,” and “You disagree? OK, shoot.”
Or consider the Map and the Territory — Analogies give people the map but explain nothing of the territory.
Warren Buffett is one of the best at using analogies to communicate effectively. One of my favorites is his observation that “You never know who’s swimming naked until the tide goes out.” In other words, when times are good everyone looks amazing; when times suck, hidden weaknesses are exposed. The same could be said for analogies:
We never know what assumptions, deceptions, or brilliant insights they might be hiding until we look beneath the surface.
Most people underestimate the importance of a good analogy. As with many things in life, this lack of awareness comes at a cost. Ignorance is expensive.
Evidence suggests that people who tend to overlook or underestimate analogy’s influence often find themselves struggling to make their arguments or achieve their goals. The converse is also true. Those who construct the clearest, most resonant and apt analogies are usually the most successful in reaching the outcomes they seek.
The key to all of this is figuring out why analogies function so effectively and how they work. Once we know that, we should be able to craft better ones.
Effective, persuasive analogies frame situations and arguments, often so subtly that we don’t even realize there is a frame, let alone one that might not work in our favor. Such conceptual frames, like picture frames, include some ideas, images, and emotions and exclude others. By setting a frame, a person or organization can, for better or worse, exert remarkable influence on the direction of their own thinking and that of others.
He who holds the pen frames the story. The first person to frame the story controls the narrative, and it takes a massive amount of energy to change the story’s direction. Sometimes even the way people come across information shapes it: stories that would be non-events if disclosed proactively become front-page stories because someone found out.
In Don’t Think of an Elephant, George Lakoff explores the issue of framing. The book famously begins with the instruction “Don’t think of an elephant.”
What’s the first thing we all do? Think of an elephant, of course. It’s almost impossible not to think of an elephant. When we stop consciously thinking about it, it floats away and we move on to other topics — like the new email that just arrived. But sooner or later it will pop back into consciousness and bring some friends — associated ideas, other exotic animals, or even thoughts of the GOP.
“Every word, like elephant, evokes a frame, which can be an image or other kinds of knowledge,” Lakoff writes. This is why we want to control the frame rather than be controlled by it.
In Shortcut Pollack tells of Lakoff talking about an analogy that President George W. Bush made in the 2004 State of the Union address, in which he argued the Iraq war was necessary despite the international criticism. Before we go on, take Bush’s side here and think about how you would argue this point – how would you defend this?
In the speech, Bush proclaimed that “America will never seek a permission slip to defend the security of our people.”
As Lakoff notes, Bush could have said, “We won’t ask permission.” But he didn’t. Instead he intentionally used the analogy of permission slip and in so doing framed the issue in terms that would “trigger strong, more negative emotional associations that endured in people’s memories of childhood rules and restrictions.”
Commenting on this, Pollack writes:
Through structure mapping, we correlate the role of the United States to that of a young student who must appeal to their teacher for permission to do anything outside the classroom, even going down the hall to use the toilet.
But is seeking diplomatic consensus to avoid or end a war actually analogous to a child asking their teacher for permission to use the toilet? Not at all. Yet once this analogy has been stated (Farnam Street editorial: and tweeted), the debate has been framed. Those who would reject a unilateral, my-way-or-the-highway approach to foreign policy suddenly find themselves battling not just political opposition but people’s deeply ingrained resentment of childhood’s seemingly petty regulations and restrictions. On an even subtler level, the idea of not asking for a permission slip also frames the issue in terms of sidestepping bureaucratic paperwork, and who likes bureaucracy or paperwork?
By deconstructing analogies, we can find out how they function so effectively. Pollack argues that effective analogies meet five essential criteria.
Let’s explore how these criteria work in greater detail, using the example of master thief Bruce Reynolds, who described the Great Train Robbery as his Sistine Chapel.
In the dark early hours of August 8, 1963, an intrepid gang of robbers hot-wired a six-volt battery to a railroad signal not far from the town of Leighton Buzzard, some forty miles north of London. Shortly, the engineer of an approaching mail train, spotting the red light ahead, slowed his train to a halt and sent one of his crew down the track, on foot, to investigate. Within minutes, the gang overpowered the train’s crew and, in less than twenty minutes, made off with the equivalent of more than $60 million in cash.
Years later, Bruce Reynolds, the mastermind of what quickly became known as the Great Train Robbery, described the spectacular heist as “my Sistine Chapel.”
Use the familiar to explain something less familiar
Reynolds exploits the public’s basic familiarity with the famous chapel in the Vatican City, which after Leonardo da Vinci’s Mona Lisa is perhaps the best-known work of Renaissance art in the world. Millions of people, even those who aren’t art connoisseurs, would likely share the cultural opinion that the paintings in the chapel represent “great art” (as compared to a smaller subset of people who might feel the same way about Jackson Pollock’s drip paintings, or Marcel Duchamp’s upturned urinal).
Highlight similarities and obscure differences
Reynolds’s analogy highlights, through implication, similarities between the heist and the chapel—both took meticulous planning and masterful execution. After all, stopping a train and stealing the equivalent of $60m—and doing it without guns—does require a certain artistry. At the same time, the analogy obscures important differences. By invoking the image of a holy sanctuary, Reynolds triggers a host of associations in the audience’s mind—God, faith, morality, and forgiveness, among others—that camouflage the fact that he’s describing an action few would consider morally commendable, even if the artistry involved in robbing that train was admirable.
Identify useful abstractions
The analogy offers a subtle but useful abstraction: Genius is genius and art is art, no matter what the medium. The logic? If we believe that genius and artistry can transcend genre, we must concede that Reynolds, whose artful, ingenious theft netted millions, is an artist.
Tell a coherent story
The analogy offers a coherent narrative. Calling the Great Train Robbery his Sistine Chapel offers the audience a simple story that, at least on the surface makes sense: Just as Michelangelo was called by God, the pope, and history to create his greatest work, so too was Bruce Reynolds called by destiny to pull off the greatest robbery in history. And if the Sistine Chapel endures as an expression of genius, so too must the Great Train Robbery. Yes, robbing the train was wrong. But the public perceived it as largely a victimless crime, committed by renegades who were nothing if not audacious. And who but the most audacious in history ever create great art? Ergo, according to this narrative, Reynolds is an audacious genius, master of his chosen endeavor, and an artist to be admired in public.
There is an important point here. The narrative need not be accurate. It is the feelings and ideas the analogy evokes that make it powerful. Within the structure of the analogy, the argument rings true. The framing is enough to establish it succinctly and subtly. That’s what makes it so powerful.
Resonate emotionally
The analogy resonates emotionally. To many people, mere mention of the Sistine Chapel brings an image to mind, perhaps the finger of Adam reaching out toward the finger of God, or perhaps just that of a lesser chapel with which they are personally familiar. Generally speaking, chapels are considered beautiful, and beauty is an idea that tends to evoke positive emotions. Such positive emotions, in turn, reinforce the argument that Reynolds is making—that there’s little difference between his work and that of a great artist.
Daniel Kahneman explains the two structures that govern the way we think: System 1 and System 2. In his book Thinking, Fast and Slow, he writes: “Jumping to conclusions is efficient if the conclusions are likely to be correct and the costs of an occasional mistake are acceptable, and if the jump saves much time and effort.”
“A good analogy serves as an intellectual springboard that helps us jump to conclusions,” Pollack writes. He continues:
And once we’re in midair, flying through assumptions that reinforce our preconceptions and preferences, we’re well on our way to a phenomenon known as confirmation bias. When we encounter a statement and seek to understand it, we evaluate it by first assuming it is true and exploring the implications that result. We don’t even consider dismissing the statement as untrue unless enough of its implications don’t add up. And consider is the operative word. Studies suggest that most people seek out only information that confirms the beliefs they currently hold and often dismiss any contradictory evidence they encounter.
The ongoing battle between fact and fiction commonly takes place in our subconscious systems. In The Political Brain: The Role of Emotion in Deciding the Fate of the Nation, Drew Westen, an Emory University psychologist, writes: “Our brains have a remarkable capacity to find their way toward convenient truths—even if they are not all true.”
This also helps explain why getting promoted has almost nothing to do with your performance.
Remember Apollo Robbins? He’s a professional pickpocket. While he has unique skills, he succeeds largely through the choreography of people’s attention. “Attention,” he says, “is like water. It flows. It’s liquid. You create channels to divert it, and you hope that it flows the right way.”
“Pickpocketing and analogies are in a sense the same,” Pollack concludes, “as the misleading analogy picks a listener’s mental pocket.”
And this is true whether someone else diverts our attention through a resonant but misleading analogy—“Judges are like umpires”—or we simply choose the wrong analogy all by ourselves.
We rarely stop to see how much of our reasoning is done by analogy. In a 2005 Harvard Business Review article, Giovanni Gavetti and Jan Rivkin wrote: “Leaders tend to be so immersed in the specifics of strategy that they rarely stop to think how much of their reasoning is done by analogy.” As a result, they miss things. They make connections that don’t exist. They don’t check assumptions. They miss useful insights. By contrast, “Managers who pay attention to their own analogical thinking will make better strategic decisions and fewer mistakes.”
Shortcut goes on to explore when to use analogies and how to craft them to maximize persuasion.
What makes us human? Language is part of the answer, argues evolutionary biologist Mark Pagel in Wired for Culture: Origins of the Human Social Mind; it is one of the keys to our evolutionary success, especially in the context of culture.
Humans had acquired the ability to learn from others, and to copy, imitate and improve upon their actions. This meant that elements of culture themselves— ideas, languages, beliefs, songs, art, technologies— could act like genes, capable of being transmitted to others and reproduced. But unlike genes, these elements of culture could jump directly from one mind to another, shortcutting the normal genetic routes of transmission. And so our cultures came to define a second great system of inheritance, able to transmit knowledge down the generations.
To be human at some point came to mean access to a growing and shared repository of “information, technologies, wisdom, and good luck.”
Our cultural inheritance is something we take for granted today, but its invention forever altered the course of evolution and our world. This is because knowledge could accumulate as good ideas were retained, combined, and improved upon, and others were discarded. And, being able to jump from mind to mind granted the elements of culture a pace of change that stood in relation to genetical evolution something like an animal’s behavior does to the more leisurely movement of a plant. Where you are stuck from birth with a sample of the genes that made your parents, you can sample throughout your life from a sea of evolving ideas. Not surprisingly, then, our cultures quickly came to take over the running of our day-to-day affairs as they outstripped our genes in providing solutions to the problems of our existence. Having culture means we are the only species that acquires the rules of its daily living from the accumulated knowledge of our ancestors rather than from the genes they pass to us. Our cultures and not our genes supply the solutions we use to survive and prosper in the society of our birth; they provide the instructions for what we eat, how we live, the gods we believe in, the tools we make and use, the language we speak, the people we cooperate with and marry, and whom we fight or even kill in a war.
Culture evolved primarily through language, which was the foundation of social learning. The best ideas could be passed on without having to be reinvented.
Pagel’s take on social learning is fascinating. “Theft” became part of our culture and part of what propelled us forward with such ferocity.
Social learning is really visual theft, and in a species that has it, it would become positively advantageous for you to hide your best ideas from others, lest they steal them. This not only would bring cumulative cultural adaptation to a halt, but our societies might have collapsed as we strained under the weight of suspicion and rancor.
So, beginning about 200,000 years ago, our fledgling species, newly equipped with the capacity for social learning had to confront two options for managing the conflicts of interest social learning would bring. One is that these new human societies could have fragmented into small family groups so that the benefits of any knowledge would flow only to one’s relatives. Had we adopted this solution we might still be living like the Neanderthals, and the world might not be so different from the way it was 40,000 years ago, when our species first entered Europe. This is because these smaller family groups would have produced fewer ideas to copy and they would have been more vulnerable to chance and bad luck. The other option was for our species to acquire a system of cooperation that could make our knowledge available to other members of our tribe or society even though they might be people we are not closely related to — in short, to work out the rules that made it possible for us to share goods and ideas cooperatively. Taking this option would mean that a vastly greater fund of accumulated wisdom and talent would become available than any one individual or even family could ever hope to produce.
This is the path we chose, and our world is the result.
Two categories of people who can be hard to have a conversation with are good friends and people who have worked together for a long time. Sometimes it’s like they are speaking their own language — and they are. But these connections can transcend conversation and touch on life.
In Powers of Two: Finding the Essence of Innovation in Creative Pairs, Joshua Shenk explores how the identity of pairs resembles a mosaic, “a series of pieces that connect to one another.”
A good place to begin is with ritual, since this is often the foundation of creative practice. Igor Stravinsky came into his studio and, first thing, sat down and played a Bach fugue. When he was writing The End of the Affair, Graham Greene produced five hundred words every day, and only five hundred, even if it meant stopping in the middle of a scene. The choreographer Twyla Tharp rises every morning at 5:30, puts on her workout clothes, and catches a taxi to the Pumping Iron gym at Ninety-First Street and First Avenue in Manhattan. “The ritual,” she writes in The Creative Habit, “is not the stretching and weight training I put my body through each morning at the gym; the ritual is the cab. The moment I tell the driver where to go I have completed the ritual.”
Tharp’s point is that ritual emerges from the smallest, most concrete action. For pairs, the most basic thing is a regular meeting time. James Watson and Francis Crick had lunch most days at the Eagle pub in Cambridge. Facebook CEO Mark Zuckerberg and COO Sheryl Sandberg begin and end every week with hourlong private meetings. After they began to exchange their work, J.R.R. Tolkien and C. S. Lewis set aside Mondays to meet at a pub and later met with a group, the Inklings, every Thursday night at Lewis’s apartment.
Meeting rituals may be tied to moments in time — as when partners like Buffett and Munger begin every day with a call—or to a physical space, as when Lennon and McCartney met at Paul’s house to write. Watson and Crick ended up sharing an office at the Cavendish Laboratory in Cambridge because the other scientists in the lab couldn’t stand their incessant chatter.
Moving towards each other as people often means leaving the rest of the world behind. “Every real friendship is a sort of secession, even a rebellion,” C. S. Lewis writes in The Four Loves.
In the midst of the feverish and entwined six-year collaboration between Braque and Picasso that led to cubism, both artists signed the back of each of their canvases; only they would know who did what. “People always ask Ulay and me the same questions,” Marina Abramovic told me. “‘Whose idea was it?’ or ‘How was this done?’ … But we never specify. Everything was interrelated and interdependent.”
Partnerships often form barriers to outsiders trying to look in. Outsiders are not part of the club; they are not doing the work; they don’t have the shared understanding, the common goals, the …
This is one reason many epic partnerships end up as historical footnotes or become entirely effaced: “Things were said with Picasso during those years,” Braque said, “that no one will ever say again, things that no one could ever say any more, that no one could ever understand… things that would be incomprehensible and which gave us such joy.” This was one of the very few lines either man ever spoke about the relationship that helped give birth to modern art.
In addition to the physical gestures that a pair can share, there is also an unmistakable private language. This is the key to high-bandwidth communication.
Many pairs have what we could fairly call a private language. Tom Hanks described the communication between director Ron Howard and producer Brian Grazer as “some gestalt Vulcan.” Akio Morita and Masaru Ibuka, the cofounders of Sony, “would sit there talking to each other,” Morita’s son Hideo said, “and we would listen but we had no idea what they were saying … It was gibberish to us, but they were understanding each other, and interrupting them for any reason was forbidden.”
Private language emerges organically from constant exchange. Intimate pairs talk fluidly and naturally, having let go of what psychologists call “self-monitoring”—the process of watching impulses and protean thoughts, censoring some, allowing others to pass one’s lips. … The psychologist Daniel Kahneman makes the same point. “Like most people, I am somewhat cautious about exposing tentative thoughts to others,” he said. But after a while with Amos Tversky, “this caution was completely absent.”
“You just get so high-bandwidth,” Bill Gates said about talking to Steve Ballmer, his longtime deputy at Microsoft (and eventual successor). “Steve and I would just be going from talking to meeting to talking to meeting, and then I’d stay up late at night, and write him five e-mails. He’d get up early in the morning and maybe not necessarily respond to them, but start thinking about them. And the minute I see him, he’s [at the office whiteboard] saying we could move this guy over here and do this thing here.” Facebook’s CEO Mark Zuckerberg used that same term, high-bandwidth, to describe his exchanges with his COO Sheryl Sandberg. “We can talk for 30 seconds and have more meaning be exchanged than in a lot of meetings that I have for an hour,” he said.
Beyond shared language, pairs converge on shared rhythms and syntactic structures of speech.
This is due in part to the astonishing power of mimicry, which psychologists call “social contagion.” Just by being near each other, the psychologist Elaine Hatfield has shown, people come to match accents, speech rates, vocal intensity, vocal frequency, pauses, and quickness to respond.
Psychologists used to think that people imitated each other in a deliberate attempt to be liked, but mimicry is far more pervasive than this — and largely nonconscious. Intimate partners share physical postures and breathing patterns too. They use the same muscles so often, the psychologist Robert Zajonc and colleagues found in a study of spouses, that they even come to look alike. Warren Buffett has said that he and Charlie Munger are “Siamese twins, practically.” In addition to wearing the same gray suits, the same Clark Kent glasses, and the same comb-overs, writes Buffett biographer Alice Schroeder, they also share a “lurching, awkward gait” and a flickering intensity in their eyes. Whether or not this is due to what Zajonc calls “repeated empathic mimicry,” we can’t be sure, but one does wonder.
The larger point about any physical convergence is that it reflects what psychologists call a “shared coordinative structure.” Shared mannerisms, like similar walking gaits, often come along with shared emotions and ideas. Just as physical qualities are “highly communicable,” write psychologists Molly Ireland and James Pennebaker, so are behaviors, affective states, and beliefs.
Language is an unusually potent mechanism for psychic convergence, because it is so closely tied to thinking. “Linguistic coordination,” Ireland and Pennebaker explain, leads to “the cultivation of common ground (i.e., matching cognitive frameworks in which conversants adopt shared assumptions, linguistic referents, and knowledge).”
Of course, eventually this goes telepathic.
Barry Sonnenfeld, who has directed photography on several films for the Coen brothers, remembers Ethan saying, after a take, “Hey, Joel, you know what?” And Joel replying: “Yeah, I know, I’m going to tell him.” When the writer David Zax visited The Daily Show to profile Steve Bodow, Jon Stewart’s head writer at the time, Zax could understand only a small fraction of their exchanges, given the dominance of “workplace argot and quasi-telepathy.” “If you work with Jon for any length of time, you learn to interpret the shorthand,” Bodow said. For example, Stewart might say: “Cut the thing and bring the thing around and do the thing.” “‘Cut the thing’: You know what thing needs to be cut,” Bodow explained. “‘Bring the thing around’: There’s a thing that works, but it needs to move up in order to set up the ‘do the thing’ thing, which is probably the ‘blow,’ the big joke at the end. It takes time and repetition and patience and frustration, and suddenly you know how to bring the thing around and do the thing.”
In The Domestication of Language, Daniel Cloud explores the wonderful world of conversations.
Our difficulty in accounting for conversation isn’t a sign that nobody’s ever tried to understand it. The intense focus on rhetoric by classical philosophers, for example, was the organized study of a certain rather formal kind of public conversation, and our interest in the phenomenon has continued until the present.
Cloud points us to H.P. Grice and his 1975 paper “Logic and Conversation.” Grice, according to Cloud, argues that human conversations happen around a shared purpose.
We sometimes may be deluded in thinking that such a shared purpose exists—for example, when talking to a confidence man—but the supposition is required to make us willing to participate. The purpose may be obvious—the car is out of gas, we have to figure out what to do—or frivolous, extremely serious, or horrific. The torturer seeks to create a common interest so he can have a truthful conversation with us, even though his method involves the stick and not the carrot.
Grice admitted that he was perplexed about the exact nature of the understanding involved. … Common interests in the absence of enforceable contracts create coordination games … Each of us would rather converse on some mutually agreed topic than not converse at all, provided that all the others do. It isn’t a helpful form of participation in the conversation to periodically interject irrelevant remarks on completely unrelated topics, so we would prefer that all participants converse about the same topic as everyone else is or at least change the subject in culturally acceptable, legitimate ways. We often would be happy to converse about some other, slightly different topic if that topic had been raised by one of the participants instead. There always are alternatives, unless the people are enacting a play or another ritual, and real conversations change and drift as they go along, so the topic may well morph into one of those almost equally good alternatives. The conversation may acrimoniously disintegrate into no conversation, on no topic, if it goes badly, or it may gently evaporate into a resolve to have other conversations later. There always are different conversations we could have had instead. If someone new enters the discussion, we’d prefer that he stick to the topic, though if we were discussing something else, we’d prefer that he discuss that instead.
Topics are temporary; they convey and establish conventions. They are also malleable and complex, changing direction in real time.
A shared common ground is first established, and then it’s extended and amended by the successive remarks of those involved. The changes may be incremental, or—if it’s possible to bring the other participants along with us, if people are agreeable and the transition isn’t too complicated to be made in unison without much preparation—they may be abrupt.
Conversations are not intended for everyone. From professions with specialized vocabularies to exchanges so generic that only people who know each other can catch the hidden meaning, we have ways of excluding people who, even if physically present, will neither participate nor benefit.
What’s also true of most conversations is that not just anyone can participate. Perhaps we all haven’t been properly introduced. Or the conversation may be one that only topologists or elk hunters or members of the president’s national security team can engage in, or one that only Romeo and Juliet can be a part of. We may seek admission to a conversation and be welcomed or rebuffed. Yet this isn’t usually because there’s something scarce being shared by those conversing, which they would necessarily receive less of if someone else participated. Although there are conversations like that, many conversations are not. Sometimes new participants, even excluded ones, would have added something. In the language of economics, conversations are excludable and non-rivalrous. People can be prevented from benefiting from them or they can be excluded, but those who share in them don’t necessarily diminish their worth for the others. …
It seems that a conversation—like the highway system or the community that speaks Welsh—is a particularly informal, spontaneous, and fleeting club, an ephemeral microinstitution that flickers into and out of existence in a few seconds, minutes, hours, days, weeks, or years after its initial convening and that is organized around a temporary set of conventions about its topic, manner, and so on. By seeking admission, we represent ourselves as willing to conform to these conventions unless we can persuade the other participants to amend them. Sometimes some of the conventions established in a conversation also acquire contractual force—for example, when the conversation itself is a negotiation—but many do not.
How can participants in a conversation advance a common interest? Grice argues that most of what is conveyed in conversations is “implicature.”
Consider the following exchange:
A: Will Susan be at the game?
B: She has to teach that day.
In Grice’s terminology, B has implicated, but not said, that she won’t be at the game. … This conclusion depends on the common knowledge, known by both participants to be known by both participants, that teaching would preclude going to the game, perhaps because they will take place at the same time. Knowing this, A can work out what B is trying to tell him, what B is attempting to implicate.
Grice distinguishes between this sort of context-dependent, situational implicature, which he calls “conversational implicature,” and mere conventional elisions of the following kind: “Socrates is a man, and therefore mortal.” Here I’ve left out a premise that would be required for the “therefore”; I’ve neglected to mention that all men are mortal. But I didn’t have to, because you and I, like everyone else, already know that. Without having to think about it, you naturally will assume that I am assuming that you will extract this information from the incomplete argument I’ve offered. Grice calls this slightly different phenomenon … “conventional implicature.”
How do we work out the intended conversational implicatures? Through various maxims.
[T]hese maxims (are organized) under the more general principles of quantity, quality, relation, and manner. We assume that the speaker is telling us as much as we need to know for the purposes of the conversation, but no more (quantity). We assume that he’s attempting to tell us only things that he knows to be true and is not asserting things that he believes to be false or for which he has no evidence (quality). We assume that what he’s saying is somehow relevant to the mutually understood, though constantly evolving, topic of the conversation (relation). We assume that he’s attempting to be perspicuous, that he would prefer to avoid ambiguity and obscurity, avoid prolixity, and present his narration or his argument in an orderly way (manner).
In answering A’s question about Susan, B must be understood to be telling A as much as he needs to know for his question to be answered. Likewise, A must assume that B believes it to be true that she has to work and has reasonably good grounds for that belief. A must assume that this information is somehow relevant to the topic raised by his question. Assuming these things, A is in a position to interpret B’s remark as intended to produce the implicature that Susan will not be at the game because it conflicts with her work. If her work has a special relationship to the game or its venue that means that the remark should produce the opposite conclusion, then B has failed to follow the principle of quantity correctly, because he’s left out something he would have had to tell A to make his remark interpretable. He has assumed the existence of a piece of common ground that’s actually missing.
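Grice’s derivation can be sketched as a tiny inference over shared common ground. This is a toy model of my own, not from Cloud or Grice; the predicates, the `conflicts_with` relation, and the single inference rule are all illustrative assumptions standing in for the maxims of quantity, quality, and relation:

```python
# Toy model of conversational implicature: A combines B's literal
# statement with shared common ground to infer what B implied.
# All predicates and the inference rule are illustrative assumptions.

common_ground = {
    # Both parties know, and know the other knows, that Susan's
    # teaching overlaps with the game.
    ("teaching", "conflicts_with", "the game"),
}

def implicature(statement, question_topic):
    """Infer the implied answer from a literal remark plus common ground.

    Assumes B follows the maxims: quantity (B said as much as A needs),
    quality (B believes the remark), and relation (the remark is relevant).
    """
    activity, subject = statement
    # Relation: B's remark must bear on the question, so A searches the
    # common ground for a link between the stated activity and the topic.
    if (activity, "conflicts_with", question_topic) in common_ground:
        return f"{subject} will not be at {question_topic}"
    # No link found: the maxim of quantity was violated -- a needed piece
    # of common ground is missing, and the implicature fails.
    return None

# A: "Will Susan be at the game?"  B: "She has to teach that day."
print(implicature(("teaching", "Susan"), "the game"))
```

The failure branch mirrors Cloud’s point above: if the relevant fact is absent from the common ground, B has left out something A needed, and the remark produces no interpretable implicature at all.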
Common knowledge is really the key to conversations because it’s the key to common ground.
Common knowledge, first created as common ground in formal or informal conversation and then conserved and referred to in later conversations, marks the boundaries of skill-centered speech communities, of the subcommunity of shamans or eel farmers or navigators or structural biochemists or Shinto priests. These are things that these people must know in order to converse with one another, making them unable to converse as freely with people who lack their skill set.
In conversation, the method used for creating new items of common knowledge is the participants explicitly or implicitly informing one another of things, so conversations create parts of their own common ground as they go along. By so doing, the participants may become partly isolated from the rest of their speech community, which now doesn’t share the newly created common ground. New tacit conventions also are negotiated indirectly and obliquely in particular conversations, by means of concerted choices among competing, unstated alternatives, which can make it even harder for an outsider to follow them.
A conversation consists of a series of its participants’ dovetailed and cumulative modifications of their common ground, and at the end of the conversation, they may share different knowledge or intentions (“Then yes, let’s do that”) or expectations (“Well, then I guess we can expect the same thing to happen every time”) or different explicit conventions (“OK, I guess next time, whoever called originally should be the one to call back”) than they did before.
This new knowledge has become common knowledge in the group conversing and now can be used as such, can be assumed to be part of the common ground for subsequent discussions by the same group. B will expect A to remember that Susan has to work on the day of the game. We’ll be expected to remember the new plan or expectation or convention that’s finally been arrived at.
Cloud argues that conversations convert knowledge, expectations and beliefs from private knowledge to common knowledge within the conversing group.
What is common knowledge can support conventional (as opposed to conversational) implicatures, so the group’s stock of possible conventional implicatures is enlarged as a result. From now on, it may not be necessary to mention that Susan has to work on the day of the game; perhaps it can simply be assumed. Every successive conversation among a certain group means that less must be said in subsequent conversations, that more and more can be “taken for granted.”
This fact can produce a sort of cultural microversion of songbirds’ local dialects, local, group-specific assumptions that make it harder and harder for newcomers who lack the same shared history to participate in the group’s conversations. Conversations make us clannish; they erode the barriers to communication and trust within the group while erecting new ones around it, in a tiny, temporary, ultrafast cultural version of one of John Maynard Smith and Eörs Szathmáry’s (1998) “major transitions.” A conversation creates a club that subsequently may function in some ways like a single, self-interested unit, which may see itself as competing with other, rival clubs and may exclude interlopers or impose its own rules on new entrants.
Conversations are much broader than the term implies. In fact, no words need be spoken at all.
When the master holds out his hand for a hammer, the apprentice can understand the gesture as a request of that kind only because he assumes that the master isn’t making an unnecessary gesture, isn’t trying to trick him, is asking for something relevant to the collaborative task at hand and not his hunting spear, and isn’t making a gesture he thinks the apprentice will be unable to interpret.
If you’re into the evolution of language, you’ll love The Domestication of Language.