Random Posts

Can one person successfully play different roles that require different, and often competing, perspectives?

No, according to research by Max Bazerman, author of the best book on decision making I've ever read: Judgment in Managerial Decision Making.

Contrary to F. Scott Fitzgerald's famous quote, “the test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function,” evidence suggests that even the most intelligent find it difficult to sustain opposing beliefs without the two influencing each other.

Why?

One reason is a bias from incentives. Another is bounded awareness. The auditor who desperately wants to retain a client’s business may have trouble adopting the perspective of a dispassionate referee when it comes time to prepare a formal evaluation of the client’s accounting practices.

* * * * *

In many situations, professionals are called upon to play dual roles that require different perspectives. For example, attorneys embroiled in pretrial negotiations may exaggerate their chances of winning in court to extract concessions from the other side. But when it comes time to advise the client on whether to accept a settlement offer, the client needs objective advice.

Professors, likewise, have to evaluate the performance of graduate students and provide them with both encouragement and criticism. But public criticism is less helpful when faculty serve as their students’ advocates in the job market. And, although auditors have a legal responsibility to judge the accuracy of their clients’ financial accounting, the way to win a client’s business is not by stressing one’s legal obligation to independence, but by emphasizing the helpfulness and accommodation one can provide.

Are these dual roles psychologically feasible? That is, can one person successfully play different roles that require different, and often competing, perspectives? No.

Abstract

This paper explores the psychology of conflict of interest by investigating how conflicting interests affect both public statements and private judgments. The results suggest that judgments are easily influenced by affiliation with interested partisans, and that this influence extends to judgments made with clear incentives for objectivity. The consistency we observe between public and private judgments indicates that participants believed their biased assessments. Our results suggest that the psychology of conflict of interest is at odds with the way economists and policy makers routinely think about the problem. We conclude by exploring implications of this finding for professional conduct and public policy.

Full Paper (PDF)



Hammurabi’s Code


Nearly 4,000 years ago, Hammurabi’s code specified:

“229. If a builder builds a house for a man and does not make its construction firm, and the house which he has built collapses and causes the death of the owner of the house, that builder shall be put to death.”

“230. If it causes the death of the son of the owner of the house they shall put to death a son of that builder.”

“231. If it causes the death of a slave of the owner of the house he shall give to the owner of the house a slave of equal value.”

“232. If it destroys property, he shall restore whatever it destroyed, and because he did not make the house which he built firm and it collapsed, he shall rebuild the house which collapsed at his own expense.”

“233. If a builder builds a house for a man and does not make its construction meet the requirements and a wall falls in, that builder shall strengthen the wall at his own expense.”


Hammurabi was the best-known king of Babylon's first dynasty. In addition to being a lesson in incentives, the code is one of the earliest, if not the earliest, examples of recorded construction law.

“In all,” writes Nael Bunni in Risk and Insurance in Construction, “there were 282 rules found inscribed on an imposing stone stele in cuneiform script. The rules were divided into three sections: property law, family law, and laws relating to retaliation and restitution.”

The severe penalty imposed by these rules ensured that building work achieved the required standards of construction and safety and helped to ensure that houses were free from the defects resulting from bad design, materials or workmanship. The assurance that this would be so was based on the principle of ‘an eye for an eye’ in accordance with the law of that time, a principle that still exists today in some legal systems.

In a 2011 Op-ed, Nassim Taleb wrote:

The Babylonians understood that the builder will always know more about the risks than the client, and can hide fragilities and improve his profitability by cutting corners — in, say, the foundation. The builder can also fool the inspector; the person hiding risk has a large informational advantage over the one who has to find it.

How To Avoid Getting Sick

This time of year brings out more than just the holiday spirit. It's cold and flu season and not a day goes by where I don't see someone sniffling or coughing.

Here are 7 simple tips to keep in mind that will help prevent colds and the flu.

1. Wash your hands.
This is something you should be doing a lot. Most of what we do every day involves touch. Consider my local coffee shop: at least two—and often three—people touch that cup before it even gets to me. I'm not a germaphobe, yet if you're only going to do one thing, do this.

2. Don't pick your nose, rub your eyes, or otherwise touch your face.
My mom told me ‘this is the way germs get in’ and she was right. Even with relatively clean hands, odds are there are some germs on them. One of the easiest ways to transmit a virus is through your nose, mouth, and eyes. Keep your hands away. Oh, and don't bite your nails.

3. Avoid sick people.
Sick people often have sick germs. Stay away from these people. If you're sick don't go to work. Every office has that person who shows up to ‘tough-it-out' and everyone secretly hates that they are at work.

4. Avoid social jet lag (i.e., get enough sleep).
Not getting enough sleep increases the risk of catching a cold. When you feel like you're starting to get sick do the world a favour and take the day off to rest.

5. Drink plenty of water.
Not juice, water. If you want juice, eat an orange.

6. Pass on the booze.
If your body is fighting a cold or the flu, why would you ask it to do even more? That's like taking the busiest person you know and saying, “Hey, can you do this too?” Skip the booze for a few days if you think you're fighting something.

7. Fast.
Skip a meal. When you're sick your body does this naturally through lack of appetite. But when you're fighting something, you can choose to do it. This is what animals do when they're fighting an illness or serious infection. Don't skip the water.

Breakpoint — Bigger is Not Better


“What is missing—what everyone is missing—is that the unit of measure for progress isn’t size, it’s time.”

Jeff Stibel's book Breakpoint: Why the Web Will Implode, Search Will Be Obsolete, and Everything Else You Need to Know about Technology Is in Your Brain is an interesting read. The book is about “understanding what happens after a breakpoint. Breakpoints can't and shouldn't be avoided, but they can be identified.”

In any system continuous growth is impossible. Everything reaches a breakpoint. The real question is how the system responds to this breakpoint. “A successful network has only a small collapse, out of which a stronger network emerges wherein it reaches equilibrium, oscillating around an ideal size.”

The book opens with an interesting example.

In 1944, the United States Coast Guard brought 29 reindeer to St. Matthew Island, located in the Bering Sea just off the coast of Alaska. Reindeer love eating lichen, and the island was covered with it, so the reindeer gorged, grew large, and reproduced exponentially. By 1963, there were over 6,000 reindeer on the island, most of them fatter than those living in natural reindeer habitats.

There were no human inhabitants on St. Matthew Island, but in May 1965 the United States Navy sent an airplane over the island, hoping to photograph the reindeer. There were no reindeer to be found, and the flight crew attributed this to the fact that the pilot didn’t want to fly very low because of the mountainous landscape. What they didn’t realize was that all of the reindeer, save 42 of them, had died. Instead of lichen, the ground was covered with reindeer skeletons.

The network of St. Matthew Island reindeer had collapsed: the result of a population that grew too large and consumed too much. The reindeer crossed a pivotal point, a breakpoint, when they began consuming more lichen than nature could replenish. Lacking any awareness of what was happening to them, they continued to reproduce and consume. The reindeer destroyed their environment and, with it, their ability to survive. Within a few short years, the remaining 42 reindeer were dead. Their collapse was so extreme that for these reindeer there was no recovery.

In the wild, of course, reindeer can move if they run out of lichen, which allows the lichen in an area to be replenished before they return.

Nature rarely allows the environment to be pushed so far that it collapses. Ecosystems generally keep life balanced. Plants create enough oxygen for animals to survive, and the animals, in turn, produce carbon dioxide for the plants. In biological terms, ecosystems create homeostasis.
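To make the overshoot idea concrete, here is a minimal simulation sketch. It is not from Stibel's book: the function, its parameter values (regrowth rate, birth rate, consumption per head), and the starting numbers are illustrative assumptions loosely inspired by the reindeer story. A population feeds on a slowly renewing resource; with these numbers it crosses its breakpoint around year ten and collapses to zero within two more years.

```python
# Illustrative sketch (assumed parameters, not from the book): a population
# feeding on a slowly renewing resource. When consumption outpaces regrowth,
# the system overshoots its breakpoint and collapses.

def simulate(years=15, resource=1000.0, population=29.0,
             regrowth=0.10, birth_rate=0.35, need_per_head=1.0):
    """Return (year, population, resource) tuples for a simple overshoot model."""
    history = []
    for year in range(years):
        history.append((year, round(population, 1), round(resource, 1)))
        demand = population * need_per_head
        eaten = min(demand, resource)                    # can't eat more than exists
        resource = (resource - eaten) * (1 + regrowth)   # what is left slowly regrows
        fed_fraction = eaten / demand if demand else 1.0
        # the population grows when fully fed and starves when the resource runs short
        population = max(0.0, population * (1 + birth_rate * fed_fraction - (1 - fed_fraction)))
    return history

for year, pop, res in simulate():
    print(f"year {year:2d}  population {pop:8.1f}  lichen {res:8.1f}")
```

The one design choice that matters here is that the resource cannot regrow from zero: once the lichen is gone, so is any chance of recovery, which is why this toy population collapses outright instead of settling into the oscillation around an ideal size that Stibel describes for healthier networks.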

We evolved to reproduce and consume whatever food is available.

Back when our ancestors started climbing down from the trees, this was a good thing: food was scarce so if we found some, the right thing to do was gorge. As we ate more, our brains were able to grow, becoming larger than those of any other primates. This was a very good thing. But brains consume disproportionately large amounts of energy and, as a result, can only grow so big relative to body size. After that point, increased calories are actually harmful. This presents a problem for humanity, sitting at the top of the food pyramid. How do we know when to stop eating? The answer, of course, is that we don’t. People in developed nations are growing alarmingly obese, morbidly so. Yet we continue to create better food sources, better ways to consume more calories with less bite.

Mother Nature won’t help us because this is not an evolutionary issue: most of the problems that result from eating too much happen after we reproduce, at which point we are no longer evolutionarily important. We are on our own with this problem. But that is where our big brains come in. Unlike reindeer, we have enough brainpower to understand the problem, identify the breakpoint, and prevent a collapse.

We all know that physical things have limits. But so do the things we can't see or feel. Knowledge is an example. “Our minds can only digest so much. Sure, knowledge is a good thing. But there is a point at which even knowledge is bad.” This is information overload.

We have been conditioned to believe that bigger is better and this is true across virtually every domain. When we try to build artificial intelligence, we start by shoveling as much information into a computer as possible. Then we stare dumbfounded when the machine can't figure out how to tie its own shoes. When we don't get the results we want, we just add more data. Who doesn't believe that the smartest person is the one with the biggest memory and the most degrees, that the strongest person has the largest muscles, that the most creative person has the most ideas?

Growth is great until it goes too far.

[W]e often destroy our greatest innovations by the constant pursuit of growth. An idea emerges, takes hold, crosses the chasm, hits a tipping point, and then starts a meteoric rise with seemingly limitless potential. But more often than not, it implodes, destroying itself in the process.

Growth isn't bad. It's just not as good as we think.

Nature has a lesson for us if we care to listen: the fittest species are typically the smallest. The tiniest insects often outlive the largest lumbering animals. Ants, bees, and cockroaches all outlived the dinosaurs and will likely outlive our race. … The deadliest creature is the mosquito, not the lion. Bigger is rarely better in the long run. What is missing—what everyone is missing—is that the unit of measure for progress isn't size, it's time.

Of course, “The world is a competitive place, and the best way to stomp out potential rivals is to consume all the available resources necessary for survival.”

Otherwise, the risk is that someone else will come along and use those resources to grow and eventually encroach on the ones we need to survive.

Networks rarely approach limits slowly: “… they often don't know the carrying capacity of their environments until they've exceeded it. This is a characteristic of limits in general: the only way to recognize a limit is to exceed it.” This is what happened with MySpace. It grew too quickly. Pages became cluttered and confusing. There was too much information. It “grew too far beyond its breakpoint.”

There is an interesting paradox here, though: unless you want to keep social networks small, the best way to keep the site clean is actually to use a filter that prevents you from seeing a lot of information, which in turn creates a filter bubble.

Stibel offers three phases to any successful network.

first, the network grows and grows and grows exponentially; second, the network hits a breakpoint, where it overshoots itself and overgrows to a point where it must decline, either slightly or substantially; finally, the network hits equilibrium and grows only in the cerebral sense, in quality rather than in quantity.

He offers some advice:

Rather than endless growth, the goal should be to grow as quickly as possible—what technologists call hypergrowth—until the breakpoint is reached. Then stop and reap the benefits of scale alongside stability.

Breakpoint goes on to predict the fall of Facebook.

Proximate vs Root Causes: Why You Should Keep Digging to Find the Answer

“Anything perceived has a cause.
All conclusions have premises.
All effects have causes.
All actions have motives.”
— Arthur Schopenhauer

***

The Basics

One of the first principles we learn as babies is that of cause and effect. Infants learn that pushing an object will cause it to move, crying will cause people to give them attention, and bumping into something will cause pain. As we get older, this understanding becomes more complex. Many people love to talk about the causes of significant events in their lives (if I hadn’t missed the bus that day I would never have met my partner! or if I hadn’t taken that class in college I would never have discovered my passion and got my job!) Likewise, when something bad happens we have a tendency to look for somewhere to pin the blame.

The mental model of proximate vs root causes is a more advanced version of this reasoning, which involves looking beyond what appears to be the cause and finding the real cause. As a higher form of understanding, it is useful for creative and innovative thinking. It can also help us to solve problems, rather than relying on band-aid solutions.

Much of our understanding of cause and effect comes from Isaac Newton. His work examined how forces lead to motion and other effects. Newton’s laws explain how a body remains stationary unless a force acts upon it. From this, we can take a cause to be whatever brings an effect about.

For example, someone might ask: Why did I lose my job?

  • Proximate cause: the company was experiencing financial difficulties and could not continue to pay all its employees.
  • Root cause: I was not of particular value to the company and they could survive easily without me.

This can then be explored further: Why was I not of value to the company?

  • Ultimate cause: I allowed my learning to stagnate and did not seek constant improvement. I continued doing the same as I had been for years which did not help the company progress.
  • Even further: Newer employees were of more value because they had more up-to-date knowledge and could help the company progress.

This can then help us to find solutions: How can I prevent this from happening again?

  • Answer: In future jobs, I can continually aim to learn more, keep to date with industry advancements, read new books on the topic and bring creative insights to my work. I will know this is working if I find myself receiving increasing amounts of responsibility and being promoted to higher roles.

This example illustrates the usefulness of this line of thinking. If our hypothetical person went with the proximate cause, they would walk away feeling nothing but annoyance at the company which fired them. By establishing the root causes, they can mitigate the risk of the same thing happening in the future.

There are a number of relevant factors which we must take into account when figuring out root causes. These are known as predisposing factors and can be used to prevent a future repeat of an unwanted occurrence.

Predisposing factors tend to include:

  • The location of the effect
  • The exact nature of the effect
  • The severity of the effect
  • The time at which the effect occurs
  • The level of vulnerability to the effect
  • The cause of the effect
  • The factors which prevented it from being more severe.

Looking at proximate vs root causes is a form of abductive reasoning: a process used to unearth simple, probable explanations. We can use it in conjunction with philosophical razors (such as Occam’s and Hanlon’s) to make smart decisions and choices.

In Root Cause Analysis, Paul Wilson defines a root cause as follows:

Root cause is that most basic reason for an undesirable condition or problem which, if eliminated or corrected, would have prevented it from existing or occurring.

In Leviathan, Chapter XI (1651) Thomas Hobbes wrote:

Ignorance of remote causes disposeth men to attribute all events to the causes immediate and instrumental: for these are all the causes they perceive… Anxiety for the future time disposeth men to inquire into the causes of things: because the knowledge of them maketh men the better able to order the present to their best advantage. Curiosity, or love of the knowledge of causes, draws a man from consideration of the effect to seek the cause; and again, the cause of that cause; till of necessity he must come to this thought at last that there is some cause whereof there is no former cause.

In Maxims of the Law, Francis Bacon wrote:

It were infinite for the law to consider the causes of causes, and their impulsions one of another; therefore it contented itself with the immediate cause, and judgeth of acts by that, without looking to any further degree.

A rather tongue-in-cheek perspective comes from the ever-satirical George Orwell:

“Man is the only real enemy we have. Remove Man from the scene, and the root cause of hunger and overwork is abolished forever.”

The issue with root cause analysis is that it can lead to oversimplification; it is rare for there to be one single root cause. It can also lead us to go too far (as Orwell illustrates). Overemphasising root causes is common among depressed people, who can end up seeing their own existence as the cause of all their problems. As a consequence, suicide can seem like a solution (although it is the exact opposite). The same can occur after a relationship ends, as people imagine their personality and nature to be the cause.

To use this mental model in an effective manner, we must avoid letting it lead to self-blame or negative thought spirals. When using it to examine our lives, it is best to do so with a qualified therapist rather than while ruminating in bed late at night. Finding root causes should be done with the future in mind, not for dwelling on past issues. Expert root cause analysts use it to prevent further problems and create innovative solutions. We can do the same in our own lives and work.

“Shallow men believe in luck or in circumstance. Strong men believe in cause and effect.”

— Ralph Waldo Emerson

Establishing Root Causes

Establishing root causes is rarely an easy task. However, there are a number of techniques we can use to simplify the deduction process. These are similar to the methods used to find first principles:

Socratic questioning
Socratic questioning is a technique which can be used to establish root causes through strict analysis. This is a disciplined questioning process used to uncover truths, reveal underlying assumptions, and separate knowledge from ignorance. The key distinction between Socratic questioning and normal discussions is that the former seeks to draw out root causes in a systematic manner. Socratic questioning generally follows this process:

  1. Clarifying thinking and explaining origins of ideas. (What happened? What do I think caused it?)
  2. Challenging assumptions. (How do I know this is the cause? What could have caused that cause?)
  3. Looking for evidence. (How do I know that was the cause? What can I do to prove or disprove my ideas?)
  4. Considering alternative perspectives. (What might others think? What are all the potential causes?)
  5. Examining consequences and implications. (What are the consequences of the causes I have established? How can they help me solve problems?)
  6. Questioning the original questions. (What can I do differently now that I know the root cause? How will this help me?)

The 5 Whys
This technique is simpler and less structured than Socratic questioning. Parents of young children will no doubt be familiar with the process, which involves asking ‘why?’ five times in response to a given statement. The purpose is to understand cause and effect relationships, leading to the root causes. Five is generally the number of repetitions required, and each question is based on the previous answer, not the initial statement.

Returning to the example of our hypothetical laid off employee (mentioned in the introduction), we can see how this technique works.

  • Effect: I lost my job.
  • Why? Because I was not valuable enough to the company and they could let me go without it causing any problems.
  • Why? Because a newer employee in my department was getting far more done and having more creative ideas than me.
  • Why? Because I had allowed my learning to stagnate and stopped keeping up with industry developments. I continued doing what I had been doing for years because I thought it was effective.
  • Why? Because I only received encouraging feedback from people higher up in the company, and even when I knew my work was substandard, they avoided mentioning it.
  • Why? Because whenever I received negative feedback in the past, I got angry and defensive. After a few occurrences of this, I was left to keep doing work which was not of much use. Then, when the company began to experience financial difficulties, firing me was the practical choice.
  • Solution: In future jobs, I must learn to be responsive to feedback, aim to keep learning and make myself valuable. I can also request regular updates on my performance. To avoid becoming angry when I receive negative feedback, I can try meditating during breaks to stay calmer at work.

As this example illustrates, the 5 whys technique is useful for drawing out root causes and finding solutions.
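To make the mechanics concrete, here is a small sketch that stores the hypothetical chain above as a plain list of question and answer pairs and treats the deepest answer as the working root cause. The data and the helper function are illustrative only; nothing here is standard 5 Whys tooling.

```python
# A minimal sketch of the 5 Whys chain from the example above.
# Each entry answers "why?" for the entry before it; the deepest
# answer is treated as the working root cause.

five_whys = [
    ("Effect", "I lost my job."),
    ("Why?", "I was not valuable enough to the company."),
    ("Why?", "A newer employee was getting more done and had more creative ideas."),
    ("Why?", "I had let my learning stagnate and stopped following industry developments."),
    ("Why?", "I only ever received encouraging feedback, even when my work was substandard."),
    ("Why?", "When I got negative feedback in the past, I reacted angrily and defensively."),
]

def root_cause(chain):
    """The last answer in the chain is the working root cause."""
    return chain[-1][1]

for label, answer in five_whys:
    print(f"{label:7s} {answer}")
print("\nRoot cause:", root_cause(five_whys))
```

Writing the chain down this way makes it obvious that each answer must explain the one before it, and that stopping early leaves you holding a proximate cause rather than a root cause.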

Cause and Effect Mapping

This technique is often used to establish the causes of accidents, disasters, and other mishaps. Let’s take a look at how cause and effect mapping can be used to identify the root cause of a disaster which occurred in 1987: the King's Cross fire. This was a shocking event in which 31 people died and 100 were injured. It was the first fatal fire to have occurred on the London Underground and led to tremendous changes in rules and regulations. The main factors which led to the fire, and which combined to produce the tragic event, included: flammable grease on the floors which allowed the flames to spread, flammable out-of-date wooden escalators, complacent staff who failed to control the initial flames, untrained staff with no knowledge of how to evacuate people, blocked exits (believed to be due to cleaning staff negligence), and a dropped match (assumed to have been discarded by someone lighting a cigarette).

Once investigators had established these factors which led to the fire, they could begin looking for solutions to prevent another fatal fire. Of course, solving the wrong problem would have been ineffective. Let’s take a look at each of the causes and figure out the root problem:

  • Cause: a dropped match. Smoking on Underground trains had been banned three years prior, but many people still lit cigarettes on the escalators as they left. Investigators were certain that the fire was caused by a match and that it was not arson. Research found that many other fires had begun in the past, yet had not spread, so this alone did not explain the severity of this particular fire. Better measures have since been put into place to prevent smoking in stations (although Londoners can vouch for the fact that it still occasionally happens late at night or in secluded stations).
  • Cause: flammable grease on escalators. Research found that this was indeed highly flammable. Solving this would have been almost impossible: the sheer size of the stations and the number of people passing through them made thorough cleaning difficult. Solving this alone would not have been sufficient.
  • Cause: wooden escalators. Soon after the fire, stations began replacing these with metal ones (although it took until 2014 for the entire Underground network to replace every single one).
  • Cause: untrained staff. This was established to be the root cause. Even if the other factors were resolved, the lack of staff training or fire-fighting equipment still left a high risk of another fatal incident. Investigations found that staff were only instructed to call the Fire Brigade once a fire was out of control; most had no training and little ability to communicate with each other. Once this root cause was found, it could be dealt with. Staff were given improved emergency training and better radios for communicating, and heat detectors and sprinklers were fitted in stations.

From this example, we can see how useful finding root causes is. The lack of staff training was the root cause, while the other factors were proximate causes which contributed.


“All our knowledge has its origins in our perceptions … In nature, there is no effect without a cause … Experience never errs; it is only your judgments that err by promising themselves effects such as are not caused by your experiments.”

- Leonardo da Vinci

How We Can Use This Mental Model as Part of our Latticework

  • Hanlon’s Razor — This mental model states: never attribute to malice that which can be attributed to incompetence. It is relevant when looking for root causes. Take the aforementioned example of the King's Cross fire. It could be assumed that staff failed to control the fire due to malice. However, we can be 99% certain that their failure to act was due to incompetence (the result of poor training and miscommunication). When analysing root causes, we must be sure not to attribute blame where it does not exist.
  • Occam’s Razor — This model states: the simplest solution is usually correct. In the case of the fire, there are infinite possible causes which could be investigated. It could be said that the fire was started deliberately, that the builders of the station made it flammable on purpose so they would be required to rebuild it, or that the whole thing was a conspiracy and people actually died in some other manner. However, the simplest explanation is that the fire was caused by a discarded match. When looking for root causes, it is wise to first consider the simplest potential causes, rather than looking at everything which could have contributed.
  • Arguing from first principles — This mental model involves establishing the first principles of any given area of knowledge: information which cannot be deduced from anything else. Understanding the first principles of how fire spreads (such as the fire triangle) could have helped to prevent the event.
  • Black swans — This model, developed by Nassim Taleb, is: “an event with the following three attributes. First, it is an outlier, as it lies outside the realm of regular expectations because nothing in the past can convincingly point to its possibility. Second, it carries an extreme impact…. Third, in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable.” The King's Cross fire was a black swan: surprising, impactful, and much analyzed afterwards. Understanding that black swans do occur can help us to plan for serious events before they happen.
  • Availability bias — This model states: we misjudge the frequency of events which have happened recently and of information which is vivid. Imagine a survivor of the King's Cross fire who had also been on a derailed train a few months earlier. The intensity of the two memories would be likely to lead them to see travelling on the Underground as dangerous. However, this is not the case: only one in 300 million journeys experiences issues (far safer than driving). When establishing root causes, we must be sure to consider all information, not just that which comes to mind with ease.
  • Narrative fallacy — This model states: we tend to turn the past into a narrative, imagining events as fitting together in a manner which is usually false.
  • Hindsight bias — This model states: we see events as predictable when looking back on them.
  • Confirmation bias — This model states: we tend to look for information which confirms pre-existing beliefs and ideas.

David Foster Wallace: The Paradox of Plagiarism

David Foster Wallace (1962–2008) remains one of the most revered authors of our time. His timeless collection of wisdom includes everything from his famous commencement speech This is Water to his profound thoughts on the relationship between ambition and perfectionism and writing in the age of information.

In The Pale King, published posthumously, Foster Wallace describes the paradox of plagiarism.

It was all pretty incredible. In many respects, this college was my introduction to the stark realities of class, economic stratification, and the very different financial realities that different sorts of Americans inhabited.

Some of these upper-class students were indeed spoiled, cretinous, and/or untroubled by questions of ethics. Others were under great family pressure and failing, for whatever reasons, to work up to what their parents considered their true grade potential. Some just didn't manage their time and responsibilities well, and found themselves up against the wall on an assignment. I'm sure you get the basic picture. Let's just say that, as a way of positioning myself to pay off some of my loans at an accelerated rate, I provided a certain service. This service was not cheap, but I was quite good at it, and careful. E.g., I always demanded a large enough sample of a client's prior writing to determine how he tended to think and sound, and I never made the mistake of delivering something that was unrealistically superior to someone's own previous work. You can probably also see why these sorts of exercises would be good apprentice training for someone interested in so-called ‘creative writing.'

… To anticipate a likely question, let me concede that the ethics here were gray at best. This is why I chose to be honest, just above, about not being impoverished or needing the extra income in order to eat anything. I was not desperate. I was, though, trying to accumulate some savings against what I anticipated to be debilitating post-grad debt. I am aware that this is not an excuse in the strict sense, but I do believe it serves as at least an explanation; and there were also other, more general factors and contexts that might be seen as mitigating. For one, the college itself turned out to have a lot of moral hypocrisy about it, e.g., congratulating itself on its diversity and the leftist piety of its politics while in reality going about the business of preparing elite kids to enter elite professions and make a great deal of money, thus increasing the pool of prosperous alumni donors.

…The basic view I held was that, whereas there may have been elements of my enterprise that might technically qualify as aiding or abetting a client's decision to violate the college's Code of Academic Honesty; that decision, as well as the practical and moral responsibility for it rested with the client. I was undertaking certain freelance writing assignments for pay; why certain students wanted certain papers of a certain length on certain topics, and what they chose to do with them after delivery, were not my business.

Suffice it to say that this view was not shared by the college's Judicial Board in late 1984. Here the story gets complex and a bit lurid, and a SOP memoir would probably linger on the details and the rank unfairness of hypocrisies involved.

The paradox of plagiarism

The paradox of plagiarism is that it actually requires a lot of care and hard work to pull off successfully, since the original text's style, substance, and logical sequences have to be modified enough so that the plagiarism isn't totally, insultingly obvious to the professor who's grading it.

Update: D.T. Max questions whether David Foster Wallace ever did the things he says he did in The Pale King.

Practically speaking, one of the great struggles was to figure out what really happened or at least get close to it. David was writing his autobiography even as he was living it and the life and the narrative coincided but were not identical. Take an example. David used to tell people he sold thesis help for pot or money at Amherst. He even has his doppel do it in The Pale King and Stonecipher LaVache Beadsman, something of a stand-in for David, does it in The Broom of the System. David was one of the smartest people anyone ever met in their lives—everyone agrees on that—so it's obvious that in philosophy or English, and probably history or French or economics, all subjects he got A-pluses in—he could have done it. Anyone would have been smart to make that trade with him. But did it ever happen? His college roommate and confidante Mark Costello, the one who knew him best in those years, thinks not. He thinks it's David's self-mythologizing. I never found anyone on the receiving end of such a transaction or had direct knowledge of one, so it's not in the book. If I were writing a novel with David as the protagonist, it would certainly be something the character did. It's something he should have done if he didn't.

Still curious? Learn more about David Foster Wallace by reading Every Love Story Is a Ghost Story: A Life of David Foster Wallace. If you want to read some of his work, start with Consider the Lobster and work your way up to The Pale King, which was left unfinished at the time of his suicide, and Infinite Jest, his masterpiece. Finally, top off with some wonderful cultural insights.

Predators and Robots at War

Scary stuff.

…the ethical and legal implications of the new technology already go far beyond the relatively circumscribed issue of targeted killing. Military robots are on their way to developing considerable autonomy. As noted earlier, UAVs can already take off, land, and fly themselves without human intervention. Targeting is still the exclusive preserve of the human operator—but how long will this remain the case? As sensors become more powerful and diverse, the amount of data gathered by the machines is increasing exponentially, and soon the volume and velocity of information will far exceed the controller’s capacity to process it all in real time, meaning that more and more decision-making will be left to the robot.

A move is already underway toward systems that allow a single operator to handle multiple drones simultaneously, and this, too, will tend to push the technology toward greater autonomy. We are not far from the day when it will become manifest that our mechanical warriors are better at protecting the lives of our troops than any human soldier, and once that happens the pressure to let robots take the shot will be very hard to resist. Pentagon officials who have been interviewed on the subject predictably insist that the decision to kill will never be ceded to a machine. That is reassuring. Still, this is an easy thing to say at a point when robots are not yet in the position to take the initiative against the enemy on a battlefield. Soon, much sooner than most of us realize, they will be able to do just that.

While the Pentagon may state that the decision to kill will never be ceded to a machine, I'm not so sure. What if another country, with similar technological capability, delegates kill decisions to their toy army? A “robot” v. “human-controlled robot” battle will quickly be won by the robot. A robot that doesn't need a human to pull the trigger can process more information and act much more quickly than a human. It seems almost inevitable that if one country cedes kill decisions to robots, we all will.



How Warren Buffett Keeps up with a Torrent of Information

A telling excerpt from an interview of Warren Buffett (below) on the value of reading.

Seems like he's taking the opposite approach to Nassim Taleb in some ways.

Warren Buffett on How he Keeps up with Information

Interviewer: How do you keep up with all the media and information that goes on in our crazy world and in your world of Berkshire Hathaway? What's your media routine?

Warren Buffett: I read and read and read. I probably read five to six hours a day. I don't read as fast now as when I was younger. But I read five daily newspapers. I read a fair number of magazines. I read 10-Ks. I read annual reports. I read a lot of other things, too. I've always enjoyed reading. I love reading biographies, for example.

Interviewer: You process information very quickly.

Warren Buffett: I have filters in my mind. If somebody calls me about an investment in a business or an investment in securities, I usually know in two or three minutes whether I have an interest. I don't waste any time with the ones which I don't have an interest.

I always worry a little bit about even appearing rude because I can tell very, very, very quickly whether it's going to be something that will lead to something, or whether it's a half an hour or an hour or two hours of chatter.

What's interesting about these filters is that Buffett has consciously developed them as heuristics to allow for rapid processing. They allow him to move quickly with few mistakes — that's what heuristics are designed to do. Most of us try to get rid of our heuristics to reduce error, but here is one of the smartest people alive doing the opposite: he's creating these filters as a means of processing information quickly. He's moving fast and in the right direction.

Scarcity: Why Having Too Little Means So Much


“The biggest mistake we make about scarcity is we view it as a physical phenomenon. It’s not.”

We're busier than ever. The typical inbox is perpetually swelling with messages awaiting attention. Meetings need to be rescheduled because something came up. Our relationships suffer. We don't spend as much time as we should with those who mean something to us. We have little time for new people; potential friends eventually get the hint and stop proposing ideas for things to do together. Falling behind turns into a vicious cycle.

Does this sound anything like your life?

You have something in common with people who fall behind on their bills, argue Harvard economist Sendhil Mullainathan and Princeton psychologist Eldar Shafir in their book Scarcity: Why Having Too Little Means So Much. The resemblance, they write, is clear.

Missed deadlines are a lot like overdue bills. Double-booked meetings (committing time you do not have) are a lot like bounced checks (spending money you do not have). The busier you are, the greater the need to say no. The more indebted you are, the greater the need to not buy. Plans to escape sound reasonable but prove hard to implement. They require constant vigilance—about what to buy or what to agree to do. When vigilance flags—the slightest temptation in time or in money—you sink deeper.

Some people end up sinking further into debt. Others with more commitments. The resemblance is striking.

We normally think of time management and money management as distinct problems. The consequences of failing are different: bad time management leads to embarrassment or poor job performance; bad money management leads to fees or eviction. The cultural contexts are different: falling behind and missing a deadline means one thing to a busy professional; falling behind and missing a debt payment means something else to an urban low-wage worker.

What's common between these situations? Scarcity. “By scarcity,” they write, “we mean having less than you feel you need.”

And what happens when we feel a sense of scarcity? To show us, Mullainathan and Shafir bring us back to the past. Near the end of World War II, the Allies realized they would need to feed a lot of Europeans on the edge of starvation. The question wasn't where to get the food but, rather, something more technical. What is the best way to start feeding them? Should you begin with normal meals or small quantities that gradually increase? Researchers at the University of Minnesota undertook an experiment with healthy male volunteers in a controlled environment “where their calories were reduced until they were subsisting on just enough food so as not to permanently harm themselves.” The most surprising findings were psychological. The men became completely focused on food in unexpected ways:

Obsessions developed around cookbooks and menus from local restaurants. Some men could spend hours comparing the prices of fruits and vegetables from one newspaper to the next. Some planned now to go into agriculture. They dreamed of new careers as restaurant owners…. When they went to the movies, only the scenes with food held their interest.

“Scarcity captures the mind,” Mullainathan and Shafir write. Starving people have food on their mind to the point of irrationality. But we all act this way when we experience scarcity. “The mind,” they write, “orients automatically, powerfully, toward unfulfilled needs.”

Scarcity is like oxygen. When you don't need it, you don't notice it. When you do need it, however, it's all you notice.

For the hungry, that need is food. For the busy it might be a project that needs to be finished. For the cash-strapped it might be this month's rent payment; for the lonely, a lack of companionship. Scarcity is more than just the displeasure of having very little. It changes how we think. It imposes itself on our minds.

And when scarcity is taking up your mental cycles and putting your attention on what you lack, you can't attend to other things. How, for instance, can you learn?

(There was) a school in New Haven that was located next to a noisy railroad line. To measure the impact of this noise on academic performance, two researchers noted that only one side of the school faced the tracks, so the students in classrooms on that side were particularly exposed to the noise but were otherwise similar to their fellow students. They found a striking difference between the two sides of the school. Sixth graders on the train side were a full year behind their counterparts on the quieter side. Further evidence came when the city, prompted by this study, installed noise pads. The researchers found this erased the difference: now students on both sides of the building performed at the same level.

Cognitive load matters. Mullainathan and Shafir believe that scarcity imposes a similar mental tax, impairing our ability to perform well and exercise self control.

We are all susceptible to “the planning fallacy,” which means that we're too optimistic about how long it will take to complete a project. Busy people, however, are more vulnerable to this fallacy. Because they are focused on everything they must currently do, they are “more distracted and overwhelmed—a surefire way to misplan.” “The underlying problem,” writes Cass Sunstein in his review for the New York Review of Books, “is that when people tunnel, they focus on their immediate problem; ‘knowing you will be hungry next month does not capture your attention the same way that being hungry today does.' A behavioral consequence of scarcity is “juggling,” which prevents long-term planning.”

When we have abundance we don't have as much depletion. Wealthy people can weather a shock without turning their lives upside-down. The mental energy needed to prevail may be substantial but it will not create a feeling of scarcity.

Imagine a day at work where your calendar is sprinkled with a few meetings and your to-do list is manageable. You spend the unscheduled time by lingering at lunch or at a meeting or calling a colleague to catch up. Now, imagine another day at work where your calendar is chock-full of meetings. What little free time you have must be sunk into a project that is overdue. In both cases time was physically scarce. You had the same number of hours at work and you had more than enough activities to fill them. Yet in one case you were acutely aware of scarcity, of the finiteness of time; in the other it was a distant reality, if you felt it at all. The feeling of scarcity is distinct from its physical reality.

Mullainathan and Shafir sum up their argument:

In a way, our argument in this book is quite simple. Scarcity captures our attention, and this provides a narrow benefit: we do a better job of managing pressing needs. But more broadly, it costs us: we neglect other concerns, and we become less effective in the rest of life. This argument not only helps explain how scarcity shapes our behaviors; it also produces some surprising results and sheds new light on how we might go about managing our scarcity.

In a way this explains why diets never work.

Scarcity: Why Having Too Little Means So Much goes on to discuss some of the possible ways to mitigate scarcity using defaults and reminders.

Being Wrong: Adventures in the Margin of Error

"It infuriates me to be wrong when I know I’m right." — Molière
“It infuriates me to be wrong when I know I’m right.” — Molière

“Why is it so fun to be right?”

That's the opening line from Kathryn Schulz' excellent book Being Wrong: Adventures in the Margin of Error.

As pleasures go, it is, after all, a second-order one at best. Unlike many of life’s other delights—chocolate, surfing, kissing—it does not enjoy any mainline access to our biochemistry: to our appetites, our adrenal glands, our limbic systems, our swoony hearts. And yet, the thrill of being right is undeniable, universal, and (perhaps most oddly) almost entirely undiscriminating.

While we take pleasure in being right, we take as much, if not more, in feeling we are right.

A whole lot of us go through life assuming that we are basically right, basically all the time, about basically everything: about our political and intellectual convictions, our religious and moral beliefs, our assessment of other people, our memories, our grasp of facts. As absurd as it sounds when we stop to think about it, our steady state seems to be one of unconsciously assuming that we are very close to omniscient.

Schulz argues this makes sense. We're right most of the time and in these moments we affirm “our sense of being smart.” But Being Wrong is about … well, being wrong.

If we relish being right and regard it as our natural state, you can imagine how we feel about being wrong. For one thing, we tend to view it as rare and bizarre—an inexplicable aberration in the normal order of things. For another, it leaves us feeling idiotic and ashamed.

In our collective imagination, error is associated not just with shame and stupidity but also with ignorance, indolence, psychopathology, and moral degeneracy. This set of associations was nicely summed up by the Italian cognitive scientist Massimo Piattelli-Palmarini, who noted that we err because of (among other things) “inattention, distraction, lack of interest, poor preparation, genuine stupidity, timidity, braggadocio, emotional imbalance,…ideological, racial, social or chauvinistic prejudices, as well as aggressive or prevaricatory instincts.” In this rather despairing view—and it is the common one—our errors are evidence of our gravest social, intellectual, and moral failings.

But of all the things we are wrong about, “this idea of error might well top the list.”

It is our meta-mistake: we are wrong about what it means to be wrong. Far from being a sign of intellectual inferiority, the capacity to err is crucial to human cognition. Far from being a moral flaw, it is inextricable from some of our most humane and honorable qualities: empathy, optimism, imagination, conviction, and courage. And far from being a mark of indifference or intolerance, wrongness is a vital part of how we learn and change. Thanks to error, we can revise our understanding of ourselves and amend our ideas about the world.

“As with dying,” Schulz pens, “we recognize erring as something that happens to everyone, without feeling that it is either plausible or desirable that it will happen to us.”

Being wrong is something we have a hard time culturally admitting.

As a culture, we haven’t even mastered the basic skill of saying “I was wrong.” This is a startling deficiency, given the simplicity of the phrase, the ubiquity of error, and the tremendous public service that acknowledging it can provide. Instead, what we have mastered are two alternatives to admitting our mistakes that serve to highlight exactly how bad we are at doing so. The first involves a small but strategic addendum: “I was wrong, but…”—a blank we then fill in with wonderfully imaginative explanations for why we weren’t so wrong after all. The second (infamously deployed by, among others, Richard Nixon regarding Watergate and Ronald Reagan regarding the Iran-Contra affair) is even more telling: we say, “mistakes were made.” As that evergreen locution so concisely demonstrates, all we really know how to do with our errors is not acknowledge them as our own.

Being wrong feels a lot like being right.

This is the problem of error-blindness. Whatever falsehoods each of us currently believes are necessarily invisible to us. Think about the telling fact that error literally doesn’t exist in the first person present tense: the sentence “I am wrong” describes a logical impossibility. As soon as we know that we are wrong, we aren’t wrong anymore, since to recognize a belief as false is to stop believing it. Thus we can only say “I was wrong.” Call it the Heisenberg Uncertainty Principle of Error: we can be wrong, or we can know it, but we can’t do both at the same time.

Error-blindness goes some way toward explaining our persistent difficulty with imagining that we could be wrong. It’s easy to ascribe this difficulty to various psychological factors—arrogance, insecurity, and so forth—and these plainly play a role. But error-blindness suggests that another, more structural issue might be at work as well. If it is literally impossible to feel wrong—if our current mistakes remain imperceptible to us even when we scrutinize our innermost being for signs of them—then it makes sense for us to conclude that we are right.

If our current mistakes are necessarily invisible to us, our past errors have an oddly slippery status as well. Generally speaking, they are either impossible to remember or impossible to forget. This wouldn’t be particularly strange if we consistently forgot our trivial mistakes and consistently remembered the momentous ones, but the situation isn’t quite that simple.

It’s hard to say which is stranger: the complete amnesia for the massive error, or the perfect recall for the trivial one. On the whole, though, our ability to forget our mistakes seems keener than our ability to remember them.

Part of what’s going on here is, in essence, a database-design flaw. Most of us don’t have a mental category called “Mistakes I Have Made.”

Like our inability to say “I was wrong,” this lack of a category called “error” is a communal as well as an individual problem. As someone who tried to review the literature on wrongness, I can tell you that, first, it is vast; and, second, almost none of it is filed under classifications having anything to do with error. Instead, it is distributed across an extremely diverse set of disciplines: philosophy, psychology, behavioral economics, law, medicine, technology, neuroscience, political science, and the history of science, to name just a few. So too with the errors in our own lives. We file them under a range of headings—“embarrassing moments,” “lessons I’ve learned,” “stuff I used to believe”—but very seldom does an event live inside us with the simple designation “wrong.”

This category problem is only one reason why our past mistakes can be so elusive. Another is that (as we’ll see in more detail later) realizing that we are wrong about a belief almost always involves acquiring a replacement belief at the same time: something else instantly becomes the new right.

What with error-blindness, our amnesia for our mistakes, the lack of a category called “error,” and our tendency to instantly overwrite rejected beliefs, it’s no wonder we have so much trouble accepting that wrongness is a part of who we are. Because we don’t experience, remember, track, or retain mistakes as a feature of our inner landscape, wrongness always seems to come at us from left field—that is, from outside ourselves. But the reality could hardly be more different. Error is the ultimate inside job.

For us to learn from error, we have to see it differently. The goal of Being Wrong, then, is “to foster an intimacy with our own fallibility, to expand our vocabulary for and interest in talking about our mistakes, and to linger for a while inside the normally elusive and ephemeral experience of being wrong.”