
Daniel Kahneman Explains The Machinery of Thought


Israeli-American psychologist and Nobel laureate Daniel Kahneman is widely regarded as a founding father of modern behavioral economics. His work has shaped how we think about thinking, decisions, risk, and even happiness.

In Thinking, Fast and Slow, his “intellectual memoir,” he walks us through, in his own words, some of his enormous body of work.

Part of that body includes a description of the “machinery of … thought,” which divides the brain into two agents, called System 1 and System 2, which “respectively produce fast and slow thinking.” For our purposes these can also be thought of as intuitive and deliberate thought.

The Two Systems

Psychologists have been intensely interested for several decades in the two modes of thinking evoked by the picture of the angry woman and by the multiplication problem, and have offered many labels for them. I adopt terms originally proposed by the psychologists Keith Stanovich and Richard West, and will refer to two systems in the mind, System 1 and System 2.

  • System 1 operates automatically and quickly, with little or no effort and no sense of voluntary control.
  • System 2 allocates attention to the effortful mental activities that demand it, including complex computations. The operations of System 2 are often associated with the subjective experience of agency, choice, and concentration.

If asked which of these thinkers we are, we pick System 2. However, as Kahneman points out:

The automatic operations of System 1 generate surprisingly complex patterns of ideas, but only the slower System 2 can construct thoughts in an orderly series of steps. I also describe circumstances in which System 2 takes over, overruling the freewheeling impulses and associations of System 1. You will be invited to think of the two systems as agents with their individual abilities, limitations, and functions.

System One
The operations of System 1 vary by individual and are often “innate skills that we share with other animals.”

We are born prepared to perceive the world around us, recognize objects, orient attention, avoid losses, and fear spiders. Other mental activities become fast and automatic through prolonged practice. System 1 has learned associations between ideas (the capital of France?); it has also learned skills such as reading and understanding nuances of social situations. Some skills, such as finding strong chess moves, are acquired only by specialized experts. Others are widely shared. Detecting the similarity of a personality sketch to an occupational stereotype requires broad knowledge of the language and the culture, which most of us possess. The knowledge is stored in memory and accessed without intention and without effort.

System Two
System 2 is engaged when we do something that does not come naturally and requires some form of continuous exertion.

In all these situations you must pay attention, and you will perform less well, or not at all, if you are not ready or if your attention is directed inappropriately.

Paying attention is not the whole answer: attention is mentally expensive, and intense focus can make people “effectively blind, even to stimuli that normally attract attention.” This is the point Christopher Chabris and Daniel Simons make in their book The Invisible Gorilla. Not only can we be blind to the obvious, but we are also blind to our own blindness.

The Division of Labour

Systems 1 and 2 are both active whenever we are awake. System 1 runs automatically and System 2 is normally in a comfortable low-effort mode, in which only a fraction of its capacity is engaged. System 1 continuously generates suggestions for System 2: impressions, intuitions, intentions, and feelings. If endorsed by System 2, impressions and intuitions turn into beliefs, and impulses turn into voluntary actions. When all goes smoothly, which is most of the time, System 2 adopts the suggestions of System 1 with little or no modification. You generally believe your impressions and act on your desires, and that is fine— usually.

When System 1 runs into difficulty, it calls on System 2 to support more detailed and specific processing that may solve the problem of the moment. System 2 is mobilized when a question arises for which System 1 does not offer an answer, as probably happened to you when you encountered the multiplication problem 17 × 24. You can also feel a surge of conscious attention whenever you are surprised. System 2 is activated when an event is detected that violates the model of the world that System 1 maintains. In that world, lamps do not jump, cats do not bark, and gorillas do not cross basketball courts. The gorilla experiment demonstrates that some attention is needed for the surprising stimulus to be detected. Surprise then activates and orients your attention: you will stare, and you will search your memory for a story that makes sense of the surprising event. System 2 is also credited with the continuous monitoring of your own behavior—the control that keeps you polite when you are angry, and alert when you are driving at night. System 2 is mobilized to increased effort when it detects an error about to be made. Remember a time when you almost blurted out an offensive remark and note how hard you worked to restore control. In summary, most of what you (your System 2) think and do originates in your System 1, but System 2 takes over when things get difficult, and it normally has the last word.

The division of labor between System 1 and System 2 is highly efficient: it minimizes effort and optimizes performance. The arrangement works well most of the time because System 1 is generally very good at what it does: its models of familiar situations are accurate, its short-term predictions are usually accurate as well, and its initial reactions to challenges are swift and generally appropriate. System 1 has biases, however, systematic errors that it is prone to make in specified circumstances. As we shall see, it sometimes answers easier questions than the one it was asked, and it has little understanding of logic and statistics. One further limitation of System 1 is that it cannot be turned off.

[…]

Conflict between an automatic reaction and an intention to control it is common in our lives. We are all familiar with the experience of trying not to stare at the oddly dressed couple at the neighboring table in a restaurant. We also know what it is like to force our attention on a boring book, when we constantly find ourselves returning to the point at which the reading lost its meaning. Where winters are hard, many drivers have memories of their car skidding out of control on the ice and of the struggle to follow well-rehearsed instructions that negate what they would naturally do: “Steer into the skid, and whatever you do, do not touch the brakes!” And every human being has had the experience of not telling someone to go to hell. One of the tasks of System 2 is to overcome the impulses of System 1. In other words, System 2 is in charge of self-control.

[…]

The question that is most often asked about cognitive illusions is whether they can be overcome. The message of these examples is not encouraging. Because System 1 operates automatically and cannot be turned off at will, errors of intuitive thought are often difficult to prevent. Biases cannot always be avoided, because System 2 may have no clue to the error. Even when cues to likely errors are available, errors can be prevented only by the enhanced monitoring and effortful activity of System 2. As a way to live your life, however, continuous vigilance is not necessarily good, and it is certainly impractical. Constantly questioning our own thinking would be impossibly tedious, and System 2 is much too slow and inefficient to serve as a substitute for System 1 in making routine decisions. The best we can do is a compromise: learn to recognize situations in which mistakes are likely and try harder to avoid significant mistakes when the stakes are high. The premise of this book is that it is easier to recognize other people’s mistakes than our own.

Still Curious? Thinking, Fast and Slow is a tour de force when it comes to thinking.


Nassim Taleb: A Definition of Antifragile and its Implications

"Complex systems are weakened, even killed, when deprived of stressors."
“Complex systems are weakened, even killed, when deprived of stressors.”

I was talking with someone the other day about antifragility, and I realized that while a lot of people use the word, not many have read Antifragile, the book in which Nassim Taleb defines it.

Just as being clear on what constitutes a black swan allowed us to better discuss the subject, so too will defining antifragility.

The classic example of something antifragile is the Hydra, the Greek mythological creature with numerous heads: when one is cut off, two grow back in its place.

From Antifragile: Things That Gain from Disorder:

Some things benefit from shocks; they thrive and grow when exposed to volatility, randomness, disorder, and stressors and love adventure, risk, and uncertainty. Yet, in spite of the ubiquity of the phenomenon, there is no word for the exact opposite of fragile. Let us call it antifragile. Antifragility is beyond resilience or robustness. The resilient resists shocks and stays the same; the antifragile gets better. This property is behind everything that has changed with time: evolution, culture, ideas, revolutions, political systems, technological innovation, cultural and economic success, corporate survival, good recipes (say, chicken soup or steak tartare with a drop of cognac), the rise of cities, cultures, legal systems, equatorial forests, bacterial resistance … even our own existence as a species on this planet. And antifragility determines the boundary between what is living and organic (or complex), say, the human body, and what is inert, say, a physical object like the stapler on your desk.

The antifragile loves randomness and uncertainty, which also means— crucially—a love of errors, a certain class of errors. Antifragility has a singular property of allowing us to deal with the unknown, to do things without understanding them— and do them well. Let me be more aggressive: we are largely better at doing than we are at thinking, thanks to antifragility. I’d rather be dumb and antifragile than extremely smart and fragile, any time.

It is easy to see things around us that like a measure of stressors and volatility: economic systems, your body, your nutrition (diabetes and many similar modern ailments seem to be associated with a lack of randomness in feeding and the absence of the stressor of occasional starvation), your psyche. There are even financial contracts that are antifragile: they are explicitly designed to benefit from market volatility.

Antifragility makes us understand fragility better. Just as we cannot improve health without reducing disease, or increase wealth without first decreasing losses, antifragility and fragility are degrees on a spectrum.

Nonprediction

By grasping the mechanisms of antifragility we can build a systematic and broad guide to nonpredictive decision making under uncertainty in business, politics, medicine, and life in general— anywhere the unknown preponderates, any situation in which there is randomness, unpredictability, opacity, or incomplete understanding of things.

It is far easier to figure out if something is fragile than to predict the occurrence of an event that may harm it. Fragility can be measured; risk is not measurable (outside of casinos or the minds of people who call themselves “risk experts”). This provides a solution to what I’ve called the Black Swan problem— the impossibility of calculating the risks of consequential rare events and predicting their occurrence. Sensitivity to harm from volatility is tractable, more so than forecasting the event that would cause the harm. So we propose to stand our current approaches to prediction, prognostication, and risk management on their heads.

In every domain or area of application, we propose rules for moving from the fragile toward the antifragile, through reduction of fragility or harnessing antifragility. And we can almost always detect antifragility (and fragility) using a simple test of asymmetry: anything that has more upside than downside from random events (or certain shocks) is antifragile; the reverse is fragile.
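To make that asymmetry test concrete, here is a minimal sketch of the idea (my own illustration, not code from the book; the asymmetry helper and the payoff functions are invented for the example). It exposes a payoff to random shocks and compares the total upside against the total downside relative to the no-shock baseline:

```python
import random

def asymmetry(payoff, shocks):
    """Sum of gains minus sum of losses relative to the no-shock baseline.
    Positive: more upside than downside (antifragile-like).
    Negative: more downside than upside (fragile-like)."""
    baseline = payoff(0.0)
    gains = sum(max(payoff(s) - baseline, 0.0) for s in shocks)
    losses = sum(max(baseline - payoff(s), 0.0) for s in shocks)
    return gains - losses

random.seed(1)
shocks = [random.gauss(0, 1) for _ in range(100_000)]

convex = lambda x: x ** 2    # gains a lot from large shocks, loses nothing
concave = lambda x: -x ** 2  # loses a lot from large shocks, gains nothing
linear = lambda x: x         # upside and downside roughly cancel

print(asymmetry(convex, shocks))   # clearly positive: antifragile-like
print(asymmetry(concave, shocks))  # clearly negative: fragile-like
print(asymmetry(linear, shocks))   # close to zero: robust-like
```

The convex payoff has more to gain than to lose from volatility, and the concave one the reverse, which mirrors the asymmetry the quoted test describes.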

Deprivation of Antifragility

Crucially, if antifragility is the property of all those natural (and complex) systems that have survived, depriving these systems of volatility, randomness, and stressors will harm them. They will weaken, die, or blow up. We have been fragilizing the economy, our health, political life, education, almost everything … by suppressing randomness and volatility. … [C]omplex systems are weakened, even killed, when deprived of stressors. Much of our modern, structured, world has been harming us with top-down policies and contraptions (dubbed “Soviet-Harvard delusions” in the book) which do precisely this: an insult to the antifragility of systems. This is the tragedy of modernity: as with neurotically overprotective parents, those trying to help are often hurting us the most (see iatrogenics).

Antifragility is the antidote to Black Swans. The modern world may increase technical knowledge, but it will also make things more fragile.

… Black Swans hijack our brains, making us feel we “sort of” or “almost” predicted them, because they are retrospectively explainable. We don’t realize the role of these Swans in life because of this illusion of predictability. Life is more, a lot more, labyrinthine than shown in our memory— our minds are in the business of turning history into something smooth and linear, which makes us underestimate randomness. But when we see it, we fear it and overreact. Because of this fear and thirst for order, some human systems, by disrupting the invisible or not so visible logic of things, tend to be exposed to harm from Black Swans and almost never get any benefit. You get pseudo-order when you seek order; you only get a measure of order and control when you embrace randomness.

Complex systems are full of interdependencies— hard to detect— and nonlinear responses. “Nonlinear” means that when you double the dose of, say, a medication, or when you double the number of employees in a factory, you don’t get twice the initial effect, but rather a lot more or a lot less. Two weekends in Philadelphia are not twice as pleasant as a single one— I’ve tried. When the response is plotted on a graph, it does not show as a straight line (“linear”), rather as a curve. In such environments, simple causal associations are misplaced; it is hard to see how things work by looking at single parts.

Man-made complex systems tend to develop cascades and runaway chains of reactions that decrease, even eliminate, predictability and cause outsized events. So the modern world may be increasing in technological knowledge, but, paradoxically, it is making things a lot more unpredictable.

An annoying aspect of the Black Swan problem—in fact the central, and largely missed, point—is that the odds of rare events are simply not computable.
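As a quick numeric sketch of the nonlinearity described above (the response curves here are hypothetical stand-ins, not anything from the book): doubling the input of a nonlinear response yields much more, or much less, than double the output.

```python
# Hypothetical response curves, purely for illustration.
def concave_response(dose):
    return dose ** 0.5  # diminishing returns

def convex_response(dose):
    return dose ** 2    # accelerating response

for response in (concave_response, convex_response):
    single, double = response(1.0), response(2.0)
    # A linear response would give a ratio of exactly 2.0 here.
    print(f"{response.__name__}: doubling the dose multiplies the effect by {double / single:.2f}")
```

Running it prints roughly 1.41 for the concave curve and 4.00 for the convex one, rather than the 2.00 a straight line would give.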

Robustness is not enough.

Consider that Mother Nature is not just “safe.” It is aggressive in destroying and replacing, in selecting and reshuffling. When it comes to random events, “robust” is certainly not good enough. In the long run everything with the most minute vulnerability breaks, given the ruthlessness of time— yet our planet has been around for perhaps four billion years and, convincingly, robustness can’t just be it: you need perfect robustness for a crack not to end up crashing the system. Given the unattainability of perfect robustness, we need a mechanism by which the system regenerates itself continuously by using, rather than suffering from, random events, unpredictable shocks, stressors, and volatility.

Fragile and antifragile are relative, not absolute: you may be more antifragile than your neighbor, but that doesn't make you antifragile.

The Triad is FRAGILE — ROBUST — ANTIFRAGILE.


All of this leads to some significant conclusions. Often it's impossible to be antifragile, but falling short of that, you should at least be robust rather than fragile. How do you become robust? By eliminating the things that make you fragile. In an interview, Taleb offers some ideas:

You have to avoid debt because debt makes the system more fragile. You have to increase redundancies in some spaces. You have to avoid optimization. That is quite critical for someone who is doing finance to understand because it goes counter to everything you learn in portfolio theory. … I have always been very skeptical of any form of optimization. In the black swan world, optimization isn’t possible. The best you can achieve is a reduction in fragility and greater robustness.

If you haven't already, I highly encourage you to read Antifragile.
