Tag: Science

Alexander von Humboldt and the Invention of Nature: Creating a Holistic View of the World Through A Web of Interdisciplinary Knowledge

In his piece in 2014’s Edge collection This Idea Must Die: Scientific Theories That Are Blocking Progress, dinosaur paleontologist Scott Sampson writes that science needs to “subjectify” nature. By “subjectify,” he essentially means seeing ourselves as connected with nature, and therefore caring about it the same way we care about the people with whom we are connected.

That's not the current approach. He argues: “One of the most prevalent ideas in science is that nature consists of objects. Of course, the very practice of science is grounded in objectivity. We objectify nature so that we can measure it, test it, and study it, with the ultimate goal of unraveling its secrets. Doing so typically requires reducing natural phenomena to their component parts.”

But this approach is ultimately failing us.

Why? Because much of our unsustainable behavior can be traced to a broken relationship with nature, a perspective that treats the nonhuman world as a realm of mindless, unfeeling objects. Sustainability will almost certainly depend upon developing mutually enhancing relations between humans and nonhuman nature.

This isn't a new plea, though. Over 200 years ago, the famous naturalist Alexander von Humboldt (1769-1859) was facing the same challenges.

In her compelling book The Invention of Nature: Alexander von Humboldt’s New World, Andrea Wulf presents Humboldt as the first person to publish works promoting a holistic view of nature, arguing that nature could only be understood in relation to our subjective experience of it.

Fascinated by scientific instruments, measurements and observations, he was driven by a sense of wonder as well. Of course nature had to be measured and analyzed, but he also believed that a great part of our response to the natural world should be based on the senses and emotions.

Humboldt was a rock star scientist who ignored conventional boundaries in his exploration of nature. Humboldt's desire to know and understand the world led him to investigate discoveries in all scientific disciplines, and to see the interwoven patterns embedded in this knowledge — mental models anyone?

If nature was a web of life, he couldn’t look at it just as a botanist, a geologist or a zoologist. He required information about everything from everywhere.

Humboldt grew up in a world where science was dry, nature mechanical, and man an aloof and separate chronicler of what was before him. Not only did Humboldt have a new vision of what our understanding of nature could be, but he put humans in the middle of it.

Humboldt’s Essay on the Geography of Plants promoted an entirely different understanding of nature. Instead of only looking at an organism, … Humboldt now presented relationships between plants, climate and geography. Plants were grouped into zones and regions rather than taxonomic units. … He gave western science a new lens through which to view the natural world.

Revolutionary for his time, Humboldt rejected the Cartesian ideas of animals as mechanical objects. He also argued passionately against the growing approach in the sciences that put man atop and separate from the rest of the natural world. Promoting a concept of unity in nature, Humboldt saw nature as a “reflection of the whole … an organism in which the parts only worked in relation to each other.”

He believed, furthermore, that “poetry was necessary to comprehend the mysteries of the natural world.”

Wulf paints one of Humboldt’s greatest achievements as his ability and desire to make science available to everyone. No one before him had “combined exact observation with a ‘painterly description of the landscape’”.

By contrast, Humboldt took his readers into the crowded streets of Caracas, across the dusty plains of the Llanos and deep into the rainforest along the Orinoco. As he described a continent that few British had ever seen, Humboldt captured their imagination. His words were so evocative, the Edinburgh Review wrote, that ‘you partake in his dangers; you share his fears, his success and his disappointment.'

In a time when travel was precarious, expensive and unavailable to most people, Humboldt brought his experiences to anyone who could read or listen.

On 3 November 1827, … Humboldt began a series of sixty-one lectures at the university. These proved so popular that he added another sixteen at Berlin’s music hall from 6 December. For six months he delivered lectures several days a week. Hundreds of people attended each talk, which Humboldt presented without reading from his notes. It was lively, exhilarating and utterly new. By not charging any entry fee, Humboldt democratized science: his packed audiences ranged from the royal family to coachmen, from students to servants, from scholars to bricklayers – and half of those attending were women. Berlin had never seen anything like it.

The subjectification of nature is about seeing nature, experiencing it. Humboldt was a master of bringing people to worlds they couldn’t visit, allowing them to feel a part of it. In doing so, he wanted to force humanity to see itself in nature. If we were all part of the giant web, then we all had a responsibility to understand it.

When he listed the three ways in which the human species was affecting the climate, he named deforestation, ruthless irrigation and, perhaps most prophetically, the ‘great masses of steam and gas’ produced in the industrial centres. No one but Humboldt had looked at the relationship between humankind and nature like this before.

His final opus, a series of books called Cosmos, was the culmination of everything that Humboldt had learned and discovered.

Cosmos was unlike any previous book about nature. Humboldt took his readers on a journey from outer space to earth, and then from the surface of the planet into its inner core. He discussed comets, the Milky Way and the solar system as well as terrestrial magnetism, volcanoes and the snow line of mountains. He wrote about the migration of the human species, about plants and animals and the microscopic organisms that live in stagnant water or on the weathered surface of rocks. Where others insisted that nature was stripped of its magic as humankind penetrated into its deepest secrets, Humboldt believed exactly the opposite. How could this be, Humboldt asked, in a world in which the coloured rays of an aurora ‘unite in a quivering sea flame’, creating a sight so otherworldly ‘the splendour of which no description can reach’? Knowledge, he said, could never ‘kill the creative force of imagination’ – instead it brought excitement, astonishment and wondrousness.

This is the ultimate subjectification of nature: being inspired by its beauty to try to understand how it works. Humboldt respected nature for the wonders it contained, but also as the system of which we ourselves are an inseparable part.

Wulf concludes that Humboldt

…was one of the last polymaths, and died at a time when scientific disciplines were hardening into tightly fenced and more specialized fields. Consequently his more holistic approach – a scientific method that included art, history, poetry and politics alongside hard data – has fallen out of favour.

Maybe this is where the subjectivity of nature has gone. But we can learn from Humboldt the value of bringing it back.

In a world where we tend to draw a sharp line between the sciences and the arts, between the subjective and the objective, Humboldt’s insight that we can only truly understand nature by using our imagination makes him a visionary.

A little imagination is all it takes.

Mental Model: Occam’s Razor

The Basics

Occam’s razor (also known as the ‘law of parsimony’) is a problem-solving principle which serves as a useful mental model. A philosophical razor is a tool for eliminating improbable options in a given situation; Occam’s is the best-known example.

Occam’s razor can be summarized as such:

Among competing hypotheses, the one with the fewest assumptions should be selected.

In simpler language, Occam’s razor suggests that the simplest solution is usually the best one. Another take comes from the paranormal writer William J. Hall: ‘Occam’s razor is summarized for our purposes in this way: Extraordinary claims demand extraordinary proof.’

In other words, we should avoid looking for excessively complex solutions to a problem and focus on what works, given the circumstances. Occam’s razor is used in a wide range of situations, as a means of making rapid decisions and establishing truths without empirical evidence. It works best as a mental model for making initial conclusions before adequate information can be obtained.

A further literary summary comes from one of the best-loved fictional characters, Arthur Conan Doyle’s Sherlock Holmes. His classic aphorism is an expression of Occam’s razor: “When you have eliminated the impossible, whatever remains, however improbable, must be the truth.”

A number of mathematical and scientific studies have backed up its validity and lasting relevance. In particular, the principle of minimum energy supports Occam’s razor. This consequence of the second law of thermodynamics states that, wherever possible, a system settles into the configuration requiring the least energy. In general, the universe tends towards simplicity. Physicists rely on this tendency: a ball at the top of a hill will roll down in order to reach the point of minimum potential energy. A similar economy is present in biology. For example, if a person repeats the same action on a regular basis in response to the same cue and reward, it will become a habit as the corresponding neural pathway is formed. From then on, their brain will use less energy to complete the same action.

The History of Occam’s Razor

The concept of Occam’s razor is credited to William of Ockham, a 13th-14th-century friar, philosopher, and theologian. While he did not coin the term, his characteristic way of making deductions inspired other writers to develop the heuristic. Indeed, the idea is an ancient one, first stated by Aristotle, who wrote that “we may assume the superiority, other things being equal, of the demonstration which derives from fewer postulates or hypotheses.”

Robert Grosseteste expanded on Aristotle's writing in the 1200s, declaring that:

That is better and more valuable which requires fewer, other circumstances being equal… For if one thing were demonstrated from many and another thing from fewer equally known premises, clearly that is better which is from fewer because it makes us know quickly, just as a universal demonstration is better than particular because it produces knowledge from fewer premises. Similarly, in natural science, in moral science, and in metaphysics the best is that which needs no premises and the better that which needs the fewer, other circumstances being equal.

Early writings such as this are believed to have led to the eventual (and ironic) simplification of the concept. Nowadays, Occam’s razor is an established mental model which can form a useful part of a latticework of knowledge.

Examples of the Use of Occam’s Razor

Theology

In theology, Occam’s razor has been used to argue both for and against the existence of God. William of Ockham, a Christian friar, used it to defend religion. He regarded scripture as true in the literal sense and therefore saw it as simple proof: to him, the Bible was synonymous with reality, so to contradict it would be to conflict with established fact. Many religious people regard the existence of God as the simplest possible explanation for the creation of the universe.

Others have wielded the razor in the opposite direction. In his 13th-century Summa Theologica, Thomas Aquinas stated the objection in its classic form: ‘it is superfluous to suppose that what can be accounted for by a few principles has been produced by many.’ Aquinas raised this argument in order to refute it, but many modern atheists take it at face value, regarding the existence of God as a hypothesis that makes a huge number of assumptions compared to scientific alternatives, in particular due to the lack of empirical evidence.

Taoist thinkers take Occam’s razor one step further, by simplifying everything in existence to its most basic form. In Taoism, everything is an expression of a single ultimate reality (known as the Tao). This school of religious and philosophical thought holds that the most plausible explanation for the universe is the simplest: everything is both created and controlled by a single force. This can be seen as a profound example of the use of Occam’s razor within theology.

The Development of Scientific Theories

Occam’s razor is frequently used by scientists, in particular for theoretical matters. The simpler a hypothesis is, the more easily it can be proved or falsified. A complex explanation for a phenomenon involves many factors which can be difficult to test or lead to issues with the repeatability of an experiment. As a consequence, the simplest solution which is consistent with the existing data is preferred. However, it is common for new data to allow hypotheses to become more complex over time. Scientists choose the simplest solution the current data permit, while remaining open to the possibility of future research allowing for greater complexity.

Failing to observe Occam’s razor is usually a sign of bad science and an attempt to cover up poor explanations. The version used by scientists can best be summarized as: ‘when you have two competing theories that make exactly the same predictions, the simpler one is the better.’
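One way to make that parsimony preference concrete (a sketch of my own, not from the article) is an information criterion such as the AIC, which rewards goodness of fit but charges a penalty for every extra parameter. When two models explain the data about equally well, the simpler one wins:

```python
# A minimal sketch (assumes NumPy): compare a simple model with a needlessly
# complex one using the Akaike Information Criterion (lower is better).
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=x.size)  # the truth is a straight line

def aic(y_true, y_pred, n_params):
    """AIC for Gaussian residuals: n*log(RSS/n) + 2k."""
    n = len(y_true)
    rss = np.sum((y_true - y_pred) ** 2)
    return n * np.log(rss / n) + 2 * n_params

for degree in (1, 9):  # straight line vs. degree-9 polynomial
    coeffs = np.polyfit(x, y, degree)
    y_fit = np.polyval(coeffs, x)
    print(f"degree {degree}: AIC = {aic(y, y_fit, degree + 1):.1f}")

# The degree-9 polynomial fits the sample slightly more closely, but the
# penalty for its extra parameters makes the straight line the preferred model.
```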

Obtaining funding for simpler hypotheses tends to be easier, as they are often cheaper to test. As a consequence, the use of Occam’s razor in science is also a matter of practicality.

Albert Einstein referred to Occam’s razor when developing his theory of special relativity. He formulated his own version: ‘it can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience.’ Or, more pithily: “everything should be made as simple as possible, but not simpler.” This preference for simplicity can be seen in one of the most famous equations ever devised: E = mc². Rather than a lengthy equation requiring pages of writing, Einstein reduced the necessary factors to the bare minimum. The result is usable and perfectly parsimonious.

The physicist Stephen Hawking advocates for Occam’s razor in A Brief History of Time:

We could still imagine that there is a set of laws that determines events completely for some supernatural being, who could observe the present state of the universe without disturbing it. However, such models of the universe are not of much interest to us mortals. It seems better to employ the principle known as Occam's razor and cut out all the features of the theory that cannot be observed.

Isaac Newton used Occam’s razor too when developing his theories. Newton stated: “we are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances.” As a result, he sought to make his theories (including the three laws of motion) as simple as possible, with the fewest underlying assumptions necessary.

Medicine

Modern doctors use a version of Occam’s razor when they look for the fewest and most likely causes that can explain a patient's multiple symptoms. A doctor I know often repeats, “common things are common.” Interns are instructed, “when you hear hoofbeats, think horses, not zebras.” For example, a person displaying influenza-like symptoms during an epidemic is more likely to be suffering from influenza than from an alternative, rarer disease. Making minimal diagnoses reduces the risk of overtreating a patient, or of causing dangerous interactions between different treatments. This is of particular importance within the current medical model, where patients are likely to see numerous different health specialists and communication between them can be poor.

Prison Abolition and Fair Punishment

Occam’s razor has long played a role in attitudes towards the punishment of crimes. In this context, it refers to the idea that people should be given the least punishment necessary for their crimes.

This is to avoid the excessive penal practices that were popular in the past (for example, a Victorian could receive five years of hard labour for stealing a piece of food). The concept of penal parsimony was pioneered by Jeremy Bentham, the founder of utilitarianism. He stated that punishments should not cause more pain than they prevent. Life imprisonment for murder could be seen as justified in that it may prevent a great deal of potential pain, should the perpetrator offend again. On the other hand, long-term imprisonment of an impoverished person for stealing food causes substantial suffering without preventing any.

Bentham’s writings on the application of Occam’s razor to punishment contributed to the prison abolition movement and to our modern ideas of rehabilitation.

Crime Solving and Forensic Work

When it comes to solving a crime, Occam’s razor is used in conjunction with experience and statistical knowledge. A woman is statistically more likely to be killed by a male partner than by any other person. Should a woman be found murdered in her locked home, the first people police interview are any male partners. The possibility of a stranger entering can be considered, but the simplest explanation, with the fewest assumptions, is that the crime was perpetrated by her partner.

By using Occam’s razor, police officers can solve crimes faster and at lower cost.

Exceptions and Issues

It is important to note that, like any mental model, Occam’s razor is not failsafe and should be used with care, lest you cut yourself. This is especially crucial when it comes to important or risky decisions. There are exceptions to any rule, and we should never blindly follow a mental model which logic, experience, or empirical evidence contradicts. The smartest people are those who know the rules but also know when to ignore them. When you hear hoofbeats behind you, in most cases you should think horses, not zebras, unless you are out on the African savannah.

Simplicity is also subjective: in the example of the NASA moon landing conspiracy theory, some people consider it simpler for the landings to have been faked, others for them to have been real. When using Occam’s razor to make deductions, we must avoid falling prey to confirmation bias, merely using it to back up preexisting notions. The same goes for the theology example mentioned previously: some people consider the existence of God to be the simplest option, others consider the inverse to be true. Nor should semantic simplicity be given undue weight when selecting the solution Occam’s razor points to. A hypothesis can sound simple, yet involve more assumptions than a verbose alternative.

Occam’s razor should not be used in the place of logic, scientific methods and personal insights. In the long term, a judgment must be backed by empirical evidence, not just its simplicity. Lisa Randall best expressed the issues with Occam’s razor in her book, Dark Matter and the Dinosaurs: The Astounding Interconnectedness of the Universe:

My second concern about Occam’s Razor is just a matter of fact. The world is more complicated than any of us would have been likely to conceive. Some particles and properties don’t seem necessary to any physical processes that matter—at least according to what we’ve deduced so far. Yet they exist. Sometimes the simplest model just isn’t the correct one.

Harlan Coben has disputed many criticisms of Occam’s razor by stating that people fail to understand its exact purpose:

Most people oversimplify Occam’s razor to mean the simplest answer is usually correct. But the real meaning, what the Franciscan friar William of Ockham really wanted to emphasize, is that you shouldn’t complicate, that you shouldn’t “stack” a theory if a simpler explanation was at the ready. Pare it down. Prune the excess.

I once again leave you with Einstein: “Everything should be made as simple as possible, but not simpler.”

Occam’s razor is complemented by other mental models, including the fundamental attribution error, Hanlon’s razor, confirmation bias, the availability heuristic, and hindsight bias. The nature of mental models is that they tend to interlock and work best in conjunction.

Naval Ravikant on Reading, Happiness, Systems for Decision Making, Habits, Honesty and More

Naval Ravikant (@naval) is the CEO and co-founder of AngelList. He’s invested in more than 100 companies, including Uber, Twitter, Yammer, and many others.

Don’t worry, we’re not going to talk about early stage investing. Naval’s an incredibly deep thinker who challenges the status quo on so many things.

In this wide-ranging interview, we talk about reading, habits, decision-making, mental models, and life.

Just a heads up, this is the longest podcast I’ve ever done. While it felt like only thirty minutes, our conversation lasted over two hours!

If you’re like me, you’re going to take a lot of notes so grab a pen and paper. I left some white space on the transcript below in case you want to take notes in the margin.

Enjoy this amazing conversation.

Transcript

Normally only members of our learning community have access to transcripts, however, we wanted to make this one open to everyone. Here's the complete transcript of the interview with Naval.

How To Mentally Overachieve — Charles Darwin’s Reflections On His Own Mind

We’ve written quite a bit about the marvelous British naturalist Charles Darwin, who with his Origin of Species created perhaps the most intense intellectual debate in human history, one which continues up to this day.

Darwin’s Origin was a courageous and detailed thought piece on the nature and development of biological species. It's the starting point for nearly all of modern biology.

But, as we’ve noted before, Darwin was not a man of pure IQ. He was not Isaac Newton, or Richard Feynman, or Albert Einstein — breezing through complex mathematical physics at a young age.

Charlie Munger thinks Darwin would have placed somewhere in the middle of a good private high school class. He was also in notoriously bad health for most of his adult life and, by his son’s estimation, a terrible sleeper. He really only worked a few hours a day in the many years leading up to the Origin of Species.

Yet his “thinking work” outclassed almost everyone. An incredible story.

In his autobiography, Darwin reflected on this peculiar state of affairs. What was he good at that led to the result? What was he so weak at? Why did he achieve better thinking outcomes? As he put it, his goal was to:

“Try to analyse the mental qualities and the conditions on which my success has depended; though I am aware that no man can do this correctly.”

In studying Darwin ourselves, we hope to better appreciate our own strengths and weaknesses, not to mention understand the working methods of a “mental overachiever.”

Let's explore what Darwin saw in himself.

***

1. He did not have a quick intellect or an ability to follow long, complex, or mathematical reasoning. He may have been a bit hard on himself, but Darwin realized that he wasn't a “5 second insight” type of guy (and let's face it, most of us aren't). His life also proves how little that trait matters if you're aware of it and counter-weight it with other methods.

I have no great quickness of apprehension or wit which is so remarkable in some clever men, for instance, Huxley. I am therefore a poor critic: a paper or book, when first read, generally excites my admiration, and it is only after considerable reflection that I perceive the weak points. My power to follow a long and purely abstract train of thought is very limited; and therefore I could never have succeeded with metaphysics or mathematics. My memory is extensive, yet hazy: it suffices to make me cautious by vaguely telling me that I have observed or read something opposed to the conclusion which I am drawing, or on the other hand in favour of it; and after a time I can generally recollect where to search for my authority. So poor in one sense is my memory, that I have never been able to remember for more than a few days a single date or a line of poetry.

2. He did not feel easily able to write clearly and concisely. He compensated by getting things down quickly and then coming back to them later, thinking them through again and again. Slow, methodical…and ridiculously effective: For those who haven't read it, the Origin of Species is extremely readable and clear, even now, 150 years later.

I have as much difficulty as ever in expressing myself clearly and concisely; and this difficulty has caused me a very great loss of time; but it has had the compensating advantage of forcing me to think long and intently about every sentence, and thus I have been led to see errors in reasoning and in my own observations or those of others.

There seems to be a sort of fatality in my mind leading me to put at first my statement or proposition in a wrong or awkward form. Formerly I used to think about my sentences before writing them down; but for several years I have found that it saves time to scribble in a vile hand whole pages as quickly as I possibly can, contracting half the words; and then correct deliberately. Sentences thus scribbled down are often better ones than I could have written deliberately.

3. He forced himself to be an incredibly effective and organized collector of information. Darwin's system of reading and indexing facts in large portfolios is worth emulating, as is the habit of taking down conflicting ideas immediately.

As in several of my books facts observed by others have been very extensively used, and as I have always had several quite distinct subjects in hand at the same time, I may mention that I keep from thirty to forty large portfolios, in cabinets with labelled shelves, into which I can at once put a detached reference or memorandum. I have bought many books, and at their ends I make an index of all the facts that concern my work; or, if the book is not my own, write out a separate abstract, and of such abstracts I have a large drawer full. Before beginning on any subject I look to all the short indexes and make a general and classified index, and by taking the one or more proper portfolios I have all the information collected during my life ready for use.

4. He had possibly the most valuable trait in any sort of thinker: A passionate interest in understanding reality and putting it in useful order in his head. This “Reality Orientation” is hard to measure and certainly does not show up on IQ tests, but probably determines, to some extent, success in life.

On the favourable side of the balance, I think that I am superior to the common run of men in noticing things which easily escape attention, and in observing them carefully. My industry has been nearly as great as it could have been in the observation and collection of facts. What is far more important, my love of natural science has been steady and ardent.

This pure love has, however, been much aided by the ambition to be esteemed by my fellow naturalists. From my early youth I have had the strongest desire to understand or explain whatever I observed,–that is, to group all facts under some general laws. These causes combined have given me the patience to reflect or ponder for any number of years over any unexplained problem. As far as I can judge, I am not apt to follow blindly the lead of other men. I have steadily endeavoured to keep my mind free so as to give up any hypothesis, however much beloved (and I cannot resist forming one on every subject), as soon as facts are shown to be opposed to it.

Indeed, I have had no choice but to act in this manner, for with the exception of the Coral Reefs, I cannot remember a single first-formed hypothesis which had not after a time to be given up or greatly modified. This has naturally led me to distrust greatly deductive reasoning in the mixed sciences. On the other hand, I am not very sceptical—a frame of mind which I believe to be injurious to the progress of science. A good deal of scepticism in a scientific man is advisable to avoid much loss of time, but I have met with not a few men, who, I feel sure, have often thus been deterred from experiment or observations, which would have proved directly or indirectly serviceable.

[…]

Therefore my success as a man of science, whatever this may have amounted to, has been determined, as far as I can judge, by complex and diversified mental qualities and conditions. Of these, the most important have been—the love of science—unbounded patience in long reflecting over any subject—industry in observing and collecting facts—and a fair share of invention as well as of common sense.

5. Most inspirational to us of average intellect, he outperformed his own mental aptitude with these good habits, surprising even himself with the results.

With such moderate abilities as I possess, it is truly surprising that I should have influenced to a considerable extent the belief of scientific men on some important points.

***

Still Interested? Read his autobiography or The Origin of Species, or check out David Quammen's wonderful short biography of the most important period of Darwin's life. Also, if you missed it, check out our prior post on Darwin's Golden Rule.

The Island of Knowledge: Science and the Meaning of Life

“As the Island of Knowledge grows, so do the shores of our ignorance—the boundary between the known and unknown. Learning more about the world doesn't lead to a point closer to a final destination—whose existence is nothing but a hopeful assumption anyways—but to more questions and mysteries. The more we know, the more exposed we are to our ignorance, and the more we know to ask.”

***

Common across human history is our longing to better understand the world we live in, and how it works. But how much can we actually know about the world?

In his book The Island of Knowledge: The Limits of Science and the Search for Meaning, physicist Marcelo Gleiser traces the progress of modern science in its pursuit of the most fundamental questions about existence, the origin of the universe, and the limits of knowledge.

What we know of the world is limited by what we can see and what we can describe, but our tools have evolved over the years to reveal ever more pleats into our fabric of knowledge. Gleiser celebrates this persistent struggle to understand our place in the world and travels our history from ancient knowledge to our current understanding.

While science is not the only way to see and describe the world we live in, it is a response to the questions of who we are, where we are, and how we got here. “Science speaks directly to our humanity, to our quest for light, ever more light.”

To move forward, science needs to fail, which runs counter to our human desire for certainty. “We are surrounded by horizons, by incompleteness.” Rather than give up, we struggle along a scale of progress. What makes us human is this journey to understand more about the mysteries of the world and explain them with reason. This is the core of our nature.

While the pursuit is never ending, the curious journey offers insight not just into the natural world, but insight into ourselves.

“What I see in Nature is a magnificent structure that we can comprehend only very imperfectly, and that must fill a thinking person with a feeling of humility.”
— Albert Einstein

We tend to think that what we see is all there is — that there is nothing we cannot see. We know it isn't true when we stop and think, yet we still get lulled into a trap of omniscience.

Science is thus limited, offering only part of the story — the part we can see and measure. The other part remains beyond our immediate reach.

“What we see of the world,” Gleiser begins, “is only a sliver of what's out there.”

There is much that is invisible to the eye, even when we augment our sensorial perception with telescopes, microscopes, and other tools of exploration. Like our senses, every instrument has a range. Because much of Nature remains hidden from us, our view of the world is based only on the fraction of reality that we can measure and analyze. Science, as our narrative describing what we see and what we conjecture exists in the natural world, is thus necessarily limited, telling only part of the story. … We strive toward knowledge, always more knowledge, but must understand that we are, and will remain, surrounded by mystery. This view is neither antiscientific nor defeatist. … Quite the contrary, it is the flirting with this mystery, the urge to go beyond the boundaries of the known, that feeds our creative impulse, that makes us want to know more.

While we may broadly understand the map of what we call reality, we fail to understand its terrain. Reality, Gleiser argues, “is an ever-shifting mosaic of ideas.”

However…

The incompleteness of knowledge and the limits of our scientific worldview only add to the richness of our search for meaning, as they align science with our human fallibility and aspirations.

What we call reality is a (necessarily) limited synthesis. It is certainly our reality, as it must be, but it is not the entire reality itself:

My perception of the world around me, as cognitive neuroscience teaches us, is synthesized within different regions of my brain. What I call reality results from the integrated sum of countless stimuli collected through my five senses, brought from the outside into my head via my nervous system. Cognition, the awareness of being here now, is a fabrication of a vast set of chemicals flowing through myriad synaptic connections between my neurons. … We have little understanding as to how exactly this neuronal choreography engenders us with a sense of being. We go on with our everyday activities convinced that we can separate ourselves from our surroundings and construct an objective view of reality.

The brain is a great filtering tool, deaf and blind to vast amounts of information around us that offer no evolutionary advantage. Part of it we can see and simply ignore. Other parts, like dust particles and bacteria, go unseen because of limitations of our sensory tools.

As the Fox said to the Little Prince in Antoine de Saint-Exupery's fable, “What is essential is invisible to the eye.” There is no better example than oxygen.

Science has extended our view. Our measurement tools and instruments can detect bacteria and radiation, subatomic particles and more. However precise these tools have become, their view is still limited.

There is no such thing as an exact measurement. Every measurement must be stated within its precision and quoted together with “error bars” estimating the magnitude of errors. High-precision measurements are simply measurements with small error bars or high confidence levels; there are no perfect, zero-error measurements.

[…]

Technology limits how deeply experiments can probe into physical reality. That is to say, machines determine what we can measure and thus what scientists can learn about the Universe and ourselves. Being human inventions, machines depend on our creativity and available resources. When successful, they measure with ever-higher accuracy and on occasion may also reveal the unexpected.

“All models are wrong, some are useful.”
— George Box

What we know about the world is only what we can detect and measure — even if we improve our “detecting and measuring” as time goes along. And thus we base our conclusions about reality on what we can currently “see.”

We see much more than Galileo, but we can't see it all. And this restriction is not limited to measurements: speculative theories and models that extrapolate into unknown realms of physical reality must also rely on current knowledge. When there is no data to guide intuition, scientists impose a “compatibility” criterion: any new theory attempting to extrapolate beyond tested ground should, in the proper limit, reproduce current knowledge.

[…]

If large portions of the world remain unseen or inaccessible to us, we must consider the meaning of the word “reality” with great care. We must consider whether there is such a thing as an “ultimate reality” out there — the final substrate of all there is — and, if so, whether we can ever hope to grasp it in its totality.

[…]

We thus must ask whether grasping reality's most fundamental nature is just a matter of pushing the limits of science or whether we are being quite naive about what science can and can't do.

Here is another way of thinking about this: if someone perceives the world through her senses only (as most people do), and another amplifies her perception through the use of instrumentation, who can legitimately claim to have a truer sense of reality? One “sees” microscopic bacteria, faraway galaxies, and subatomic particles, while the other is completely blind to such entities. Clearly they “see” different things and—if they take what they see literally—will conclude that the world, or at least the nature of physical reality, is very different.

Asking who is right misses the point, although surely the person using tools can see further into the nature of things. Indeed, to see more clearly what makes up the world and, in the process to make more sense of it and ourselves is the main motivation to push the boundaries of knowledge. … What we call “real” is contingent on how deeply we are able to probe reality. Even if there is such thing as the true or ultimate nature of reality, all we have is what we can know of it.

[…]

Our perception of what is real evolves with the instruments we use to probe Nature. Gradually, some of what was unknown becomes known. For this reason, what we call “reality” is always changing. … The version of reality we might call “true” at one time will not remain true at another. … Given that our instruments will always evolve, tomorrow's reality will necessarily include entities not known to exist today. … More to the point, as long as technology advances—and there is no reason to suppose that it will ever stop advancing for as long as we are around—we cannot foresee an end to this quest. The ultimate truth is elusive, a phantom.

Gleiser makes his point with a beautiful metaphor: the Island of Knowledge.

Consider, then, the sum total of our accumulated knowledge as constituting an island, which I call the “Island of Knowledge.” … A vast ocean surrounds the Island of Knowledge, the unexplored ocean of the unknown, hiding countless tantalizing mysteries.

The Island of Knowledge grows as we learn more about the world and ourselves. And as the island grows, so too “do the shores of our ignorance—the boundary between the known and unknown.”

Learning more about the world doesn't lead to a point closer to a final destination—whose existence is nothing but a hopeful assumption anyways—but to more questions and mysteries. The more we know, the more exposed we are to our ignorance, and the more we know to ask.

As we move forward we must remember that despite our quest, the shores of our ignorance grow as the Island of Knowledge grows. And while we will struggle with the fact that not all questions will have answers, we will continue to progress. “It is also good to remember,” Gleiser writes, “that science only covers part of the Island.”

Richard Feynman pointed out that science can only answer the subset of questions that go, roughly, “If I do this, what will happen?” Answers to questions like Why do the rules operate that way? and Should I do it? are not really questions of a scientific nature; they are moral, human questions, if they are knowable at all.

There are many ways of understanding and knowing that should, ideally, feed each other. “We are,” Gleiser concludes, “multidimensional creatures and search for answers in many, complementary ways. Each serves a purpose and we need them all.”

“The quest must go on. The quest is what makes us matter: to search for more answers, knowing that the significant ones will often generate surprising new questions.”

The Island of Knowledge is a wide-ranging tour through scientific history from planetary motions to modern scientific theories and how they affect our ideas on what is knowable.

What’s So Significant About Significance?

How Not to Be Wrong

One of my favorite studies of all time took the 50 most common ingredients from a cookbook and searched the literature for a connection to cancer: 72% had a study linking them to increased or decreased risk of cancer. (Here's the link for the interested.)

Meta-analyses (studies examining multiple studies) quashed the effect pretty seriously, but how many of those single studies were probably reported on in multiple media outlets, permanently causing changes in readers' dietary habits? (We know from studying juries that people are often unable to “forget” things that are subsequently proven false or misleading — misleading data is sticky.)

The phrase “statistically significant” is one of the more unfortunately misleading ones of our time. The word significant in the statistical sense — meaning distinguishable from random chance — does not carry the same meaning in common parlance, in which we mean distinguishable from something that does not matter. We'll get to what that means.

Confusing the two gets at the heart of a lot of misleading headlines and it's worth a brief look into why they don't mean the same thing, so you can stop being scared that everything you eat or do is giving you cancer.

***

The term statistical significance is used to denote when an effect is found to be extremely unlikely to have occurred by chance. In order to make that determination, we have to propose a null hypothesis to be rejected. Let's say we propose that eating an apple a day reduces the incidence of colon cancer. The “null hypothesis” here would be that eating an apple a day does nothing to the incidence of colon cancer — that we'd be equally likely to get colon cancer if we ate that daily apple.

When we analyze the data of our study, we're technically not looking to say “Eating an apple a day prevents colon cancer” — that's a bit of a misconception. What we're actually doing is an inversion: we want the data to provide us with sufficient weight to reject the idea that apples have no effect on colon cancer.

And even when that happens, it's not an all-or-nothing determination. What we're actually saying is “It would be extremely unlikely for the data we have, which shows a daily apple reduces colon cancer by 50%, to have popped up by chance. Not impossible, but very unlikely.” The world does not quite allow us to have absolute conviction.

How unlikely? The currently accepted standard in many fields is 5% — there is a less than 5% chance the data would come up this way randomly. That threshold immediately tells you that roughly 1 in 20 tests of a true null hypothesis will come up “significant” purely by chance, but alas, that is where we're at. (The problem with the 5% p-value, and the associated problem of p-hacking, has been the subject of some intense debate, but we won't deal with that here.)
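To make the mechanics concrete, here is a minimal sketch (my own, with invented counts, assuming SciPy is available) of the kind of test that sits behind such a claim: compare cancer rates in an “apple a day” group and a control group, and ask how surprising the difference would be if the null hypothesis were true.

```python
# Hypothetical illustration (numbers invented, not from the article): test whether
# the cancer rate differs between an "apple a day" group and a control group,
# i.e. try to reject the null hypothesis that apples make no difference.
from scipy.stats import chi2_contingency

#                  cancer   no cancer
apple_group   = [     25,       9975]   # hypothetical counts
control_group = [     50,       9950]

chi2, p_value, dof, expected = chi2_contingency([apple_group, control_group])
print(f"p-value = {p_value:.4f}")

# If p_value < 0.05 we call the result "statistically significant": data this
# lopsided would be unlikely (under the 5% convention) if apples did nothing.
# It does NOT tell us the effect is large enough to matter in real life.
```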

We'll get to why “significance can be insignificant,” and why that's so important, in a moment. But let's make sure we're fully on board with the importance of sorting chance events from real ones with another illustration, this one outlined by Jordan Ellenberg in his wonderful book How Not to Be Wrong. Pay close attention:

Suppose we're in null hypothesis land, where the chance of death is exactly the same (say, 10%) for the fifty patients who got your drug and the fifty who got [a] placebo. But that doesn't mean that five of the drug patients die and five of the placebo patients die. In fact, the chance that exactly five of the drug patients die is about 18.5%; not very likely, just as it's not very likely that a long series of coin tosses would yield precisely as many heads as tails. In the same way, it's not very likely that exactly the same number of drug patients and placebo patients expire during the course of the trial. I computed:

13.3% chance equally many drug and placebo patients die
43.3% chance fewer placebo patients than drug patients die
43.3% chance fewer drug patients than placebo patients die

Seeing better results among the drug patients than the placebo patients says very little, since this isn't at all unlikely, even under the null hypothesis that your drug doesn't work.

But things are different if the drug patients do a lot better. Suppose five of the placebo patients die during the trial, but none of the drug patients do. If the null hypothesis is right, both classes of patients should have a 90% chance of survival. But in that case, it's highly unlikely that all fifty of the drug patients would survive. The first of the drug patients has a 90% chance; now the chance that not only the first but also the second patient survives is 90% of that 90%, or 81%–and if you want the third patient to survive as well, the chance of that happening is only 90% of that 81%, or 72.9%. Each new patient whose survival you stipulate shaves a little off the chances, and by the end of the process, where you're asking about the probability that all fifty will survive, the slice of probability that remains is pretty slim:

(0.9) x (0.9) x (0.9) x … fifty times! … x (0.9) x (0.9) = 0.00515 …

Under the null hypothesis, there's only one chance in two hundred of getting results this good. That's much more compelling. If I claim I can make the sun come up with my mind, and it does, you shouldn't be impressed by my powers; but if I claim I can make the sun not come up, and it doesn't, then I've demonstrated an outcome very unlikely under the null hypothesis, and you'd best take notice.
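Ellenberg's numbers are easy to verify. Here is a short sketch (mine, not from the book, assuming SciPy is available) that reproduces them from the binomial distribution:

```python
# Reproducing Ellenberg's arithmetic: 50 patients per arm, each with a
# 10% chance of dying under the null hypothesis.
from scipy.stats import binom

n, p = 50, 0.10
deaths = binom(n, p)  # distribution of deaths in one arm

# Chance that exactly five patients in an arm die: about 18.5%
print(f"P(exactly 5 deaths)          = {deaths.pmf(5):.3f}")

# Chance both arms see the same number of deaths: about 13.3%
p_tie = sum(deaths.pmf(k) ** 2 for k in range(n + 1))
print(f"P(equal deaths in both arms) = {p_tie:.3f}")
print(f"P(one arm beats the other)   = {(1 - p_tie) / 2:.3f}")  # ~43.3% each way

# Chance all fifty drug patients survive: 0.9^50, roughly 1 in 200
print(f"P(all 50 survive)            = {0.9 ** 50:.5f}")
```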

So you see, all this null hypothesis stuff is pretty important because what you want to know is if an effect is really “showing up” or if it just popped up by chance.

A final illustration should make it clear:

Imagine you were flipping coins with a particular strategy of getting more heads, and after 30 flips you had 18 heads and 12 tails. Would you call it a miracle? Probably not — you'd realize immediately that it's perfectly possible for an 18/12 ratio to happen by chance. You wouldn't write an article in U.S. News and World Report proclaiming you'd figured out coin flipping.

Now let's say instead you flipped the coin 30,000 times and got 18,000 heads and 12,000 tails… well, then your case for statistical significance would be pretty tight. It would be nearly impossible to get that result by chance — your strategy must have something to it. The null hypothesis of “My coin flipping technique is no better than the usual one” would be easy to reject! (The p-value here would be orders of magnitude less than 5%, by the way.)
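Here's a quick check of that intuition (a sketch assuming SciPy is available), using a one-sided binomial test against a fair coin:

```python
# How surprising is "this many heads or more" if the coin is actually fair?
from scipy.stats import binomtest

small = binomtest(18, n=30, p=0.5, alternative='greater')
large = binomtest(18_000, n=30_000, p=0.5, alternative='greater')

print(f"18 heads in 30 flips:         p = {small.pvalue:.3f}")   # ~0.18: easily chance
print(f"18,000 heads in 30,000 flips: p = {large.pvalue:.3e}")   # vanishingly small
```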

That's what this whole business is about.

***

Now that we've got this idea down, we come to the big question that statistical significance cannot answer: Even if the result is distinguishable from chance, does it actually matter?

Statistical significance cannot tell you whether the result is worth paying attention to — even if you get the p-value down to a minuscule number, increasing your confidence that what you saw was not due to chance. 

In How Not to Be Wrong, Ellenberg provides a perfect example:

A 1995 study published in a British journal indicated that a new birth control pill doubled the risk of venous thrombosis (a potentially deadly blood clot) in its users. Predictably, 1.5 million British women freaked out, and some meaningfully large percentage of them stopped taking the pill. In 1996, 26,000 more babies were born than the previous year and there were 13,600 more abortions. Whoops!

So what, right? Lots of mothers' lives were saved, right?

Not really. The initial probability of a woman getting a venous thrombosis on any old birth control pill was about 1 in 7,000, or roughly 0.014%. That means that the “Killer Pill,” even if it was indeed doubling the risk of thrombosis, only increased that risk to about 2 in 7,000, or roughly 0.03%! Is that worth rearranging your life for? Probably not.

Ellenberg makes the excellent point that, at least in the case of health, the null hypothesis is unlikely to be right in most cases! The body is a complex system — of course what we put in it affects how it functions in some direction or another. The effect is unlikely to be exactly zero.

But numerical and scale-based thinking, indispensable for anyone looking to not be a sucker, tells us that we must distinguish between small and meaningless effects (like the connection between almost all individual foods and cancer so far) and real ones (like the connection between smoking and lung cancer).

And now we arrive at the problem of “significance” — even if an effect is really happening, it still may not matter!  We must learn to be wary of “relative” statistics (i.e., “the risk has doubled”), and look to favor “absolute” statistics, which tell us whether the thing is worth worrying about at all.
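The arithmetic behind “the risk has doubled” is worth doing in absolute terms. A tiny sketch, using the rough numbers from the pill scare above:

```python
# Relative vs. absolute risk, using the approximate numbers from the pill example.
baseline_risk = 1 / 7000      # ~0.014%: chance of venous thrombosis on the old pill
relative_risk = 2.0           # the headline claim: "the risk has doubled"

new_risk = baseline_risk * relative_risk
absolute_increase = new_risk - baseline_risk

print(f"baseline risk:     {baseline_risk:.4%}")
print(f"doubled risk:      {new_risk:.4%}")
print(f"absolute increase: {absolute_increase:.4%}  (about 1 extra case per 7,000 women)")
```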

So we have two important ideas:

A. Just like coin flips, many results are perfectly possible by chance. We use the concept of “statistical significance” to figure out how likely it is that the effect we're seeing is real and not just a random illusion, like seeing 18 heads in 30 coin tosses.

B. Even if it is really happening, it still may be unimportant – an effect so insignificant in real terms that it's not worth our attention.

These effects should combine to raise our level of skepticism when hearing about groundbreaking new studies! (A third and equally important problem is the fact that correlation is not causation, a common problem in many fields of science including nutritional epidemiology. Just because x is associated with y does not mean that x is causing y.)

Tread carefully and keep your thinking cap on.

***

Still Interested? Read Ellenberg's great book to get your head working correctly, and check out our posts on Bayesian thinking, another very useful statistical tool, and learn a little about how we distinguish science from pseudoscience.