Category: Learning

Zero — Invented or Discovered?

It seems almost a bizarre question. Who thinks about whether zero was invented or discovered? And why is it important?

Answering this question, however, can tell you a lot about yourself and how you see the world.

Let’s break it down.

“Invented” implies that humans created the zero and that, without us, the zero and its properties would not exist.

“Discovered” means that although the symbol is a human creation, what it represents would exist independently of any human ability to label it.

So do you think of the zero as a purely mathematical function, and by extension think of all math as a human construct like, say, cheese or self-driving cars? Or is math, and the zero, a symbolic language that describes the world, the content of which exists completely independently of our descriptions?

The zero is now a ubiquitous component of our understanding.

The concept is so basic it is routinely mastered by the pre-kindergarten set. Consider the equation 3-3=0. Nothing complicated about that. It is second nature to us that we can represent “nothing” with a symbol. It makes perfect sense now, in 2017, and it's so common that we forget that zero was a relatively late addition to the number scale.

Here's a fact that's amazing to most people: the zero is actually younger than mathematics. Pythagoras’s famous conclusion — that in a right-angled triangle, the square of the hypotenuse is equal to the sum of the squares of the other two sides — was achieved without a zero. As was Euclid’s entire Elements.

How could this be? It seems surreal, given the importance the zero now has to mathematics, computing, language, and life. How could someone figure out the complex geometry of triangles, yet not realize that nothing was also a number?

Tobias Dantzig, in Number: The Language of Science, offers this as a possible explanation: “The concrete mind of the ancient Greeks could not conceive the void as a number, let alone endow the void with a symbol.” This gives us a good direction for finding the answer to the original question because it hints that you must first understand the concept of the void before you can name it. You need to see that nothingness still takes up space.

It was thought, and sometimes still is, that the number zero was invented in the pursuit of ancient commerce. Something was needed as a placeholder; otherwise, 65 would be indistinguishable from 605 or 6050. The zero represents “no units” of the particular place that it holds. So for that last number, we have six thousands, no hundreds, five tens, and no singles.
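The placeholder role can be sketched in a few lines of Python (an illustration of positional notation, not anything from the source): the zero's only job here is to keep each remaining digit in its correct place.

```python
def digits_to_number(digits, base=10):
    """Interpret a list of digits (most significant first) in positional notation."""
    value = 0
    for d in digits:
        value = value * base + d  # each step shifts the existing digits one place left
    return value

# Without a placeholder for "no units", 65, 605, and 6050 would all be
# written with the same two marks:
print(digits_to_number([6, 5]))        # 65
print(digits_to_number([6, 0, 5]))     # 605
print(digits_to_number([6, 0, 5, 0]))  # 6050
```

Drop the zeros from the digit lists and all three calls collapse to the same value, which is exactly the ambiguity the placeholder was introduced to remove.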

A happy accident rather than a great original insight, zero then made its way around the world. In addition to being convenient for keeping track of how many bags of grain you were owed, or how many soldiers were in your army, it turned our number scale into an extremely efficient decimal system. More so than any numbering system that preceded it (and there were many), the zero transformed the power of our other numerals, propelling mathematics into fantastic equations that can explain our world and fuel incredible scientific and technological advances.

But there is, if you look closely, a missing link in this story.

What changed in humanity that made us comfortable with confronting the void and giving it a symbol? And is it reasonable to imagine creating the number without understanding what it represented? Given its properties, can we really think that it started as a placeholder? Or did it contain within it, right from the beginning, the notion of defining the void, of giving it space?

In Finding Zero, Amir Aczel offers some insight. Basically, he claims that the people who discovered the zero must have had an appreciation of the emptiness that it represented. They were labeling a concept with which they were already familiar.

He rediscovered the oldest known zero, on a stone tablet dating from 683 CE in what is now Cambodia.

On his quest to find this zero, Aczel realized that it was far more natural for the zero to first appear in the Far East, rather than in Western or Arab cultures, due to the philosophical and religious understandings prevalent in the region.

Western society was, and still is in many ways, a binary culture. Good and evil. Mind and body. You’re either with us or against us. A patriot or a terrorist. Many of us naturally try to fit our world into these binary understandings. If something is “A,” then it cannot be “not A.” The very definition of “A” is that it is not “not A.” Something cannot be both.

Aczel writes that this duality is not at all reflected in much Eastern thought. He describes the catuskoti, found in early Buddhist logic, that presents four possibilities, instead of two, for any state: that something is, is not, is both, or is neither.

At first, a typical Western mind might rebel against this kind of logic. My father is either bald or not bald. He cannot be both and he cannot be neither, so what is the use of these two other almost nonsensical options?

A closer examination of our language, though, reveals that the expression of the non-binary is understood, and therefore perhaps more relevant than we think. Take, for example, “you’re either with us or against us.” Is it possible to say “I’m both with you and against you”? Yes. It could mean that you are for the principles but against the tactics. Or that you support the person even though doing so runs contrary to your values. And to say “I’m neither with you nor against you” could mean that you aren’t supportive of the tactic in question, but won’t do anything to stop it. Or that you just don’t care.
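The contrast with binary logic can be made concrete in a short sketch (a hypothetical model, not anything from Aczel): where a Boolean admits two states, the catuskoti admits four, because “is” and “is not” are treated as independent judgments rather than strict negations of each other.

```python
from enum import Enum

class Catuskoti(Enum):
    IS = "is"
    IS_NOT = "is not"
    BOTH = "is both"
    NEITHER = "is neither"

def classify(with_us, against_us):
    """Map two independent judgments onto the catuskoti's four corners."""
    if with_us and against_us:
        return Catuskoti.BOTH
    if with_us:
        return Catuskoti.IS
    if against_us:
        return Catuskoti.IS_NOT
    return Catuskoti.NEITHER

# "For the principles but against the tactics": both at once.
print(classify(True, True))    # Catuskoti.BOTH
# "Neither with you nor against you": indifference.
print(classify(False, False))  # Catuskoti.NEITHER
```

A two-valued model would have to force both of those cases into “with” or “against”, losing exactly the shades of meaning the essay describes.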

Feelings, in particular, are a realm where the binary is often insufficient. Watching my children, I know that it's possible to be both happy and sad, a traditional binary, at the same time. And the zero itself defies binary categorization. It is something and nothing simultaneously.

Aczel reflects on a conversation he had with a Buddhist monk. “Everything is not everything — there is always something that lies outside of what you may think covers all creation. It could be a thought, or a kind of void, or a divine aspect. Nothing contains everything inside it.”

He goes on to conclude that “Here was the intellectual source of the number zero. It came from Buddhist meditation. Only this deep introspection could equate absolute nothingness with a number that had not existed until the emergence of this idea.”

Which is to say, certain properties of the zero likely were understood conceptually before the symbol came about — nothingness was a thing that could be represented. This idea fits with how we treat the zero today; it may represent nothing, but that nothing still has properties. And investigating those properties demonstrates that there is power in the void — it has something to teach us about how our universe operates.

Further contemplation might illuminate that the zero has something to teach us about existence as well. If we accept zero, the symbol, as being discovered as part of our realization about the existence of nothingness, then trying to understand the zero can teach us a lot about moving beyond the binary of alive/not alive to explore other ways of conceptualizing what it means to be.

Let Go of the Learning Baggage

We all want to learn better. That means retaining information, processing it, and being able to use it when needed. More knowledge means better instincts and better insights into opportunities, for both you and your organization. You will ultimately produce better work if you give yourself the space to learn. Yet organizations often get in the way of learning.

How do we learn how to learn? Usually in school, combined with instructions from our parents, we cobble together an understanding that allows us to move forward through the school years until we graduate into a job. Then, because most initial learning on the job comes from doing rather than from books, we switch to an on-the-fly approach.

Which is usually an absolute failure. Why? In part, because we layer our social values on top and end up with a hot mess of guilt and fear that stymies the learning process.

Learning is necessary for our success and personal growth. But we can’t maximize the time we spend learning because our feelings about what we ‘should’ be doing get in the way.

We are trained by our modern world to organize our day into mutually exclusive chunks called ‘work’, ‘play’, and ‘sleep’. One is done at the office, the other two are not. We are not allowed to move fluidly between these chunks, or combine them in our 24-hour day. Lyndon Johnson got to nap at the office in the afternoon, likely because he was President and didn’t have to worry about what his boss was going to think. Most of us don’t have this option. And now, in the open-office debacle, we can’t even get a quiet 10 minutes of rest at our desks.

We have become trained to equate working with doing. Thus the ‘doing’ has value. We deserve to get paid for this. And, it seems, only this.

What does this have to do with learning?

It’s this same attitude that we apply to the learning process when we are older, with similarly unsatisfying results.

If we are learning for work, then in our brains learning = work. So we have to do it during the day. At the office. And if we are not learning, then we are not working. We think that walking is not learning. It’s ‘taking a break’. We instinctively believe that reading is learning. Having discussions about what you’ve read, however, is often not considered work; again, it’s ‘taking a break’.

To many, working means sitting at your desk for eight hours a day. Being physically present is what counts; mental engagement is optional. It means pushing out emails and rushing to meetings and generally getting nothing done. We’ve looked at the focus aspect of this before. But what about the learning aspect?

Can we change how we approach learning, letting go of the guilt associated with not being visibly active, and embrace what seems counter-intuitive?

Thinking and talking are useful elements of learning. And what we learn in our ‘play’ time can be valuable to our ‘work’ time, and there’s nothing wrong with moving between the two (or combining them) during our day.

When mastering a subject, our brains actually use different types of processing. Barbara Oakley explains in A Mind for Numbers: How to Excel at Math and Science (even if you flunked algebra) that our brain has two general modes of thinking – ‘focused’ and ‘diffuse’ – and both of these are valuable and required in the learning process.

The focused mode is what we traditionally associate with learning. Read, dive deep, absorb. Eliminate distractions and get into the material. Oakley says “the focused mode involves a direct approach to solving problems using rational, sequential, analytical approaches. … Turn your attention to something and bam – the focused mode is on, like the tight, penetrating beam of a flashlight.”

But the focused mode is not the only one required for learning because we need time to process what we pick up, to get this new information integrated into our existing knowledge. We need time to make new connections. This is where the diffuse mode comes in.

Diffuse-mode thinking is what happens when you relax your attention and just let your mind wander. This relaxation can allow different areas of the brain to hook up and return valuable insights. … Diffuse-mode insights often flow from preliminary thinking that’s been done in the focused mode.

Relying solely on the focused mode to learn is a path to burnout. We need the diffuse mode to cement our ideas, put knowledge into memory, and free up space for the next round of focused thinking. We need the diffuse mode to build wisdom. So why does diffuse-mode thinking at work generally involve feelings of guilt?

Oakley’s recommendations for ‘diffuse-mode activators’ are: go to the gym, walk, play a sport, go for a drive, draw, take a bath, listen to music (especially without words), meditate, sleep. Um, aren’t these all things to do in my ‘play’ time? And sleep? It’s a whole time chunk on its own.

Most organizations do not promote a culture that allows these activities to be integrated into the work day. Go to the gym on your lunch. Sleep at home. Meditate on a break. Essentially, do these things while we are not paying you.

We internalize this way of thinking, associating the value of getting paid with the value of executing our task list. If something doesn’t directly contribute, it’s not valuable. If it’s not valuable, I need to do it in my non-work time or not at all. This is learned behavior from our organizational culture, and it essentially communicates that our leaders would rather see us do less than trust in the potential payoff of pursuits that are less visible or slower to pay off. The ability to see something is often a large component of trust. So if we do any of these ‘play’ activities at work, where their contribution to the learning process is invisible, we feel guilty because we don’t believe we are doing what we get paid to do.

If you aren’t the CEO or the VP of HR, you can’t magic a policy that says ‘all employees shall do something meaningful away from their desks each day and won’t be judged for it’, so what can you do to learn better at work? Find a way to let go of the guilt baggage when you invest in proven, effective learning techniques that are out of sync with your corporate culture.

How do you let go of the guilt? How do you not feel it every time you stand up to go for a walk, close your email and put on some headphones, or have a coffee with a colleague to discuss an idea you have? Because sometimes knowing you are doing the right thing doesn’t translate into feeling it, and that’s where guilt comes in.

Guilt is insidious. Not only do we usually feel guilt, but then we feel guilty about feeling guilty. Like, I go to visit my grandmother in her old age home mostly because I feel guilty about not going, and then I feel guilty because I’m primarily motivated by guilt! Like if I were a better person I would be doing it out of love, but I’m not, so that makes me terrible.

Breaking this cycle is hard. Like anything new, it’s going to feel unnatural for a while but it can be done.

How? Be kind to yourself.

This may sound a bit touchy-feely, but it is really just a cognitive-behavioral approach with a bit of mindfulness thrown in. Dennis Tirch has done a lot of research into the benefits of self-compassion in reducing worry, panic, and fear. And what is guilt but worry that you aren’t doing the right thing, fear that you’re not a good person, and panic about what to do about it?

In his book, The Compassionate-Mind Guide to Overcoming Anxiety, Tirch writes:

the compassion focused model is based on research showing that some of the ways in which we instinctively regulate our response to threats have evolved from the attachment system that operates between infant and mother and from other basic relationships between mutually supportive people. We have specific systems in our brains that are sensitive to the kindness of others, and the experience of this kindness has a major impact on the way we process these threats and the way we process anxiety in particular.

The Dalai Lama defines compassion as “a sensitivity to the suffering of others, with a commitment to do something about it,” and Tirch explains that we are also greatly affected by the compassion we direct toward ourselves.

In order to manage and overcome emotions like guilt that can prevent us from learning and achieving, we need to treat ourselves the same way we would the person we love most in the world. “We can direct our attention to inner images that evoke feelings of kindness, understanding, and support,” writes Tirch.

So the next time you look up from that proposal on the new infrastructure schematics and see that the sun is shining, go for a walk, notice where you are, and give your mind a chance to go into diffuse-mode and process what you’ve been focusing on all morning. And give yourself a hug for doing it.

Language: Why We Hear More Than Words

It’s a classic complaint in relationships, especially romantic ones: “She said she was okay with me forgetting her birthday! Then why is she throwing dishes in the kitchen? Are the two things related? I wish I had a translator for my spouse. What is going on?”

The answer: Extreme was right; communication is more than words. It’s how those words are said, the tone, the order, even the choice of a particular word. It’s multidimensional.

In their book, Meaning and Relevance, Deirdre Wilson and Dan Sperber explore the aspects of communication that are beyond the definitions of the words that we speak but are still encoded in the words themselves.

Consider the following example:

Peter got angry and Mary left.

Mary left and Peter got angry.

We can instantly see that these two sentences, despite having exactly the same words, do not mean the same thing. The first one has us thinking, wow, Peter must get angry often if Mary leaves to avoid his behavior. Maybe she’s been the recipient of one too many tantrums and knows that there’s nothing she can do to defuse his mood. The second sentence suggests that Peter wants more from Mary. He might have a crush on her! Same words – totally different context.

Human language is not a code. True codes have a one-to-one relationship with meaning. One sound, one definition. This is what we see with animals.

Wilson and Sperber explain that “coded communication works best when emitter and receiver share exactly the same code. Any difference between the emitter’s and receiver’s codes is a possible source of error in the communication process.” For animals, any evolutionary mutations that affected the innate code would be counter-adaptive. A song-bird one note off key is going to have trouble finding a mate.

Not so for humans. We communicate more than the definitions of our words would suggest. (Steven Pinker argues that language itself is an instinct, wired into us at the level of DNA.) And we decode more than the words spoken to us. This is inferential communication, and it means that we understand not only the words spoken, but the context in which they are spoken. Contrary to the languages of other animals, which are decidedly less ambiguous, human language requires a lot of subjective interpretation.

This is probably why we can land in a country where we don’t speak the language and can’t read the alphabet, yet get the gist of what the hotel receptionist is telling us. We can find our room, and know where the breakfast is served in the morning. We may not understand her words, but we can comprehend her tone and make inferences based on the context.

Wilson and Sperber argue that mutations in our inferential abilities do not negatively impact communication and potentially even enhance it. Essentially, because human language is not simply a one-to-one code (more can be communicated than the literal meanings of our words), we can easily adapt to changes in communication and interpretation that may evolve in our communities.

For one thing, we can laugh at more than physical humor. Words can send us into stitches. Depending on how they are conveyed, the tone, the timing, the expressions that come along with them, we can find otherwise totally innocuous words hysterical.

Remember Abbott and Costello?

“Who’s on first.”
“No, What’s on second.”

Consider Irony

Irony is a great example of how powerfully we can communicate context with a few simple words.

I choose my words as indicators of a more complex thought that may include emotions, opinions, biases, and these words will help you infer this entire package. And one of my goals as the communicator is to make it as easy as possible for you to get the meaning I’m intending to convey.

Irony is more than just stating the opposite. There must be an expectation of that opposite in at least some of the population. And choosing irony is more of a commentary on that group. Wilson and Sperber argue that “what irony essentially communicates is neither the proposition literally expressed nor the opposite of that proposition, but an attitude to this proposition and to those who might hold or have held it.”

For example:

When Mary says, after a boring party, ‘That was fun’, she is neither asserting literally that the party was fun nor asserting ‘ironically’ that the party was boring. Rather, she is expressing an attitude of scorn towards (say) the general expectation among the guests that the party would be fun.

This is a pretty complex linguistic structure. It allows us to communicate our feelings on cultural norms fairly succinctly. Mary says ‘That was fun’. Three little words. And I understand that she hated the party, couldn’t wait to get out of there, feels distant from the other party-goers and is rejecting that whole social scene. Very powerful!

Irony works because it is efficient. Communicating the same information without irony would take many more sentences. And my desire as a communicator is always to express myself to my listener in the most efficient way possible.

Wilson and Sperber conclude that human communication developed and became so powerful because of two unique cognitive abilities: language and the power to attribute mental states to others. We look for context for the words we hear. And we are very proficient at absorbing this context to infer meaning.

The lesson? If you want to understand reality, don't be pedantic.

Friedrich Nietzsche on Making Something Worthwhile of Ourselves

Friedrich Nietzsche (1844-1900) explored many subjects; perhaps the most important was himself.

A member of our learning community directed me to the passage below, written by Richard Schacht in the introduction to Nietzsche: Human, All Too Human: A Book for Free Spirits.

If we are to make something worthwhile of ourselves, we have to take a good hard look at ourselves. And this, for Nietzsche, means many things. It means looking at ourselves in the light of everything we can learn about the world and ourselves from the natural sciences — most emphatically including evolutionary biology, physiology and even medical science. It also means looking at ourselves in the light of everything we can learn about human life from history, from the social sciences, from the study of arts, religions, literatures, mores and other features of various cultures. It further means attending to human conduct on different levels of human interaction, to the relation between what people say and seem to think about themselves and what they do, to their reactions in different sorts of situations, and to everything about them that affords clues to what makes them tick. All of this, and more, is what Nietzsche is up to in Human, All Too Human. He is at once developing and employing the various perspectival techniques that seem to him to be relevant to the understanding of what we have come to be and what we have it in us to become. This involves gathering materials for a reinterpretation and reassessment of human life, making tentative efforts along those lines and then trying them out on other human phenomena both to put them to the test and to see what further light can be shed by doing so.

Nietzsche realized that mental models were the key not only to understanding the world but to understanding ourselves. Understanding how the world works is the key to making more effective decisions and gaining insights. However, it’s through the journey of discovering these ideas that we learn about ourselves. Most of us want to skip the work, so we skim the surface of not only knowledge but ourselves.

Memory and the Printing Press

You probably know that Gutenberg invented the printing press. You probably know it was pretty important. You may have heard some stuff about everyone being able to finally read the Bible without a priest handy. But here's a point you might not be familiar with: The printing press changed why, and consequently what, we remember.

Before the printing press, memory was the main store of human knowledge. Scholars had to go find books, often traveling from one scriptorium to another. They couldn’t buy books. Individuals did not have libraries. The ability to remember was integral to the social accumulation of knowledge.

Thus, for centuries humans had built ways to remember out of pure necessity. Because knowledge wasn’t fixed, remembering content was the only way to access it. Things had to be known in a deep, accessible way as Elizabeth Eisenstein argues in The Printing Press as an Agent of Change:

As learning by reading took on new importance, the role played by mnemonic aids was diminished. Rhyme and cadence were no longer required to preserve certain formulas and recipes. The nature of the collective memory was transformed.

In the Church, for example, Eisenstein describes a multimedia approach to remembering the Bible. As a manuscript, it was not widely available, not even to many church representatives; the stories of the Bible were often represented pictorially in the churches themselves. The use of images, both physical and mental, was critical to storing knowledge in memory: they were a tool for building extensive “memory palaces” that enabled the retention of knowledge.

Not only did printing eliminate many functions previously performed by stone figures over portals and stained glass in windows, but it also affected less tangible images by eliminating the need for placing figures and objects in imaginary niches located in memory theatres.

Thus, in an age before the printing press, bits of knowledge were associated with other bits of knowledge not because they complemented each other, or allowed for insights, but merely so they could be retained.

…the heavy reliance on memory training and speech arts, combined with the absence of uniform conventions for dating and placing [meant that] classical images were more likely to be placed in niches in ‘memory theatres’ than to be assigned a permanent location in a fixed past.

In our post on memory palaces, we used the analogy of a cow and a steak. To continue with the analogy used there, imagine that your partner asks you to pick up steak for dinner. To increase your chances of remembering the request, you envision a cow sitting on the front porch. When you mind-walk through your palace, you see this giant cow sitting there, perhaps waving at you (so unlike a cow!), causing you to think, ‘Why is that cow there? Oh yeah, pick up steak for dinner’.

Before the printing press, it wasn’t just about picking up dinner. It was all of our knowledge. Euclid's Elements and Aristotle's Politics. The works of St. Augustine and Seneca. These works were most often shared orally, passing from memory to memory. In the age of scribes, then, memory was not so much about remembering as it was about preserving.

Consequently, knowledge was far less widely shared, and then only with those who could understand and recall it.

To be preserved intact, techniques had to be entrusted to a select group of initiates who were instructed not only in special skills but also in the ‘mysteries’ associated with them. Special symbols, rituals, and incantations performed the necessary function of organizing data, laying out schedules, and preserving techniques in easily memorized forms.

Anyone who's played the game “Telephone” knows the problem: As knowledge is passed on, over and over, it gets transformed, sometimes distorted. This needed to be guarded against, and sometimes couldn't be. As there was no accessible reference library for knowledge, older texts were prized because they were closer to the originals.

Not only could more be learned from retrieving an early manuscript than from procuring a recent copy but the finding of lost texts was the chief means of achieving a breakthrough in almost any field.

Almost incomprehensible today, “Energies were expended on the retrieval of ancient texts because they held the promise of finding so much that still seemed new and untried.” Only by finding older texts could scholars hope to discover the original, unaltered sources of knowledge.

With the advent of the printing press, images and words became something else. Because they were now repeatable, they became fixed. No longer individual interpretations designed for memory access, they became part of the collective.

The effects of this were significant.

Difficulties engendered by diverse Greek and Arabic expressions, by medieval Latin abbreviations, by confusion between Roman letters and numbers, by neologisms, copyists’ errors and the like were so successfully overcome that modern scholars are frequently absent-minded about the limitations on progress in the mathematical sciences which scribal procedures imposed. … By the seventeenth century, Nature’s language was being emancipated from the old confusion of tongues. Diverse names for flora and fauna became less confusing when placed beneath identical pictures. Constellations and landmasses could be located without recourse to uncertain etymologies, once placed on uniform maps and globes. … The development of neutral pictorial and mathematical vocabularies made possible a large-scale pooling of talents for analyzing data, and led to the eventual achievement of a consensus that cut across all the old frontiers.

A key component of this was that apprentices and new scholars could consult books and didn’t have to exclusively rely on the memories of their superiors.

An updated technical literature enabled young men in certain fields of study to circumvent master-disciple relationships and to surpass their elders at the same time. Isaac Newton was still in his twenties when he mastered available mathematical treatises, beginning with Euclid and ending with an updated edition of Descartes. In climbing ‘on the shoulders of giants’ he was not re-enacting the experience of twelfth-century scholars for whom the retrieval of Euclid’s theorems had been a major feat.

Before the printing press, a scholar could spend his lifetime looking for a copy of Euclid’s Elements and never find one, thus having to rely on how the text was encoded in the memories of the scholars he encountered.

After the printing press, memory became less critical to knowledge. Knowledge became more widely dispersed as reliance on memory for interpretation and understanding diminished. And with that, the collective power of the human mind was multiplied.

If you liked this post, check out our series on memory, starting with the advantages of our faulty memory, and continuing to the first part on our memory's frequent errors.

Mozart’s Brain and the Fighter Pilot

Most of us want to be smarter but have no idea how to go about improving our mental apparatus. We intuitively think that if we raised our IQ a few points, we'd be better off intellectually. This isn't necessarily the case. I know a lot of people with high IQs who make terribly stupid mistakes. The way around this is to improve not our IQ, but our overall cognition.

Cognition, argues Richard Restak, “refers to the ability of our brain to attend, identify, and act.” You can think of this as a melange of our moods, thoughts, decisions, inclinations and actions.

Included among the components of cognition are alertness, concentration, perceptual speed, learning, memory, problem solving, creativity, and mental endurance.

All of these components have two things in common. First, our efficacy at them depends on how well the brain is functioning relative to its capabilities. Second, this efficacy function can be improved with the right discipline and the right habits.

Restak convincingly argues that we can make our brains work better by “enhancing the components of cognition.” How we go about improving our brain performance, and thus cognition, is the subject of his book Mozart’s Brain and the Fighter Pilot.

Improving Our Cognitive Power

To improve the brain we need to exercise our cognitive powers. Most of us believe that physical exercise helps us feel better and live healthier; yet how many of us exercise our brain? As with our muscles and our bones, “the brain improves the more we challenge it.”

This is possible because the brain retains a high degree of plasticity; it changes in response to experience. If the experiences are rich and varied, the brain will develop a greater number of nerve cell connections. If the experiences are dull and infrequent, the connections will either never form or die off.

If we’re in stimulating and challenging environments, we increase the number of nerve cell connections. Our brain literally gets heavier, as the number of synapses (connections between neurons) increases. The key that many people miss here is “rich and varied.”

Memory is the most important cognitive function. Imagine if you lost your memory permanently: Would you still be you?

“We are,” Restak writes, “what we remember.” And poor memories are not limited to those who suffer from Alzheimer's disease. While some of us are genetically endowed with superlative memories, the rest of us need not fear.

In a short book on memory, Aristotle suggested that our mind was a wax tablet, arguing that the passage of time fades the image unless we take steps to preserve it. He was right in ways he never knew: memory researchers now know that, like a wax tablet, our memory changes every time we access it, due to the plasticity Restak refers to. It can also be molded and improved, at least to a degree.

Long ago, the Greeks hit upon the same idea — mostly starting with Plato — that we don’t have to accept our natural memory. We can take steps to improve it.

Learning and Knowledge Acquisition

When we learn something new, we expand the complexity of our brain. We literally increase our brainpower.

[I]ncrease your memory and you increase your basic intelligence. … An increased memory leads to easier, quicker accessing of information, as well as greater opportunities for linkages and associations. And, basically, you are what you can remember.

Too many of us can't remember these days, because we've outsourced our brains. One of the most common complaints neurologists hear from patients over forty is poor memory. Luckily, most of these people do not suffer from anything neurological, but rather from the cumulative effect of disuse — a graceful degradation of their memory.

Those who are not depressed (the commonest cause of subjective complaints of memory impairment) are simply experiencing the cumulative effect of decades of memory disuse. Part of this disuse is cultural. Most businesses and occupations seldom demand that their employees recite facts and figures purely from memory. In addition, in some quarters memory is even held in contempt. ‘He’s just parroting a lot of information he doesn’t really understand’ is a common put-down when people are enviously criticizing someone with a powerful memory. Of course, on some occasions, such criticisms are justified, particularly when brute recall occurs in the absence of understanding or context. But I’m not advocating brute recall. I’m suggesting that, starting now, you aim for a superpowered memory, a memory aimed at quicker, more accurate retrieval of information.

Prior to the printing press, we had to use our memories. Epics such as The Odyssey and The Iliad were recited word for word. Today, however, we live in a different world, and we forget that such feats were even possible. Information is everywhere. Thanks to technology, we need not remember much of anything. This both helps and hinders the development of our memory.

[Y]ou should think of the technology of pens, paper, tape recorders, computers, and electronic diaries as an extension of the brain. Thanks to these aids, we can carry incredible amounts of information around with us. While this increase in readily available information is generally beneficial, there is also a downside. The storage and rapid retrieval of information from a computer also exerts a stunting effect on our brain’s memory capacities. But we can overcome this by working to improve our memory by aiming at the development and maintenance of a superpowered memory. In the process of improving our powers of recall, we will strengthen our brain circuits, starting at the hippocampus and extending to every other part of our brain.

Information is only as valuable as what it connects to. Echoing the latticework of mental models, Restak states:

Everything that we learn is stored in the brain within that vast, interlinking network. And everything within that network is potentially connected to everything else.

From this we can draw a reasonable conclusion: if you stop learning, your mental capacity declines.

That’s because of the weakening and eventual loss of brain networks. Such brain alterations don’t take place overnight, of course. But over a varying period of time, depending on your previous training and natural abilities, you’ll notice a gradual but steady decrease in your powers if you don’t nourish and enhance these networks.

The Better Network: Your Brain or the Internet

Networking is a fundamental operating principle of the human brain. All knowledge within the brain is based on networking. Thus, any one piece of information can be potentially linked with any other. Indeed, creativity can be thought of as the formation of novel and original linkages.

In his book Weaving the Web: The Original Design and the Ultimate Destiny of the World Wide Web, Tim Berners-Lee, the inventor of the World Wide Web, distills the importance of the brain forming connections.

A piece of information is really defined only by what it’s related to, and how it’s related. There really is little else to meaning. The structure is everything. There are billions of neurons in our brains, but what are neurons? Just cells. The brain has no knowledge until connections are made between neurons. All that we know, all that we are, comes from the way our neurons are connected.

Cognitive researchers now accept that it may not be the size of the human brain that gives it such unique abilities; other animals have large brains as well. Rather, it's the structure: the way our neurons are arranged and linked.

The more you learn, the more you can link. The more you can link, the more you increase the brain's capacity. And the more you increase the capacity of your brain, the better able you'll be to solve problems and make decisions quickly and correctly. This is real brainpower.
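The arithmetic behind that claim is simple combinatorics (my illustration, not Restak's): among n stored items there are n(n − 1)/2 possible pairwise links, so the room for new associations grows quadratically, not linearly, with what you know. A minimal sketch:

```python
def possible_links(n: int) -> int:
    """Number of distinct pairwise connections among n items."""
    return n * (n - 1) // 2

# Each new item you learn can connect to every item already stored,
# so the pool of potential associations grows far faster than the
# collection itself.
for n in (10, 100, 1000):
    print(f"{n} items -> {possible_links(n)} possible links")
# 10 items -> 45 possible links
# 100 items -> 4950 possible links
# 1000 items -> 499500 possible links
```

Learning your 1,001st fact doesn't add one unit of knowledge; it adds a thousand new potential connections.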

Multidisciplinary Learning

Restak argues that a basic insight about knowledge and intelligence is: “The existence of certain patterns, which underlie the diversity of the world around us and include our own thoughts, feelings, and behaviors.”

Intelligence enhancement therefore involves creating as many neuronal linkages as possible. But in order to do this we have to extricate ourselves from the confining and limiting idea that knowledge can be broken down into separate “disciplines” that bear little relation to one another.

This brings the entire range of ideas into play, rather than just silos of knowledge from human-created specialities. Charlie Munger and Richard Feynman would probably agree that such over-specialization can be quite limiting. As the old proverb goes, the frog in the well knows nothing of the ocean.

Charles Cameron, a game theorist, adds to this conversation:

The entire range of ideas can legitimately be brought into play: and this means not only that ideas from different disciplines can be juxtaposed, but also that ideas expressed in ‘languages’ as diverse as music, painting, sculpture, dance, mathematics and philosophy can be juxtaposed, without first being ‘translated’ into a common language.

Mozart's Brain and the Fighter Pilot goes on to provide 28 suggestions and exercises for enhancing your brain's performance, a few of which we’ll cover in future posts.