
The Simple Problem Einstein Couldn’t Solve … At First

Albert Einstein and Max Wertheimer were close friends. Both found themselves in exile in the United States after fleeing the Nazis in the early 1930s, Einstein at Princeton and Wertheimer in New York.

They communicated by exchanging letters in which Wertheimer would entertain Einstein with thought problems.

In 1934 Wertheimer sent the following problem in a letter.

An old clattery auto is to drive a stretch of 2 miles, up and down a hill, /\. Because it is so old, it cannot drive the first mile— the ascent —faster than with an average speed of 15 miles per hour. Question: How fast does it have to drive the second mile— on going down, it can, of course, go faster—in order to obtain an average speed (for the whole distance) of 30 miles an hour?

Einstein fell for this teaser

Wertheimer's thought problem suggests the answer might be 45 or even 60 miles an hour. But that is not the case. Even if the car broke the sound barrier on the way down, it would not achieve an average speed of 30 miles an hour. Don't worry if you were fooled; Einstein was at first too, replying: “Not until calculating did I notice that there is no time left for the way down!”

Gerd Gigerenzer explains the answer in his book Risk Savvy: How to Make Good Decisions:

Gestalt psychologists’ way to solve problems is to reformulate the question until the answer becomes clear. Here’s how it works. How long does it take the old car to reach the top of the hill? The road up is one mile long. The car travels fifteen miles per hour, so it takes four minutes (one hour divided by fifteen) to reach the top. How long does it take the car to drive up and down the hill, with an average speed of thirty miles per hour? The road up and down is two miles long. Thirty miles per hour translates into two miles per four minutes. Thus, the car needs four minutes to drive the entire distance. But these four minutes were already used up by the time the car reached the top.
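For the numerically inclined, here is a minimal sketch of that same reformulation in Python (the figures are Wertheimer's; the 767 mph stand-in for the sound barrier is only approximate):

```python
# Wertheimer's hill problem: 1 mile up at 15 mph, then 1 mile down at speed v (mph).
uphill_hours = 1 / 15        # time to climb the first mile: 4 minutes
budget_hours = 2 / 30        # time allowed for all 2 miles at a 30 mph average: also 4 minutes
print(uphill_hours * 60, budget_hours * 60)   # 4.0 4.0 -- the entire budget is spent at the top

# Even an absurdly fast descent cannot rescue the average:
for v in (45, 60, 767):      # 767 mph is roughly the speed of sound at sea level
    average = 2 / (uphill_hours + 1 / v)
    print(v, round(average, 1))   # 22.5, 24.0, 29.4 -- always below 30 mph
```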

The Ability To Focus And Make The Best Move When There Are No Good Moves

"The indeterminate future is somehow one in which probability and statistics are the dominant modality for making sense of the world."
“The indeterminate future is somehow one in which probability and statistics are the dominant modality for making sense of the world.”

Decisions where outcomes (and therefore probabilities) are unknown are often the hardest. The default method of problem solving often falls short.

Sometimes you have to play the odds and sometimes you have to play the calculus.

There are several different frameworks one could use to get a handle on the indeterminate vs. determinate question. The math version is calculus vs. statistics. In a determinate world, calculus dominates. You can calculate specific things precisely and deterministically. When you send a rocket to the moon, you have to calculate precisely where it is at all times. It’s not like some iterative startup where you launch the rocket and figure things out step by step. Do you make it to the moon? To Jupiter? Do you just get lost in space? There were lots of companies in the ’90s that had launch parties but no landing parties.

But the indeterminate future is somehow one in which probability and statistics are the dominant modality for making sense of the world. Bell curves and random walks define what the future is going to look like. The standard pedagogical argument is that high schools should get rid of calculus and replace it with statistics, which is really important and actually useful. There has been a powerful shift toward the idea that statistical ways of thinking are going to drive the future.

With calculus, you can calculate things far into the future. You can even calculate planetary locations years or decades from now. But there are no specifics in probability and statistics—only distributions. In these domains, all you can know about the future is that you can’t know it. You cannot dominate the future; antitheories dominate instead. The Larry Summers line about the economy was something like, “I don’t know what’s going to happen, but anyone who says he knows what will happen doesn’t know what he’s talking about.” Today, all prophets are false prophets. That can only be true if people take a statistical view of the future.

— Peter Thiel

And this quote from The Hard Thing About Hard Things: Building a Business When There Are No Easy Answers by Ben Horowitz:

I learned one important lesson: Startup CEOs should not play the odds. When you are building a company, you must believe there is an answer and you cannot pay attention to your odds of finding it. You just have to find it. It matters not whether your chances are nine in ten or one in a thousand; your task is the same. … I don't believe in statistics. I believe in calculus.

People always ask me, “What’s the secret to being a successful CEO?” Sadly, there is no secret, but if there is one skill that stands out, it’s the ability to focus and make the best move when there are no good moves. It’s the moments where you feel most like hiding or dying that you can make the biggest difference as a CEO. In the rest of this chapter, I offer some lessons on how to make it through the struggle without quitting or throwing up too much.

… I follow the first principle of the Bushido—the way of the warrior: keep death in mind at all times. If a warrior keeps death in mind at all times and lives as though each day might be his last, he will conduct himself properly in all his actions. Similarly, if a CEO keeps the following lessons in mind, she will maintain the proper focus when hiring, training, and building her culture.

It's interesting to me that the skill Horowitz singles out is one we can cultivate by learning how to think, and one Tyler Cowen feels is in short supply. Cowen says:

The more information that’s out there, the greater the returns to just being willing to sit down and apply yourself. Information isn’t what’s scarce; it’s the willingness to do something with it.

Don’t Let Math Pull the Wool Over Your Eyes

A recent experiment, highlighted in the WSJ, shows that many people, including holders of graduate degrees and professional researchers, are easily impressed by math.

“Math makes a research paper look solid, but the real science lies not in math but in trying one's utmost to understand the real workings of the world,” Prof. Eriksson (a mathematician and researcher of social psychology at Sweden's Mälardalen University) said.

Prof. Eriksson's finding, published in November in the journal Judgment and Decision Making under the title “The Nonsense Math Effect,” is preliminary but unfortunately not surprising, other researchers said. It documents a familiar effect, said Daniel Kahneman, professor emeritus of psychology and public affairs at Princeton University. “People who know math understand what other mortals understand, but other mortals do not understand them. This asymmetry gives them a presumption of superior ability.”

Thomas Bayes and Bayes’s Theorem


Thomas Bayes was an English minister in the first half of the 18th century, whose (now) most famous work, “An Essay towards Solving a Problem in the Doctrine of Chances,” was brought to the attention of the Royal Society in 1763 – two years after his death – by his friend Richard Price. The essay, the key to what we now know as Bayes's Theorem, concerned how we should adjust probabilities when we encounter new data.

In The Signal And The Noise, Nate Silver explains the theory:

[Richard] Price, in framing Bayes's essay, gives the example of a person who emerges into the world (perhaps he is Adam, or perhaps he came from Plato's cave) and sees the sun rise for the first time. At first, he does not know whether this is typical or some sort of freak occurrence. However, each day that he survives and the sun rises again, his confidence increases that it is a permanent feature of nature. Gradually, through this purely statistical form of inference, the probability he assigns to his prediction that the sun will rise again tomorrow approaches (although never exactly reaches) 100 percent.

The argument made by Bayes and Price is not that the world is intrinsically probabilistic or uncertain. Bayes was a believer in divine perfection; he was also an advocate of Isaac Newton's work, which had seemed to suggest that nature follows regular and predictable laws. It is, rather, a statement—expressed both mathematically and philosophically—about how we learn about the universe: that we learn about it through approximation, getting closer and closer to the truth as we gather more evidence.

This contrasted with the more skeptical viewpoint of the Scottish philosopher David Hume, who argued that since we could not be certain that the sun would rise again, a prediction that it would was inherently no more rational than one that it wouldn't. The Bayesian viewpoint, instead, regards rationality as a probabilistic matter. In essence, Bayes and Price are telling Hume, don't blame nature because you are too daft to understand it: if you step out of your skeptical shell and make some predictions about its behavior, perhaps you will get a little closer to the truth.

Bayes's Theorem

Bayes's theorem wasn't first formulated by Thomas Bayes. Instead it was developed by the French mathematician and astronomer Pierre-Simon Laplace.

Laplace believed in scientific determinism — given the location of every particle in the universe and enough computing power, we could predict the universe perfectly. However, it was the disconnect between the perfection of nature and our human imperfections in measuring and understanding it that led to Laplace's involvement in a theory based on probabilism.

Laplace was frustrated at the time by astronomical observations that appeared to show anomalies in the orbits of Jupiter and Saturn — they seemed to predict that Jupiter would crash into the sun while Saturn would drift off into outer space. These predictions were, of course, quite wrong, and Laplace devoted much of his life to developing much more accurate measurements of these planets' orbits. The improvements that Laplace made relied on probabilistic inferences in lieu of exacting measurements, since instruments like the telescope were still very crude at the time. Laplace came to view probability as a waypoint between ignorance and knowledge. It seemed obvious to him that a more thorough understanding of probability was essential to scientific progress.

The Bayesian approach to probability is simple: take the odds of something happening, and adjust for new information. This, of course, is most useful in the cases where you have strong prior knowledge. If your initial probability is off, the Bayesian approach is much less helpful.

In her book, The Theory That Would Not Die, Sharon Bertsch McGrayne lays out the Bayesian process:

We modify our opinions with objective information: Initial Beliefs + Recent Objective Data = A New and Improved Belief. … each time the system is recalculated, the posterior becomes the prior of the new iteration. It was an evolving system, with each bit of new information pushed closer and closer to certitude.
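As a rough sketch of that loop (the numbers below are purely illustrative, not McGrayne's), each round's posterior becomes the next round's prior:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) given a prior and the two likelihoods."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

belief = 0.5   # initial belief, chosen arbitrarily for illustration
# Three pieces of evidence, each somewhat more likely if the hypothesis is true:
for likelihoods in [(0.7, 0.3), (0.8, 0.4), (0.9, 0.5)]:
    belief = bayes_update(belief, *likelihoods)   # the posterior becomes the new prior
    print(round(belief, 3))                       # 0.7, 0.824, 0.894 -- creeping toward certitude
```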

Here is a short example, found in Investing: The Last Liberal Art, on how it works:

Let's imagine that you and a friend have spent the afternoon playing your favorite board game, and now, at the end of the game, you are chatting about this and that. Something your friend says leads you to make a friendly wager: that with one roll of the die from the game, you will get a 6. Straight odds are one in six, a 16 percent probability. But then suppose your friend rolls the die, quickly covers it with her hand, and takes a peek. “I can tell you this much,” she says; “it's an even number.” Now you have new information and your odds change dramatically to one in three, a 33 percent probability. While you are considering whether to change your bet, your friend teasingly adds: “And it's not a 4.” With this additional bit of information, your odds have changed again, to one in two, a 50 percent probability. With this very simple example, you have performed a Bayesian analysis. Each new piece of information affected the original probability, and that is a Bayesian inference.
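A minimal sketch of that same die-roll reasoning, treating each hint as conditioning a uniform prior over the six faces:

```python
from fractions import Fraction

faces = {1, 2, 3, 4, 5, 6}

def p_six(possible_faces):
    # With a fair die, every face still consistent with the hints is equally likely.
    return Fraction(1, len(possible_faces)) if 6 in possible_faces else Fraction(0)

print(p_six(faces))                    # 1/6 before any hints
evens = {f for f in faces if f % 2 == 0}
print(p_six(evens))                    # 1/3 after "it's an even number"
print(p_six(evens - {4}))              # 1/2 after "and it's not a 4"
```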

Knowing the exact math is not really the key to understanding Bayesian thinking, although being able to quantify is a huge advantage in thinking and life.

“Bayes's theorem,” Silver continues, “is concerned with conditional probability. That is, it tells us the probability that a theory or hypothesis is true if some event has happened.”

When our priors are strong, they can be surprisingly resilient in the face of new evidence. One classic example of this is the presence of breast cancer among women in their forties. The chance that a woman will develop breast cancer in her forties is fortunately quite low — about 1.4 percent. But what is the probability if she has a positive mammogram?

Studies show that if a woman does not have cancer, a mammogram will incorrectly claim that she does only about 10 percent of the time. If she does have cancer, on the other hand, they will detect it about 75 percent of the time. When you see those statistics, a positive mammogram seems like very bad news indeed. But if you apply Bayes's Theorem to these numbers, you'll come to a different conclusion: the chance that a woman in her forties has breast cancer given that she's had a positive mammogram is still only about 10 percent. These false positives dominate the equation because very few young women have breast cancer to begin with. For this reason, many doctors recommend that women do not begin getting regular mammograms until they are in their fifties and the prior probability of having breast cancer is higher.
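A minimal sketch of that calculation, plugging in the figures Silver cites (a 1.4 percent prior, a 75 percent detection rate, and a 10 percent false-positive rate):

```python
prior = 0.014           # P(cancer) for a woman in her forties
sensitivity = 0.75      # P(positive mammogram | cancer)
false_positive = 0.10   # P(positive mammogram | no cancer)

p_positive = sensitivity * prior + false_positive * (1 - prior)
p_cancer_given_positive = sensitivity * prior / p_positive
print(round(p_cancer_given_positive, 3))   # ~0.096 -- only about 10 percent
```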

When doing research for this post, I stumbled on Eliezer Yudkowsky's intuitive explanation (building upon the mammogram example above):

The most common mistake is to ignore the original fraction of women with breast cancer, and the fraction of women without breast cancer who receive false positives, and focus only on the fraction of women with breast cancer who get positive results. For example, the vast majority of doctors in these studies seem to have thought that if around 80% of women with breast cancer have positive mammographies, then the probability of a woman with a positive mammography having breast cancer must be around 80%.

Figuring out the final answer always requires all three pieces of information – the percentage of women with breast cancer, the percentage of women without breast cancer who receive false positives, and the percentage of women with breast cancer who receive (correct) positives.

To see that the final answer always depends on the original fraction of women with breast cancer, consider an alternate universe in which only one woman out of a million has breast cancer. Even if mammography in this world detects breast cancer in 8 out of 10 cases, while returning a false positive on a woman without breast cancer in only 1 out of 10 cases, there will still be a hundred thousand false positives for every real case of cancer detected. The original probability that a woman has cancer is so extremely low that, although a positive result on the mammography does increase the estimated probability, the probability isn't increased to certainty or even “a noticeable chance”; the probability goes from 1:1,000,000 to 1:100,000.

Similarly, in an alternate universe where only one out of a million women does not have breast cancer, a positive result on the patient's mammography obviously doesn't mean that she has an 80% chance of having breast cancer! If this were the case her estimated probability of having cancer would have been revised drastically downward after she got a positive result on her mammography – an 80% chance of having cancer is a lot less than 99.9999%! If you administer mammographies to ten million women in this world, around eight million women with breast cancer will get correct positive results, while one woman without breast cancer will get false positive results. Thus, if you got a positive mammography in this alternate universe, your chance of having cancer would go from 99.9999% up to 99.999987%. That is, your chance of being healthy would go from 1:1,000,000 down to 1:8,000,000.

These two extreme examples help demonstrate that the mammography result doesn't replace your old information about the patient's chance of having cancer; the mammography slides the estimated probability in the direction of the result.
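Those two alternate universes can be checked with the same arithmetic (an 80 percent detection rate and a 10 percent false-positive rate, as in the passage); a quick sketch:

```python
def posterior(prior, sensitivity=0.8, false_positive=0.1):
    evidence = sensitivity * prior + false_positive * (1 - prior)
    return sensitivity * prior / evidence

# Universe where only 1 woman in 1,000,000 has breast cancer:
print(posterior(1e-6))       # ~8e-06, roughly 1 in 125,000 (the passage rounds to 1:100,000)

# Universe where only 1 woman in 1,000,000 does NOT have breast cancer:
print(posterior(1 - 1e-6))   # ~0.99999987, i.e. about 99.999987 percent
```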

Part of the problem is the availability heuristic — we focus on what's readily available. In this case, that's the newest information, and the bigger picture gets lost; we fail to weigh the new information against the original probability.

The big idea behind Bayes's theorem is that we must continuously update our probability estimates on an as-needed basis.

Let's take a look at another example, only this time we'll do some basic algebra.

Consider a somber example: the September 11 attacks. Most of us would have assigned almost no probability to terrorists crashing planes into buildings in Manhattan when we woke up that morning. But we recognized that a terror attack was an obvious possibility once the first plane hit the World Trade Center. And we had no doubt we were being attacked once the second tower was hit. Bayes's theorem can replicate this result.

For instance, say that before the first plane hit, our estimate of the possibility of a terror attack on tall buildings in Manhattan was just 1 chance in 20,000, or 0.005 percent. However, we would also have assigned a very low probability to a plane hitting the World Trade Center by accident. This figure can actually be estimated empirically: in the previous 25,000 days of aviation over Manhattan prior to September 11, there had been two such accidents: one involving the Empire State building in 1945 and another at 40 Wall Street in 1946. That would make the possibility of such an accident about 1 chance in 12,500 on any given day. If you use Bayes's theorem to run these numbers (see below), the probability we'd assign to a terror attack increased from 0.005 percent to 38 percent the moment that the first plane hit.

— The Signal And The Noise, Nate Silver
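Silver's numbers can be reproduced in a few lines. A minimal sketch, assuming (as the passage implies) that a plane striking a tower is certain if such an attack is actually under way:

```python
prior_attack = 1 / 20000            # prior probability of a terror attack on Manhattan skyscrapers
p_hit_if_attack = 1.0               # assume a strike is certain given that an attack is under way
p_hit_if_no_attack = 1 / 12500      # historical rate of accidental plane-into-building crashes per day

p_hit = p_hit_if_attack * prior_attack + p_hit_if_no_attack * (1 - prior_attack)
posterior_attack = p_hit_if_attack * prior_attack / p_hit
print(round(posterior_attack, 2))   # ~0.38 -- the 38 percent Silver reports after the first plane hit
```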

Weigh the Evidence

Tim Harford adds:

Bayes’ theorem is an important reality check on our efforts to forecast the future. How, for instance, should we reconcile a large body of theory and evidence predicting global warming with the fact that there has been no warming trend over the last decade or so? Sceptics react with glee, while true believers dismiss the new information.

A better response is to use Bayes’ theorem: the lack of recent warming is evidence against recent global warming predictions, but it is weak evidence. This is because there is enough variability in global temperatures to make such an outcome unsurprising. The new information should reduce our confidence in our models of global warming – but only a little.

The same approach can be used in anything from an economic forecast to a hand of poker, and while Bayes’ theorem can be a formal affair, Bayesian reasoning also works as a rule of thumb. We tend to either dismiss new evidence, or embrace it as though nothing else matters. Bayesians try to weigh both the old hypothesis and the new evidence in a sensible way.
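Harford's point can be put in numbers: when an observation is nearly as likely under your hypothesis as without it, the evidence is weak and the prior barely moves. The figures below are purely illustrative, not Harford's:

```python
def bayes_update(prior, p_data_if_true, p_data_if_false):
    numerator = p_data_if_true * prior
    return numerator / (numerator + p_data_if_false * (1 - prior))

prior = 0.90   # illustrative confidence in the warming models
# Suppose a flat decade is only slightly less likely if the models are right than if they are wrong:
posterior = bayes_update(prior, p_data_if_true=0.30, p_data_if_false=0.45)
print(round(posterior, 2))   # ~0.86 -- confidence drops, but only a little
```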

Here is another example, this time from Quora. A reader poses the question, “What does it mean when a girl smiles at you every time she sees you?” Another reader, using Bayes's Theorem, replies:

The probability she likes you is

P(like|smile) = P(smile|like) × P(like) / P(smile)

P(like|smile) is what you want to know – the probability she likes you given the fact that she smiles at you.

P(smile|like) is the probability that she will smile given that she sees someone she likes.

P(like) is the probability that she likes a random person.

P(smile) is the probability that she will smile at a random person.

For example, suppose she just smiles at everyone. Then intuition says the fact that she smiles at you doesn't mean anything one way or another. Indeed, P(smile|like) = 1 and P(smile) = 1, and we have

P(like|smile) = P(like)

meaning that knowing that she smiles at you doesn't change anything.

At the other extreme, suppose she smiles at everyone she likes, and only those she likes. Then P(smile) = P(like) and P(smile|like) = 1, and we have

P(like|smile) = 1

and she is certain to like you.

In the intermediate case, what you need to do is find the ratio of the probability that she smiles at someone she likes to the probability that she smiles at people in general, multiply it by the percentage of people she likes, and there is your answer.

The more she smiles in general, the lower the chance she likes you. The more she smiles at people she likes, the better the chance. And of course the more people she likes, the better your chances are.

Of course, how to actually determine these values is a mystery I have never solved.
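To see the formula in action, here is a minimal sketch with invented numbers (how to estimate them is, as the answer admits, the hard part):

```python
p_smile_given_like = 0.9   # she smiles at 90% of the people she likes (invented)
p_like = 0.1               # she likes 10% of people (invented)
p_smile = 0.3              # she smiles at 30% of people overall (invented)

p_like_given_smile = p_smile_given_like * p_like / p_smile
print(round(p_like_given_smile, 2))   # 0.3 -- the smile triples the baseline 10% chance
```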

Decision Trees

In The Essential Buffett: Timeless Principles for the New Economy, Robert Hagstrom writes:

Bayesian analysis is an attempt to incorporate all available information into a process for making inferences, or decisions, about the underlying state of nature. Colleges and universities use Bayes's theorem to help their students study decision making. In the classroom, the Bayesian approach is more popularly called the decision tree theory; each branch of the tree represents new information that, in turn, changes the odds in making decisions. “At Harvard Business School,” explains Charlie Munger, “the great quantitative thing that bonds the first-year class together is what they call decision tree theory. All they do is take high school algebra and apply it to real life problems. The students love it. They're amazed to find that high school algebra works in life.”
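As a toy illustration of the “high school algebra” Munger describes (the scenario and every number below are invented), a two-branch decision tree just multiplies probabilities down each branch and compares expected values, with new information shifting the odds on a branch:

```python
# Invented example: launch a product now, or pay for a test market first.
p_success = 0.4
payoff_success, payoff_failure = 10_000_000, -4_000_000

launch_now = p_success * payoff_success + (1 - p_success) * payoff_failure

test_cost = 500_000
p_test_positive = 0.5
p_success_if_positive = 0.7   # the test result is new information that changes the odds
launch_after_good_test = (p_success_if_positive * payoff_success
                          + (1 - p_success_if_positive) * payoff_failure)
# If the test comes back negative, assume we walk away and lose only the test cost.
test_first = -test_cost + p_test_positive * max(launch_after_good_test, 0)

print(launch_now, test_first)   # 1,600,000 vs 2,400,000 -- the tree favors testing first
```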

Limitations of the Bayesian Approach

Besides seeing the world as an ever-shifting array of probabilities, we must also remember the limitations of inductive reasoning, such as the “sun rising every day” example given by Price and Bayes above.

The most useful example of this is explained by Nassim Taleb in The Black Swan:

Consider a turkey that is fed every day. Every single feeding will firm up the bird's belief that it is the general rule of life to be fed every day by friendly members of the human race “looking out for its best interests,” as a politician would say. On the afternoon of the Wednesday before Thanksgiving, something unexpected will happen to the turkey. It will incur a revision of belief.

Don't walk away thinking the Bayesian approach will enable you to predict everything. In fact, with the volume of information increasing exponentially, the future may be as unpredictable as ever, concludes Silver:

There is no reason to conclude that the affairs of man are becoming more predictable. The opposite may well be true. The same sciences that uncover the laws of nature are making the organization of society more complex.

In the final analysis, though, picking up Bayesian reasoning can truly change your life, as Julia Galef of the Center for Applied Rationality explains in this Big Think video:

After you’ve been steeped in Bayes’ rule for a little while, it starts to produce some fundamental changes to your thinking. For example, you become much more aware that your beliefs are grayscale. They’re not black and white and that you have levels of confidence in your beliefs about how the world works that are less than 100 percent but greater than zero percent and even more importantly as you go through the world and encounter new ideas and new evidence, that level of confidence fluctuates, as you encounter evidence for and against your beliefs.

Bayes's Theorem is part of the Farnam Street latticework of mental models.

Fermat’s Last Theorem

Simon Singh and John Lynch's film tells the enthralling and emotional story of Andrew Wiles. A quiet English mathematician, he was drawn into maths by Fermat's puzzle, but at Cambridge in the '70s, Fermat's Last Theorem (FLT) was considered a joke, so he set it aside. Then, in 1986, an extraordinary idea linked this irritating problem with one of the most profound ideas of modern mathematics: the Taniyama-Shimura Conjecture, named after a young Japanese mathematician who tragically committed suicide. The link meant that if Taniyama-Shimura was true, then so must be FLT. When he heard, Wiles went after his childhood dream again. “I knew that the course of my life was changing.” For seven years, he worked in his attic study at Princeton, telling no one but his family.

(h/t @stevenstrogatz)

Human Traits Essential to Capitalism

Yale economist Robert Shiller argues that rising inequality in the US was a major cause of the recent crisis, and little is being done to address it. He recommends reading Adam Smith's Theory of Moral Sentiments, The Passions and The Interests: Political Arguments for Capitalism Before its Triumph, Nudge, Fault Lines: How Hidden Fractures Still Threaten the World Economy, and Winner-Take-All Politics: How Washington Made the Rich Richer and Turned its Back on the Middle Class.

If you did try to summarise it, what would you say you're trying to get at with these book choices?

I think that our economic system reflects our understanding of humankind, and that understanding has been developing, with especial rapidity lately. You have to understand people first before you can understand how to devise an economic system for them. And I think our understanding of people has been accelerating over the last century, or even half-century.

…On The Passions and The Interests: Political Arguments for Capitalism Before its Triumph, by the great Albert O Hirschman.

This is a great book. It traces the history of an idea – an idea that is central to our whole civilisation today. The idea is that human nature is basically unruly and destructive, or has the potential to become so, but that we've designed a society that sets a space for this kind of impulse, where it's acted out in a civilized manner – and that's capitalism. So when we reflect on some of the horrors of capitalism, we have to consider that things could have been much worse if we didn't have this system. Our fights would have been on real battlefields, rather than economic battlefields. That's a theory, that's an idea that really led to the adoption of capitalism, or the free enterprise system, around the world.

…Tell me about Nudge.

We're now coming up to 2008, when Richard Thaler and Cass Sunstein published this book. It looks quite a bit different from the first two in that it reflects much more modern psychology. I admired Adam Smith for his personal observations, but there was no experimentation, there was no real modern psychology in it. What Sunstein and Thaler emphasise is a lot of principles of psychology that can only be understood with regard to actual experiments. So they talk about things like anchoring, availability, representativeness, heuristic optimism, overconfidence, asymmetry of appreciation of gains versus losses, status quo bias, framing, self-control mechanisms – all the things that we've learned about.

We're way ahead of Adam Smith now in our understanding of people, and that suggests a different model for our economy. Nudge doesn't present itself in a grandiose way at all, but it's a very important book. It really is a different model of our economy, and how government should be involved.

What was the ultimate cause of the crisis, in Fault Line's view?

The title of his book is Fault Lines – so it's plural. He notes that it's not one cause; he actually has several different classes of causes.

The first of them is political, and the politics that lead to rising inequality. That's been a trend in recent years in most nations of the world. Inequality has been getting worse, particularly in the US, but also in Europe and Asia and many other places. One thing that this has done is it has encouraged governments, who are aware of the resentment caused by the rising inequality, to try to take some kind of steps to make it more politically acceptable. He gives other examples as well, but historically, that has often taken the form of stimulating credit: instead of fixing the problems of the poor, lending money to them. He has a chapter entitled ‘Let them eat credit’.
