Some things are easier to predict than others.
Physics and engineering problems lend themselves to prediction. In these problems, we know the variables and governing relationships with a high degree of confidence, so we can interpolate and extrapolate.
In short, we can model these problems and forecast the future. For example, we know what the position of the sun and the moon will be tomorrow. And we know what the position will be on June 29, 2015.
There are substantial realms, however, in which prediction is harder. Some things, as Nassim Taleb points out in The Black Swan, are harder to predict because they involve an element of reflexivity.
To understand the future to the point of being able to predict it you need to incorporate elements from the future itself… assume you are a special scholar in a medieval university’s forecasting department… you would need to hit upon inventions of electricity, atomic bomb, internet, airplane… Prediction requires knowing about technologies that will be discovered in the future. But that very knowledge will almost automatically allow us to start developing those technologies right away. Ergo, we do not know what we will know.
In short, we fail because we are ignorant: we don't have the knowledge.
Despite our education, fancy computers, and sophisticated modelling tools, we cannot predict the future when we possess only a partial understanding of the variables and relationships involved. Once we step outside the neat world we understand and into the world we only think we understand, we become ill-equipped. Worse, we often don't realize that our understanding is limited.
In these situations, the further into the future we try to predict, the more error-prone we become. We can, for instance, predict the path of a hurricane for the next few hours with reasonable accuracy. Beyond that, the errors compound.
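This growth of error with the forecast horizon can be illustrated with a toy simulation (my sketch, not anything from the paper): for a simple random walk, the best forecast is "where it is now", and the error of that forecast grows roughly with the square root of the horizon.

```python
import random
import statistics

random.seed(42)

def simulate_error(horizon, trials=2000, sigma=1.0):
    """RMS error of a naive 'stay where you are' forecast at a given
    horizon, for a random walk with per-step noise sigma."""
    errors = []
    for _ in range(trials):
        position = 0.0
        for _ in range(horizon):
            position += random.gauss(0, sigma)
        errors.append(position)  # the forecast was 0, so the error is the position itself
    return statistics.pstdev(errors)

for h in (1, 4, 16):
    print("horizon", h, "-> RMS error about", round(simulate_error(h), 2))
```

Quadrupling the horizon roughly doubles the error; there is no horizon at which the error stops growing.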
I want to explore which types of problems lend themselves to predictability. How can we recognize these problems? How can we avoid the siren song of expert predictions?
This paper is a good place to start.
According to the authors, there are two distinct issues associated with forecasting: accuracy and uncertainty. And there are “three types of predictions: (a) those involving patterns, (b) those utilizing relationships, and (c) those based primarily on human judgment.”
Each of these is covered in detail in the paper; however, I want to highlight some of the interesting tidbits.
When predicting patterns “uncertainty increases together with the forecasting horizon” and “such an increase (in uncertainty) is bigger than that postulated theoretically.”
Another way to explain the differences between the two figures is that temperature is a physical random variable, subject to physical laws, while financial markets are informational random variables that can take any value without restriction—there are no physical impediments to the doubling of a price.
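The distinction between a physically constrained variable and an unconstrained informational one is easy to see in a toy simulation (my illustration, with made-up parameters): a mean-reverting series behaves like a temperature anomaly, pulled back toward its average, while a multiplicative random walk behaves like a price, free to wander and compound.

```python
import random

random.seed(0)

def ar1_path(steps, phi=0.9, sigma=1.0):
    """Mean-reverting series: each shock decays, like a temperature anomaly."""
    x, path = 0.0, []
    for _ in range(steps):
        x = phi * x + random.gauss(0, sigma)
        path.append(x)
    return path

def price_path(steps, sigma=0.02, start=100.0):
    """Multiplicative random walk: shocks compound, like a stock price."""
    p, path = start, []
    for _ in range(steps):
        p *= 1 + random.gauss(0, sigma)
        path.append(p)
    return path

temp = ar1_path(5000)
price = price_path(5000)
print("temperature-like range:", round(min(temp), 1), "to", round(max(temp), 1))
print("price-like range:", round(min(price), 1), "to", round(max(price), 1))
```

Run long enough, the mean-reverting series stays confined to a narrow band, while the price-like series drifts over a far wider range relative to where it started.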
Judgmental forecasting and uncertainty
Empirical findings in the field of judgmental psychology have shown that human judgment is even less accurate at making predictions than simple statistical models (more on this in a future post). These findings go back to the fifties with the work of psychologist Meehl (1954), who reviewed some 20 studies in psychology and discovered that the “statistical” method of diagnosis was superior to the traditional “clinical” approach.
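One reason a simple statistical rule beats the clinician is consistency: the rule applies the same weights every time, while human judgment wanders. A toy simulation (my sketch with hypothetical cue weights, not Meehl's data) isolates that single factor:

```python
import random
import statistics

random.seed(1)

# The "clinical" judge uses the right cues but applies them inconsistently;
# the "statistical" rule applies the same weights every time.
WEIGHTS = [0.5, 0.3, 0.2]  # hypothetical cue weights

def make_case():
    cues = [random.gauss(0, 1) for _ in WEIGHTS]
    outcome = sum(w * c for w, c in zip(WEIGHTS, cues)) + random.gauss(0, 0.5)
    return cues, outcome

def statistical(cues):
    return sum(w * c for w, c in zip(WEIGHTS, cues))

def clinical(cues):
    return statistical(cues) + random.gauss(0, 0.5)  # same cues, plus inconsistency

def rmse(predict, n=5000):
    errs = []
    for _ in range(n):
        cues, outcome = make_case()
        errs.append((predict(cues) - outcome) ** 2)
    return statistics.mean(errs) ** 0.5

print("statistical RMSE:", round(rmse(statistical), 2))
print("clinical RMSE:", round(rmse(clinical), 2))
```

Even with identical knowledge of the cues, the inconsistent judge loses to the mechanical rule.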
Diffusion of responsibility
A large number of people can be wrong, and know that they can be wrong, yet be reassured by the comfort of a system. They continue their activities “because other people do it”. There have been no studies examining this kind of diffusion of responsibility in problems of group error.
…the biases and limitations of human judgment affect its ability to make sound decisions when optimism influences its forecasts.
We are worse at assessing uncertainty than predicting future outcomes
Empirical evidence has shown that the ability of people to correctly assess uncertainty is even worse than that of accurately predicting future outcomes. Such evidence has shown that humans are overconfident of positive expectations, while ignoring or downgrading negative information. This means that when they are asked to specify confidence intervals, they make them too tight, while not considering threatening possibilities like the consequences of recessions, or those of the current subprime and credit crisis.
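What "too tight" costs can be made concrete with a small simulation (my illustration, using a random walk as the uncertain quantity): a calibrated 95% interval at horizon h has half-width 1.96·σ·√h, and an overconfident forecaster who quotes an interval half that wide gets caught out far more than 5% of the time.

```python
import random

random.seed(7)

def coverage(width_multiplier, horizon=10, trials=4000, sigma=1.0):
    """Fraction of random-walk outcomes that land inside an interval of
    +/- width_multiplier * 1.96 * sigma * sqrt(horizon) around the forecast."""
    half = width_multiplier * 1.96 * sigma * horizon ** 0.5
    hits = 0
    for _ in range(trials):
        x = sum(random.gauss(0, sigma) for _ in range(horizon))
        hits += abs(x) <= half
    return hits / trials

print("calibrated interval coverage:", coverage(1.0))   # close to 0.95
print("half-width interval coverage:", coverage(0.5))   # well below 0.95
```

Quoting a narrower interval does not make the world less uncertain; it only guarantees more surprises.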
… complex systems cannot be reduced to simple mathematical laws and be modeled appropriately. The equations that attempt to represent them are only approximations to reality, and are often highly sensitive to external influences and small changes in parameterization. Most of the time they fit past data well, but are not good for predictions. Consequently, the paper offers suggestions for improving forecasting models by following what is done in systems biology, integrating information from disparate sources in order to achieve such improvements.
Fast and frugal heuristics can be better than knowledge intensive procedures
…some of the fast and frugal heuristics that people use intuitively are able to make forecasts that are as good as or better than those of knowledge-intensive procedures. By using research on the adaptive toolbox and ecological rationality, they demonstrate the power of using intuitive heuristics for forecasting in various domains, including sports, business, and crime.
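One such heuristic from the adaptive-toolbox literature is "take-the-best": to decide which of two options scores higher, walk down the cues in order of validity and decide on the first cue that discriminates, ignoring everything else. A minimal sketch (the cue names and example data are made up for illustration):

```python
# Cues ordered from most to least valid, as take-the-best assumes.
CUES = ["has_capital", "in_top_league", "known_brand"]

def take_the_best(option_a, option_b):
    """Return the option predicted to score higher, or None if no cue discriminates."""
    for cue in CUES:
        a, b = option_a.get(cue), option_b.get(cue)
        if a != b:               # first discriminating cue decides; stop looking
            return option_a if a else option_b
    return None

city_a = {"has_capital": True, "in_top_league": False, "known_brand": True}
city_b = {"has_capital": False, "in_top_league": True, "known_brand": True}
print(take_the_best(city_a, city_b))  # decided on the first cue alone
```

The heuristic is "frugal" precisely because it stops at the first discriminating cue instead of weighing all the evidence.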
Accurate forecasting in the economic and business world is usually not possible
…accurate forecasting in the economic and business world is usually not possible, due to the huge uncertainty, as practically all economic and business activities are subject to events which we are unable to predict. The fact that forecasts can be inaccurate creates a serious dilemma for decision and policy makers. On the one hand, accepting the limits of forecasting accuracy implies being unable to assess the correctness of decisions and the surrounding uncertainty. On the other hand, believing that accurate forecasts are possible means succumbing to the illusion of control and experiencing surprises, often with negative consequences.
The problems facing forecasters
The forecasts of statistical models are “mechanical”, unable to predict changes and turning points, and unable to make predictions for brand new situations, or when there are limited amounts of data. These tasks require intelligence, knowledge and an ability to learn which are possessed only by humans. Yet, as we saw, judgmental forecasts are less accurate than the brainless, mechanistic ones provided by statistical models. Forecasters find themselves between Charybdis and Scylla. On the one hand, they understand the limitations of the statistical models. On the other hand, their own judgment cannot be trusted. … The problem with humans is that they suffer from inconsistency, wishful thinking and all sorts of biases that diminish the accuracy of their predictions. The biggest challenge and only solution to the problem is for humans to find ways to exploit their intelligence, knowledge and ability to learn while avoiding their inconsistencies, wishful thinking and biases.
In real life, most series behave like the DJIA; in other words, humans can influence their patterns and affect the relationships involved by their actions and reactions. In such cases, forecasting is extremely difficult or even impossible, as it involves predicting human behavior, something which is practically impossible. However, even with series like the temperature, human intervention is also possible, although there is no consensus on predicting its consequences.
Still Curious? This was a follow up post to How To Predict Everything.