On Building Successful Quantitative Models In Finance And Risk
Ed Thorp on some of the things he learned about building successful quantitative models in finance and risk.
Here are some of the things we learned about building successful quantitative models in finance. Unlike blackjack and gambling games, you only have one history from which to use data (call this the Heraclitus principle: you can never invest in the same market twice). This leads to estimates rather than precise conclusions. Like gambling games, the magnitude of your bets should increase with expectation and decrease with risk. Further, one needs reserves to protect against extreme moves. For the long-term compounder, the Kelly criterion handles the problem of allocating capital to favorable situations. It shows that consistent overbetting eventually leads to ruin. Such overbetting may have contributed to the misfortunes of Victor Niederhoffer and of LTCM.
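The Kelly point above can be illustrated with a short sketch. The probabilities and odds below are hypothetical; the point is that expected log growth is maximized at the Kelly fraction, is roughly zero at twice Kelly, and turns negative beyond that, which is why consistent overbetting eventually leads to ruin.

```python
import math

def kelly_fraction(p, b):
    """Kelly fraction for a bet won with probability p at odds b:1."""
    return p - (1 - p) / b

def expected_log_growth(f, p, b):
    """Expected log growth per bet when staking fraction f of capital."""
    return p * math.log(1 + f * b) + (1 - p) * math.log(1 - f)

# Hypothetical edge: win 55% of even-odds bets.
p, b = 0.55, 1.0
f_star = kelly_fraction(p, b)                  # 0.10
print(expected_log_growth(f_star, p, b))       # positive: maximal growth
print(expected_log_growth(2 * f_star, p, b))   # twice Kelly: roughly zero
print(expected_log_growth(3 * f_star, p, b))   # overbetting: negative, ruin
```

Because growth is measured in logs, a negative expected log growth compounds to certain long-run loss no matter how favorable each individual bet looks.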
Our notions of risk management expanded from individual warrant and convertible hedges to, by 1973, our entire portfolio. There were two principal aspects: local risk versus global risk (or micro versus macro; or diffusion versus jump). Local risk dealt with “normal” fluctuations in prices, whereas global risk meant sudden large or even catastrophic jumps in prices. To manage local risk, in 1973–1974 we studied the terms in the power series expansion of the Black–Scholes option formula, such as delta, gamma (which we called curvature) and others that would later be named by the financial community, such as theta, vega and rho. We also incorporated information about the yield “surface”, a plot of yield versus maturity and credit rating. We used this to hedge our risk from fluctuations in yield versus duration and credit rating.
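The first two of those local-risk quantities, delta and gamma (Thorp's “curvature”), come directly from the standard Black–Scholes formula for a European call on a non-dividend-paying stock. A minimal sketch, with purely illustrative parameter values:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_pdf(x):
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def bs_call_delta_gamma(S, K, T, r, sigma):
    """Delta and gamma of a Black-Scholes European call (no dividends)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    delta = norm_cdf(d1)                              # dV/dS
    gamma = norm_pdf(d1) / (S * sigma * math.sqrt(T)) # d2V/dS2
    return delta, gamma

# Illustrative at-the-money call: spot 100, strike 100, 6 months,
# 5% rate, 20% volatility.
delta, gamma = bs_call_delta_gamma(S=100, K=100, T=0.5, r=0.05, sigma=0.2)
print(delta, gamma)
```

Hedging delta neutralizes first-order exposure to “normal” price fluctuations; gamma measures how quickly that hedge decays as the price moves, which is why both mattered for local risk.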
Controlling global risk is a quite different problem. We asked how the value of our portfolio would change given changes of specified percentages in variables like the market index, various shifts in the yield surface, and volatility levels. In particular we asked extreme questions: what if a terrorist explodes a nuclear bomb in New York harbor? Our prime broker, Goldman Sachs, assured us that duplicates of our records were safe in Iron Mountain. What if a gigantic earthquake hit California or Japan? What if T-bills went from 7% to 15%? (they hit 14% a couple of years later, in 1981). What if the market dropped 25% in a day, twice the worst day ever? (it dropped 23% in a day 10 years later, in October 1987. We broke even on the day and were up slightly for the month). Our rule was to limit global risk to acceptable levels, while managing local risk so as to remain close to market neutral.
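The what-if revaluation described above can be sketched as a toy stress test. Everything below is hypothetical: the dollar-Greek exposures, the scenario shocks, and the quadratic P&L approximation itself (a real implementation would reprice every position under each scenario rather than Taylor-expand).

```python
def stress_pnl(delta_dollars, gamma_dollars, vega_dollars, dS_pct, dvol_pts):
    """Approximate portfolio P&L for a market move of dS_pct percent and a
    volatility change of dvol_pts points, from hypothetical dollar Greeks."""
    dS = dS_pct / 100.0
    return (delta_dollars * dS
            + 0.5 * gamma_dollars * dS * dS
            + vega_dollars * dvol_pts)

# Hypothetical near-market-neutral book: small delta, long gamma, short vega.
exposures = dict(delta_dollars=50_000,
                 gamma_dollars=2_000_000,
                 vega_dollars=-30_000)

scenarios = {
    "market -25% in a day": (-25, +10),
    "market -10%":          (-10, +5),
    "volatility spike only": (0, +15),
}
for name, (dS_pct, dvol) in scenarios.items():
    pnl = stress_pnl(**exposures, dS_pct=dS_pct, dvol_pts=dvol)
    print(f"{name}: {pnl:,.0f}")
```

The rule in the passage translates to a constraint on this table: every scenario's loss must stay within an acceptable bound, while the delta term is kept near zero day to day.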
Two lessons of which we were well aware were that previous historical limits on financial variables should not be expected to necessarily hold in the future, and that the mathematically convenient lognormal model for stock prices substantially underestimates the probabilities of extreme moves (for this last, see my columns in Wilmott, March and May 2003). Neglecting both lessons reportedly contributed to the downfall of LTCM.
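A quick calculation makes the lognormal underestimate concrete. Assuming (hypothetically) about 1% daily volatility, a 23% one-day drop like October 1987 is a 23-sigma event, and a normal model assigns it a probability that is effectively zero:

```python
import math

def normal_tail(z):
    """P(Z >= z) for a standard normal Z, via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Assumed ~1% daily volatility makes a 23% one-day drop a 23-sigma event.
p = normal_tail(23.0)
print(p)   # astronomically small: "impossible" under the model, yet it happened
```

Any model whose tails vanish this fast will price deep out-of-the-money protection at essentially nothing, which is exactly the kind of error that is invisible until the extreme move arrives.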
Two questions about risk which I try to answer when considering or reviewing any investment are: “What are the factor exposures?” and “What are the risks from extreme events?”
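The first question is routinely answered by regressing an investment's returns on factor returns. A minimal single-factor sketch, estimating beta as cov(r, f) / var(f) by ordinary least squares; the return series below are made up for illustration:

```python
def factor_beta(portfolio_returns, factor_returns):
    """OLS exposure of portfolio returns to one factor: cov(r, f) / var(f)."""
    n = len(factor_returns)
    mf = sum(factor_returns) / n
    mp = sum(portfolio_returns) / n
    cov = sum((f - mf) * (r - mp)
              for f, r in zip(factor_returns, portfolio_returns)) / n
    var = sum((f - mf) ** 2 for f in factor_returns) / n
    return cov / var

# Hypothetical data: the portfolio moves exactly half as much as the market,
# so its market beta should come out to 0.5.
market    = [0.01, -0.02, 0.015, 0.005, -0.01]
portfolio = [0.005, -0.01, 0.0075, 0.0025, -0.005]
print(factor_beta(portfolio, market))
```

The second question is the stress-testing problem discussed earlier: regression betas describe behavior in normal times, and say nothing about what a 23-sigma day does to the book.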