Historic volatility is derived from a time series of past market prices. Implied volatility looks forward in time, being derived from the market price of a market-traded derivative (in particular, an option).
Volatility as described here refers to the actual volatility, more specifically:
- actual current volatility of a financial instrument for a specified period (for example 30 days or 90 days), based on historical prices over the specified period with the last observation the most recent price
- actual historical volatility, which refers to the volatility of a financial instrument over a specified period but with the last observation on a date in the past
- actual future volatility, which refers to the volatility of a financial instrument over a specified period starting at the current time and ending at a future date (normally the expiry date of an option)
Now turning to implied volatility, we have:
- historical implied volatility, which refers to the implied volatility observed from historical prices of the financial instrument (normally options)
- current implied volatility, which refers to the implied volatility observed from current prices of the financial instrument
- future implied volatility, which refers to the implied volatility observed from future prices of the financial instrument
For a financial instrument whose price follows a Gaussian random walk, or Wiener process, the width of the distribution increases as time increases. This is because there is an increasing probability that the instrument's price will be farther away from the initial price as time increases. However, rather than increase linearly, the volatility increases with the square-root of time as time increases, because some fluctuations are expected to cancel each other out, so the most likely deviation after twice the time will not be twice the distance from zero.
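This square-root scaling can be checked with a short simulation. The sketch below is illustrative only; the number of paths, the seed, and the unit step size are assumptions, not values from the text:

```python
import random
import statistics

random.seed(0)
n_paths = 5000

def dispersion(t):
    """Standard deviation of a unit-step Gaussian walk's position after t steps."""
    finals = []
    for _ in range(n_paths):
        x = 0.0
        for _ in range(t):
            x += random.gauss(0, 1)
        finals.append(x)
    return statistics.pstdev(finals)

s25, s100 = dispersion(25), dispersion(100)
# Quadrupling the elapsed time roughly doubles the spread (sqrt(4) = 2),
# rather than quadrupling it as linear growth would.
print(s25, s100, s100 / s25)
```

The ratio comes out near 2 rather than 4, matching the square-root rule.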
Since observed price changes do not follow Gaussian distributions, others such as the Lévy distribution are often used. These can capture attributes such as "fat tails". Volatility is a statistical measure of dispersion around the average of any random variable, such as market returns or other market parameters.
For any fund that evolves randomly with time, the square of volatility is the variance of the sum of infinitely many instantaneous rates of return, each taken over the nonoverlapping, infinitesimal periods that make up a single unit of time.
The generalized volatility σ_T for time horizon T in years is expressed as:

σ_T = σ_annually √T
Therefore, if the daily logarithmic returns of a stock have a standard deviation of σ_daily and the time period of returns is P in trading days, the annualized volatility is

σ_annually = σ_daily √P
A common assumption is that P = 252 trading days in any given year. Then, if σ_daily = 0.01, the annualized volatility is

σ_annually = 0.01 √252 ≈ 0.1587, or about 15.87%.
The monthly volatility (i.e., T = 1/12 of a year or P = 252/12 = 21 trading days) would be

σ_monthly = 0.01 √21 ≈ 0.0458, or about 4.58%.
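A minimal sketch of these conversions, assuming the square-root-of-time scaling and 252 trading days per year as above:

```python
import math

P = 252                # assumed trading days per year
sigma_daily = 0.01     # 1% standard deviation of daily log returns

sigma_annual = sigma_daily * math.sqrt(P)        # about 15.87%
sigma_monthly = sigma_daily * math.sqrt(P / 12)  # 21 trading days, about 4.58%

print(round(sigma_annual, 4), round(sigma_monthly, 4))
```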
The formulas used above to convert returns or volatility measures from one time period to another assume a particular underlying model or process. These formulas are accurate extrapolations of a random walk, or Wiener process, whose steps have finite variance. However, more generally, for natural stochastic processes, the precise relationship between volatility measures for different time periods is more complicated. Some use the Lévy stability exponent α to extrapolate natural processes:

σ_T = T^(1/α) σ
If α = 2 you get the Wiener process scaling relation, but some people believe α < 2 for financial activities such as stocks, indexes and so on. This was discovered by Benoît Mandelbrot, who looked at cotton prices and found that they followed a Lévy alpha-stable distribution with α = 1.7. (See New Scientist, 19 April 1997.)
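The effect of the stability exponent on this extrapolation can be sketched as follows, using the scaling rule σ_T = T^(1/α) σ discussed above; the daily σ and the horizon are illustrative assumptions:

```python
sigma_daily = 0.01
T = 252  # horizon in daily units (one year)

# sigma_T = T**(1/alpha) * sigma: alpha = 2 recovers the Wiener
# square-root rule; Mandelbrot's cotton estimate was alpha = 1.7.
scaled = {alpha: T ** (1 / alpha) * sigma_daily for alpha in (2.0, 1.7)}

# A smaller alpha (heavier tails) makes volatility grow faster with horizon.
print(scaled)
```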
Much research has been devoted to modeling and forecasting the volatility of financial returns, and yet few theoretical models explain how volatility comes to exist in the first place.
Roll (1984) shows that volatility is affected by market microstructure. Glosten and Milgrom (1985) show that at least one source of volatility can be explained by the liquidity provision process. When market makers infer the possibility of adverse selection, they adjust their trading ranges, which in turn increases the band of price oscillation.
Investors care about volatility for at least eight reasons:
Volatility does not measure the direction of price changes, merely their dispersion. This is because when calculating standard deviation (or variance), all differences are squared, so that negative and positive differences are combined into one quantity. Two instruments with different volatilities may have the same expected return, but the instrument with higher volatility will have larger swings in values over a given period of time.
For example, a lower volatility stock may have an expected (average) return of 7%, with annual volatility of 5%. This would indicate returns from approximately negative 3% to positive 17% most of the time (19 times out of 20, or 95% via a two standard deviation rule). A higher volatility stock, with the same expected return of 7% but with annual volatility of 20%, would indicate returns from approximately negative 33% to positive 47% most of the time (19 times out of 20, or 95%). These estimates assume a normal distribution; in reality stocks are found to be leptokurtotic.
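The two-standard-deviation ranges in this example can be reproduced directly. This is a sketch assuming normally distributed annual returns, as the text does:

```python
def two_sigma_range(expected, vol):
    """Interval covering roughly 95% of outcomes under a normal distribution."""
    return expected - 2 * vol, expected + 2 * vol

lo1, hi1 = two_sigma_range(0.07, 0.05)  # lower-volatility stock
lo2, hi2 = two_sigma_range(0.07, 0.20)  # higher-volatility stock
print(lo1, hi1)   # roughly -3% to +17%
print(lo2, hi2)   # roughly -33% to +47%
```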
Although the Black–Scholes equation assumes predictable constant volatility, this is not observed in real markets. Among the models that relax this assumption are Emanuel Derman and Iraj Kani's and Bruno Dupire's local volatility, Poisson processes where volatility jumps to new levels with a predictable frequency, and the increasingly popular Heston model of stochastic volatility.
It is common knowledge that many types of assets experience periods of high and low volatility. That is, during some periods, prices go up and down quickly, while during other times they barely move at all.
Periods when prices fall quickly (a crash) are often followed by prices going down even more, or going up by an unusual amount. Also, a time when prices rise quickly (a possible bubble) may often be followed by prices going up even more, or going down by an unusual amount.
Most typically, extreme movements do not appear 'out of nowhere'; they are presaged by larger movements than usual. This is termed autoregressive conditional heteroskedasticity. Whether such large movements have the same direction, or the opposite, is more difficult to say. And an increase in volatility does not always presage a further increase; the volatility may simply go back down again.
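The clustering described here can be illustrated with a minimal ARCH(1) simulation. The parameters ω and α and the seed below are illustrative assumptions, not estimates from any market data:

```python
import random

random.seed(2)
omega, alpha = 0.00005, 0.5  # illustrative ARCH(1) parameters
r, returns = 0.0, []
for _ in range(5000):
    var = omega + alpha * r * r  # today's variance depends on yesterday's move
    r = random.gauss(0, var ** 0.5)
    returns.append(r)

# Large moves tend to follow large moves: the absolute returns are
# positively autocorrelated at lag 1.
abs_r = [abs(x) for x in returns]
m = sum(abs_r) / len(abs_r)
num = sum((abs_r[i] - m) * (abs_r[i + 1] - m) for i in range(len(abs_r) - 1))
den = sum((a - m) ** 2 for a in abs_r)
print(num / den)
```

A plain Gaussian walk with constant variance would show a lag-1 autocorrelation of absolute returns near zero; the ARCH feedback makes it clearly positive.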
The risk-parity weighted volatility of three assets (gold, Treasury bonds, and the Nasdaq), acting as a proxy for the market portfolio, appears to have reached a low of 4% in the summer of 2014, after turning upward for the eighth time since 1974 (Worldvolatility.com).
Some authors point out that realized volatility and implied volatility are backward- and forward-looking measures respectively, and do not reflect current volatility. To address that issue, an alternative, ensemble measure of volatility was suggested. This measure is defined as the standard deviation of ensemble returns instead of time series of returns.
There exist several known parametrisations of the implied volatility surface, such as Schönbucher, SVI and gSVI.
Using a simplification of the above formula it is possible to estimate annualized volatility based solely on approximate observations. Suppose you notice that a market price index, which has a current value near 10,000, has moved about 100 points a day, on average, for many days. This would constitute a 1% daily movement, up or down.
To annualize this, you can use the "rule of 16", that is, multiply by 16 to get 16% as the annual volatility. The rationale for this is that 16 is the square root of 256, which is approximately the number of trading days in a year (252). This also uses the fact that the standard deviation of the sum of n independent variables (with equal standard deviations) is √n times the standard deviation of the individual variables.
The average magnitude of the observations is merely an approximation of the standard deviation of the market index. Assuming that the market index daily changes are normally distributed with mean zero and standard deviation σ, the expected value of the magnitude of the observations is √(2/π) σ ≈ 0.798σ. The net effect is that this crude approach underestimates the true volatility by about 20%.
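Both facts, the √(2/π) factor and the rule-of-16 underestimate, can be checked numerically; the sample size and seed below are arbitrary choices for the sketch:

```python
import random

random.seed(1)
sigma = 0.01  # true daily standard deviation
changes = [random.gauss(0, sigma) for _ in range(200_000)]

# The average magnitude of a zero-mean Gaussian is sqrt(2/pi)*sigma,
# about 0.798*sigma.
mean_abs = sum(abs(c) for c in changes) / len(changes)
print(mean_abs / sigma)

# Annualizing the crude "average move" with the rule of 16 therefore
# lands about 20% below the true annualized volatility 16*sigma = 0.16.
print(mean_abs * 16)
```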
Consider the Taylor series:

log(1 + y) = y − y²/2 + y³/3 − ...
Taking only the first two terms one has:

CAGR ≈ AR − σ²/2
Volatility thus mathematically represents a drag on the CAGR (formalized as the "volatility tax"). Realistically, most financial assets have negative skewness and leptokurtosis, so this formula tends to be over-optimistic. Some people use the formula:

CAGR ≈ AR − kσ²

for a rough estimate, where k is an empirical factor (typically five to ten).
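The drag can be seen in a two-period example whose arithmetic mean return is 7% and whose volatility is 20%, matching the earlier example; the particular return sequence is an illustrative assumption:

```python
returns = [0.27, -0.13]  # arithmetic mean 7%, population std 20%

ar = sum(returns) / len(returns)
var = sum((r - ar) ** 2 for r in returns) / len(returns)

growth = 1.0
for r in returns:
    growth *= 1 + r
cagr = growth ** (1 / len(returns)) - 1

print(ar, var ** 0.5)  # arithmetic mean about 0.07, volatility about 0.20
print(cagr)            # about 0.0511: lower than the arithmetic mean
print(ar - var / 2)    # the sigma^2/2 approximation gives 0.05
```

The compounded growth rate falls below the arithmetic mean by roughly σ²/2, as the approximation predicts.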
Despite the sophisticated composition of most volatility forecasting models, critics claim that their predictive power is similar to that of plain-vanilla measures, such as simple past volatility, especially out-of-sample, where different data are used to estimate the models and to test them. Other works have agreed, but claim critics failed to correctly implement the more complicated models. Some practitioners and portfolio managers seem to completely ignore or dismiss volatility forecasting models. For example, Nassim Taleb famously titled one of his Journal of Portfolio Management papers "We Don't Quite Know What We are Talking About When We Talk About Volatility". On a similar note, Emanuel Derman expressed his disillusion with the enormous supply of empirical models unsupported by theory. He argues that, while "theories are attempts to uncover the hidden principles underpinning the world around us, as Albert Einstein did with his theory of relativity", we should remember that "models are metaphors - analogies that describe one thing relative to another".
Well known hedge fund managers with expertise in trading volatility include Mark Spitznagel and Nassim Nicholas Taleb of Universa Investments, Paul Britton of Capstone Holdings Group, Andrew Feldstein of Blue Mountain Capital Management, and Nelson Saiers from Saiers Capital.