Convergence Against Expectations
An Exciting but Ultimately Rather Disappointing Investment
Consider a hypothetical investment whose expected1 rate of return is 12.5% per day, independently across days. Over a year, the expected rate of return is an astronomical $1.125^{365} - 1 \approx 5 \times 10^{18}$.
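The compounding arithmetic behind that figure is a one-liner (a quick sanity check, assuming 365 days of compounding):

```python
# Expected daily gross return factor: 1.125 (a 12.5% expected gain per day).
# Expectations of independent factors multiply, so over 365 days the
# expected growth factor is 1.125 ** 365.
expected_daily = 1.125
days = 365

expected_growth = expected_daily ** days
print(f"Expected growth factor over a year: {expected_growth:.3e}")
```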
Would you be surprised to learn that the very same investment can also have a 97% chance of losing money over the year, and will tend to lose more money the longer you invest — and can be guaranteed to eventually lose any amount of money you put in?
In this post, I’ll use a simple toy model to show how this strange phenomenon, which I’ll call convergence against expectations, really can happen — no tricks, no verbal sleight-of-hand. My aim is to explore how we can have two intuitions, seemingly at odds with each other, but both well-founded, about how a stochastic process such as an investment will evolve over time. These dueling intuitions will be reconciled using basic probability theory and simple graphs, as all combatants ought to be.
But when we think about what, exactly, one should do about this investment, we find once again that multiple answers are reasonable.
Along the way, we will see how an expected value, as defined in statistics, can diverge wildly from the values that are most probable. Consequently, one must sometimes choose between predictions that are unbiased, and predictions that are asymptotically consistent. For those of us in fields where both unbiasedness and consistency are considered nearly sacrosanct and usually coincide asymptotically, this dilemma encourages a form of heresy, one that leads down the crooked road to Bayesian decision theory.
However, along this road, some of the graphs are pretty…
Setup: No tricks
We’ll use a standard toy model of a financial asset that evolves multiplicatively. Suppose the initial dollar value of the investment is some fixed, arbitrary $V_0 > 0$, and that each day the value is multiplied by an independent, identically distributed random return factor with mean $1.125$:

$$V_T = V_0 \prod_{t=1}^{T} R_t,$$

where we interpret each $R_t$ as the gross proportional change in the investment’s dollar value on day $t$, with $\mathbb{E}[R_t] = 1.125$.
That’s it for our assumptions; nothing up my sleeves. Now we can start verifying some of my absurd claims.
Exponential Increase? Yes.
The expected proportional change in dollar value on any day $t$ is $\mathbb{E}[R_t] = 1.125$.
For independent random variables, the expectation of the product is simply the product of the expectations:

$$\mathbb{E}[V_T] = V_0 \, \mathbb{E}\!\left[ \prod_{t=1}^{T} R_t \right] = V_0 \prod_{t=1}^{T} \mathbb{E}[R_t] = V_0 \cdot 1.125^{T}.$$

So we see that the expected value of $V_T$ grows exponentially in $T$: the unbiased prediction of the investment’s future value is $V_0 \cdot 1.125^{T}$.
It sounds splendid to be unbiased, and even more splendid to expect exponential growth, so let’s drop the math, quit our jobs, throw in 100 dollars and watch how filthy rich we get in a year:
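A minimal simulation sketch of that experiment. The post’s exact return distribution isn’t restated here, so purely for illustration assume a coin-flip daily factor of 1.8 or 0.45 with equal probability, which has the required mean of 1.125 but a geometric mean of only 0.9:

```python
import random
import statistics

def simulate_final_value(initial=100.0, days=365, rng=None):
    """Simulate one year of the investment, one multiplicative step per day.

    Assumed (illustrative) distribution: each day the value is multiplied
    by 1.8 or 0.45 with equal probability. The arithmetic mean factor is
    1.125, but the geometric mean is sqrt(1.8 * 0.45) = 0.9, so typical
    paths shrink even though the expected value grows.
    """
    rng = rng or random.Random()
    value = initial
    for _ in range(days):
        value *= rng.choice((1.8, 0.45))
    return value

rng = random.Random(42)
finals = [simulate_final_value(rng=rng) for _ in range(1000)]
print(f"Median final value of $100: ${statistics.median(finals):.2e}")
print(f"Sample mean final value:    ${statistics.mean(finals):.2e}")
```

Under these assumed factors, typical runs put the median near $100 \times 0.9^{365}$, on the order of $10^{-15}$ dollars: essentially zero.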
Given that our expectation — our unbiased prediction — was exponential growth, this piddling outcome looks like a computer glitch or a math error. But it’s not. The final value of our $100 investment really is essentially zero, and this is representative of most draws from the stochastic process we have described. But the expected value really is exponentially increasing. What in the name of Pierre-Simon Laplace is going on?
Exponential Decrease? Yes.
In the previous section, we answered the following question:
- How much money do you expect to have gained from this investment in a year?
However, we could have posed a very similar but not identical question:
- Do you expect to have gained money from this investment in a year?
The first question was about the average of $V_T$ over all possible trajectories. The second is about what happens on typical trajectories: the outcomes that are actually probable. For this investment, the two answers diverge completely.
Consider: for any trajectory, the final value is a product of random factors, and taking logarithms turns that product into a sum:

$$\log V_T = \log V_0 + \sum_{t=1}^{T} \log R_t.$$

For our investment, $\mathbb{E}[\log R_t] < 0$ (equivalently, the geometric mean of the daily factors is below one), so by the law of large numbers this sum drifts downward, and typical trajectories show a net decrease.
We can quantify this net decrease more precisely. To start, let’s make a new variable $L_t = \log R_t$, with mean $\mu = \mathbb{E}[L_t] < 0$ and variance $\sigma^2 = \mathrm{Var}(L_t)$, so that

$$\log V_T = \log V_0 + \sum_{t=1}^{T} L_t.$$

Applying our good friend Chebyshev’s Inequality,2 we have an expression that bounds how far the sample mean $\bar{L}_T = \frac{1}{T} \sum_{t=1}^{T} L_t$ is likely to stray from $\mu$:

$$P\left( \left| \bar{L}_T - \mu \right| \geq \epsilon \right) \leq \frac{\sigma^2}{T \epsilon^2},$$

where $\epsilon > 0$ is an arbitrary tolerance.

Taking the complement in probability and substituting $\bar{L}_T = (\log V_T - \log V_0) / T$, we have

$$P\left( V_T < V_0 \, e^{T(\mu + \epsilon)} \right) \geq 1 - \frac{\sigma^2}{T \epsilon^2}.$$

This expression says that as $T$ grows, $V_T$ is ever more likely to lie below $V_0 \, e^{T(\mu + \epsilon)}$. Since $\mu < 0$, we can pick $\epsilon$ small enough that $\mu + \epsilon < 0$,3 making that bounding function an exponentially decreasing function of $T$: with probability approaching one, the investment’s value shrinks toward nothing.
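Here is a quick numerical check of the bound, again assuming for illustration a coin-flip daily factor of 1.8 or 0.45 (mean 1.125, $\mu = \ln 0.9 < 0$); the post’s exact distribution is not restated here:

```python
import math
import random

# Illustrative daily return factors (an assumption, not the post's exact model).
log_factors = [math.log(1.8), math.log(0.45)]
mu = sum(log_factors) / 2                              # log(0.9), about -0.105
var = sum((l - mu) ** 2 for l in log_factors) / 2      # (log 2)^2, about 0.48

T = 2000          # number of daily steps
eps = 0.05        # tolerance; note mu + eps < 0, so the value bound shrinks in T
chebyshev_bound = var / (T * eps ** 2)

# Monte Carlo estimate of P(|sample mean of log-returns - mu| >= eps).
rng = random.Random(0)
trials = 500
exceed = 0
for _ in range(trials):
    mean_log = sum(rng.choice(log_factors) for _ in range(T)) / T
    if abs(mean_log - mu) >= eps:
        exceed += 1
empirical = exceed / trials

print(f"Chebyshev bound: {chebyshev_bound:.4f}, empirical: {empirical:.4f}")
```

As footnote 2 warns, Chebyshev is loose: the empirical exceedance frequency typically lands far below the bound.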
We have established that as $T \to \infty$, $V_T$ converges to zero in probability: with probability approaching one, our investment ends up worth (arbitrarily close to) nothing.

Hmmmmmm. Earlier we found that as $T \to \infty$, the expected value $\mathbb{E}[V_T] = V_0 \cdot 1.125^{T}$ grows without bound. Both facts are true of the very same process. How?
Reconciling our intuitions: Convergence against Expectations
The question, mathematically, is how to have the expectation $\mathbb{E}[V_T]$ diverge to infinity while $P(V_T > \delta) \to 0$ for every $\delta > 0$.

There’s only one way: the sequence of distributions of $V_T$ must pile ever more probability near zero while a vanishing sliver of probability escapes to ever-more-astronomical values. The rare, lucky trajectories grow fast enough to dominate the mean even as their share of all trajectories shrinks toward zero.
To visualize this, we can run 10,000 simulations of trajectories starting at $1, from day 0 onward, and plot the daily 99th percentile of the trajectories as a red line on top of them.
So, on any given day, 99% of trajectories are below the red line, which is far, far below the highest trajectories, a handful of which spike into the millions — pulling up the average.
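That picture can be sketched in code. A minimal version (assuming, as an illustration only, coin-flip daily factors of 1.8 or 0.45, each with probability 1/2, which give the required mean of 1.125; using 5,000 paths rather than 10,000 for speed):

```python
import random

def percentile_simulation(n_paths=5000, days=365, q=0.99, seed=1):
    """Simulate n_paths trajectories starting at $1 and return the
    q-quantile of final values along with all final values."""
    rng = random.Random(seed)
    factors = (1.8, 0.45)  # assumed illustrative daily factors
    finals = []
    for _ in range(n_paths):
        v = 1.0
        for _ in range(days):
            v *= rng.choice(factors)
        finals.append(v)
    finals.sort()
    return finals[int(q * n_paths)], finals

p99, finals = percentile_simulation()
mean_final = sum(finals) / len(finals)
print(f"99th percentile of final values: {p99:.3e}")
print(f"Sample mean of final values:     {mean_final:.3e}")
```

On most seeds the 99th percentile ends far below the starting dollar, while the sample mean is dominated by a handful of huge outliers (and, per footnote 5, is itself poorly estimated).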
Zooming in, we see a pretty latticework of transient exponential increases and decreases; temporarily, the 99th percentile rises out of sight.
However, plotting the daily 99th percentile5 over a longer period, we can see that convergence to zero really does happen6 — eventually, after an intriguing excursion into the thousands (our proof was about limiting behavior, so it’s not contradicted by this short-run behavior). Because of our proof of convergence, we know that we could pick any percentile, however high, and eventually it would converge to zero as above.
But is it a good investment?
Mathematically, the paradox is resolved (or, for the wonderless pedants, never existed). Because $\mathbb{E}[V_T]$ is driven by a vanishingly rare set of explosive trajectories, an exponentially growing mean is perfectly compatible with near-certain convergence to zero. The unbiased prediction $V_0 \cdot 1.125^{T}$ and the consistent prediction of (near-)zero are both correct answers, to different questions.
But this resolution might still seem unsatisfying; we haven’t yet said what to do about the investment. Is it a good investment or not?
The conventional answer (which I think it’s healthy to be skeptical of) is that it depends on your preferences for risk. For large $T$, the investment offers a minuscule probability of a fortune vast enough to pull the mean upward, against a near-certainty of losing almost everything; whether that gamble appeals to you depends on how you weigh those two outcomes.
If your utility function is linear in money — meaning that it has no concavity, and you have no aversion to risk — then it’s easy to verify that the expected utility-gain from the investment is positive, because the expected money-gain is positive. So, in theory, you think it’s worth some positive amount of money to take the gamble for any finite length of time $T$.
If your utility function is concave — meaning that you have some aversion to risk — then you may consider it worth paying money not to take the investment, depending on the degree of concavity. For example, suppose your utility function is logarithmic in money, $U(V) = \log V$. Then your expected utility-gain from holding the investment for $T$ days is

$$\mathbb{E}[\log V_T] - \log V_0 = T \, \mathbb{E}[\log R_t]$$

— and as before, we noted that $\mathbb{E}[\log R_t] < 0$: the expected utility-gain is negative at every horizon, so you should decline the investment.
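A tiny check of the contrast between the two utility functions, under the assumed illustrative coin-flip factors (1.8 or 0.45 with equal probability, mean 1.125; not the post’s exact model):

```python
import math

# Assumed illustrative daily factors (not the post's exact model).
factors = [1.8, 0.45]

expected_money_gain = sum(factors) / 2 - 1.0               # +12.5% per day
expected_log_gain = sum(math.log(r) for r in factors) / 2  # E[log R] = log(0.9)

T = 365
print(f"Expected money-gain per day:       {expected_money_gain:+.4f}")
print(f"Expected log-utility gain per day: {expected_log_gain:+.4f}")
print(f"Expected log-utility gain, 1 year: {T * expected_log_gain:+.2f}")
```

A linear-utility investor sees +0.125 per dollar per day; a log-utility investor sees ln(0.9) ≈ -0.105 per day. The same gamble is attractive to one and repellent to the other.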
Personally, I’m not entirely convinced by this particular model of risk preferences, for reasons I may explore in another post. But however we model them, differences in thinking about risk or uncertainty would explain why, even after the facts are laid before us, we may disagree (even with ourselves) about whether this investment is worth taking.
Conclusion
It’s worth noting that this phenomenon isn’t a “knife-edge result” that depends on the exact numbers I use in my example; convergence against expectations will occur whenever $\mathbb{E}[R_t] > 1$ but $\mathbb{E}[\log R_t] < 0$: that is, whenever the arithmetic mean of the daily return factors exceeds one while their geometric mean falls below one.
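The condition is easy to check for any candidate distribution. For instance (with hypothetical numbers), coin-flip factors of {1.8, 0.45} satisfy it, while {1.5, 0.75} do not, even though both pairs have arithmetic mean 1.125:

```python
import math

def converges_against_expectations(factors):
    """Check both conditions: arithmetic mean of the daily factors > 1,
    and mean log-factor < 0 (geometric mean < 1).
    Assumes equally likely factors, for illustration."""
    arith_mean = sum(factors) / len(factors)
    mean_log = sum(math.log(f) for f in factors) / len(factors)
    return arith_mean > 1 and mean_log < 0

print(converges_against_expectations([1.8, 0.45]))   # True
print(converges_against_expectations([1.5, 0.75]))   # False: E[log R] > 0
```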
Footnotes
In the formal sense↩︎
We could get tighter bounds with Hoeffding’s Inequality, but Chebyshev generalizes better, and the pursuit of maximally tight bounds is a form of masochism that I find tedious.↩︎
To see that the bounding functions are decreasing, note that we can always pick an $\epsilon$ small enough that $\mu + \epsilon < 0$. So as $T$ increases, for small enough $\epsilon$ the bounds are exponential functions of $T$ with a negative coefficient in the exponent, and hence decrease to zero.↩︎
Which applies because the event $|\bar{L}_T - \mu| < \epsilon$ implies $\bar{L}_T < \mu + \epsilon$, and hence $V_T < V_0 \, e^{T(\mu + \epsilon)}$.↩︎
Because the sample mean depends primarily on extremely improbable events, this simple simulation doesn’t estimate it very well for large T, so I don’t plot it.↩︎
Technically, we can’t prove convergence by showing a simulation that seems to converge — it could always diverge again. But apparent convergence in a simulation is empirical evidence that we didn’t botch our proof.↩︎