Uncertainty in Discounting the Future

Categories: environment, economics
Author: Gabriel Lewis
Published: January 20, 2021

Our most important decisions—about mitigating climate change, managing our investments, funding childhood education, eating ice cream—often require weighing present costs and benefits against future ones. Crucially, we are usually pretty uncertain about how much, if at all, to discount the future in favor of the present. A federal Interagency Working Group (IWG) recently picked a single number to try to answer this complicated question: a “discount rate.” This number will determine major U.S. government decisions (even about ice cream) for years to come.

There’s a strong argument to be made that federal policymakers shouldn’t discount the future at all, regardless of whether individuals do or should. But the point of this blog post is to explain what a discount rate is, and to show, using a bit of math (please don’t run away!), that picking a discount rate and ignoring our uncertainty about it necessarily makes us undervalue future costs and benefits. Ignoring uncertainty makes us short-sighted.

Ask an economist about weighing present and future benefits, and they may give you an answer like this:1 if you received 100 dollars now, you could invest it at some rate of return “\(r\)” (think of \(r\) as a small fraction, say 0.03), and in a year you’d have \((1 + r)100\) dollars. But if instead you just received 100 dollars a year from now, well, then you’d just have 100 dollars. With some hand-waving, the economist concludes that any amount of benefit “\(b\)” (measured in dollars) is worth \(1 + r\) times more if we get it now, instead of getting it a year from now. Or turning things around, a benefit \(b\) is worth only \(1/(1 + r)\) times as much if we get it a year from now, instead of now. More generally, the economist says (now gesturing even more wildly), a benefit \(b\) that we’ll experience \(t\) years in the future should be worth \(b/(1 + r)^t\) to us now — provided we have some investment opportunity with a guaranteed rate of return \(r\). In this context, \(r\) is called the “discount rate.” The bigger it is, the more we discount the future.
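To make the formula concrete, here is a minimal R sketch; the function name and the example numbers are mine, purely for illustration.

# Present value today of a benefit b received t years from now,
# assuming a guaranteed annual rate of return r
present_value <- function(b, r, t) {
  b / (1 + r)^t
}

present_value(b = 100, r = 0.03, t = 1)    # about 97.1: 100 dollars next year
present_value(b = 100, r = 0.03, t = 100)  # about 5.2: 100 dollars in a century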

Now, even if we buy this story (and the sneaky shift from “benefit” in general to “dollars” in particular), there’s a problem: we don’t know what to invest in, let alone what rate of return it will actually deliver. In the real world, the discount rate \(r\) is intrinsically uncertain.

Not an obstacle, one might say — we can get all the experts together, analyze all the data, and calculate an expected future rate of return \(\mathbb{E}[r]\) for our best investment, whatever that might be. Here, \(\mathbb{E}\) denotes the expected value, which averages over possible values of \(r\), weighted by their probabilities. Then we can plug \(\mathbb{E}[r]\) into our present value formula, \(b/(1 + \mathbb{E}[r])^t\). This seems to be roughly what the IWG is proposing, and it is certainly what many U.S. federal agencies and other decision-makers often do in practice.

Unfortunately, whatever \(\mathbb{E}[r]\) might be, plugging it in is simply incorrect. We’re interested in an expected present value, \(\mathbb{E}[b/(1 + r)^t]\). Plugging in the expected value of \(r\) delivers something else, \(b/(1 + \mathbb{E}[r])^t\). In fact, the following inequality necessarily holds:

\[\mathbb{E}[b/(1 + r)^t] > b/(1 + \mathbb{E}[r])^t\]

That is, the expected present value of the benefit (on the left) is strictly greater than the number we get by plugging in the expected discount rate (on the right). The reason is that \(1/(1 + r)^t\) is a convex function of \(r\), and the expected value of a convex function is always at least as large as the function evaluated at the expected value (strictly larger whenever there is genuine uncertainty). What’s remarkable about the above inequality is how general it is — it holds regardless of the form our uncertainty takes (regardless of what probability distribution \(r\) has, given the data). Jensen’s Inequality is amazing that way.
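If you’d rather not take Jensen’s word for it, here is a small R sketch. The two distributions for \(r\) below are arbitrary choices of mine, used only to illustrate that the left side exceeds the right side whatever the distribution looks like.

set.seed(1)  # arbitrary seed, for reproducibility
t <- 50
b <- 1

# Draws of r from two different, purely illustrative distributions
r_uniform <- runif(1e5, min = 0.01, max = 0.05)
r_lognorm <- rlnorm(1e5, meanlog = log(0.03), sdlog = 0.3)

# Expected present value vs. plugging the expected rate into the formula:
# in both cases the first number comes out larger than the second
mean(b / (1 + r_uniform)^t)   # E[b/(1+r)^t], uniform r
b / (1 + mean(r_uniform))^t   # b/(1+E[r])^t, uniform r

mean(b / (1 + r_lognorm)^t)   # E[b/(1+r)^t], lognormal r
b / (1 + mean(r_lognorm))^t   # b/(1+E[r])^t, lognormal r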

Ok, one might say, cool math — but is this enough of an underestimate to matter? Yes. Consider some cost \(c\) that will occur in 100 years: storm damage from climate change, for example.

Suppose we believe there are three possible values for \(r\): 0.01, 0.03, and 0.05, all equally probable — this is roughly the range that the IWG is actually considering, though we’re simplifying by picking only three values. Since we’re uncertain about \(r\), we should calculate the expected present value \(\mathbb{E}[c/(1 + r)^{100}] = c(1/1.01^{100} + 1/1.03^{100} + 1/1.05^{100})/3\), which is about \(0.14c\). But suppose instead of doing it the right way, we just plugged in the expected rate of return \(\mathbb{E}[r] = 0.03\), ignoring our uncertainty about that rate. This would give us \(c/(1.03)^{100}\), or about \(0.05c\).
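A couple of lines of R reproduce this back-of-the-envelope calculation (the variable names are just for illustration):

# Three equally likely discount rates and a cost arriving in 100 years
rates <- c(0.01, 0.03, 0.05)
horizon <- 100

# Right way: average the present-value factor over the possible rates
mean(1 / (1 + rates)^horizon)   # about 0.14

# Wrong way: plug the average rate into the formula
1 / (1 + mean(rates))^horizon   # about 0.05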

So the actual expected present value of the future cost is about 2.75 times higher than our mistaken estimate. A more realistic calculation would use a continuous range of possible discount rates, with probabilities peaked around our best prediction; that calculation reaches the same conclusion.

Show me the more realistic calculation!

We can model our uncertainty about \(r\) by assigning it a Beta probability distribution that puts approximately 80% probability between \(r=0.01\) and \(r=0.05\), with an expected value of approximately \(0.03\).

years <- 100

# Set the (approximate) mean and the concentration of the Beta distribution for r
beta_mean <- 0.03
beta_concentration <- 100

# Check the 10% and 90% quantiles: about 80% of the probability lies between them
(beta_quantiles <- qbeta(p = c(0.1, 0.90), shape1 = beta_mean*beta_concentration, shape2 = beta_concentration))
[1] 0.01085267 0.05133567
# Monte Carlo estimate of the expected discount factor E[1/(1+r)^100]
discount_out <- mean(1/(1 + rbeta(n = 10000, shape1 = beta_mean*beta_concentration, shape2 = beta_concentration))^years)

This gives us \(\mathbb{E}[1/(1 + r)^{100}] \approx 0.129\), roughly two and a half times the naive plug-in estimate of about \(0.05\).

It would be catastrophic to be so wrong about the future costs of climate change. If you think we should discount future costs and benefits, then your uncertainty about the discount rate itself must be part of your calculations, or you will necessarily discount the future too much.

Footnotes

  1. What follows is, of course, a simplification that captures the essence of the economic argument.↩︎


License: Attribution-NonCommercial 4.0 International. (You may use my work only with proper citation and for non-commercial purposes.)