## Epistemic and aleatoric uncertainty

There was some discussion in comments recently about the distinction between aleatoric uncertainty (physical probabilities such as coin flips) and epistemic uncertainty (representing ignorance rather than an active probability model).

We’ve talked about this before, but not everyone was reading this blog 15 years ago, so I’ll cover it again here. For a very similar take, I also recommend this article by Tony O’Hagan from 2004. These ideas weren’t new back then either!

Consider the following two probabilities:

(1) p1 = the probability that a particular coin will land “heads” in its next flip;

(2) p2 = the probability that the world’s greatest boxer would defeat the world’s greatest wrestler in a fight to the death.

The first of these probabilities is essentially exactly 1/2. Let us suppose, for the sake of argument, that the second probability is also 1/2. Or, to put it more formally, suppose that we have uncertainty about p2 that leaves us equally likely to choose either option. In a Bayesian sense, assume the mean of our prior distribution, E(p2), equals 1/2.

In Bayesian inference, p1 = p2 = 1/2. Which doesn't seem quite right, since we know p1 much better than we know p2. More generally, this seems like a problem with the representation of uncertainty by probability. To put it another way, the integral of a probability is a probability, and once we've integrated out the uncertainty in p2, it's just plain 1/2.
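The point about integrating out uncertainty can be checked with a quick simulation. Below is a minimal sketch: for p2 we assume, purely as an example, a Beta(2, 2) prior, which has mean 1/2 but nonzero spread. The marginal probability of the event comes out the same as for the known-fair coin, even though the epistemic uncertainty about p2 is much larger.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Aleatoric case: p1 is known to be 1/2 (a fair coin).
p1 = 0.5
heads = rng.random(n) < p1

# Epistemic case: p2 is unknown. As an illustrative assumption,
# give it a Beta(2, 2) prior, which has mean 1/2 but variance 0.05.
p2_draws = rng.beta(2, 2, size=n)

# Integrate out the uncertainty in p2 by simulation: draw p2,
# then draw the fight outcome given that p2.
boxer_wins = rng.random(n) < p2_draws

print(heads.mean())       # close to 0.5
print(boxer_wins.mean())  # also close to 0.5: the marginal is E(p2)
print(p2_draws.var())     # but the epistemic spread is far from zero
```

Both marginal probabilities are 1/2; the two cases differ only in the spread of the prior on the underlying probability, which the single marginal number cannot show.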