
More background on our research on constructing an informative prior from a corpus of comparable studies

Erik van Zwet writes:

The post (“The Shrinkage Trilogy: How to be Bayesian when analyzing simple experiments”) didn’t get as many comments as I’d hoped, so I wrote a short explainer and a reading guide to help people understand what we’re up to.

All three papers have the same very simple model. We abstract a study as a triple (beta, b, s), where beta is the true effect and b is an unbiased, normally distributed estimator with standard error s. We also define the z-value b/s and the signal-to-noise ratio (SNR) beta/s. The SNR is really important because it directly determines the (achieved) power, the type M error (exaggeration ratio), the type S error, and more.

The z-value is the sum of the SNR and standard normal noise. So the distribution of the z-value is the convolution of the distribution of the SNR and N(0,1). From the distribution of the z-value we can recover the distribution of the SNR by deconvolution.
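
As a quick sanity check on this convolution identity, here is a small simulation. The SNR distribution used here (a normal) is purely hypothetical, chosen for illustration; the N(0,1) noise is from the model above:

```python
import numpy as np

# The model: z = SNR + independent N(0,1) noise, so the distribution of z
# is the SNR distribution convolved with N(0,1). In particular,
# Var(z) = Var(SNR) + 1.
rng = np.random.default_rng(0)
n = 1_000_000

snr = rng.normal(loc=1.0, scale=2.0, size=n)  # hypothetical SNR distribution
z = snr + rng.standard_normal(n)              # z-value = SNR + N(0,1) noise

print(np.var(z))  # close to 2.0**2 + 1 = 5.0
```

Deconvolution runs this identity in reverse: given the distribution of z, strip out the N(0,1) component to recover the distribution of the SNR.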

Now suppose we have a particular study with an estimate b and standard error s. We propose to embed this study in a large collection (corpus) of similar studies with estimates b_j and standard errors s_j. From the pairs (b_j,s_j) we can estimate the distribution of the z-value and then (by deconvolution) the distribution of the SNR. If we scale the distribution of the SNR by the standard error of the study of interest, we get a prior for the true effect (beta) of that study.
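
Here is a minimal numerical sketch of that recipe. For illustration I assume a single normal fits the corpus z-values, so the deconvolution is just a variance subtraction; the papers themselves fit a more flexible mixture, and the corpus below is simulated, not real data:

```python
import numpy as np

def snr_prior_from_corpus(b_j, s_j, s_new):
    """Toy version of the recipe: fit N(mu, tau^2) to the corpus z-values,
    deconvolve the N(0,1) noise, and scale by the new study's standard error.
    Returns (prior mean, prior sd) for beta of the new study."""
    z = np.asarray(b_j) / np.asarray(s_j)
    mu, var_z = z.mean(), z.var()
    var_snr = max(var_z - 1.0, 0.0)  # deconvolution: subtract noise variance
    return s_new * mu, s_new * np.sqrt(var_snr)

# A fake corpus just to exercise the function: true SNRs ~ N(0.5, 1.5^2).
rng = np.random.default_rng(1)
m = 50_000
snr = rng.normal(0.5, 1.5, size=m)
s_j = rng.uniform(0.5, 2.0, size=m)
b_j = (snr + rng.standard_normal(m)) * s_j

print(snr_prior_from_corpus(b_j, s_j, s_new=2.0))  # roughly (1.0, 3.0)
```

The scaling step at the end is what turns a unitless statement about z-values into a prior on the scale of the study of interest.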

That’s the whole idea, but there are some interesting things along the way:

paper #1
Theorem 1 proves a claim of Andrew’s that the type M error is large when the SNR – or, equivalently, the power – is low.
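
The claim is easy to check by simulation (this sketch is mine, not the paper’s proof). Taking s = 1, so that b = z and beta = SNR, we can estimate the exaggeration ratio among statistically significant results:

```python
import numpy as np

def type_m(snr, n=1_000_000, seed=0):
    """Monte Carlo estimate of E[|b| / |beta| | significant] with s = 1."""
    rng = np.random.default_rng(seed)
    z = snr + rng.standard_normal(n)   # estimates with standard error 1
    sig = np.abs(z) > 1.96             # two-sided test at the 5% level
    return np.abs(z[sig]).mean() / abs(snr)

print(type_m(0.5))  # low SNR (low power): exaggeration of roughly 5x
print(type_m(3.0))  # high SNR (high power): close to 1
```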

Proposition 2 formalizes another of Andrew’s posts.

paper #2
On p. 4 we propose a new point of view on what it means for one prior to be more informative than another.

Theorem 2 says that scaling the distribution of the SNR to get a prior for beta (as we’re proposing) is the only way to ensure that the posterior inference is unaffected by changes of measurement unit.
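
The invariance itself is easy to illustrate numerically (my check, not the paper’s proof; the SNR distribution here is a hypothetical normal). A change of measurement unit multiplies b and s by the same constant c and leaves z = b/s unchanged, so a prior built by scaling the SNR distribution by s rescales by exactly c, and the posterior simply rescales along with it:

```python
import numpy as np

def posterior(b, s, prior_mean, prior_sd):
    # Normal-normal conjugate update for beta, given b ~ N(beta, s^2).
    w = prior_sd**2 / (prior_sd**2 + s**2)
    return w * b + (1 - w) * prior_mean, np.sqrt(w) * s

b, s, c = 2.0, 1.0, 10.0      # c: unit change, e.g. cm -> mm
snr_mean, snr_sd = 0.5, 1.5   # hypothetical fitted SNR distribution

m1, sd1 = posterior(b, s, s * snr_mean, s * snr_sd)
m2, sd2 = posterior(c * b, c * s, c * s * snr_mean, c * s * snr_sd)
print(m2 / m1, sd2 / sd1)  # both ratios are c = 10: same inference, new units
```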

On p. 6 we discuss the anthropic principle in the context of our method.

Theorem 3. We’re proposing to scale the distribution of the SNR by the standard error. But then the prior for beta becomes dependent on the sample size. That is un-Bayesian and does not necessarily yield a consistent estimate. Theorem 3 says that depending on the shape of the prior, consistent estimation is still possible.

Section 5.3 offers a proposal for the “Edlin factor”.

paper #3
Figure 1 and Table 1. We estimate the distribution of the z-value and (by deconvolution) the distribution of the SNR from 20,000 pairs (b_j,s_j) from RCTs in the Cochrane database.

Figure 2. We can transform the distribution of the SNR into the distribution of the (achieved) power. We find that the (achieved) power is typically quite low.
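
The transformation is explicit: for a two-sided test at the 5% level, the achieved power of a study with a given SNR is P(|z| > 1.96) with z ~ N(SNR, 1). A quick sketch:

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def power(snr):
    """Achieved power of a two-sided 5% test, given z ~ N(snr, 1)."""
    return norm_cdf(-1.96 - snr) + norm_cdf(-1.96 + snr)

print(round(power(0.0), 3))   # 0.05, the type I error rate
print(round(power(2.80), 3))  # roughly 0.80, the conventional target
```

Applying this function to draws from the estimated SNR distribution gives the distribution of achieved power.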

Figure 4. From the distribution of the SNR, we can also derive the distribution of the type M error. We show the conditional distribution of the exaggeration |b|/|beta| in the left panel of Figure 4. In the right panel, we show how our method fixes that.
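
To illustrate the kind of fix the right panel shows, here is a simulation in the same spirit, assuming for simplicity a normal prior rather than the fitted mixture from the paper. With prior beta ~ N(0, tau^2 s^2) and likelihood b ~ N(beta, s^2), the posterior mean shrinks b by the factor tau^2 / (tau^2 + 1), which takes the exaggeration of significant estimates back toward 1:

```python
import numpy as np

rng = np.random.default_rng(0)
n, tau, s = 1_000_000, 1.5, 1.0

beta = rng.normal(0, tau * s, n)              # true effects from the prior
b = beta + rng.normal(0, s, n)                # unbiased noisy estimates
shrunk = b * tau**2 / (tau**2 + 1)            # posterior mean of beta

sig = np.abs(b / s) > 1.96                    # statistically significant studies
r_raw = np.abs(b[sig]).mean() / np.abs(beta[sig]).mean()
r_shrunk = np.abs(shrunk[sig]).mean() / np.abs(beta[sig]).mean()
print(r_raw)     # above 1: raw significant estimates are exaggerated
print(r_shrunk)  # near 1: shrinkage removes the exaggeration on average
```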

I replied that, realistically, it’s tough to get comments sometimes, as there are hard problems to think about.

Erik responded:

I guess you’re right about these things being difficult. But if anyone should understand, it’s your readers. You’ve been telling them about the trouble with noisy estimates for years now! That’s why I tried to link up those papers to some of your earlier posts. Maybe that will help to provide the context.

Good point. The audience of this blog should indeed be receptive to these ideas.
