Gabriel Weindel writes:
I am a PhD student in psychology and I have a question about Bayesian statistics. I want to compare two posterior distributions of parameters estimated from a (hierarchical) cognitive model fitted separately to two dependent variables (so the two fits are completely separate). One fit is to a DV that allegedly contains psychological processes X and Y, and the other is to a DV that contains only X. The test is to check whether the cognitive model ‘notices’ the removal of Y selectively in the parameter that is supposed to capture this process!
My take was that, since I have access to the posterior distribution of the population parameters from both fits, I can simply compute the overlap (or an equivalent measure) between the two posterior distributions: if the overlap is high, conclude there is strong evidence that the true parameters underlying the two DVs are the same; if it is low or near zero, conclude there is little to no such evidence.
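Concretely, something along these lines, assuming the posterior draws for the population parameter from each fit are available as one-dimensional arrays (an overlap coefficient computed from two kernel density estimates; the function name is just illustrative):

```python
import numpy as np
from scipy.stats import gaussian_kde

def posterior_overlap(draws_a, draws_b, grid_size=512):
    """Overlap coefficient between two sets of posterior draws,
    estimated from kernel density estimates on a common grid."""
    kde_a = gaussian_kde(draws_a)
    kde_b = gaussian_kde(draws_b)
    lo = min(draws_a.min(), draws_b.min())
    hi = max(draws_a.max(), draws_b.max())
    grid = np.linspace(lo, hi, grid_size)
    # Integral of the pointwise minimum of the two densities:
    # 1 means the posteriors coincide, 0 means they do not overlap at all.
    return np.trapz(np.minimum(kde_a(grid), kde_b(grid)), grid)
```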
But my senior co-authors disagree with me, and reviewers probably will too: first, because this might be wrong, and second, because it obviously goes against most of the statistics used in psychology and elsewhere, where you need a criterion to decide between a null and an alternative hypothesis and where you rarely have access to a posterior distribution of the population parameter. To me, however, it seems to be both the most desirable and the most valid solution.
Does this reasoning seem valid to you?
My quick answer is that I don’t think it makes sense to compare posterior distributions. Instead I think you should fit one larger model that includes both predictors.
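One way to set up such a joint fit, as a minimal sketch: a single hierarchical model over both DVs with a condition indicator, so that the difference in the parameter of interest gets its own posterior. The likelihood below is a deliberately simplified normal model standing in for the actual cognitive model, and the names (theta, delta, cond, subj) are illustrative, not taken from Weindel's analysis.

```python
import pymc as pm

def fit_joint_model(y, cond, subj, n_subj):
    """One joint hierarchical fit across both DVs.
    y:    all observations from both DVs, stacked
    cond: 0 for the DV with process X only, 1 for the DV with X and Y
    subj: integer subject index for each observation
    """
    with pm.Model():
        # Population-level parameter for the process of interest under the X-only DV
        theta = pm.Normal("theta", mu=0.0, sigma=1.0)
        # Shift in that parameter when process Y is present;
        # its posterior directly addresses "does the model notice the removal of Y?"
        delta = pm.Normal("delta", mu=0.0, sigma=1.0)
        # Hierarchical subject-level deviations
        sigma_subj = pm.HalfNormal("sigma_subj", sigma=1.0)
        subj_eff = pm.Normal("subj_eff", mu=0.0, sigma=sigma_subj, shape=n_subj)
        # Simplified observation model in place of the cognitive model's likelihood
        sigma_obs = pm.HalfNormal("sigma_obs", sigma=1.0)
        pm.Normal("obs", mu=theta + delta * cond + subj_eff[subj],
                  sigma=sigma_obs, observed=y)
        return pm.sample()
```

The posterior for delta then carries the comparison of interest, with the uncertainty from both conditions propagated in one place, rather than eyeballing the overlap of two separately estimated posteriors.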
Weindel responds:
I don’t see why it doesn’t make sense. We had thought about fitting a larger model, but we would then have to add a dummy variable (DV1 = 0, DV2 = 1), and the two predictors would be highly correlated because they share a process (r = .85). Wouldn’t that also be a problem?
My reply: Sure, when two predictors are highly correlated, then it’s hard from the data alone to tell them apart. That’s just the way it is!
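For intuition, a quick simulation with made-up numbers (predictor correlation 0.85, as in the example; least-squares standard errors stand in for posterior widths under flat priors): the individual coefficients are poorly pinned down, but their sum is estimated much more precisely, because the data speak to the combined effect while saying little about how to split it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
r = 0.85
# Two predictors with correlation roughly 0.85
x1 = rng.normal(size=n)
x2 = r * x1 + np.sqrt(1 - r**2) * rng.normal(size=n)
y = 1.0 * x1 + 1.0 * x2 + rng.normal(size=n)

X = np.column_stack([np.ones(n), x1, x2])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat
sigma2 = resid @ resid / (n - X.shape[1])
cov = sigma2 * np.linalg.inv(X.T @ X)  # sampling covariance of the coefficients

se = np.sqrt(np.diag(cov))
# Individual slopes have inflated standard errors because x1 and x2
# carry largely the same information ...
print("se(beta1), se(beta2):", se[1], se[2])
# ... but their sum is much more precise, since the two estimates
# are strongly negatively correlated.
print("se(beta1 + beta2):  ", np.sqrt(cov[1, 1] + cov[2, 2] + 2 * cov[1, 2]))
```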