Archive of posts filed under the Bayesian Statistics category.

## Progress!

This came in a mass email: Statistical Horizons is excited to present Applied Bayesian Data Analysis, taught by Dr. Roy Levy on Thursday, February 18–Saturday, February 20. In this seminar, you will get both a practical and theoretical introduction to Bayesian methods in just 3 days. Topics include: model construction, specifying prior distributions, graphical representation […]

## He wants to test whether his distribution has infinite variance. I have other ideas . . .

Evan Warfel asks a question: Let’s say that a researcher is collecting data on people for an experiment. Furthermore, it just so happens that due to the data collection procedure, data is gathered and recorded in 100-person increments. (Making it so that the researcher effectively has a time series, and at some point t, they […]

## Probability problem involving multiple coronavirus tests in the same household

Mark Tuttle writes: Here is a potential homework problem for your students. The following is a true story. Mid-December, we have a household with five people. My wife and myself, and three who arrived from elsewhere. Subsequently, various diverse symptoms ensue – nothing too serious, but everyone is concerned, obviously. Video conference for all five […]
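The excerpt cuts off before the full problem statement, but the core calculation in any such testing puzzle is Bayes' rule: combine a prior probability of infection with the test's sensitivity and specificity. A minimal sketch, with hypothetical numbers that are not from the post:

```python
def posterior_infected(prior, sensitivity, specificity):
    """P(infected | positive test) via Bayes' rule."""
    p_pos_given_inf = sensitivity          # true positive rate
    p_pos_given_not = 1 - specificity      # false positive rate
    numer = p_pos_given_inf * prior
    denom = numer + p_pos_given_not * (1 - prior)
    return numer / denom

# Hypothetical: 10% prior prevalence, 80% sensitivity, 95% specificity
print(round(posterior_infected(0.10, 0.80, 0.95), 3))  # → 0.64
```

With five household members and multiple tests, the same logic applies sequentially, using each person's posterior as the prior for the next piece of evidence (assuming conditional independence of the tests, which is itself debatable for correlated household exposure).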

## Responding to Richard Morey on p-values and inference

Jonathan Falk points to this post by Richard Morey, who writes: I [Morey] am convinced that most experienced scientists and statisticians have internalized statistical insights that frequentist statistics attempts to formalize: how you can be fooled by randomness; how what we see can be the result of biasing mechanisms; the importance of understanding sampling distributions. […]

## Probabilistic feature analysis of facial perception of emotions

With Michel Meulders, Paul De Boeck, and Iven Van Mechelen, from 2005 (but the research was done several years earlier): According to the hypothesis of configural encoding, the spatial relationships between the parts of the face function as an additional source of information in the facial perception of emotions. The paper analyses experimental data on […]

## Confidence intervals, compatibility intervals, uncertainty intervals

“Communicating uncertainty is not just about recognizing its existence; it is also about placing that uncertainty within a larger web of conditional probability statements. . . . No model can include all such factors, thus all forecasts are conditional.” — us (2020). A couple years ago Sander Greenland and I published a discussion about renaming […]

## More background on our research on constructing an informative prior from a corpus of comparable studies

Erik van Zwet writes: The post (“The Shrinkage Trilogy: How to be Bayesian when analyzing simple experiments”) didn’t get as many comments as I’d hoped, so I wrote a short explainer and a reading guide to help people understand what we’re up to. All three papers have the same very simple model. We abstract a […]

## Many years ago, when he was a baby economist . . .

Jonathan Falk writes: Many years ago, when I was a baby economist, a fight broke out in my firm between two economists. There was a question as to whether a particular change in the telecommunications laws had spurred productivity improvements or not. There was a trend of x% per year in productivity improvements that had gone […]

## Instead of comparing two posterior distributions, just fit one model including both possible explanations of the data.

Gabriel Weindel writes: I am a PhD student in psychology and I have a question about Bayesian statistics. I want to compare two posterior distributions of parameters estimated from a (hierarchical) cognitive model fitted on two dependent variables (hence both fits are completely separated). One fit is from a DV allegedly containing psychological process X […]

## “Accounting Theory as a Bayesian Discipline”

David Johnstone writes: The Bayesian logic of probability, evidence and decision is the presumed rule of reasoning in analytical models of accounting disclosure. Any rational explication of the decades-old accounting notions of “information content”, “value relevance”, “decision useful”, and possibly conservatism, is inevitably Bayesian. By raising some of the probability principles, paradoxes and surprises in […]

## How to figure out what went wrong with this model?

Tony Hu writes: Could you please take a look at an example of my model fitting? I used a very flexible model, a Bayesian multivariate adaptive regression spline. The result is as follows: I fitted the corn yield data with multiple predictors for counties of the US (the figure shows results of Belmont County in Ohio). My advisor […]

## Fisher vs. Neyman-Pearson hypothesis testing

You’ll sometimes see discussions of the differences between two different approaches to classical statistical null hypothesis testing. In the Fisher approach, you set up a null hypothesis and then you compute the p-value, which you use as a measure of evidence against the hypothesis. In the Neyman-Pearson approach, you define the p-value as a function […]
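The contrast can be made concrete with a two-sided z-test. This sketch uses only the standard library and an arbitrary test statistic; the point is the two uses of the same p-value, not the particular numbers:

```python
from math import erf, sqrt

def z_test_p_value(z):
    """Two-sided p-value for a standard-normal test statistic."""
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

z = 2.0
p = z_test_p_value(z)

# Fisher: report p itself as a graded measure of evidence against H0.
print(f"p = {p:.4f}")

# Neyman-Pearson: fix alpha before seeing the data, then make a
# binary decision with controlled long-run error rates.
alpha = 0.05
print("reject H0" if p < alpha else "fail to reject H0")
```

In the Fisher reading, p ≈ 0.046 is moderate evidence to be weighed with everything else; in the Neyman-Pearson reading, it simply falls below the pre-registered 0.05 threshold and triggers rejection.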

## How to convince yourself that multilevel modeling (or, more generally, any advanced statistical method) has benefits?

Someone who would like to remain anonymous writes: I have read your blog posts discussing the benefits of Bayesian inference and partial pooling from a multilevel modeling approach. Recently, I’ve begun thinking about designing a simulation to prove to myself that these methods provide superior performance. Here’s what I have thus far: Perhaps I could […]
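One way to run exactly that kind of simulation, in the simplest setting I can think of (normal group effects, known variances, shrinkage toward a grand mean of zero): generate many small groups, then compare the raw group means against partially pooled estimates by RMSE. This is a sketch under those assumptions, not the anonymous correspondent's design:

```python
import random, statistics
from math import sqrt

random.seed(1)
J, n = 200, 5           # number of groups, observations per group
tau, sigma = 1.0, 2.0   # between-group sd, within-group sd

theta = [random.gauss(0, tau) for _ in range(J)]        # true group effects
ybar = [statistics.mean(random.gauss(t, sigma) for _ in range(n))
        for t in theta]                                  # no-pooling estimates

se2 = sigma**2 / n                   # sampling variance of a group mean
shrink = tau**2 / (tau**2 + se2)     # pooling factor with known variances
partial = [shrink * y for y in ybar] # shrink toward the grand mean (0 here)

def rmse(est):
    return sqrt(statistics.mean((e - t)**2 for e, t in zip(est, theta)))

print(f"no pooling:      {rmse(ybar):.3f}")
print(f"partial pooling: {rmse(partial):.3f}")
```

On essentially every run the partially pooled estimates win, which is the textbook shrinkage result the simulation is meant to make visceral; a full multilevel model would additionally estimate tau and sigma rather than assume them known.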

## Is the right brain hemisphere more analog and Bayesian?

Oliver Schultheiss writes: I recently commented on one of your posts (I forgot which one) with a reference to evidence suggesting that the right brain hemisphere may be in a better position to handle numbers and probabilistic predictions. Yesterday I came across the attached paper by Filipowicz, Anderson, & Danckert (2016) that may be of some […]

## From “Mathematical simplicity is not always the same as conceptual simplicity” to scale-free parameterization and its connection to hierarchical models

I sent the following message to John Cook: This post popped up, and I realized that the point that I make (“Mathematical simplicity is not always the same as conceptual simplicity. A (somewhat) complicated mathematical expression can give some clarity, as the reader can see how each part of the formula corresponds to a different […]

## Hierarchical stacking, part II: Voting and model averaging

(This post is by Yuling) Yesterday I advertised our new preprint on hierarchical stacking. Apart from the methodology development, perhaps I could draw your attention to the analogy between model averaging/selection and voting systems. Model selection = we have multiple models to fit the data and we choose the best candidate model. Model […]

## Hierarchical stacking

(This post is by Yuling) Gregor Pirš, Aki, Andrew, and I wrote: Stacking is a widely used model averaging technique that yields asymptotically optimal predictions among linear averages. We show that stacking is most effective when the model predictive performance is heterogeneous in inputs, so that we can further improve the stacked mixture by a […]
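To see what "asymptotically optimal predictions among linear averages" means in the plain (non-hierarchical) case: stacking chooses simplex weights to maximize the average log of the weighted mixture of leave-one-out predictive densities. A toy sketch with two models and hypothetical predictive densities (the numbers are invented; real stacking would use cross-validated densities from fitted models):

```python
from math import log

# Hypothetical leave-one-out predictive densities of two models at 6 points.
# Model 1 predicts the first half well, model 2 the second half.
p1 = [0.9, 0.8, 0.7, 0.1, 0.2, 0.1]
p2 = [0.1, 0.2, 0.1, 0.9, 0.8, 0.7]

def score(w):
    """Log score of the mixture w*p1 + (1-w)*p2 over all points."""
    return sum(log(w * a + (1 - w) * b) for a, b in zip(p1, p2))

# Grid search over the 1-D simplex; real implementations optimize directly.
best_score, best_w = max((score(w / 100), w / 100) for w in range(101))
print(f"stacking weight for model 1: {best_w:.2f}")
```

Because the two models are good on complementary halves of the data, the stacked mixture splits the weight rather than selecting one winner; hierarchical stacking goes further by letting the weights vary with the input.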

## Inferring well arsenic dynamics from field kits

(This post is by Yuling, not Andrew) Rajib Mozumder, Benjamin Bostick, Brian Mailloux, Charles Harvey, Andrew, Alexander van Geen, and I have arXived a new paper, “Making the most of imprecise measurements: Changing patterns of arsenic concentrations in shallow wells of Bangladesh from laboratory and field data”. Its abstract reads: Millions of people in Bangladesh drink […]

## Webinar: Functional uniform priors for dose-response models

This post is by Eric. This Wednesday, at 12 pm ET, Kristian Brock is stopping by to talk to us about functional uniform priors for dose-response models. You can register here. Abstract Dose-response modeling frequently employs non-linear regression. Functional uniform priors are distributions that can be derived for parameters that convey approximate uniformity over the […]

## Simulation-based calibration: Two theorems

Throat-clearing OK, not theorems. Conjectures. Actually not even conjectures, because for a conjecture you have to, y’know, conjecture something. Something precise. And I got nothing precise for you. Or, to be more precise, what is precise in this post is not new, and what is new is not precise. Background OK, first for the precise […]
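For readers new to the precise background part: simulation-based calibration checks a posterior sampler by repeatedly drawing a parameter from the prior, simulating data from it, sampling the posterior, and recording the rank of the true parameter among the posterior draws; a calibrated sampler yields uniform ranks. A sketch for a conjugate normal model, where exact posterior draws are available so the check should pass by construction:

```python
import random

random.seed(0)

# SBC for a conjugate model: prior theta ~ N(0,1), data y ~ N(theta,1).
# The exact posterior is N(y/2, 1/2), so we can draw from it directly.
L, M = 20, 1000          # posterior draws per replication, replications
ranks = []
for _ in range(M):
    theta = random.gauss(0, 1)                  # draw from the prior
    y = random.gauss(theta, 1)                  # simulate one observation
    post = [random.gauss(y / 2, 0.5 ** 0.5) for _ in range(L)]
    ranks.append(sum(p < theta for p in post))  # rank in 0..L

# Under calibration, ranks are uniform on {0, ..., L}.
expected = M / (L + 1)
counts = [ranks.count(r) for r in range(L + 1)]
print(max(abs(c - expected) for c in counts))   # small deviation expected
```

Replacing the exact posterior draws with output from an approximate sampler (and seeing non-uniform ranks) is exactly how SBC detects a miscalibrated algorithm.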