Archive of entries posted by Yuling

Hierarchical stacking, part II: Voting and model averaging

(This post is by Yuling) Yesterday I advertised our new preprint on hierarchical stacking. Apart from the methodology development, perhaps I can draw some of your attention to the analogy between model averaging/selection and voting systems. Model selection = we have multiple models to fit the data and we choose the best candidate model. Model […]
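To make the analogy concrete, here is a toy sketch (my own illustration, not from the paper; the scores and predictions are made up, and the softmax weighting is just one convenient choice, not the optimized stacking weights):

import numpy as np

scores = np.array([-1.2, -0.9, -1.5])   # hypothetical log scores for three models
preds = np.array([0.3, 0.6, 0.4])       # hypothetical point predictions

# model selection: winner takes all
selected = preds[np.argmax(scores)]

# model averaging: every model gets a (weighted) vote
weights = np.exp(scores - scores.max())
weights /= weights.sum()
averaged = weights @ preds

print(selected, averaged)

Selection throws away the two losing models entirely; averaging lets them contribute in proportion to their weights, which is the sense in which it behaves like a voting system.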

Hierarchical stacking

(This post is by Yuling) Gregor Pirš, Aki, Andrew, and I wrote: Stacking is a widely used model averaging technique that yields asymptotically optimal predictions among linear averages. We show that stacking is most effective when the model predictive performance is heterogeneous in inputs, so that we can further improve the stacked mixture by a […]
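For readers new to stacking, here is a minimal sketch of the ordinary (input-independent) version, assuming we already have a matrix lpd[i, k] of leave-one-out log predictive densities for data point i under model k; the hierarchical version in the paper lets the weights vary with the input, which this sketch does not attempt:

import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp, softmax

rng = np.random.default_rng(0)
lpd = rng.normal(-1.0, 0.5, size=(100, 3))   # placeholder LOO log predictive densities

def neg_stacking_objective(alpha):
    # unconstrained alpha is mapped to simplex weights via softmax
    log_w = np.log(softmax(alpha))
    # maximize sum_i log( sum_k w_k * exp(lpd[i, k]) )
    return -logsumexp(lpd + log_w, axis=1).sum()

res = minimize(neg_stacking_objective, np.zeros(3), method="BFGS")
print(softmax(res.x))   # stacking weights on the simplex

The objective is the log score of the stacked mixture, and the asymptotic optimality mentioned in the abstract refers to this weighted combination being the best linear average under that score.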

Inferring well arsenic dynamics from field kits

(This post is by Yuling, not Andrew) Rajib Mozumder, Benjamin Bostick, Brian Mailloux, Charles Harvey, Andrew, Alexander van Geen, and I have arXived a new paper, “Making the most of imprecise measurements: Changing patterns of arsenic concentrations in shallow wells of Bangladesh from laboratory and field data”. Its abstract reads: Millions of people in Bangladesh drink […]

The likelihood principle in model checking and model evaluation

(This post is by Yuling) The likelihood principle is often phrased as an axiom in Bayesian statistics. It applies when we are (only) interested in estimating an unknown parameter $\theta$, and there are two data generating experiments both involving $\theta$, each having observable outcomes $y_1$ and $y_2$ and likelihoods $p_1(y_1 \mid \theta)$ […]
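For context, since the excerpt is cut off: the standard statement is that if the two observed likelihoods are proportional as functions of $\theta$,

\[
p_1(y_1 \mid \theta) \propto p_2(y_2 \mid \theta),
\]

then the two experiments should lead to identical inference about $\theta$, no matter how differently the data were collected.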

From monthly return rate to importance sampling to path sampling to the second law of thermodynamics to metastable sampling in Stan

(This post is by Yuling, not Andrew, though many of the ideas originated with Andrew.) This post is intended to advertise our new preprint Adaptive Path Sampling in Metastable Posterior Distributions by Collin, Aki, Andrew, and me, in which we develop an automated implementation of path sampling and adaptive continuous tempering. But I have recently been reading a writing book […]
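The core identity behind path sampling (thermodynamic integration) is easy to demo on a toy problem. The sketch below is my own construction, not the paper's adaptive implementation: it estimates the log normalizing constant of an unnormalized Gaussian by integrating the expected derivative of the log density along a geometric path from a standard normal, sampling each intermediate distribution exactly rather than by MCMC:

import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 2.0, 1.5   # target q1(x) = exp(-(x - mu)^2 / (2 sigma^2)), true Z1 = sigma * sqrt(2 pi)

def log_q0(x):         # reference: standard normal density, Z0 = 1
    return -0.5 * x**2 - 0.5 * np.log(2 * np.pi)

def log_q1(x):         # unnormalized target
    return -0.5 * ((x - mu) / sigma) ** 2

lambdas = np.linspace(0.0, 1.0, 21)
means = []
for lam in lambdas:
    # the geometric path q0^(1 - lam) * q1^lam is itself Gaussian here,
    # so each intermediate distribution can be sampled exactly
    prec = (1 - lam) + lam / sigma**2
    x = rng.normal((lam * mu / sigma**2) / prec, prec**-0.5, size=20000)
    # path sampling integrand: E_lambda[ d/d lambda log q_lambda ] = E_lambda[ log q1 - log q0 ]
    means.append(np.mean(log_q1(x) - log_q0(x)))

means = np.array(means)
log_Z1 = np.sum(np.diff(lambdas) * (means[:-1] + means[1:]) / 2)   # trapezoid rule
print(log_Z1, np.log(sigma * np.sqrt(2 * np.pi)))                  # estimate vs truth

In a metastable posterior the intermediate distributions cannot be sampled exactly like this, which is where the adaptive machinery of the paper comes in.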

How good is the Bayes posterior for prediction really?

It might not be common courtesy on this blog to comment on a very recently arXived paper. But I have seen two copies of the paper entitled “How good is the Bayes posterior in deep neural networks really” left on the tray of the department printer during the past weekend, so I cannot ignore the popularity of […]

“Machine Learning Under a Modern Optimization Lens” Under a Bayesian Lens

I (Yuling) read this new book Machine Learning Under a Modern Optimization Lens (by Dimitris Bertsimas and Jack Dunn) after I grabbed it from Andrew’s desk. Apparently machine learning is now such a wide-ranging area that we have to access it through some sub-manifold so as to evade the curse of dimensionality, and it is the same […]