I (**Bob, not Andrew!**) will be doing a meetup talk this coming Thursday in New York City. Here’s the link with registration and location and time details (summary: pizza unboxing at 6:30 pm in SoHo):

- Bayesian Data Analysis Meetup: Under the hood: Stan’s library, language, and algorithms

After summarizing what Stan does, this talk will focus on how Stan is engineered. The talk follows the organization of the Stan software.

*Stan math library*: differentiable math and stats functions, template metaprograms to manage constants and vectorization, matrix derivatives, and differential equation derivatives.
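As a rough illustration of what reverse-mode automatic differentiation does under the hood, here is a toy tape-based sketch in Python. The names (`Var`, `grad`, `TAPE`) are mine for illustration; Stan's math library implements this in C++ with arena allocation and expression templates.

```python
# Toy reverse-mode automatic differentiation with an explicit tape.
# A sketch of the general technique only, not Stan's implementation.

TAPE = []  # nodes recorded in evaluation (topological) order

class Var:
    def __init__(self, value, parents=()):
        self.value = value        # forward-pass value
        self.parents = parents    # pairs of (parent Var, local partial)
        self.adjoint = 0.0        # accumulated derivative of the output
        TAPE.append(self)

    def __add__(self, other):
        return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        return Var(self.value * other.value,
                   ((self, other.value), (other, self.value)))

def grad(output):
    """Reverse sweep: propagate adjoints back down the tape."""
    output.adjoint = 1.0
    for node in reversed(TAPE):
        for parent, partial in node.parents:
            parent.adjoint += node.adjoint * partial

# f(x, y) = x * y + x, so df/dx = y + 1 and df/dy = x
x, y = Var(3.0), Var(4.0)
f = x * y + x
grad(f)
print(f.value, x.adjoint, y.adjoint)  # 15.0 5.0 3.0
```

The tape records every intermediate node in evaluation order, so a single reverse sweep visits each node after all of its consumers, which is why the adjoint accumulation is correct.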

*Stan language*: block structure and execution, unconstraining variable transforms and automatic Jacobians, transformed data, parameters, and generated quantities execution.
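To make the transform-plus-Jacobian idea concrete, here is a small Python sketch of two standard changes of variables (function names are mine; Stan implements these in its C++ math library): a positive-constrained parameter via exp, and an interval-constrained parameter via the inverse logit.

```python
import math

def positive_constrain(y):
    """Map unconstrained y in R to sigma > 0 via sigma = exp(y).
    The log absolute Jacobian is log|d/dy exp(y)| = y."""
    return math.exp(y), y

def interval_constrain(y, lo, hi):
    """Map unconstrained y in R to x in (lo, hi) via the inverse logit.
    log|dx/dy| = log(hi - lo) + log(u) + log(1 - u), u = inv_logit(y)."""
    u = 1.0 / (1.0 + math.exp(-y))
    x = lo + (hi - lo) * u
    log_jacobian = math.log(hi - lo) + math.log(u) + math.log(1.0 - u)
    return x, log_jacobian

# At y = 0: sigma = 1 with log Jacobian 0; the interval transform
# lands on the midpoint of (lo, hi).
print(positive_constrain(0.0))  # (1.0, 0.0)
```

The log Jacobian is added to the target log density, so sampling on the unconstrained scale is equivalent to sampling the constrained model.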

*Stan algorithms*: Hamiltonian Monte Carlo and the no-U-turn sampler (NUTS), automatic differentiation variational inference (ADVI).
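The core of HMC fits in a few lines. Below is a minimal fixed-path-length sketch in Python for a standard normal target, under my own naming; NUTS replaces the fixed number of leapfrog steps with the adaptive no-U-turn criterion, and Stan's production implementation is in C++.

```python
import math, random

def leapfrog(q, p, grad_U, eps, steps):
    """Simulate a Hamiltonian trajectory with the leapfrog integrator."""
    p = p - 0.5 * eps * grad_U(q)
    for _ in range(steps - 1):
        q = q + eps * p
        p = p - eps * grad_U(q)
    q = q + eps * p
    p = p - 0.5 * eps * grad_U(q)
    return q, p

def hmc_draw(q, U, grad_U, eps=0.1, steps=20):
    """One HMC transition: fresh momentum, trajectory, Metropolis accept."""
    p0 = random.gauss(0.0, 1.0)
    q_new, p_new = leapfrog(q, p0, grad_U, eps, steps)
    h0 = U(q) + 0.5 * p0 * p0          # Hamiltonian at the start
    h1 = U(q_new) + 0.5 * p_new * p_new  # Hamiltonian at the end
    if random.random() < math.exp(min(0.0, h0 - h1)):
        return q_new
    return q

# Standard normal target: U(q) = q^2 / 2, so grad U(q) = q.
U = lambda q: 0.5 * q * q
grad_U = lambda q: q

random.seed(1)
q, draws = 0.0, []
for _ in range(2000):
    q = hmc_draw(q, U, grad_U)
    draws.append(q)
# draws now approximate a standard normal
```

Because the leapfrog integrator nearly conserves the Hamiltonian, the Metropolis correction accepts almost every proposal while still leaving the target distribution exactly invariant.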

*Stan infrastructure and process*: Time permitting, I can also discuss Stan’s developer process, how the code repositories are organized, and the code review and continuous integration process for getting new code into the repository.

**Slides**

- Bob Carpenter. Stan: Under the Bonnet. Slides for NYC meetup presented 17 January 2019.

I realized I’m missing a good illustration of NUTS and how it achieves detailed balance and preferentially selects positions on the Hamiltonian trajectory toward the end of the simulated dynamics (to minimize autocorrelation in the draws). It was only an hour, so I skipped the autodiff section and scalable algorithms section and jumped to the end. I’ll volunteer to do another meetup with the second half of the talk.
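In lieu of that illustration, here is a toy numeric sketch of the biased progressive selection idea (names and structure are mine, not Stan's actual NUTS code): each trajectory doubling produces a candidate draw, and the new subtree's candidate replaces the current one with probability min(1, W_new / W_total), which biases the final draw toward later parts of the trajectory while preserving the correct stationary distribution.

```python
import random

def progressive_select(subtree_weights, candidates, rng=random):
    """Biased progressive sampling across trajectory doublings.
    subtree_weights[i] is the total probability mass of subtree i;
    candidates[i] is the draw proposed from subtree i.  The new
    subtree's candidate is taken with prob min(1, W_new / W_total),
    which favors states later in the simulated dynamics."""
    chosen = candidates[0]
    w_total = subtree_weights[0]
    for w_new, cand in zip(subtree_weights[1:], candidates[1:]):
        if rng.random() < min(1.0, w_new / w_total):
            chosen = cand
        w_total += w_new
    return chosen
```

With equal subtree weights, the first subtree's candidate is always displaced by the second (acceptance probability 1), so draws concentrate on later parts of the trajectory, which is the autocorrelation-reducing effect described above.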

Any chance that the talk or slides will be available? I’d like to learn more about how it’s all engineered.

I’ll post them once I put a deck together. If you’d like to start with autodiff, there’s an arXiv paper; if you want to start with how everything works block by block, there’s the JSS paper; and if you want to understand NUTS, there’s the original Hoffman and Gelman JMLR paper and then Michael Betancourt’s arXiv paper on exhaustive HMC, which delves into the theory of why NUTS actually works. For how the low-level stuff’s put together and how to become a Stan developer, Sean Talts and I did a presentation that was recorded and posted somewhere on the Stan web site. Sorry, I’m too lazy to look all this stuff up and provide links.

I edited the post to include the slides. As I expected, I only got halfway through the slides (I came with 60 or so for an hourlong talk)!

I was not able to make it, so thank you for posting those.