Writing

```stan
vector[N] y;
for (n in 1:N)
  y[n] = normal_rng(x[n] * beta + alpha, sigma);
```

isn’t going to be that much slower than

```stan
vector[N] mu = x * beta + alpha;
for (n in 1:N)
  y[n] = normal_rng(mu[n], sigma);
```

Ben Bales is almost done vectorizing the RNG functions, so we'd be able to write it as

```stan
vector[N] y = normal_rng(x * beta + alpha, sigma);
```

which will be just as efficient as the hand-vectorized version, because vectorization only improves performance when (a) big matrix operations are involved (one matrix-vector multiply is more efficient than N row-vector/vector products because of memory locality), and (b) gradients are involved (there are no gradients in generated quantities).
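For context, these snippets would live in a generated quantities block, where no gradients are computed. A minimal sketch of a full program (the data and parameter declarations here are assumptions, not from the thread):

```stan
data {
  int<lower=0> N;
  int<lower=0> K;
  matrix[N, K] x;
}
parameters {
  vector[K] beta;
  real alpha;
  real<lower=0> sigma;
}
generated quantities {
  // the matrix-vector multiply is the one op that benefits from vectorization
  vector[N] mu = x * beta + alpha;
  vector[N] y_rep;
  // no gradients here, so this loop costs about the same as a vectorized RNG would
  for (n in 1:N)
    y_rep[n] = normal_rng(mu[n], sigma);
}
```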

Nevertheless, we really want to write the GLM RNG, so this would all look like:

```stan
vector[N] y = normal_id_glm_rng(x, alpha, beta, sigma);
```

I would guess that for RNG, it wouldn’t be a notable performance loss to just wrap the _glm_rng around the _rng functions, no?
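A sketch of what such a wrapper could look like as a user-defined Stan function (the name and signature mirror `normal_id_glm_lpdf`; this is a hypothetical illustration, not an existing built-in):

```stan
functions {
  // hypothetical GLM RNG implemented by wrapping the scalar _rng
  vector normal_id_glm_rng(matrix x, real alpha, vector beta, real sigma) {
    int N = rows(x);
    vector[N] mu = x * beta + alpha;
    vector[N] y;
    for (n in 1:N)
      y[n] = normal_rng(mu[n], sigma);
    return y;
  }
}
```

Since no gradients flow through an RNG call, the wrapper should cost essentially the same as a hand-written loop.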

Yes, there is an _lpmf function. No RNG yet, but we should have one for every probability function.
