Deep learning workflow

Ido Rosen points us to this interesting and detailed post by Andrej Karpathy, “A Recipe for Training Neural Networks.” It reminds me of things Bob Carpenter has said about how some fitting algorithms get oversold because the presenters don’t explain the tuning that was required to get good answers. Also I like how Karpathy presents things; it reminds me of Bayesian workflow.

The only thing I’d add is fake-data simulation.
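To make that concrete, here’s a minimal sketch of what I mean by fake-data simulation (my own illustration, not anything from Karpathy’s post): pick known parameter values, simulate data from the model with those values, fit the model to the simulated data, and check that the fit recovers the values you started with. The linear model and parameter names here are just placeholders.

import numpy as np

rng = np.random.default_rng(0)

# Step 1: "true" parameters for a simple linear model y = a + b*x + noise
a_true, b_true, sigma = 1.5, -2.0, 0.5

# Step 2: simulate fake data from the model
n = 200
x = rng.uniform(-1, 1, size=n)
y = a_true + b_true * x + rng.normal(0, sigma, size=n)

# Step 3: fit the model to the fake data (least squares here for simplicity)
X = np.column_stack([np.ones(n), x])
a_hat, b_hat = np.linalg.lstsq(X, y, rcond=None)[0]

# Step 4: check that the fit recovers the known parameters
print(f"a: true={a_true:.2f}, estimated={a_hat:.2f}")
print(f"b: true={b_true:.2f}, estimated={b_hat:.2f}")

If the estimates don’t come close to the known values on data you generated yourself, that tells you something is wrong with the model, the code, or the fitting procedure, before you ever touch real data.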

I’m also interested in the ways that deep-learning workflow differs, or should differ, from our Bayesian workflow when fitting traditional models. I don’t know enough about deep learning to know what to say about this, but maybe some of you have some ideas.

One Comment

  1. Ben says:

    Karpathy’s old blog posts from when he was in school were super awesome. Like this one: http://karpathy.github.io/2015/05/21/rnn-effectiveness/.

    Definitely more fun than trying to read an ML paper. I think I got into ML via Karpathy blogs. Doubt I’m alone on that. Good stuff.
