
As a forecaster, how important is it to “have a few elections under your belt”?

Kevin Lewis pointed me to this comment from Nate Silver on a recent post:

Having a few elections under your belt helps a *lot*. No matter how much you test things in the lab, there are some things you’re going to learn only by seeing how your forecast reacts to real data in real time. (I’m sure this applies to lots of other stuff too.)

It’s an interesting thought. Nate and I both have experience with election forecasting: he’s been doing it since 2008 and I’ve been doing it since 1992, on and off. And, as Nate wrote, our forecasts are pretty similar, so I guess I can take his comment as being very positive, in that he’s putting us (the Economist) and them (Fivethirtyeight) in the same category: the product of experienced forecasters who have learned by seeing how our forecasts react to real data in real time and have ended up with similar results.

I do agree that our forecasts are similar, especially at the national level. We have some differences in how we handle polls, which is why we’re forecasting Biden at 54.2% of the total vote and their forecast is 54.0%—no way to tell these apart! A few months ago, our forecasts differed by more, but that’s because our fundamentals-based predictions were different, and the fundamentals-based prediction becomes less important as election day approaches. We’ve discussed some interesting differences between the two forecasts, but these don’t have much of an impact on the headline numbers. For example, Fivethirtyeight gives Biden a 6% chance of winning South Dakota and we don’t, but . . . that’s only 6%, and in the scenarios where Biden wins South Dakota he’s already won the election anyway. Their predictive interval for Vermont is much wider than ours, but, again, we all know who’s gonna win Vermont anyway. These internals can be helpful in understanding how a model works, so they’re worth studying, but no matter how you slice it, the data are gonna say, “Biden’s the clear favorite but there’s an outside chance he won’t win enough swing states to pull it off.”
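To see why a small win probability in a safe state barely moves the headline number, here’s a toy Monte Carlo sketch. The states, electoral votes, and probabilities below are invented for illustration (they’re not either model’s actual numbers), and state outcomes are drawn independently, which real forecasts don’t assume:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy map: one safe-R state, one safe-D state, one swing state.
ev = np.array([3, 3, 20])                 # electoral votes per state
p_ours = np.array([0.00, 1.00, 0.60])     # our-style win probabilities for Biden
p_theirs = np.array([0.06, 1.00, 0.60])   # 538-style: 6% chance in the safe-R state

def headline_prob(p, n_sim=100_000):
    """Monte Carlo probability of winning a majority of electoral votes."""
    wins = rng.random((n_sim, len(p))) < p        # independent state outcomes
    return float((wins.astype(int) @ ev > ev.sum() / 2).mean())

print(headline_prob(p_ours), headline_prob(p_theirs))  # both ~0.60
```

The safe-R state is almost never pivotal in the simulations, so the 6% discrepancy washes out of the headline probability; the internals differ while the bottom line doesn’t.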

In his tweet, Nate also brings up the stability of his forecast: “it’s been pretty smooth, in contrast to our bouncier 2016 model (though of course the polls were way bouncier in ’16).” This is an example of the value of experience. Nate and I both have “a few elections under our belts,” so we know not to take polling bounces seriously, as they can come from differential nonresponse. This is a point that Mark Palko made back in 2012, which my colleagues and I rediscovered in our Xbox study, which was also apparent in 2016 (see this graph from Alan Abramowitz), and which we formally incorporated into our model in 2020 by allowing a time-varying national polling bias term for polls that don’t adjust for partisanship.  So, yeah, our experiences and insight from 2012 and 2016 have helped us get more sensible forecasts and not overreact to polling bounces in 2020.
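The differential-nonresponse idea can be sketched numerically. This is a hypothetical toy, not the actual Economist model: the numbers are invented, and simple differencing against partisanship-adjusted polls stands in for the bias term the real model estimates jointly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy: simulate differential nonresponse as a slowly
# drifting bias shared by all polls that don't adjust for partisanship.
n_days, n = 200, 1000
true_support = 0.54                              # latent national vote share
bias = np.cumsum(rng.normal(0, 0.002, n_days))   # random-walk bias over time

# One unadjusted and one partisanship-adjusted poll of n=1000 per day
unadjusted = rng.binomial(n, np.clip(true_support + bias, 0, 1), n_days) / n
adjusted = rng.binomial(n, true_support, n_days) / n

# A naive average of the unadjusted polls absorbs the bias; the
# day-by-day difference against adjusted polls estimates it instead.
naive = unadjusted.mean()
est_bias = (unadjusted - adjusted).mean()
debiased = naive - est_bias                      # algebraically = adjusted.mean()

print(round(naive, 3), round(debiased, 3))
```

The point is that an apparent “bounce” in the unadjusted polls can be pure bias drift, so a model with a bias term can stay smooth while the raw poll average jumps around.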

There’s one thing, though. Much as I enjoy the respect that Nate shows to me from my years of experience, and much as I respect his years of experience in politics and sports analytics, couldn’t someone without all those elections under their belt do as well as we can? After all, Nate and I have written a ton about what we do: I’ve published a few books and many articles about political data analysis, including a few directly on the analysis of pre-election polls, and Nate’s been blogging and sharing his analyses for over a decade.

So couldn’t some newcomer with an open mind and an empty belt just read all our stuff, play with our code, and do better than the Economist or Fivethirtyeight?

I think they could. I think the right newcomers could do better, because they’d have the benefit of all our experience, plus their own unique perspectives.

Indeed, this has happened before. Back in 2010, old-school pollster John Zogby wrote an insulting open letter to Nate Silver, basically calling Nate a young whippersnapper and telling him to grow up. I thought Zogby was wrong because he didn’t recognize the value of the division of labor. Zogby’s a pollster, Nate’s an analyst: these are two different, complementary roles.

So, again, I have mixed feelings. I do think my work is enriched by my experience of studying all those elections in real time, and that I can avoid mistakes that I might have made without that experience, and I think this holds for Nate as well. But I’d hesitate to say that this experience is necessary or even desirable for all people. I was a young person once, and I recall figuring things out that oldsters were stuck on: sometimes lack of experience can help too.

I think young researchers should be able to do better than Nate or me, by making use of our experiences through our writings but without being stuck with our preconceptions, whatever they are.

P.S. Nate describes my post as “from the Economist team” but it’s actually by me! I can’t complain, though. I like being part of a team.


  1. David says:

    Not fully on topic, but there are also nice examples of people who came along and put out better COVID-19 projections than established epidemiologists (e.g., Youyang Gu’s work consistently did better than the IHME models).

  2. Anonymous says:

    I read his comment as suggesting you didn’t have experience with elections, which seemed very odd. I read it as him saying he had that experience and your post reflected your lack of it.

    • Andrew says:


      Nate also said how our forecasts were similar, so I took it as him saying we both have lots of experience. It’s experience of different sorts—his is journalistic and mine is academic—but we’ve both done lots of election forecasting and we’ve both thought a lot about the implications of probabilistic forecasts.

      • Anonymous says:

        It’s tough to tell but you may be right. Given how strident he’s been towards you all (he always refers to you as “the Economist guys” and never by name), I took this as but another way for him to express condescension…

        Now that could be wrong. I hope so. I struggle watching this debate though because I think as much as there needs to be a reckoning about how to even forecast a situation like we’re in now with vote irregularities, who can hold Nate accountable for the way he’s dismissed engagement with you? It seems like he’s just above it all.

        • Andrew says:


          I think Nate feels a lot of pressure from all sides. Doing this blog and responding to comments takes a lot of time, but I can do it on my own time and it’s fun. To feel responsible to defend oneself at all times on twitter . . . I can’t imagine. This would harden anyone.

  3. Sam says:

    Andrew, have you ever talked to Nate in person, or has all of your communication with him been over Twitter/blog posts?

    • Andrew says:


      Nate and I are on friendly terms. I haven’t seen him in person for several years but we sometimes exchange emails. He doesn’t seem to have a lot of interest right now in exploring problems with his prediction model, but maybe after the election is over he’ll be more open to discussing these things, I don’t know.

  4. Not Trampis says:

    Surely this is the same for any job. The more you do something the better you are at it.

    and I won’t call you shirley again

  5. Marc says:


    I think that you undervalue experience in your conclusion. Yes, anyone having read most of the relevant literature (including your work and 538’s) could in theory have built a similar model. However, as you very well know, building a complex model is hard, and there are many ways that it can go wrong. Having actual experience in very similar modelling helps both to avoid pitfalls and to make good educated guesses at the source of any errors. The latter is very important — think of how much of modelling is data cleaning and bug fixing. Maybe even more importantly, having previous experience can give more confidence in sticking to your guns on your basic assumptions, rather than being too swayed by the difference between your model and ‘conventional wisdom’ — a modeller’s version of the pollster herding effect.

    Shorter version: just as you can’t perform surgery (very well) by having read a book, you can’t expect a first-time modeller to do as well as an experienced team, no matter whose books they have read.

    • Andrew says:


      Sure, but flip it around: Nate and I both have lots of experience, but we were both overconfident on this one. And I have more experience and was more the overconfident one! Maybe a young outsider would’ve been more alert to the possibility that 2020 could have new features of voter turnout etc. not captured in the polls. “Sticking to your guns” is fine, but arguably our “guns” did not include all the relevant information.
