Oooh, I hate it when people call me “disingenuous”:

This has happened before.

I hate it when someone describes me as “disingenuous,” which according to the dictionary, means “not candid or sincere, typically by pretending that one knows less about something than one really does.” I feel like responding, truly, that I was being candid and sincere! But of course once someone accuses you of being insincere, it won’t work to respond in that way. So I can’t really do anything with that one.

Anyway, this came up recently in an article by economist Casey Mulligan, “3 Reasons Election Forecasts Made False Projections Favoring Joe Biden.” Mulligan makes some interesting points in his article, but I don’t agree with this bit:

The Economist forecaster Andrew Gelman, not an economist but an eminent Bayesian statistician, is now rather disingenuously shifting all the blame onto pollsters for assembling skewed samples. Arguably most of his forecast error came instead from his seemingly arbitrary choice of which questions to use from the polls. Gelman has claimed that own-vote questions are better forecasters than expectation questions, which is a respectable conclusion but no reason to completely ignore the expectation questions instead of assigning them somewhat less weight.

There are a few things wrong here.

First, the “disingenuous” thing (or “rather disingenuously,” which sounds like the name of one of the characters in a hilarious Michael Keaton movie from the 1980s)—that’s just bullshit. Everything I’ve written on the topic of polling and elections is 100% sincere. The idea that I would pretend I know less than I really do about something . . . let’s just say that’s never been my style! So one minus point for Mulligan for failed mind-reading.

Second, Mulligan points to my post entitled, “Don’t kid yourself. The polls messed up—and that would be the case even if we’d forecasted Biden losing Florida and only barely winning the electoral college,” as evidence that I “shifted all the blame onto pollsters.” Funny that he should say this because here’s what I wrote in the post:

Saying that the polls messed up does not excuse in any way the fact that our model messed up. A key job of the model is to account for potential problems in the polls!

Third, he writes about my "seemingly arbitrary choice of which questions to use from the polls." This has nothing to do with me! We at the Economist are using the same poll summaries that are reported in the newspapers, at Real Clear Politics, Fivethirtyeight, and everywhere else.

In his article, Mulligan has some reasonable points and some not-so-reasonable points. He suggests that in forecasting elections we use other information besides horse-race polls and fundamentals; he thinks we should also use information such as survey responses on who people think will win the election. I agree that this would've helped in 2020. In other elections such as 2016, such information would not have been so helpful, but I take Mulligan's point that more information is out there, and it could be a bad idea to ignore it even though it's not always clear exactly how to use it.
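To make concrete what "assigning them somewhat less weight" might look like, here is a toy sketch in Python. The numbers and the weight are made up purely for illustration; this is not the Economist model or Mulligan's actual proposal, just the general idea of downweighting a noisier signal rather than dropping it entirely:

    # Toy illustration of downweighting, not discarding, a noisier signal.
    # All numbers here are hypothetical, chosen only for the example.
    own_vote_share = 0.544      # share implied by own-vote ("who will you vote for") questions
    expectation_share = 0.510   # share implied by expectation ("who will win") questions
    w_expectation = 0.2         # small but nonzero weight on the expectation signal

    combined = (1 - w_expectation) * own_vote_share + w_expectation * expectation_share
    print(f"combined two-party forecast: {combined:.3f}")  # 0.537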

One thing Mulligan says that I don’t believe is that Trump’s underperformance in the polls is due to “social desirability . . . Trump is the Bad Orange Man to many. . . . This suggests that some fraction of Trump supporters would not acknowledge their support for him — the “shy Trump voter” — especially in Democratic communities.” I don’t buy this, partly because the poll gap in 2016 was largest not in Democratic states such as New York and California but in strong Republican states such as West Virginia and North Dakota; see figure 2 of this paper.

Mulligan also says that “renegade pollsters Democracy Institute and Trafalgar . . . can be proud of the accuracy of their much-maligned forecasts of the 2020 election.” I haven’t looked into Democracy Institute, but we did check out Trafalgar’s forecast, and it wasn’t so great! They forecast Biden to win 235 electoral votes. Biden actually won 306. Our Economist model gave a final prediction of 356. 356 isn’t 306. We were off by 50 electoral votes, and that was kind of embarrassing. We discussed what went wrong, and the NYT ran an article on “why political polling missed the mark.” Fine. We were off by 50 electoral votes (and approximately 2.5 percentage points on the popular vote, as we predicted Biden with 54.4% of the two-party vote and he received about 52%). We take our lumps, and we try to do better next time. But . . . Trafalgar’s forecast was off by 71 electoral votes! So I can’t see why Mulligan thinks we were so bad but they “can be proud of” their accuracy.
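For the record, here's the trivial arithmetic behind those two error numbers, using only the figures quoted above:

    # Electoral-vote errors, computed from the numbers quoted in this post.
    actual_ev = 306       # Biden's actual electoral votes
    economist_ev = 356    # Economist model's final prediction
    trafalgar_ev = 235    # Trafalgar's forecast

    print("Economist error:", abs(economist_ev - actual_ev))  # 50
    print("Trafalgar error:", abs(trafalgar_ev - actual_ev))  # 71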

Finally, Mulligan makes another good point when he talks about voter turnout. Turnout is indeed hard to forecast from polls, as my colleague Bob Shapiro discusses in this post. But the fact that turnout modeling is difficult does not mean that we should just throw up our hands and leave it to the pollsters! I accept Mulligan’s point that we didn’t try hard enough to model this potential source of nonsampling error.

In summary, Mulligan makes some good points and some bad points in his article. I’m annoyed because he called me insincere when I wasn’t, and he mischaracterized my post as “shifting all the blame” onto others when I explicitly wrote that this “does not excuse in any way the fact that our model messed up.” That’s just annoying.
