How and why polling stumbled in 2020

After an election full of surprises, Managing Editor Adam Abadi ('22) gives his comprehensive analysis of why polls (mostly) failed in 2020 and what can be done better in the future.

By Adam Abadi on December 19, 2020
Category: Election 2020

What is the purpose of electoral polling?

It really depends on your perspective.

If you’re running a presidential campaign, you may commission polls to help inform your strategic decisions on where to campaign, where to advertise, and who to focus on in your outreach.

If you’re a journalist, you might use polls to write “horse race” articles about which candidates are allegedly gaining or losing ground, or to frame coverage of campaign activities. 

If you’re an election forecaster like FiveThirtyEight’s Nate Silver or The Economist’s G. Elliott Morris, you’re going to use polls to calculate the probabilities of various electoral outcomes, based on your assumptions about the likelihood of polling error alongside other political factors.

If you’re looking to donate money to campaigns, you might look at polls to help you decide where a donation will make the most impact – probably the closest races! 

After the 2020 election, a consensus emerged among journalists, activists, and political elites: the polls were pretty bad. They underestimated Donald Trump and GOP congressional candidates, and Biden’s margin of victory was much narrower than the pre-election polls suggested. Indeed, polls in most states severely underestimated the share of voters who ended up supporting Trump or Republican Senate candidates.

Even though Biden won most of the states where polls suggested he would win—culminating in his decisive 306-232 victory in the Electoral College—the pre-election polls should be evaluated by their deviation from the winner’s margin of victory, not by their “prediction” of the winner.

This is true for the same reason that you’d judge a weather forecast’s accuracy by how far its prediction was from the actual temperature, not by whether it correctly predicted that the temperature would end up above or below zero.

How the polls underperformed in 2020

The systematic polling error made many noncompetitive electoral races appear competitive, and made many competitive electoral races appear noncompetitive. This is the main reason why the 2020 polls failed to meet the needs of campaigns, donors, and journalists. 

For instance, the final FiveThirtyEight presidential polling average for Iowa showed Trump ahead by about one percentage point, while Trump’s actual Iowa margin was a decidedly non-competitive eight percentage points. In Wisconsin, another closely-watched swing state, polls showed Biden ahead by an average of eight points but he only won by one point. This pattern repeated itself in many states, with polls consistently underestimating Trump and overestimating Biden.

There were also many serious polling errors in Senate races, all understating support for Republican candidates. In Maine, Senator Susan Collins defeated her Democratic opponent, Sara Gideon, by about eight percentage points after the final FiveThirtyEight polling average showed Gideon ahead by two points. In Montana, Democrat Steve Bullock lost to GOP Senator Steve Daines by ten percentage points after the final polling average indicated a much tighter three-point race. Overall, Senate polls underestimated GOP candidates’ vote shares by a staggering average of seven percentage points.

Even though polls underrated the strength of Republican candidates in most places, they were relatively accurate in some closely-watched states. In Georgia, polling averages correctly indicated highly competitive presidential and Senate races, even though the state had not voted for a Democratic presidential candidate in nearly thirty years and had not elected a Democratic Senator in twenty years (as of December 2020).

Likewise, polling averages accurately predicted a close presidential race in Arizona, where Joe Biden was the first Democratic presidential candidate to win since 1996. The final FiveThirtyEight polling averages for Georgia and Arizona were Biden +1.2 and Biden +2.6 respectively, and the final margins were Biden +0.3 for both states.

Why did the polls (mostly) fail?

Even though it may seem counterintuitive that polls of 500 to 1,000 people can represent the opinions of an entire state or country, the polling error was not a question of sample size or of how many total respondents pollsters could reach. The statistical properties of random sampling indicate that a truly representative (or properly weighted) sample of several hundred to a thousand people is sufficient for accurate public opinion polling.

Random sampling does create an unavoidable margin of error that is larger in polls with smaller sample sizes, but this margin of error is equally likely to point in either direction because it is statistically random. This means that the margin of error or small sample sizes cannot account for the fact that nearly all the 2020 polls missed in the same direction.
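To make that concrete, here is a minimal sketch in Python of the textbook 95% margin of error for a simple random sample; the function name and sample sizes are purely illustrative, not drawn from any particular poll.

```python
import math

def margin_of_error(sample_size, proportion=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample of a proportion."""
    return z * math.sqrt(proportion * (1 - proportion) / sample_size)

for n in (500, 1000):
    print(f"n = {n}: about +/- {margin_of_error(n) * 100:.1f} points on a candidate's vote share")

# Prints roughly +/- 4.4 points at n = 500 and +/- 3.1 points at n = 1,000.
# Crucially, this error is random: across dozens of polls it should overshoot
# and undershoot about equally often, so it cannot explain nearly every
# 2020 poll missing in the same direction.
```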

Instead, experts have proposed several theories for the systematic polling error in 2020. 

One possibility is that after four years of Trump and the GOP undermining the reputation of polls and the news organizations who conduct them, Republicans are less likely to answer calls from pollsters. It certainly wouldn’t be surprising if a distrust of pollsters correlated with support for a president who calls election polls “fake” and consistently denigrates the mainstream news outlets associated with polling.

Another theory is that Democratic voters were unusually likely to answer polls this year, perhaps because many were working from home due to the COVID-19 pandemic or because they were particularly enthusiastic to tell pollsters about their opposition to Trump. Democrats are disproportionately likely to hold college degrees, which makes them more likely to have the kinds of white-collar jobs that can be done remotely. So, it would make sense if Democrats were answering the phone more often because, on average, they were spending more time at home than Republicans.

A third explanation relies on how social alienation and social trust may affect polling samples. Studies have shown that more socially isolated people and people who are less trusting are less likely to respond to polls. There’s also evidence that more socially isolated and less trusting people are disproportionately likely to support Trump. Combined, these findings would mean that polls oversample Biden voters while undersampling Trump voters. 

There’s also the commonly touted “shy Trump voters” theory, which posits that Trump voters lie to pollsters about their presidential preference because they’re embarrassed to admit their support for Trump or are intentionally trying to skew polls by offering “wrong” answers. However, there isn’t any substantial evidence for this theory, and polling experts widely reject it in favor of the explanations above.

One reason why there probably weren’t “shy Trump voters” trying to mislead pollsters is that the 2020 polling error was generally larger in Senate elections than in the presidential election. As a result, the polling error cannot be explained by a reluctance to admit one’s support for Trump, unless Trump voters were significantly more likely to lie about their support for GOP Senate candidates than for Trump himself.

There’s also the question of why polls were so much worse in some states than others.

There’s no clear answer, but we do know that states with large polling errors in 2016 tended to have large 2020 polling errors in the same direction. The Princeton Election Consortium’s preliminary analysis found that the average pro-Biden polling bias was an enormous 6.4 points in states that voted for Trump, but only 2.6 points on average—which is fairly normal for presidential polls—in states that voted for Biden. So, the factor that is systematically biasing polls to underrate Republicans appears to be present everywhere, but is just much stronger in more Republican-leaning states.

This supports explanations that rely on Trump voters being less likely to respond to polls, perhaps because of Trump’s distaste for pollsters or general traits like social alienation and distrustfulness. If such an explanation is correct, then states with a higher percentage of Trump voters—and thus, non-responders—would presumably have a larger polling error because non-response bias is proportional to the percent of non-responders in the targeted sample, all else being equal. 
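As a rough sketch of that logic (with made-up numbers, not estimates from any study), the standard non-response bias identity says a poll’s bias equals the non-responding share of the population times the gap between responders and non-responders, so a state with a larger non-responding share sees a larger error if that gap stays fixed.

```python
def poll_bias(nonresponse_rate, trump_share_responders, trump_share_nonresponders):
    """Bias in the polled Trump share relative to the true population share:
    nonresponse_rate * (responders' Trump share - non-responders' Trump share)."""
    return nonresponse_rate * (trump_share_responders - trump_share_nonresponders)

# Hypothetical numbers: non-responders are 20 points more pro-Trump than
# responders, and the Trump-leaning state has a larger non-responding share.
# (Holding the responder/non-responder gap identical across states is a
# simplification made only to show the proportionality.)
for label, q in (("Biden-leaning state", 0.10), ("Trump-leaning state", 0.25)):
    bias = poll_bias(q, trump_share_responders=0.45, trump_share_nonresponders=0.65)
    print(f"{label}: polls understate Trump's vote share by {abs(bias) * 100:.1f} points")
```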

Journalism and voter expectations

Keeping the multiple uses of polls in mind, 2020 was a major failure of electoral polling for several reasons. One is that breathless media coverage of polls may have caused some voters to wrongly expect an inevitable landslide victory for Biden, potentially affecting turnout or voting preferences.

Some political science literature indicates that the perception of a close election, based on pre-election polling, increases turnout. If this is true, then the polls’ overestimation of Biden’s lead may have depressed turnout, particularly in states like Wisconsin where the presidential race was much closer than polls suggested. This could also mean that inaccurate polls increased turnout in states like Texas and Ohio, where the presidential election was actually much less competitive than polls indicated.

Political scientists have also found some evidence of a “bandwagon effect” in elections, where voters are more likely to support a candidate that they expect to win. If there was a bandwagon effect in 2020, then polls overestimating Biden’s lead may have caused more people to vote for him.

Finally, it’s possible that GOP Senate candidates performed very well because voters wanted to put a check on the power of an anticipated President Biden. Considering all of these conflicting effects, it’s impossible to know exactly how media coverage of inaccurate polls may have affected electoral outcomes in 2020. 

Campaigns and donors

The 2020 polls also failed by leading donors to spend millions on Senate races that were ultimately non-competitive. For example, Democrat Jaime Harrison raised $57 million in one quarter, breaking the all-time quarterly fundraising record for a Democratic Senate candidate. He went on to lose the South Carolina Senate election by over 10 percentage points, even though several polls enticed donors by showing the race within 3 points.

In addition, the polling errors led to strategic blunders for the Biden campaign. Presumably in response to polls showing a close presidential race in the state, the president-elect’s campaign bought over five million dollars of ads in Ohio during the week before the election. Biden even campaigned in Ohio himself on the day before the election. The state’s presidential election did not turn out to be close, with Trump coming out eight points ahead. 

Biden’s campaign also dispatched Vice President-elect Kamala Harris to Texas in the days immediately leading up to the election. Like in Ohio, polls suggested a close presidential race in the state, but Trump ended up winning it by nearly six points.

Interpreting polls in a post-2020 world

The large polling error meant that journalistic portrayals of polls as precise snapshots of Senate and presidential races were inaccurate. The poor record of polls in 2020 underscores journalists’ responsibility to convey the uncertainty inherent in electoral polling. For instance, take the October 15 Wall Street Journal headline “Biden Has 11-Point Lead Over Trump Less Than Three Weeks to Election Day”. The headline’s certainty is misleading; it would be more suitable to say something like “Poll Suggests An Estimated 11-Point Lead For Biden Less Than Three Weeks to Election Day”. 

Better yet, news sources could avoid covering individual polls as news stories, and instead use polling averages to supplement other forms of reporting about political campaigns. In this case, it might be reasonable to make an exception for coverage of polls in states with historically lower polling error, and to discuss trends implied by changes in poll results over time.

After 2020, state-level polls should be analyzed in the context of that state’s prior polling error, especially because Republican candidates have now been severely underestimated in the same states throughout the post-Obama era. For instance, polls in Michigan significantly underrated Republicans in 2016, 2018, and 2020, so it’s probably fair to mostly ignore polls in that state or, at the very least, to treat them as extremely uncertain. Over the same period, polls in Arizona have generally been only a point or two away from the actual margin, so it makes sense to take future polls there more seriously.

Despite providing a misleading picture of the competitiveness of most battleground states, electoral polls did not prove to be completely useless. The polls successfully indicated the newfound “purple state” status of Georgia and Arizona. Polls were also fairly accurate in predicting Biden’s decisive margins in the once-purple Colorado and Virginia, where there were only small 2-point polling errors. Finally, pre-election polls correctly revealed that Hispanic voters would swing towards the GOP in 2020—although polls may have underestimated the extent of this trend. 

And to a certain extent, the 2020 polls fulfilled their role as an input to statistical election forecasts. Robustly constructed election forecasts are designed to take uncertainty and systematic polling error into account, and the amount of polling error in 2020 did not result in an outcome that forecasts suggested was particularly improbable. Both FiveThirtyEight’s and The Economist’s final presidential election forecasts gave a greater than 50% chance of winning to the victor of 48 out of 50 states, which is a pretty good result.
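As an illustration of what building systematic polling error into a forecast can look like, here is a toy Monte Carlo sketch in Python. It is not FiveThirtyEight’s or The Economist’s actual model; the error sizes are hypothetical, and the polling averages are simply the approximate figures cited earlier in this article.

```python
import random

# Approximate final polling averages cited above (Biden margin, in points).
polling_averages = {"Wisconsin": 8.0, "Arizona": 2.6, "Georgia": 1.2, "Iowa": -1.0}

def simulate(num_trials=10_000):
    """Toy forecast: each trial draws one shared 'systematic' error plus
    independent state-level noise, then records who wins each state."""
    biden_wins = {state: 0 for state in polling_averages}
    for _ in range(num_trials):
        shared_error = random.gauss(0, 4)      # hypothetical systematic error (points)
        for state, avg in polling_averages.items():
            state_error = random.gauss(0, 3)   # hypothetical state-level noise
            if avg + shared_error + state_error > 0:
                biden_wins[state] += 1
    return {state: wins / num_trials for state, wins in biden_wins.items()}

for state, prob in simulate().items():
    print(f"{state}: Biden win probability ~{prob:.0%}")
```

Because the shared error term lets every state miss in the same direction, a polling miss on the scale of 2020’s falls within the range of outcomes a forecast built this way treats as plausible.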

The reality is that polls are not going away, despite their jarring inaccuracy in 2020. Pollsters need to determine what went wrong and correct their methodologies accordingly. For everyone else, the best approach moving forward is to fully understand the uncertainty brought about by systematic polling error. We can do so by refraining from interpreting polls as precise predictions of vote shares. Instead, polls should be understood as one of many factors that may provide clues towards an election’s outcome, alongside economic conditions, a state’s demographics and historical polling error, fundraising, and other metrics.

Learn more

The New York Times – ‘A Black Eye’: Why Political Polling Missed the Mark. Again.

The New York Times – What Went Wrong With Polling? Some Early Theories

Vox – One pollster’s explanation for why the polls got it wrong

Adam Abadi

Adam (’22) is an economics major from Brooklyn, NY with interests in public policy, electoral politics, and sustainability. He has interned for an international human rights organization and an urban environmental nonprofit, and is also involved with the Vassar Sustainable Investment Fund. Adam enjoys playing Scrabble, reading surreal fiction, and playing D&D in his spare time.