Myriad political polls are making big headlines this election season, but could the pollsters be wrong?
Historically, pollsters have simply asked survey respondents whether or not they plan to vote in an upcoming election, and have then used those responses to predict voter turnout. But a 2013 study conducted by Todd Rogers, associate professor of public policy at Harvard Kennedy School (HKS), and Masahiko Aida, a research scientist at Civis Analytics, found that many self-predicted voters do not show up to the polls, a phenomenon termed “flake-out.” Conversely, Rogers and Aida found, many self-predicted non-voters do end up voting (“flake-in”).
Self-predicted voters differ from actual voters demographically. “Actual voters are more likely to be disproportionately white, older, and partisan,” Rogers writes in a recent op-ed article published in the Washington Post, co-authored by Adan Acevedo, a research fellow at the Center for Public Leadership. “That is, self-predicted voters better represent the U.S. population [in its diversity] than actual voters — but misleadingly so.”
When newspapers, political blogs and other sources use data based largely on respondents’ self-predictions, they perpetuate misleading headlines “about the state of the race and the viability of different candidates.”
Since an individual’s voting history is the single most effective predictor of whether they will vote in the future, Rogers and Acevedo argue that this problem has a straightforward fix. They make two simple suggestions to improve the accuracy of polls.
First, the authors posit, pollsters should use voter files to choose their sample respondents and develop a hybrid past-present approach: taking into account respondents’ voting history (past) and their intentions to vote or abstain (present). The hybrid approach is startlingly accurate, they found: “For example, 93 percent of self-predicted voters in the 2008 general election who were confirmed as having voted in the past two elections actually did vote on Election Day.”
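The hybrid past-present approach can be sketched in code. The sketch below is illustrative only: the field names, election years, and record structure are hypothetical stand-ins, not the authors' actual methodology or any real voter-file schema. It simply classifies a respondent as a likely voter when the voter file confirms past turnout and the respondent also predicts they will vote.

```python
# Hypothetical sketch of the hybrid past-present approach.
# Field names and data are invented for illustration.

def likely_voter(record):
    """Classify a poll respondent as a likely voter by combining
    past behavior (voter-file turnout) with present intention
    (the respondent's self-prediction)."""
    voted_both_prior = record["voted_2004"] and record["voted_2006"]
    says_will_vote = record["self_prediction"] == "will vote"
    # Confirmed past voters who also predict they will vote were
    # the most reliable group in the 2008 data cited above.
    return voted_both_prior and says_will_vote

respondents = [
    {"voted_2004": True,  "voted_2006": True,  "self_prediction": "will vote"},
    {"voted_2004": False, "voted_2006": False, "self_prediction": "will vote"},
    {"voted_2004": True,  "voted_2006": True,  "self_prediction": "will not vote"},
]

likely = [r for r in respondents if likely_voter(r)]
print(len(likely))  # 1
```

In practice, pollsters would draw the sample itself from the voter file rather than filtering after the fact, but the classification logic follows the same past-plus-present idea.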
The second suggestion is even simpler: asking survey respondents about their voting history. “People are surprisingly accurate in reporting whether or not they voted in past elections,” say Rogers and Acevedo. “Using people’s recalled vote history is better at predicting who will vote than using people’s self-predictions about whether they will vote.”
These two suggestions, the authors argue, “can simultaneously increase polls’ predictive accuracy and the American public’s faith in political polls.” During a contentious election cycle, a little faith might go a long way.