
What Can Election 2024 Polls Really Tell Us?

Election polls are accurate but can only reveal voter intentions on the day they are taken. They don’t predict the future.


Polling has taken a one-two punch. Though not markedly less accurate than historical averages, today’s polls have endured recent miscues, from failing to call the 2016 presidential election to calling 2020 correctly but significantly missing the margin of victory. Alongside an upsurge in (often partisan) critiques, polling faces our era’s broad decline in trust in institutions, and in science specifically.

We face a momentous presidential election year in 2024, with polling caught in a feedback loop: Response rates—a measure of how many participate in a poll—are declining. Declining response rates lead to a greater likelihood of polling error, which in turn leads to greater mistrust in polls. Mistrust in polls leads to even lower response rates. And the cycle continues.

Nevertheless, polls can be trusted. “Let’s give a big round of applause to the pollsters,” FiveThirtyEight’s Nathaniel Rakich wrote, after all, about our most recent election-day calls, in 2022. But that trust should come with a clear understanding of, and realistic expectations about, what polls can and cannot do.


First, election polling makes up only a very small fraction of U.S. survey research. We live in a world where decisions are driven by data—from the Internet of Things, the clickstream, focus groups and, yes, surveys. The appetite for survey data is voracious. Governments, academics, foundations and for-profit companies exhaustively and productively rely on surveys to obtain data on crime, vaccines, unemployment, health and consumer sentiment, and to understand consumer behavior, brand awareness and purchase intention, to name just a few uses.

And here there is excellent news: despite low response rates, scientific surveys have “still got it.” The errors found in nonelection surveys are typically small and are not generally correlated with response rates.

Still, election polls are put under a microscope: they are the most conspicuous of all surveys. Whether election polling can be trusted is an important question. Can it?

The answer depends on what you trust them for. Too often journalists, politicians and the public treat polls as cheat codes to the election casino, revealing ahead of time whether to play red or black. But election polling surveys always start with “If the election were held today.” There’s a reason. Polls fielded early in the campaign are far more likely to tell a story different from the final outcome. Even polls conducted a few days before the election cannot account for momentum in the final week, and they have a limited ability to peer into the minds of the undecided, who often number enough to swing results. Today’s polls on Biden versus Trump are not wrong, per se; they just tell us what voters are feeling today. And a lot can happen between now and Election Day.

Second, polls have real margins of error, and those are usually reported too modestly. The true range of error in election polls averages around five percentage points, and many elections are won and lost by much smaller margins.

Third, election polls have a characteristic not encountered in any other survey: they sample a population that does not (yet) exist. Election pollsters must predict who will actually vote. Their likely-voter models tend to be about 80 percent accurate, which leaves quite a bit of imprecision in deciding who among those polled should actually count in a “horse race” estimate.

Finally, surveys can be designed and fielded with very high rigor, but rigor is costly. Many election polls are conducted on the lowest possible budget, and journalists and the public are often not equipped to evaluate, or simply don’t consider, the “build quality” of the polls they report on or consume.
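To make the margin-of-error point concrete, here is a minimal sketch in Python of the textbook sampling margin of error that pollsters typically report, assuming a simple random sample and a hypothetical 1,000-respondent poll split roughly 50/50. The numbers are illustrative, not drawn from any specific poll.

    import math

    def sampling_margin_of_error(p: float, n: int, z: float = 1.96) -> float:
        """Textbook 95 percent margin of error for a proportion p estimated
        from a simple random sample of size n."""
        return z * math.sqrt(p * (1 - p) / n)

    # A hypothetical poll of 1,000 respondents split roughly 50/50:
    moe = sampling_margin_of_error(0.5, 1000)
    print(f"Reported sampling margin of error: +/-{moe * 100:.1f} points")
    # Prints about +/-3.1 points -- noticeably smaller than the roughly
    # five-point average total error in election polls cited above, because
    # sampling error is only one component of a poll's total error.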

Build quality matters. Pollsters are like ducks, appearing calm above the waterline but paddling frantically underneath to improve the science behind election polling. They have been intensely investigating recent election polling errors and working to improve their craft. In 2016 pollsters recognized that they had not adequately adjusted their samples for a major cohort of the Trump base: white voters without college degrees. Solution in hand, they came into 2020 more confident, only to find that polls were still coming out too blue. Since then, pollsters have come to realize that while election polls had the right number of Republicans, they were not capturing an accurate cross-section of Republicans, all while Democrats continued to overreport their intention to vote (and some respondents may not report their true intentions, though the evidence for such behavior is mixed, if not mostly contrary).
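As a rough illustration of the kind of statistical adjustment described above, here is a minimal sketch of post-stratification weighting by education. The group names, sample counts and population shares are hypothetical placeholders, not real census or poll figures.

    # Hypothetical sample counts and population targets for two education
    # groups; the numbers are illustrative placeholders, not census figures.
    sample_counts = {"college": 650, "noncollege": 350}
    population_share = {"college": 0.40, "noncollege": 0.60}

    n = sum(sample_counts.values())
    weights = {
        group: population_share[group] / (count / n)
        for group, count in sample_counts.items()
    }

    # Each noncollege respondent now counts for more and each college
    # respondent for less, so the weighted sample matches the assumed
    # population mix.
    print(weights)  # e.g. {'college': 0.615..., 'noncollege': 1.714...}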

The question is: Who is missing? The answer has increasingly been not just people with lower educational attainment but particularly those who mistrust science and the media, who may or may not have an affinity for partisan news outlets, and who potentially share certain patterns of religiosity and other attributes. This group is not monolithic; the missing cohort looks different for Hispanic respondents than for white respondents, for example. Various solutions have been tested over the past four years, including stricter adjustments for past voting behavior, novel efforts to encourage more of these voters to participate in polls, and other approaches. Has the challenge been met? Only the 2024 election will tell.

The public can trust election polls, then, so long as people understand what polls can and cannot do. For that to happen, polls must not be oversold or treated as prediction tools rather than as measures of current sentiment, and high-quality polls must be given more weight than those done on the cheap. Election polls don’t report on participants’ behavior; they report on intentions about future behavior, namely voting (many smokers intend to quit but then fail to do so). Polls on topics other than elections, such as those that probe what Americans think about issues, current events and policies, remain highly accurate. And when it comes to election polls, understand that they are still largely accurate but face challenges that make them more susceptible to error. Day-of-election polls will always have some difficulty mirroring the outcomes of races decided by just a few percentage points.

In sports, after all, no one is surprised when the underdog upsets the favorite. It’s not necessarily expected, but it happens. The same is true in election polling.

One thing is certain: If you are called to participate in a poll, participate! Please! Democracy begs for your voice, and polls will be more trustworthy if people do their part and voice their honest opinions.

This is an opinion and analysis article, and the views expressed by the author are not necessarily those of Scientific American. The author’s opinions are solely his own and don’t represent any organization he is affiliated with.