The News
As the old joke goes, for anxious election watchers, the polls are terrible — and such small portions!
In addition to the usual concerns about accurately predicting a race that’s effectively deadlocked in every swing state, the number of high-quality, brand-name polls has been declining for several cycles. Politico’s polling reporter Steven Shepard lamented that last weekend bizarrely passed without any major polls of note, nationally or in the battlegrounds, though the pace picked up later this week on both counts. That means less grist for obsessive news consumers and less data for the big aggregators and forecasters trying to make sense of the race.
Benjy’s view
What’s going on here? Pollsters and forecasters who spoke to Semafor all cited the same number one factor: Money.
Surveys with live phone calls, which along with some well-regarded online panels are generally considered the most dependable, can cost tens of thousands of dollars. The price has climbed over the years as fewer voters take pollsters’ calls (how often do you pick up a call from an unknown number?), requiring call centers to put in more total hours to reach a minimum number of voters in enough categories to represent the electorate.
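To make that arithmetic concrete, here is a minimal, purely illustrative sketch; the response rates and sample target below are hypothetical, not figures from any pollster quoted in this piece:

```python
# Illustrative only: the numbers are hypothetical, not from any pollster cited here.
def dial_attempts_needed(target_interviews: int, response_rate: float) -> int:
    """Rough count of calls a live-caller poll must place to complete its sample."""
    return round(target_interviews / response_rate)

# The same 1,000-person sample gets far more expensive as response rates fall:
print(dial_attempts_needed(1_000, 0.10))  # 10,000 attempts at a 10% response rate
print(dial_attempts_needed(1_000, 0.01))  # 100,000 attempts at a 1% response rate
```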
“It’s not inflation, it’s that response rates are down,” Nick Gourevitch, a Democratic pollster and partner at Global Strategy Group, told Semafor. “It’s gotten more expensive to do it.”
And those costs have gone up while the sponsors of many of those polls — news organizations — have been cutting their budgets.
But there’s also a third dynamic at play: less incentive to get involved, thanks to a more sophisticated polling audience and a higher cost of failure.
We’re now well over a decade into the era of the star election forecaster, led by Nate Silver and his rivals and proteges. His biggest influence may be teaching most news junkies to turn to curated averages of polls rather than fret over each one.
This has created a bit of a Heisenberg effect, in which observing something (in this case, that the best analysis comes from mixing lots of polls into a sophisticated model) might actually change it. What’s the point of shelling out five figures on a horse race poll if it’s just going to get tossed into an average by an audience trained to scoff at individual surveys?
“It does become a problem when there are as many aggregators as pollsters,” Dave Wasserman, a veteran forecaster at Cook Political Report, said. “As the aggregators gain prominence, the pollsters lose some prominence, and the incentives for polling decline.”
After major misses in 2016 and 2020, the reputational stakes are also higher, especially with a national audience that’s equal parts skeptical of and obsessed with pollsters’ work. A blown call, even if it’s a totally normal function of polling in a close race, will be remembered (and retweeted) years after the fact. Analysts have even speculated, without direct evidence, that lower-quality pollsters may be “herding” their results toward toss-ups this year with subtle methodological decisions in order to avoid embarrassment.
Some pollsters have shifted their contributions elsewhere: Gallup, a Semafor investor, dropped its horse race polling after 2012 and focuses instead on broader political trends and candidate qualities. The Associated Press sponsors similarly trend-focused polling with NORC, rather than head-to-head matchups.
That said, forecasters who aggregate polls and other factors to predict the results didn’t sound too worried about the impact on their work. There’s more than enough data to tell a fairly coherent story, and in an election this close on paper, all the usual caveats about potential polling errors, late-breaking undecided voters, and turnout differentials would overwhelm whatever additional certainty might come from that extra poll or two of Wisconsin. Nor do they seem worried about cheaper, less accurate polls flooding the market: various forecasters and media averages either filter them out or adjust for significant partisan skew.
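For readers curious what filtering out or adjusting for partisan-skewed polls can look like in practice, here is a minimal sketch of a weighted polling average; the weighting scheme, the size of the partisan adjustment, and the example numbers are assumptions for illustration, not the methodology of any forecaster or outlet mentioned above.

```python
# Minimal sketch of a polling average that adjusts for partisan sponsorship.
# Weights, the adjustment size, and the example polls are illustrative assumptions,
# not the method of any forecaster or outlet named in this article.
from dataclasses import dataclass

@dataclass
class Poll:
    margin: float       # candidate A minus candidate B, in percentage points
    sample_size: int
    sponsor_lean: int   # +1 if the sponsor favors A, -1 if B, 0 if nonpartisan

def aggregate(polls: list[Poll], partisan_penalty: float = 1.5) -> float:
    """Weight polls by sample size and pull partisan-sponsored results back toward the middle."""
    weighted_margin = 0.0
    total_weight = 0.0
    for p in polls:
        adjusted = p.margin - p.sponsor_lean * partisan_penalty
        weight = p.sample_size ** 0.5   # diminishing returns to larger samples
        weighted_margin += adjusted * weight
        total_weight += weight
    return weighted_margin / total_weight

polls = [
    Poll(margin=1.0, sample_size=800, sponsor_lean=0),
    Poll(margin=-0.5, sample_size=1200, sponsor_lean=0),
    Poll(margin=3.0, sample_size=600, sponsor_lean=+1),  # campaign-sponsored outlier
]
print(round(aggregate(polls), 2))  # a single blended margin for the race
```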
The bigger fear was simply that the polls could be systematically wrong in a significant way, as they were in key states in both 2016 and 2020. Wasserman said his own forecasting work focuses on pollsters with transparent methodology and established track records, but there’s no guarantee they’ve fixed all their problems.
“I don’t know that we would have more clarity from more high standard polls,” Wasserman said.
The View From Nate Silver
Do pollsters tell Nate Silver that his work has affected theirs? He doesn’t actually talk to them much, he told Semafor, trying to maintain an “outside view” that focuses on the public data alone. That said, he wasn’t too concerned by the current polling volume.
“There are plenty of polls still so that’s not really a problem for the models,” he said. “I do think a lot of money is wasted on national polls when nearly everything should be in the states instead. And yes a lot of pollsters are terrified of their own shadow, but that’s more true of the mediocre ones than the traditional blue chips.”
Notable
- The New York Times’ Nate Cohn has noted some splits between different pollsters in their views of the race. The Times’ own polling, with Siena, has repeatedly shown Trump doing better nationally than in the swing states, raising at least the remote possibility that he could win the popular vote and lose the Electoral College.