
February 2018

2017 was a good year for (good) polls


By Chris Anderson

Over the past year, rarely has a month gone by without someone asking me if polling is broken. That is a question I was almost never asked in my two decades working as a pollster. But it is not at all surprising that it has come up now, since so many recently released public polls have been flat-out wrong.

The answer is straightforward. The problem is largely isolated to a specific low-cost polling methodology that has proliferated in recent years: robo polls, also called automated or IVR (interactive voice response) polls, which use a recorded voice to ask respondents to answer questions by punching a number on their phone.

The polling industry’s gold standard methodology (professional interviewers calling voters on landlines
and cellphones) continues to produce highly accurate results.

Polling grades: A’s and F’s


A review of the three off-year statewide elections held in 2017 (gubernatorial elections in Virginia and
New Jersey, and the special Senate election in Alabama) clearly illustrates this dynamic.

Of all public polls conducted within one month of each of these elections, fully 86 percent of gold standard polls showed the ultimate winner leading. Just 39 percent of robo polls did so; most robo polls showed the ultimate losing candidate ahead.

[Chart: Percent showing correct winner, 2017. Gold standard: 86%. Robo: 39%.]


While we can’t say for sure why robo polls performed so much worse than live-interviewer polls, the most likely reason is that they are prohibited by law from calling cellphones. More than half (52 percent) of American households have only cellphones, which means automated polls miss them entirely. Add in the fact that cell-only households tend to be younger, less affluent, and more diverse (all factors that tie closely to voting behavior) and you begin to see some of the major obstacles facing robo polls.

The Alabama special Senate election brings this point home. There were 13 automated polls conducted within one month of the election, and just three (less than a quarter) showed eventual winner Doug Jones with a lead. Meanwhile, three of the four gold standard polls conducted in Alabama showed Jones leading, and the fourth showed a tie. Fox News Voter Analysis results suggest Jones performed particularly well among younger voters and black voters, exactly the groups most likely to live in cell-only households that robo polls are prohibited from calling.
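For readers who want to check the arithmetic, the Alabama shares cited above work out as follows (a quick illustrative sketch; the helper function is ours, not from any polling toolkit):

```python
def share_showing_winner(polls_showing_winner: int, total_polls: int) -> float:
    """Return the fraction of polls that showed the eventual winner ahead."""
    return polls_showing_winner / total_polls

# 3 of 13 robo polls showed Jones leading: about 23%, "less than a quarter".
robo_share = share_showing_winner(3, 13)

# 3 of 4 gold standard polls showed Jones leading (the fourth showed a tie): 75%.
gold_share = share_showing_winner(3, 4)

print(f"Robo polls showing Jones ahead: {robo_share:.0%}")           # 23%
print(f"Gold standard polls showing Jones ahead: {gold_share:.0%}")  # 75%
```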

Anderson Robbins polling with Fox News


Anderson Robbins is the Democratic partner on a bipartisan polling team for Fox News Channel. Along
with our Republican partner, Daron Shaw, we had the opportunity to poll all three 2017 races. All of our
polling utilizes gold standard methodologies and all of our 2017 polls showed the eventual winner
leading.

Our New Jersey poll showed Democrat Phil Murphy leading by the exact 14-point margin he won with.
In Virginia, our poll had Democrat Ralph Northam leading by five points. He won by nine (just two other
polls were closer to the final margin).

Alabama was a unique challenge. There were no comparable past races to help inform turnout
expectations. It was a special election held in December with a Republican candidate who was already
extremely polarizing before the accusations of sexual misconduct.

Perhaps as a result of these challenges, hardly any major news outlets polled the Alabama race. Fox
News took the plunge and released two likely voter polls in the month prior to the election. Both
showed Democrat Doug Jones leading with 50 percent of the vote, the exact number he achieved on
Election Day.

In our final poll, Republican Roy Moore’s support was just 40 percent, while 2 percent said they’d write
in a candidate, and 8 percent were undecided. The demographics of the undecided voters in our poll
suggested they would not be voting for Jones, so the question was: Would they come out for Moore or
stay home?

The election result (Jones 50 percent, Moore 48 percent, other 2 percent) suggests late deciders broke decisively for Moore. So, while our Alabama poll didn’t nail the margin, our results did reflect the reality of the closing five days of the race: Jones voters had made up their minds and were not wavering, but some Moore voters were unsure until the last minute whether they could support a uniquely damaged Republican candidate.

A Look Ahead

Assessing polls’ accuracy by looking only at how well they predicted wins and losses is admittedly
simplistic, but concerns about predictions are also the reason why people are now asking if polling is
broken. When polls lead people to think the wrong candidate is going to win, that’s what sticks with
them after the election: the polls said one thing; voters said another.

The widespread misunderstanding of polling in the 2016 presidential race (in which the national polls were actually highly accurate in predicting a Hillary Clinton popular vote win) compounds the issue.
However, the good news from our 2017 polling review is that traditional, live-interviewer polling
remains highly accurate and is very much alive and well, even though the prevalence of automated polls
probably led some to conclude otherwise.

While there’s a time and a place for robo polls, 2017 was another reminder that for political polling
there’s no substitute for high-quality data paired with careful, thoughtful analysis. As we head into what
will surely be an eventful 2018, we look forward to providing our clients the reliable data that is the
bedrock of good strategic decision-making.

Chris Anderson is Co-Founder and President of Anderson Robbins Research. He is also the Democratic
partner on a bipartisan polling team for Fox News Channel, along with Republican pollster Daron Shaw.
