Posted on January 1, 2017
In the aftermath of the U.S. election, pollsters are facing some tough scrutiny about their relevance. But not all polls are created equal, notes Anne Kilpatrick.
One of many unexpected outcomes of the 2016 U.S. presidential campaign was the failure of most research polls to accurately predict the election of Donald Trump.
This was all the more striking because, in an intensely emotional political race characterized by a deliberate disregard for truth, polls offered a familiar reference point. In the face of chaos, the enduring ability to measure, analyze and process data was one of the few ways to superimpose order.
Furthermore, in a frantic scramble to meet a rapacious demand for new information about the campaign, the media put forward any and all polls.
But not all polls are created equal.
We know that the voter population has become increasingly fragmented, and, in the aftermath of the U.S. election, critics point to pollsters' failure to address this fragmentation as a driver of their seeming inability to get it right. The criticisms focus on non-representative sampling of potential voters: the voter population is more culturally diverse than before, and pollsters are under-representing or over-representing certain ethnic or racial groups in their polling; at the same time, the wider set of channels through which pollsters must now reach voters (landline telephones, cellular phones and online) makes it harder to assemble a representative sample. These challenges apply to any election or referendum polling, and a lack of rigour in addressing them may indeed help explain why some pollsters just aren't getting it right.
Those pollsters who were able to project a Trump victory did not rely solely on traditional questions about voting intentions or previous voting behaviour, or on demographic characteristics, to project outcomes (e.g., likely voting intention if an election were held today; favourability ratings of a candidate; previous voting behaviour; gender, income, education, religion and race). While these questions and demographic characteristics often point to a potential outcome, they do not capture the ultimate insight: voter sentiment. And sentiment is the key word here.
The polls that appear to have been closer to the mark delved into voter sentiment by examining engagement with candidates and how that affected voters’ likelihood to turn out at the ballot box. These pollsters actively sought to measure the impact of the “undecideds”—the almost 15 per cent of voters who likely determined the outcome of the election. These voters helped make voter turnout patterns unpredictable, including the greater than expected turnout of Trump supporters in swing states, and lower than expected turnout among groups that were expected to vote for Hillary Clinton.
Who were these “undecideds”?
We now know they included people who:
• rode an emotional roller coaster during the campaign and were affected by the many twists and turns;
• were ambivalent, did not see themselves reflected in either candidate and were more likely to stay at home than cast their ballot;
• were engaged in the election, but were having difficulty assessing how the candidates would address their more personal concerns; and
• were affected by social desirability bias, the tendency of survey respondents to answer questions in a manner that will be viewed favourably.
Successful pollsters are those who tap into the significant ambivalence or conflicted feelings that some voter segments experience, by introducing non-traditional survey questions and methodologies. One such poll, the USC Dornsife/LA Times Presidential Election "Daybreak" poll, engaged a longitudinal panel of voters and asked them a different set of questions than other surveys did. It asked voters to estimate, on a scale of 0 to 100, how likely they were to vote for each of the two major candidates or for some other candidate. Rather than forcing respondents into an either/or vote position, the pollsters obtained a more nuanced view of the undecideds (and the "decideds") by measuring each voter's level of engagement with the candidates. This approach has served USC Dornsife well in the past two U.S. elections, accurately predicting both outcomes.
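To make the idea concrete, here is a minimal sketch of how probability-scale responses of this kind can be aggregated into an expected vote share. The respondent data, field names and weighting scheme below are invented for illustration; the actual Daybreak poll's methodology is more involved than this.

```python
# Each hypothetical respondent reports, on a 0-100 scale:
#   "turnout" - how likely they are to vote at all
#   "a", "b"  - how likely they are to vote for candidate A or B
respondents = [
    {"turnout": 90, "a": 80, "b": 10},   # a fairly decided A voter
    {"turnout": 40, "a": 50, "b": 45},   # an ambivalent, low-turnout voter
    {"turnout": 70, "a": 20, "b": 75},   # a likely B voter
]

def expected_share(respondents, candidate):
    """Turnout-weighted expected vote share for one candidate.

    Each respondent contributes in proportion to their stated
    probability of voting at all, so ambivalent or unlikely
    voters count for less than committed ones.
    """
    weighted_votes = sum(r["turnout"] / 100 * r[candidate] / 100
                         for r in respondents)
    total_turnout = sum(r["turnout"] / 100 for r in respondents)
    return weighted_votes / total_turnout

print(round(expected_share(respondents, "a"), 3))
print(round(expected_share(respondents, "b"), 3))
```

The point of the design is visible in the middle respondent: a forced either/or question would record them as a full vote for A, while the probability scale lets their ambivalence and low turnout likelihood shrink their contribution to both candidates' totals.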
The lesson for pollsters is that their models and approaches must evolve to meet the new challenges of political polling in the 21st century. And they also need a crash course in managing expectations.