Trump and the opinion polls

06 Dec 2016 | Research & Business Knowledge

Trump and the opinion polls – a review of the e-group's views by John Attfield

Immediately following the US Presidential Election on 8 November 2016, the ICG e-group hosted a lively and fascinating debate about the results and the opinion polls, running from 9 to 13 November with a total of 37 contributions from 24 posters, some of whom were contributing to an ICG discussion thread for the first time.

The thread was launched by a post on the morning of 9 November, soon after the election results became clear, arguing that:

'The opinion polling organisations and the media have been engaging in the greatest exercise in self-delusion that we've witnessed in our generation.'

The poster drew attention to the so-called 'herding' phenomenon among opinion pollsters, with a link to this article.

Desperate to avoid looking wrong, he suggested, the pollsters ensure that the polls they release fit in with the majority trend, e.g. not releasing awkward outliers, or by manipulating their choice of sampling points.

Several contributors agreed with the notion that political polling has lost credibility as a result of high-profile failures such as the Brexit referendum and the Trump election. The media, for whose benefit the opinion polls exist, are the first to jump in and “rubbish” the polls when they get things wrong. There is a danger that public cynicism about opinion polls – justified or not – will have a negative impact on the credibility of market research in general.

Reference was made to a useful article in the IJMR (issue 4 2016) by Daniel Nunan on “The declining use of the term market research”. It was suggested that market researchers should do more to explain the difference between political polling and market research.

As one contributor commented:

'Overall, market research has probably benefitted over the years from having political polling as a 'flagship'.  However this now appears to be changing and the whole sector may be damaged as a result.'

There was an interesting discussion on the possible causes of the failed polling predictions. One factor could be the gap between rational behaviour (e.g. when answering a polling question, when rational, justifiable, sensible and politically correct responses are called for) and emotional behaviour (e.g. in the privacy of the voting booth). As Michael Moore predicted in his Trumpland movie, a vote for Trump would represent 'the biggest ‘Fuck You’ in recent history'…!

This train of thought was echoed by various contributors who highlighted the heavy emotional appeal of Trump's campaign and the lack of any such appeal in Clinton’s. One cited a study by Odin Analytics prior to the election, where they asked voters:

'Without looking, off the top of your mind, what issues does [insert candidate name] stand for?'

They then analysed the answers to this one open-ended question. The text analysis showed more anger against Clinton and that Trump was more successful in establishing a signature message – see the full results here.

Another contributor quoted promotions guru Drayton Bird:

'For God knows how long I've been saying offer a clear emotional benefit. Trump’s message included ‘Make America Great Again”. Can you remember what Hillary's line was?'

A third cited Salena Zito in The Atlantic:

'The press takes him literally, but not seriously; his supporters take him seriously, but not literally.'

This article from a data geek was said to sum up nicely how many number crunchers have been missing the persuasive power of the emotional campaigns that have won many recent elections (e.g. Trump, Corbyn, the EU referendum).

Another contributor pointed to an interesting blog combining comment on the US election results with System 1 and System 2 thinking.

A further contributor suggested that the pressures of 'political correctness' may have discouraged voters from reporting their true intentions to the pollsters:

'The US election was particularly nasty, with each side vilifying the other, and, as a result, the opinion polls did not reflect the feelings and intent of significant portions of the electorate who may have felt intimidated by what passed as political discourse.'

It was suggested that issues and comments that were picked up and highlighted by the media were not necessarily reflecting the things that voters really cared about:

'I wonder if this a lesson for understanding the Trump win. Comments that he made that seemed so important (and disqualifying) to the liberal mindset, out in the real world just pass people by. The idea of ‘locker room banter’ was one way of summarising this – just words which are pretty unimportant detail in the context of the bigger issues. By contrast, Hillary uttered a word which I thought at the time would lose her election, and the word was: ‘Deplorables’. Hillary represented for many Americans an elite that was disconnected from ordinary lives and seemed to feel disdain for them and their views. So that term was not a detail, but a summary of, in effect, her brand proposition for these people. Hillary mocked her customers. Donald didn't.'

A paper by Alexander Wheatley from Lightspeed at a recent ESOMAR event was called 'Head or Heart, the conflicts of political polling' and tackled precisely this kind of 'rational vs. emotional' issue. Given that stated voting intentions may not reflect actual voting behaviour, his argument – as cited by a contributor to the discussion – is that a more drastic approach is needed, i.e. a new set of questions needs to be defined which could replace the traditional questions leading to predictions. 

In 'Sapiens – A Brief History of Humankind', historian Yuval Noah Harari speaks of the tendency of politics (and history) towards unpredictability:

'… history is what is called a ‘level two’ chaotic system. Chaotic systems come in two shapes. Level one chaos is chaos that does not react to predictions about it. The weather, for example, is a level one chaotic system. … Level two chaos is chaos that reacts to predictions about it, and therefore can never be predicted accurately. Markets, for example, are a level two chaotic system … Politics too is a second order chaotic system. Many people criticise Sovietologists for failing to predict the 1989 revolutions and castigate Middle East experts for not anticipating the Arab Spring of 2011. This is unfair. Revolutions are, by definition, unpredictable. A predictable revolution never erupts.'

As the poster of this quote pointed out: 'The big revolutionary stuff only happens because people don’t anticipate it’s going to happen. Otherwise it’s not revolutionary.'

However, as several other posters argued, in point of fact the majority of opinion polls were not too far off the mark. They were correct in showing a slight lead for Clinton over Trump in terms of the popular vote. So it might be considered unfair to condemn the polls too harshly. They called the overall result wrongly because their models were unable to predict the overall balance of state-level results and thus the composition of the Electoral College.

Perhaps polls carry expectations of a level of accuracy that isn’t always realistic: they were right about the popular vote, but their models got the state-by-state predictions wrong.

'The problem is not the polling itself, but the statistical models used to predict individual state results, based on national polling.  Because the US system involves an electoral college, really tiny margin wins in key states can result in an overwhelming win in the electoral college. No doubt somebody will soon be referring to the ‘Trump landslide’, but that is certainly not an appropriate description in terms of numbers of votes nationally.'
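The arithmetic behind this point can be sketched with a few made-up numbers (the electoral-vote counts and vote shares below are purely illustrative, not actual 2016 results): a candidate can carry several states by razor-thin margins, lose one heavily, and still win the Electoral College comfortably while losing the popular vote.

```python
# Illustrative sketch: tiny state-level margins can swing the Electoral
# College even when the national popular vote goes the other way.
# All figures below are hypothetical, chosen only to show the mechanism.

states = [
    # (electoral votes, candidate A's vote share, total votes cast)
    (29, 0.501, 9_000_000),   # A wins by 0.1 points -> takes all 29 EVs
    (20, 0.502, 6_000_000),   # another narrow A win
    (38, 0.501, 9_000_000),   # and another
    (55, 0.380, 14_000_000),  # B wins this one by a landslide
]

# Winner-takes-all: the statewide plurality winner gets every EV.
a_ev = sum(ev for ev, share, _ in states if share > 0.5)
b_ev = sum(ev for ev, share, _ in states if share <= 0.5)

a_votes = sum(share * total for _, share, total in states)
total_votes = sum(total for _, _, total in states)

print(f"A: {a_ev} EVs on {a_votes / total_votes:.1%} of the popular vote")
print(f"B: {b_ev} EVs on {1 - a_votes / total_votes:.1%}")
```

Here A wins the Electoral College 87 to 55 despite taking well under half the national vote, which is exactly why an EV margin can look like a 'landslide' while the vote totals say otherwise.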

Polling models are heavily dependent on past behaviour, and may struggle to keep up with the dynamics of rapid political change. As one contributor wrote:

'Political polling relies heavily in its sampling calculations on extrapolations from past voting behaviour. It’s therefore reliable when there is relative continuity of voter behaviour – lots of polls have called the results of elections for decades with remarkable accuracy – but much less reliable when there are big tectonic shifts, like we have seen with Brexit and this US election. … Polling isn’t completely broken, it just needs to wait until conditions return to those in which polling can work.'

Discussion then turned to the question of possible flaws in polling methodology, both in the US election and more generally. Factors here could be, for example:

  • Sample sizes that are too small
  • Sample design and inadequate capacity to read at a sub-group level
  • Over-sampling of certain voter groups and under-sampling of others
  • Flawed conversion of stated voting intention into actual turnout
  • Flawed conversion of nationwide results into local-level predictions (or vice versa)
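The first two factors above are, at root, sample-size arithmetic. A minimal sketch, using the standard worst-case margin-of-error formula for a simple random sample (the sample sizes are illustrative): the 95% margin of error shrinks only with the square root of n, which is why sub-group reads from an already modest sample get noisy very quickly.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case 95% margin of error for a simple random sample of size n.
    Real polls add 'design effects' (weighting, clustering) on top of this."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (2000, 1000, 400, 200):
    print(f"n={n:4d}: +/-{margin_of_error(n):.1%}")

# A 200-person sub-group within a 1,000-person poll is read at roughly
# +/-7 points - far too wide to call a race that turns on a point or two.
```

Quadrupling the sample only halves the error, which is one reason cheaper, smaller polls struggle to deliver the precision the headlines demand.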

It was argued that newer (and cheaper) polling methodologies are less capable of delivering robust results than more traditional and more rigorous approaches:

'I'm old enough to remember so-called random sampling methods being used in research.  They were used for sound statistical reasons, but were always relatively expensive compared to other sampling methods.  Seems to me that cost and also the craze for new technology has clouded the debate about polling. Both of the currently most used methodologies  (telephone and online) have big holes in their sampling base so it's really simple – they are innately flawed and demonstrably no longer work.'

A viewpoint from the US highlighted ways in which the pollsters failed to read what was really going on in the US election. The poster put it down to 'bad qualitative' and an over-reliance on numbers:

  • Polls didn't/couldn't capture just how many people were on the fence right up to Election Day. Many never got over the Bernie Sanders mess with the DNC… there were many last-minute decisions.
  • Facebook, and curatorial social technology, is taking a big hit for the algorithm serving up so much fake journalism in the last days — the echo chamber got louder, and false stories achieved larger circulation than real polls by the New York Times and others.
  • And, as an ethnographer (who spends most days in the field in “Middle America”), the media and the Clinton campaign utterly failed to understand the specifics of Trump supporters’ ire and frustrations. The media turned them into caricatures instead of openly acknowledging uniquely rural problems.

She concluded:

'From a researcher's point of view, it was largely a case of bad qualitative. The numbers lacked a face, a soul, or a heartbeat. Polling focuses on who is voting; we should have been asking ‘why’.'

The media placed great reliance on analyses of cumulated poll results (“polls of polls”). But when the basic methodology is flawed, this risks simply compounding the error. Combining several flawed predictions doesn't improve the quality of the ensuing prediction.
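This point can be demonstrated with a small simulation (all numbers hypothetical): averaging many polls washes out each poll's independent random noise, but any systematic error shared by all of them – say, a hard-to-reach group of voters under-represented in every sampling frame – survives the averaging intact.

```python
import random

random.seed(1)

TRUE_SUPPORT = 0.48   # hypothetical true vote share
SHARED_BIAS = 0.03    # systematic error common to every poll
NOISE_SD = 0.02       # independent sampling noise per poll

def run_poll():
    # Every poll draws fresh random noise, but carries the same bias.
    return TRUE_SUPPORT + SHARED_BIAS + random.gauss(0, NOISE_SD)

polls = [run_poll() for _ in range(50)]
poll_of_polls = sum(polls) / len(polls)

# Averaging 50 polls shrinks the random noise by a factor of ~7 ...
print(f"poll of polls: {poll_of_polls:.3f}")
# ... but the estimate still sits about 3 points above the truth:
print(f"error vs truth: {poll_of_polls - TRUE_SUPPORT:+.3f}")
```

The poll of polls converges confidently – on the wrong number. More polls buy precision, not accuracy.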

More rigorous academic research is needed here, it was argued.

In conclusion, the debate highlighted some of the dangers to market research when political opinion polls lose credibility in the eyes of the public. As various contributors remarked:

'The over-reliance on polls to predict election outcomes is a warning to all of us. It's dangerous to rely on single sources of data or single methods. We're almost certain to miss part of the picture.'

'If the perceived experts can't get it right, the market will turn to he who sounds credible. … For the good of polling and research more broadly, we professionals need to sort this out before our ‘profession’ is even further undermined than it has been recently.'

'Perhaps the research industry’s big crime has been not to have built a strong enough credibility for ourselves to be listened to carefully, over fast, quick, unmediated data.'

Overall it was an excellent discussion with many interesting and thoughtful contributions – a fine demonstration of the wealth of ideas and depth of thinking to be found among the ICG’s membership.