
What have we been talking about? Election Special

09 Jun 2015 | Research & Business Knowledge

Most polling is probably twaddle! vs. Polling is a thankless task – A review of the E-group discussion by Claire Labrum

This is a summary of the comments and discussions around the issue of polling which took place on the E-group immediately after the election, collated into broad subject areas.  The British Polling Council is conducting a formal review into the election polls, and it will be interesting to see the results of that – but in the meantime, this is the immediate response from ICG colleagues…

'A week is a long time in politics' (Harold Wilson) … and polling!

The issue in a nutshell: only the exit poll was accurate (20,000 people, all guaranteed voters) – 11 other polls were miles off – discuss!

What is the role of polls (and do people understand this?)

The E-group had quite a lot of debate about the purpose of polling:

  • To look at the impact on seats – but NOT to predict seats
  • To look at voter predisposition
  • To give a snapshot of opinion now rather than predict the outcome

This demonstrates that there is a mismatch between the research intention and the 'sector': winning seats is exactly what an election is about, but polls predict share of the vote rather than seats won.  So the polls are measuring something different and do not reflect the actual electoral process (i.e. they are based on national preference rather than constituency-level analysis).

BUT polls do have a responsibility – they are active and public influencers of the results and of how people choose to vote.  Part of the problem is that parties review and respond to polls, which itself could change intentions and affect the outcome and tactical voting (a catch-22 for the pollsters?) – indeed, do some (sponsored) polls try to manipulate the outcome?  Behavioural economics (BE) theory says that what other people do is a key influencer of behaviour … so maybe yes!

Expected accuracy – how wrong were the polls really?

Political polls just before elections are expected to be within 3% of the final result (although the counter view is that, given the sample sizes and the collective amount of polling that took place, this argument is bunkum).  In fact, the polls were 4% adrift on the Tory vote and 2% adrift on the Labour vote.  It looks worse because the media focused on the gap between the two, which went from 0% to 7% – so the polls didn't get it very wrong, just wrong enough to be embarrassing.  The Conservatives won only 7-8 seats more than the polls predicted.
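As a rough, purely illustrative sketch of where the conventional ±3% expectation comes from (assuming simple random sampling, which no real poll fully achieves): a single poll of about 1,000 respondents carries a sampling margin of error of roughly three points on each party's share, but pooling the collective volume of polling shrinks that random error to well under a point – which is why a miss of three to four points across the board looks like bias rather than bad luck.

    import math

    def margin_of_error(share, n, z=1.96):
        # 95% sampling margin of error for an estimated share from a
        # simple random sample of n respondents.
        return z * math.sqrt(share * (1 - share) / n)

    # Illustrative figures only: a party polling at 34%.
    print(f"One poll of 1,000 people: +/-{margin_of_error(0.34, 1_000):.1%}")   # about 2.9 points
    print(f"Pooled sample of 20,000:  +/-{margin_of_error(0.34, 20_000):.1%}")  # about 0.7 points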

The BBC poll of polls showed the two parties as neck and neck until a few days before the vote, when the Conservatives nudged ahead and showed 34% (vs. 37% actual share of the vote).  For Labour the poll of polls showed 33% (vs. 30% actual).  Whilst the polls were spectacularly wrong in 1992, they have been very accurate since then.  The exit poll was very accurate, reinforcing the truth that research is most accurate when it is closest to whatever it is measuring.  Polls also underestimated the NO vote in the Scottish independence referendum – could this be the influence of society on claimed behaviour vs. what people actually do?  It is not 'cool' to say you would vote NO, or indeed vote Tory (hence the discussion about the 'shy Tory voter').  Note that the shy Tory hypothesis is just that: there is nothing concrete to prove or disprove it.

Expectations of what polls (and research) can and cannot deliver

Most MR is not expected to be THAT accurate – it simply provides a guide to what is happening.  However, polls are open to more scrutiny and are expected to be more predictive.  And pollsters push their services on the basis of predicting the results (otherwise they wouldn't have any customers).  So is the margin of error argument a strong enough defence?

Was this due to a failure of the polling methodology?

There was some discussion on methodology and that it needs looking at, but no real suggestions as to what should change.  The failure-in-methodology argument focused on lengthy, complex and rushed questions, issues of sample and response error, and whether warm-up questions were included.  Observations made by the E-group included:

  • If the margin of error allows for massive differences in outcome, then the margin of error needs to be reduced.  On the other hand, increased sample size wouldn’t make a difference as the crucial errors are due to other factors and may be unique to this election
  • If the caveats that need to be issued with poll data are such that almost any election outcome is possible from them, then they probably aren't fit for purpose – but this seems to be overstating it a little!
  • If all the polls are wrong in the same direction, then this may indicate that there is systematic bias that needs addressing (the sketch after this list illustrates the point)
  • The polling industry must publish all polls and avoid 'adjusting' poll data to fit a perceived norm.  However, it seems that by and large they do publish, and they try to adjust for known biases rather than referencing other polls
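To see why several of the points above hang together – in particular why a bigger sample, or an average of eleven polls, cannot fix an error that all the polls share – here is a minimal simulation sketch; the 'true' share and the three-point bias are invented for illustration, not taken from any actual poll.

    import random

    def simulate_poll(true_share, n, shared_bias):
        # One simulated poll: the true share distorted by a bias common to every
        # poll (e.g. a 'shy' group under-reporting), plus ordinary sampling noise.
        biased = true_share + shared_bias
        hits = sum(random.random() < biased for _ in range(n))
        return hits / n

    random.seed(1)
    polls = [simulate_poll(0.37, 1_000, -0.03) for _ in range(11)]
    print(f"Average of 11 polls: {sum(polls) / len(polls):.1%}")  # about 34%, not 37%

    # Averaging cancels the random noise but leaves the shared bias untouched,
    # which is why eleven polls can all be wrong in the same direction.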

Our industry must be able to identify 'shy Tories', 'shy UKIPs' and 'shy anything elses', as well as more, and less, likely voters, and factor these into the forecasts – the data might then usefully be published in both straight-read and interpreted/forecast formats.  There was also a discussion about the difference in response when people are asked to name their nationwide preference and then their preference when local candidates' names are attached – this showed a massive rise in tactical voting.

There are also issues around how questions are phrased – when asked who they intended to vote for, the results showed a 50/50 split.  But when asked who they thought the majority of people would vote for, a Conservative win was predicted.  So, do polling companies really understand people?  Why do people say they will vote one way and then behave differently – has it become too 'black box'?  People change their minds and respond emotionally to campaigns – do polling companies take this into account?

There was also lots of debate about the differences between online, telephone and face-to-face polling.  Lots of polls are internet-based – what effect has this had on how people respond?  In telephone polls, the Tories showed a consistent three-point lead – which supports arguments that:

  • Older voters are less likely to complete internet-based polls
  • On the phone, people are more likely to be honest
  • Young people (who are less likely to vote) have mobiles rather than landlines and so are less likely to be telephone polled – so are telephone polls more accurate?

Complexity of the voting decision vs. oversimplification of polling questions

Voting is a complex decision where people need to consider various factors, including local candidates, parties and party leaders.  Does polling oversimplify this decision?  See the comment above about the difference when people are asked which party vs. which candidate they will vote for.  This makes a strong argument for using the full research spectrum when investigating political views – this is no longer a purely quantitative process, and more qualitative analysis should be included.

Should polls always get it right?

For most elections since the war, polls have been there or thereabouts.  However, there is another argument that 2001, 2005 and 2010 got the outcome right but the detail awry in consistent areas, specifically overestimating Labour support and underestimating Tory support.  But there are lots of things that can result in polls getting it wrong which you cannot anticipate or control, including a late change of campaign tactics to a scare tactic (which is what the Tories did this time and which possibly hardened the vote).  Polls can only measure an intention about future behaviour expressed at a specific point in time (BE implications?).  What is also interesting is that 1974 had two elections – and the outcome changed significantly between them – indicating that people genuinely change allegiance, so were the pollsters victims of the public's fickle nature?

The role of media in misrepresenting poll results

The media seek headlines and are uninterested in the detail.  Many of the 2015 polls were swathed in caveats (more so than in previous years) because of the uncertainties, but these are of no interest to the media and so get lost along the way.

We were aware of the rise of UKIP and the collapse of the LibDems, so this was likely to make polls less reliable than usual – polling companies were having to make more assumptions this time than last time.  But everyone got it wrong in the same way, so is this an indication of 'group think' amongst the polling companies?  Or were they measuring something real?  The argument for conformity is supported by Survation, who predicted a Tory win but didn't want to publish for fear of being different.  They said 'the results seemed so out of line with all the polling conducted by ourselves and our peers – what poll commentators would term an outlier – that I chickened out of publishing the figures, something I'm sure I will always regret'.  So even the pollsters don't believe their own results, so why should anyone else?

 Either the polls were right, and a large number of voters changed their minds about how or whether they would vote, or there is some fundamental flaw in the methodology adopted by polling companies leading to them all generating data that is biased in the same direction.

Who else is measuring intentions?

The parties themselves got their predictions wrong.  They were out on the streets, talking to people and doing their own polls, yet they didn't see it coming – this suggests that people actually did change their minds.

The reaction from the industry has not been helpful

Peter Kellner (YouGov) said that politicians rely too heavily on polling data rather than standing on a platform of what they believe in.  He said 'politicians should campaign on what they believe, they should not listen to people like me and the figures we produce'.  This is, to put it mildly, unhelpful and undermines the value of the research discipline.

There has been lots of hindsight commentary – resulting in a degree of self-flagellation – but surely we should see this as an opportunity to explore people's motivations in more depth rather than cave in and hang our heads in shame?  There is a sense that the industry reaction has been a little feeble and akin to 'a rabbit in headlights'.  Responses from agencies are likely to be defensive, and the response from the industry will take too long and miss the moment.

Scope for error in polling

In the polls, roughly 15-18% said they would not vote, were undecided or refused to take part.  In reality, around 30% of the population don't vote – turnout is a long way from 100% and it is unclear whether polling companies take this into account.  However, in this election we had a 66% turnout, which was high (so maybe the 'adjustments' being made were too robust?).  There is nonetheless a massive number who express an intention but then don't vote, are undecided or refuse, which does introduce massive potential error … and this is not counting those who change their minds in the last days before the election (a real factor in this election?).
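A quick back-of-envelope sketch (the figures are purely illustrative, not taken from any poll's actual tables) shows how easily that pool of undecided and non-responding voters could move a one-point headline gap:

    con, lab = 0.34, 0.33   # headline shares of the kind the poll of polls reported
    undecided = 0.16        # roughly the 15-18% who were undecided or refused

    # Scenario: a quarter of the undecided block turns out and breaks for the
    # Conservatives; the rest stay at home. Nobody 'lied' - they just decided late.
    late_breakers = 0.25 * undecided
    print(f"Gap moves from {con - lab:+.0%} to {con + late_breakers - lab:+.0%}")
    # roughly +1 point becomes +5 points, before anyone even changes their mind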

Assuming the norm

Post-1992, a lot of changes were made by polling companies.  Since then, most polling predictions have been there or thereabouts.  However, conditions since 1992 have been relatively stable and predictable, so assumptions have held true.

2015 was a totally different kettle of fish and new issues emerged to undermine the accepted norms – multi-party politics, an outmoded electoral system, the transformation in Scotland, UKIP, and lots of tight marginal seats (impossible to call, but decisive on the overall outcome).  Even the media was saying that it was 'the most unpredictable election in decades'.

Other factors also contributed – the reluctance for change in a relatively stable period (a heuristic: better the devil you know), Labour's perceived responsibility for the latest financial crash, the changes that have occurred within Labour over the past 5 years (which Ed Miliband has been instrumental in), the loss of Labour support to the SNP in Scotland post-referendum (and England's fear of runaway SNP influence), and a Labour credibility gap.

Don’t forget what we owe to pollsters

Whilst it is easy to moan, let's not forget that it was the accuracy of the first Gallup polls that laid the foundation for the MR industry in the first place – we wouldn't be here if not for the pollsters.

What now?

The British Polling Council is conducting an enquiry, but we have to wonder what the starting point is – 'let's find out what went wrong' or the more constructive 'let's look at what was measured, how, with whom, and what conclusions were drawn'.  The industry must address misperceptions head-on and challenge the media narrative.

And what of the MRS response?
