A simple guide to political polling

Posted on May 11th, 2017 by Beatrix Pitel

Crick Centre Associate Fellow Beatrix Pitel interviews Joe Twyman, Head of Political and Social Research for Europe, the Middle East and Africa at YouGov.

With the snap general election on June 8th, newspapers and social media are filled with statistics from political polls more than ever. But do many members of the general public really understand how polling works, or how to tell a good poll from a bad one? Last week I met the political pollster Joe Twyman for coffee, in the hope that he could shed some light on the subject.

How many people do you usually poll and why that number? (pollsters call this the sample size)

We usually poll about 1,500 people.

The question of why is an interesting one. You can read books full of complicated equations for working out the sample size for a given population based on the interrelationships between variables, but it's mostly rubbish. You should be choosing the sample size that is most appropriate for the survey in question, that is, the size that the margin of error will support.

Over 1,000 is also a good number because it is over 1,000. Psychologically it sounds good, so it is good for commercial clients just as it is good for the media. There is no meaningful statistical difference between a survey of 999 people and a survey of 1,000. It's just that the survey of 1,000 sounds better.

(All polls have a margin of error because they involve sampling. A smaller group of people is surveyed in the hope of understanding what a larger group of people think. Polls are therefore subject to the laws of probability and the results will contain some amount of error. A poll of 1,000 people has a margin of error of roughly plus or minus 3 percentage points, which represents the range within which we can be reasonably confident the true figure for the whole population falls. The size of the sample affects the margin of error: smaller samples are less precise and therefore more prone to error.)
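As a rough illustration of the arithmetic behind those figures, here is a minimal Python sketch. It assumes a simple random sample, a worst-case 50/50 split and the conventional 95% confidence level; those assumptions are mine, not anything stated in the interview.

```python
import math

def margin_of_error(sample_size: int, confidence_z: float = 1.96) -> float:
    """Approximate worst-case margin of error, in percentage points,
    for a simple random sample at 95% confidence (z = 1.96)."""
    p = 0.5  # a 50/50 split maximises the standard error
    standard_error = math.sqrt(p * (1 - p) / sample_size)
    return confidence_z * standard_error * 100

print(round(margin_of_error(1000), 1))  # ~3.1: roughly the +/- 3 points quoted above
print(round(margin_of_error(999), 1))   # ~3.1: 999 vs 1,000 makes no practical difference
print(round(margin_of_error(500), 1))   # ~4.4: smaller samples are less precise
```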

How do you decide which 1,500 people to survey?

What we do not do is select 1,500 people at random. That would be problematic because it would simply replicate the biases which, while not enormous, are still present in our panel as a whole.*

Instead we invite specific people to take part. We do this using a combination of the following bits of information: gender, age, region, education, political interest, general election vote in 2015 and EU referendum vote. A lot of information about people is collected when they first register for the YouGov panel, and we then collect further information from the surveys that they complete.

Essentially, what we want to do is represent the population we are trying to survey. It doesn't matter whether that population is Labour party members, Conservative voters or the UK adult population as a whole. We need to make sure that we've got the right number of men, women, young people, women without degrees living in Buckinghamshire, men with degrees in Essex and so on, for each population.

*(YouGov surveys are all done online via YouGov’s online panel. The public can sign up online and are given points for completing surveys).
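To make the quota idea above concrete, here is a minimal sketch of that kind of targeted selection. The panel fields, cells and target counts are purely illustrative assumptions of mine, and far simpler than the combination of gender, age, region, education, political interest and past votes described above.

```python
import random
from collections import Counter

# Illustrative quota targets: respondents wanted in each demographic cell.
# Real targets would be derived from census and past-vote data.
QUOTA_TARGETS = {
    ("female", "18-34"): 180, ("female", "35-54"): 200, ("female", "55+"): 190,
    ("male",   "18-34"): 170, ("male",   "35-54"): 200, ("male",   "55+"): 190,
}

def invite_sample(panel, targets):
    """Invite panellists until every demographic cell reaches its target."""
    filled, sample = Counter(), []
    wanted = sum(targets.values())
    for person in random.sample(panel, len(panel)):  # shuffle the panel
        cell = (person["gender"], person["age_band"])
        if filled[cell] < targets.get(cell, 0):
            sample.append(person)
            filled[cell] += 1
            if len(sample) == wanted:
                break
    return sample
```

The point of the sketch is simply that respondents are invited to fill pre-set cells that mirror the target population, rather than taken from the panel as they come.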

Polls are surely not conducted for the good of the world. Who pays for them and why?

Well, sometimes they are conducted for the good of the world, but that doesn't necessarily mean they are not paid for!

People conduct polls for a whole host of reasons – for example, to understand, prove or test something. Who pays for it? Well, in the case of YouGov, 90-95% of those who pay for polls are commercial organisations, and those polls never see the light of day: polls on pension products, pet food and everything else under the sun.

Then there are the polls we do in the Political team. We are best known for our published political work, but even there a large proportion of our work is never published. Instead, we are doing work for organisations like pressure groups, political parties, trade unions and political candidates. We also do work for the media, most publicly for The Times and The Sunday Times. Who pays for that? The Times and The Sunday Times!

Just after the announcement of the June 8th election, one polling company put the Conservatives 21 percentage points ahead of Labour and another put them just 9 points ahead. Both are well-known and respected polling companies. Why are the polls so different?

They are well known, but I would question whether they are respected! One of the polls was in The Mail on Sunday. If you look at their data, they are suggesting that since the 2015 general election the level of UKIP support has fallen by just 1%. Now, given all the evidence in the real world, UKIP having lost their leader, their funding and so on, I would suggest that they are probably not just 1% below what they got at the last general election. YouGov have them at 5%, having lost two thirds of their vote to the Conservatives.

Why do pollsters keep getting it wrong? For example, in 2015 the pre-election polls failed to capture the Conservative lead, and the British Polling Council, along with the Market Research Society, established a full independent inquiry into what happened.

It depends what you mean by 'keep' and 'wrong'. Often when these things are judged by the public, and we understand why this happens, they are judged in binary terms: did they get the overall story right or wrong?

Instead, pollsters tend to think in terms of accuracy. In 2010 YouGov were saying that there was going to be a coalition, and lots of people were suggesting we were wrong. For the ten years prior to that we also had a pretty good track record. When 2015 came along, the expectation was that we couldn’t get it wrong. It’s true that we don’t always get things right, but it’s not true that we always get it wrong.

The vast majority of polls we publish are right. In fact, if you look at the history of all published political polls since 1945, in the vast majority of elections the polls have been right. The average error of all these polls is just 2% and no single poll has ever been out by more than 6%.

The error in 2015 was certainly larger than pollsters wanted it to be; we hold our hands up and agree that improvements need to be made. However, a surprise error is often confused with a large one. The suggestion that we were massively out is just not the case. We were not saying that UKIP would form the government! It was a relatively small error of 3.3%: historically larger than average, and larger than we wanted, but at the same time not enormous.
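For readers wondering how a figure like 3.3% is arrived at, polling errors are commonly scored as the average absolute gap between a poll's party vote shares and the actual result. A minimal sketch, using made-up numbers purely for illustration rather than the 2015 figures:

```python
def average_poll_error(poll: dict, result: dict) -> float:
    """Mean absolute error, in percentage points, across the parties polled."""
    return sum(abs(poll[party] - result[party]) for party in poll) / len(poll)

# Hypothetical final-poll and actual vote shares (percentage points)
poll = {"Con": 34, "Lab": 33, "UKIP": 12, "LD": 9}
result = {"Con": 37, "Lab": 30, "UKIP": 13, "LD": 8}

print(average_poll_error(poll, result))  # 2.0
```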

The Guardian’s US data editor, Mona Chalabi, recently argued that political polling is bad for democracy. What would your response to her be?

I have already tried to challenge her to a public debate. It is very easy to sit in an office in New York and say ‘I think it’s bad for democracy’ and then refuse to debate it.

What is bad for democracy is when certain groups in society are privy to information that the rest of the public is not. You take opinion polling out of the public domain and you give it to elites and the establishment. You get to a stage where the bankers know, the broadcasters know and the politicians certainly know! But who doesn't? The ordinary people.

While I was working on public opinion polling in Iraq, people – my friends – were placing their lives and the lives of their families on the line to conduct opinion polls about the nature of democracy and the emerging democratic system. Why were they doing it? They profoundly felt that it was important for the democratic process.

I consider it an insult to those people that someone can type this sort of thing on a blog and then refuse to defend it.

Biography

Beatrix graduated from the University of Sheffield in 2016 with a BA in Politics and Philosophy. She won the YouGov Quantitative Methods Prize for her dissertation, a statistical analysis of attitudes towards gender equality and perceptions of democracy in Turkey.
