The politics of polling

Chapter 2: Background

18.The last three UK-wide elections have, with some individual exceptions, represented notable failures for the polling companies. The polls published before the 2015 General Election, the 2016 referendum on the UK’s membership of the EU, and the 2017 General Election, collectively failed to predict the outcome in each case. To misquote Oscar Wilde, to get one election wrong may be regarded as a misfortune, to get two wrong looks like carelessness, and to get three wrong suggests something somewhere has gone horribly amiss.

19.What was of particular concern following each of these elections was the impact that the polls might have had in driving the ‘narrative’ of the campaigns, and the possibility that they influenced the way people voted. We use the term ‘narrative’ to describe the messages which dominate discussions during election campaigns, whether that is in broadcast news or interviews, newspaper articles, or even general public conversation. For example, election coverage may focus predominantly on a particular policy of a leading party, or on the electoral consequences of a particular result. To the extent that these narratives are driven by flawed polling, questions may legitimately be asked about whether polling inhibits the functioning of the democratic process.

20.The reasons behind each polling failure are unlikely to be the same, with no overarching methodological issue at fault for every election. We were told, however, that there are specific challenges involved in polling the modern electorate, which are making it more difficult to estimate political opinion accurately.

21.We had three initial questions to answer: Are polls getting worse? If so, why? And does it matter for the democratic process?

This chapter addresses these questions.

What is a poll?

22.There is no universally agreed definition of a poll or any clear distinction between a poll and a survey. Instead, the words ‘poll’ and ‘survey’ are used to describe a variety of data collection exercises.20 Random sampling is sometimes used to distinguish surveys from polls. However, polls sometimes use random sampling, and research that uses quota sampling is often referred to as a survey. The distinction is therefore more a matter of the purpose and objectives of the research than of its methodology.

23.It is broadly accepted that polls are intended to provide snapshots and measure changes in what the population thinks about contemporary issues at a particular time,21 often with the intention of influencing the debate on a particular issue. Surveys that measure public opinion, on the other hand, tend to be focused on ‘the bigger picture’ and are generally used by academics and think tanks to understand changing social and political attitudes from a more normatively neutral perspective—an example here would be the British Social Attitudes survey series.

24.There is an added complication when considering the definition of a ‘political poll’ (the term used within our remit). Some polls might ask questions which are obviously political in nature (such as views on Brexit, for example). Other polls might ask more wide-ranging questions on social issues which, while not directly political, could of course have a political angle depending on the context (an example might be questions on satisfaction with and resources for the NHS). Furthermore, some surveys might contain these wider questions on social issues as well as more directly political questions on voting intention and party evaluations.

25.This ambiguity in terminology is a particular problem for members of the public and the media, who have no obvious way of checking the quality of polls or surveys, and this is an issue which we return to in Chapter 3. However, for the purposes of this report, we use the following terms:

These definitions are not intended to be exhaustive, or to describe the quality of polls. Issues such as the representativeness of samples, and the presentation of poll findings, are explored in some detail later in this report. Furthermore, in some places in this report, we use the term ‘poll’ in its general sense, to encompass polls which contain both directly political questions, as well as questions on wider issues.

26.In Appendix 5, we outline some basic information about the polling industry.

Who commissions polls?

27.Over the course of an election campaign, polls are commissioned by a diverse range of individuals and organisations, including newspapers, advocacy groups, political parties and businesses. There has been a notable growth in voting intention polling over recent years. Approximately 3,500 polls were conducted over the 65-year period between 1945 and 2010. By contrast, in the five-year period from 2010 until 2015, there were nearly 2,000 published polls, with over 90 voting intention polls undertaken over the course of the six-week General Election campaign in 2015.24 This increase has been driven largely by the advent of online polling, which has resulted in lower costs and lower barriers to polling.

28.Many polling organisations are commissioned to produce regular voting intention polls for specific newspapers throughout an election campaign. For example, during the 2017 General Election, Ipsos MORI conducted exclusive polling for the Evening Standard25 and ComRes produced polls for The Independent.26 YouGov has carried out polling for The Sunday Times since 2002, regular polling for The Daily Telegraph from 2002 to 2010, daily polling for The Sun from 2010 to 2015, and since 2015 it has polled for The Times.27

29.In contrast, some of the major broadcasters are less inclined to commission their own voting intention polls. The BBC never commissions voting intention polls during election campaigns, in accordance with its Editorial Guidelines.28 The major broadcasters do, however, commission an exit poll for General Elections, which polls a sample of voters as they leave polling stations. In 2010, Sky News formed a broadcasters’ consortium with the BBC and ITV News to commission that year’s General Election exit poll. The fieldwork was undertaken by the polling organisations GfK and Ipsos MORI. The same consortium undertook the 2015 and 2017 exit polls.29 According to one of our witnesses, the exit poll cost around £300,000.30

30.Exit polling in the UK has enjoyed a greater level of success recently than the pre-election voting intention polls. In 2015 the exit poll indicated there was little doubt that David Cameron would remain as Prime Minister, and in 2017 it suggested that Theresa May had lost her majority.31 The key difference between the exit poll and standard voting intention polls is that the exit poll does not aim to estimate how many people will vote for each party. Rather, its objective is to estimate the number of seats each party will win. A sample of voters is polled as they leave the polling booth in a sample of polling stations. Wherever possible the same set of polling stations is used at each election, meaning that the data provide an estimate of the change in support for each party in each constituency.32

31.Dr Jouni Kuha, Associate Professor of Statistics and Research Methodology at the London School of Economics, attributed the success of exit polls to two factors. First, he highlighted the data collection techniques—that the exit polls ask thousands of people how they voted, not how they intend to vote, in the same polling stations as last time. Second, an equally crucial factor was the data analysis—the “fairly elaborate sequence of statistical analysis and statistical modelling” that was applied to produce the predictions.33
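To make that logic concrete, the following is a deliberately simplified sketch, in Python and with invented figures, of the change-based approach described above: changes in party support observed at stations polled at successive elections are applied to each constituency’s previous result, and the projected winners are counted. All names and numbers are illustrative assumptions; the real exit poll replaces the crude averaging step with the elaborate statistical modelling Dr Kuha describes.

```python
# Illustrative sketch of change-based exit poll projection.
# All figures are invented; the real exit poll models the station-level
# changes statistically rather than simply averaging them.

# Change in vote share (percentage points) at each sampled polling
# station, relative to the previous election at the same station.
station_changes = [
    {"Con": +1.0, "Lab": +6.0},
    {"Con": -0.5, "Lab": +4.5},
    {"Con": +2.0, "Lab": +5.5},
    {"Con": +0.5, "Lab": +5.0},
]

# Crude stand-in for the modelling step: average the observed changes.
avg_change = {
    party: sum(s[party] for s in station_changes) / len(station_changes)
    for party in station_changes[0]
}

# Previous election vote shares (%) in each constituency.
previous_results = {
    "Constituency A": {"Con": 45.0, "Lab": 35.0},
    "Constituency B": {"Con": 38.0, "Lab": 40.0},
    "Constituency C": {"Con": 42.0, "Lab": 41.0},
}

# Apply the estimated change everywhere and count projected winners.
seats = {}
for constituency, shares in previous_results.items():
    projected = {p: shares[p] + avg_change[p] for p in shares}
    winner = max(projected, key=projected.get)
    seats[winner] = seats.get(winner, 0) + 1

print(avg_change)  # {'Con': 0.75, 'Lab': 5.25}
print(seats)       # {'Con': 1, 'Lab': 2}
```

Because the projection rests on changes measured at the same stations, rather than on absolute levels of support, many of the sampling biases that afflict pre-election polls largely cancel out, which helps to explain the exit poll’s stronger record.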

32.Because of the variety of polling which exists, there is no comprehensive oversight of the practice. If polls are commissioned by campaigners for the purpose of promoting the electoral success of a political party, parties or candidates at an election, then the spending will be regulated under electoral law and the registered campaigners will need to include it within their spending returns submitted to the Electoral Commission. However, polls commissioned for different purposes or by different actors, such as newspapers, are not regulated. Meanwhile, polling companies may be members of the British Polling Council (BPC) or the Market Research Society (MRS), and those bodies place certain obligations upon their members, but these are voluntary membership bodies, rather than statutory regulators. In addition, some of the organisations publishing polls online are not subject to any form of regulation at all. There is no overall regulator which monitors the variety of polls being commissioned every day, or monitors who funds and produces them. This is an issue which we examine in further detail in Chapter 5.

The accuracy of voting intention polls

33.The performance of voting intention polls in recent years has, according to commentators, given polling a “bruising”34, a “battering”35 and a “bloody nose”36—placing the accuracy of this type of polling very much under the spotlight.

34.The polls have been wrong before—the 1992 General Election saw the polls underestimate the Conservative lead over Labour by nearly nine percentage points.37 In 2012, Martin Boon, then a Director at ICM Unlimited, described the performance of polls at the 1997 and 2001 General Elections as “mediocre.”38 In all three cases the polling companies had underestimated Conservative support. Results for the 2005 General Election were more favourable, with analysis conducted by the BPC demonstrating that the average error for its member companies was no greater than 1.5 percentage points.39 Again, in 2010, polling companies fared fairly well, with the BPC reporting that all but one of the nine companies produced polls that came within two percentage points of the Conservative vote share, and five companies produced polls that were within one percentage point, although there was a tendency to over-estimate the Liberal Democrat share of the vote.40

35.The 2015 General Election, however, saw a universal failure of the final polls accurately to predict voting intention, resulting in the most significant polling failure since 1992. The combined results of the 2015 and 2017 General Elections, and the referendum on the UK’s membership of the EU, appear to show a significant, successive failure to estimate voting intention accurately. This has prompted concerns that polling accuracy may be in systematic decline, placing a renewed emphasis on the question of whether we can trust the polls.

36.A central consideration for our inquiry was, therefore, whether the results of polls over the last three years are evidence of a broader trend, and whether the polls are getting worse.

Are the polls getting worse?

37.We recognise that assessing the ‘accuracy’ of voting intention polling is not straightforward. Professor Will Jennings, Professor of Political Science and Public Policy at the University of Southampton, has assembled what he believes to be the largest cross-national data set of voting intention polls for national elections from 45 countries dating back to the 1940s. He told us that: “There is no single, universal benchmark against which the accuracy of polls can be gauged.”41

38.Furthermore, several witnesses told us that polls are not necessarily intended to be predictors of election outcomes. Instead, they represent a ‘snapshot’ in time, and public opinion can and does shift between the date of a poll and the election.

39.Professor Jennings outlined several ways in which polling accuracy can be measured. These included:

40.Of course the information provided by voting intention polls is more complex than just the final prediction and the final result. Indeed, polling during a campaign can tell us a range of important details about the electorate’s views. The focus on the final Conservative-Labour margin can obscure some of the other political insights that can be garnered from polls. Professor Sir John Curtice, President of the BPC, noted that:

“I would suggest to you that, even in 2017, the opinion polls told you an awful lot of things that it was rather useful to know. They told you that the public were changing their minds about the merits of the Leader of the Opposition and of the Prime Minister. They also told you that the Labour manifesto was more popular than the Conservative manifesto and that Brexit was indeed dividing voters—that voters who had voted leave were swinging towards the Conservatives and voters who had voted remain were more likely to swing towards Labour … For the discerning reader, there was an awful lot of political intelligence in the opinion polls.”43

41.While discerning readers may be interested in this additional information, it is reasonable to assume that an average member of the public may not be. To the public and the media, which party is ahead—the ‘horse race’ of election coverage—is usually the main focal point throughout the electoral campaign and it is this that provokes the most scrutiny.

42.Professor Jennings explained to us that his work, looking at historic datasets of voting intention polls, had: “enabled analysis of the evolution of voter preferences over the election cycle and, most pertinent to the remit of this committee, the degree to which polls at the end of the cycle correspond to election outcomes.”44 He had concluded that “there is no evidence of a global crisis in polling”. He also suggested that the historical accuracy of polling in the UK was typical of similar advanced democracies. In reference to the 2015 and 2017 General Elections, Professor Jennings stated that: “While few would suggest that 2015 and 2017 were high points for pollsters, the errors experienced have not been outside the ordinary.”45 Professor Jennings has now published his research in the journal Nature Human Behaviour.46

43.Although this evidence did not support the idea that, overall, polling is getting less accurate, there was no dispute that polls published in the run-up to the last two General Elections and the 2016 referendum did not accurately reflect the eventual outcomes.

Performance of polls: 2015 General Election

44.While the voting intention polls in 2015 and 2017 were notable for their failure to predict the final result, it is, as highlighted by Professor Jennings, “important to put the two elections in quite different contexts. The accuracies of the polls in 2015 and 2017 are quite different.”47

45.In 2015, on average, the final estimates of the polling companies put the Conservative and Labour parties level, on 34% each. No individual poll put the Conservative party ahead.48 The final polls underestimated the Conservative lead by 6.5 percentage points.49

46.In response to the result, the BPC and the MRS established an independent inquiry into the causes of the discrepancy between the final polls and the election result. Under the chairmanship of Professor Patrick Sturgis, Director of the National Centre for Research Methods at the University of Southampton,50 the inquiry was charged with the task of establishing the degree of inaccuracy in the polls, the reasons for the inaccuracies and whether the findings and the conduct of the polls were adequately communicated to the general public.51 The inquiry published its findings in March 2016.52

47.According to the inquiry team, the main reason the final polls did not reveal a decisive Conservative lead was that polling samples were not sufficiently representative of the voting population. The report said:

“Our conclusion is that the primary cause of the polling miss in 2015 was unrepresentative samples. The methods the pollsters used to collect samples of voters systematically over-represented Labour supporters and under-represented Conservative supporters. The statistical adjustment procedures applied to the raw data did not mitigate this basic problem to any notable degree.”53

Performance of the polls: 2017 General Election

48.Writing shortly after the 2017 General Election, Peter Barnes, Senior Elections and Political Analyst, BBC News, said:

“Once again the polls, taken as a whole, were not a good guide to the election result.

Over the course of the campaign the gap between the main two parties narrowed but, with one exception, the final polls all suggested a clearer Conservative lead than the actual outcome.”54

He went on to note that the polls were not “an unmitigated disaster”.55 However, it was clear to all those we spoke to that the 2017 polls were not a roaring success either. In fact, the final polls showed a mean absolute error on the Conservative-Labour lead of 5.3 percentage points.56
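For reference, the mean absolute error on the lead can be read as the average, across the k final polls, of the absolute gap between each poll’s estimated Conservative-Labour margin and the actual margin (the symbols below are our own):

\[
\mathrm{MAE}_{\text{lead}} = \frac{1}{k}\sum_{i=1}^{k}\Bigl|\bigl(\hat{p}^{\,\mathrm{Con}}_{i} - \hat{p}^{\,\mathrm{Lab}}_{i}\bigr) - \bigl(p^{\mathrm{Con}} - p^{\mathrm{Lab}}\bigr)\Bigr|
\]

where \(\hat{p}_{i}\) denotes poll i’s final estimate of a party’s vote share and \(p\) the share actually recorded. Since the actual Conservative lead in 2017 was roughly 2.5 percentage points, an error of 5.3 points implies that the final polls typically showed a lead around three times the size of the one that materialised.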

49.The reason for the polling failure in 2017 appears to have been the polling companies’ approach to weighting for turnout. Polling companies have to adjust their data to take into account who is likely to vote and who is not. In 2017, most companies used turnout models which assumed turnout patterns would be broadly the same as they were in 2015. However, turnout in 2017 differed in important respects from 2015, notably amongst voters under the age of 50, who were more likely to turn out and proved to be considerably more likely to vote Labour.57 This meant that most polling companies over-estimated the Conservative vote share because they under-weighted turnout amongst Labour-voting groups. For example, Ipsos MORI’s unadjusted data had shown the two major parties level, but when it adjusted for turnout, the Conservative party moved to an eight-point lead. Similarly, ICM initially predicted a six-point Conservative lead—too high, but within the margin of error—but its turnout-adjusted prediction was a 12-point lead for the Conservative party.58
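The mechanics of that adjustment can be illustrated with a toy example. The sketch below, in Python and with invented figures (not any company’s actual data or model), shows how applying 2015-style turnout probabilities to a raw sample in which the two main parties are level produces a clear Conservative lead:

```python
# Toy illustration of turnout weighting with invented figures.
# Raw sample: (age group, party supported, number of respondents).
raw_sample = [
    ("18-34", "Lab", 260), ("18-34", "Con", 140),
    ("35-54", "Lab", 210), ("35-54", "Con", 190),
    ("55+",   "Lab", 180), ("55+",   "Con", 320),
]

# Assumed probabilities of actually voting, in the style of 2015 turnout.
turnout_2015_style = {"18-34": 0.45, "35-54": 0.65, "55+": 0.80}

def weighted_shares(sample, turnout):
    """Vote shares (%) after weighting each respondent by turnout probability."""
    totals = {}
    for age, party, n in sample:
        totals[party] = totals.get(party, 0) + n * turnout[age]
    grand = sum(totals.values())
    return {party: round(100 * v / grand, 1) for party, v in totals.items()}

no_adjustment = {"18-34": 1.0, "35-54": 1.0, "55+": 1.0}
print(weighted_shares(raw_sample, no_adjustment))       # level: Lab 50.0, Con 50.0
print(weighted_shares(raw_sample, turnout_2015_style))  # Con ahead by ~5 points
```

Raising the assumed turnout of the youngest group shrinks the projected lead again; in effect, the 2017 models left that assumption too low.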

50.In response to the result, the BPC acknowledged that “the final polls were not ideal.” However, it stated that it did not consider it necessary to conduct another formal inquiry and instead decided to ask its members to produce a “lessons learned” report for discussion.59

Performance of the polls: 2016 referendum on the UK’s membership of the EU

51.The performance of the polls in the 2016 referendum, with most final polls showing a lead for ‘Remain’, is another example of a failure by the polling industry, although online polls fared notably better than those using telephone methods.60 The end result was that ‘Leave’ won by just under four percentage points. Professor Curtice told us that:

“One of the challenges that faced the polling industry during the EU referendum was that traditionally it has not been the practice of most political polling to attempt to gather information on education. Educational attainment has not usually been particularly important, once you knew somebody’s occupation or class position, but in the EU referendum education mattered much more than social class.”61

52.Over the course of the inquiry, the Committee has heard varying accounts of the accuracy of the polls for the 2016 referendum, with some witnesses suggesting that several polling companies indicated a likely ‘Leave’ win. Dr Nick Anstead, Assistant Professor at the Department of Media and Communications at the London School of Economics and Political Science, suggested to the Committee that: “To call this event a polling failure is perhaps unfair. In the run-up to the referendum, the polling data was quite mixed, with some polls showing a leave victory”.62 This notion was supported by the evidence from the World Association for Public Opinion Research (WAPOR), which told the Committee: “It is worth noting that of the 72 referendum polls conducted during the official campaign, 35 polls showed a Remain lead and 33 polls showed a Leave lead, with 4 showing dead heats.”63

53.However, the BPC’s assessment of the performance of the polls in the referendum concluded that, on the whole, the final polling predictions were not an accurate guide to the result:

“Seven member companies issued ‘final’ polls of voting intentions in the EU referendum. While no company forecast the eventual result exactly, in three cases the result was within the poll’s margin of error of plus or minus three points. In one case Leave were correctly estimated to be ahead. In the four remaining cases, however, support for Remain was clearly overestimated. This is obviously a disappointing result for the pollsters, and for the BPC, especially because every single poll, even those within sampling error, overstated the Remain vote share.”64

Is it getting harder to poll?

54.We asked many of our witnesses the same thing—is polling getting harder? Their answers were nuanced. Key challenges included: the increasing difficulty of persuading members of the public to take part in polls and surveys; the decline in the value of socio-economic class in predicting voting intention and thus in weighting poll data; the variety of demographic and political variables that polling organisations now need to take into account (such as education and attitudes towards Brexit); challenges associated with predicting who will turn out to vote (turnout); and the financial constraints affecting the newspaper industry and the corresponding impact on the commissioning of polling.65 Others, however, highlighted the benefits brought by the internet which has made polling faster and cheaper to do.66

55.Another issue raised was ‘voter volatility’. This refers to volatility in electoral choice—the willingness of voters to switch between parties—which has been increasing in the UK.67

Persuading the public to take part

56.It is becoming harder to get members of the public to take part in polls. Nick Moon of Moonlight Research told us that: “It will always be hard to persuade enough people to take part in your surveys to make them reliable.” However, he also suggested that it was an issue which was getting more problematic: “That is another thing that has got more difficult for pollsters. It is undoubtedly harder to get people to take part in surveys. You can see it in the response rates of the big government surveys … that is a problem that pollsters continually have to come up against.”68

57.Professor Curtice explained the impact that declining participation rates could have on the accuracy of polling:

“The principal problem is that response rates to surveys of any kind, including public and political opinion polls, are lower … There is probably a consensus that it potentially creates a problem for political polling in so far as it probably increases the probability that any sample that you obtain, by whatever method, contains disproportionately those who are interested in politics, and therefore contains more people than you would find in the general population who are going to vote.”69

Decline in the value of socio-economic class as a weighting variable

58.The methodological issues relating to the changing social base of British politics were a consistent theme in the evidence. Nick Moon, talking about the 2017 General Election, supported this, stating that:

“There is a decline of class-based voting. If someone had said before the election you were going to see the biggest working-class swing to the Tories in any election in living memory, you probably would have laughed at them, yet that is what we saw. It has become harder to work out what kinds of people might be likely to vote for one party rather than another.”70

59.These shifts in demographic predictors of voting mean that polling organisations are having to adjust the quotas and weightings used to try to ensure that their samples accurately reflect public opinion. Anthony Wells, Director of Political and Social Research at YouGov and the owner and author of UKPollingReport, explained how this had changed the approach to weighting samples:

“… the quotas and weights used to ensure samples are fully representative have changed over time. Fifty years ago ensuring a sample was representative in terms of social class would have been the most important factor, whereas social class now has very little predictive value in voting intention and it is more important to ensure samples are representative on factors like age, education and attitudes towards Brexit.”71
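To illustrate what adjusting those quotas and weights involves, here is a minimal sketch, in Python and with hypothetical targets and sample shares, of cell-based post-stratification on age and education jointly. Real polls weight on more variables, and often use raking (iterative proportional fitting) when only marginal targets are available:

```python
# Minimal post-stratification sketch with hypothetical figures.
# Assumed population shares for each (age, education) cell.
population_share = {
    ("18-34", "degree"): 0.12, ("18-34", "no degree"): 0.16,
    ("35-54", "degree"): 0.13, ("35-54", "no degree"): 0.21,
    ("55+",   "degree"): 0.10, ("55+",   "no degree"): 0.28,
}

# Shares actually achieved in a hypothetical sample.
sample_share = {
    ("18-34", "degree"): 0.20, ("18-34", "no degree"): 0.10,
    ("35-54", "degree"): 0.16, ("35-54", "no degree"): 0.18,
    ("55+",   "degree"): 0.14, ("55+",   "no degree"): 0.22,
}

# Each respondent's weight: population share / sample share for their cell.
weights = {cell: population_share[cell] / sample_share[cell]
           for cell in population_share}

for cell, w in sorted(weights.items()):
    print(cell, round(w, 2))
```

A respondent in an over-represented cell (young graduates here, with a weight of 0.6) counts for less than one person in the published figures; one in an under-represented cell (young non-graduates, with a weight of 1.6) counts for more.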

60.Professor Curtice confirmed that this aspect of polling had become more difficult over recent years, stating that: “The widening and changing social bases of electoral choice in the UK have made things more difficult for the industry.”72

Turnout

61.Predicting and adjusting for turnout has always been a challenge for polling organisations. This was particularly problematic in the 2017 General Election, where there was considerable divergence in how different companies adjusted for voter turnout. Professor Richard Tait CBE, Professor of Journalism at Cardiff University, told us that:

“There appear to be two fundamental problems—the failure of the polling companies’ currently constituted samples accurately to represent the electorate in an era of rapid and unpredictable political change; and the polling companies’ equally unsuccessful attempts to turn their raw data into accurate predictions of the outcome by estimating the likelihood of specific groups (such as young people) actually voting.”73

62.The issue of differential turnout—where levels of turnout vary between supporters of different parties—and how polling organisations take this into account, was also identified as a particular issue for current polling. Professor Curtice suggested that turnout in recent years had been “persistently lower than it was through to 1997.” He explained that:

“Clearly, once turnout is lower, there is a greater probability that you will get a differential turnout of a kind that may be relevant to understanding what the outcome of an election is going to be. It is pretty clear from the experience of both 2015 and 2017 that estimating correctly who is and who is not going to turn out, particularly the differences in turnout between different demographic groups, is now one of the principal challenges facing the polling industry.”74

Financial constraints and the impact of the internet

63.The MRS noted that it was not just methodological issues which presented challenges, and highlighted the impact that financial constraints could have on accuracy. It said that:

“Commissioning clients will generally ‘get what they pay for’ but dwindling resources and budgetary allocations mean that costs of opinion polling are continually being driven downwards. Larger representative sample sizes for opinion polls can reduce the margin of error but also result in an increase in the base price. Commissioning clients, particularly news and media organisations which use opinion polls to generate journalist content, make decisions on political polling design which prioritises speed of delivery at low cost.”75
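The trade-off between sample size, precision and price that the MRS describes follows from basic sampling theory. Under simple random sampling assumptions (a simplification relative to real quota samples), the 95% margin of error for an estimated vote share \(p\) from a sample of size \(n\) is approximately

\[
\mathrm{MoE} \approx 1.96\sqrt{\frac{p(1-p)}{n}}
\]

Precision therefore improves only with the square root of the sample size: for \(p = 0.4\), a sample of 1,000 gives roughly plus or minus three percentage points, while tightening that to plus or minus 1.5 points requires a sample of around 4,000, at roughly four times the fieldwork cost.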

64.Polling has undoubtedly been assisted by technological developments that have changed the ways in which polls are conducted. Traditional opinion polling, conducted face-to-face in people’s homes, has seen a significant decline in market share, largely due to the labour costs involved. In its place, telephone and internet polling have become the predominant polling modes. Automated calling systems and internet panels of tens of thousands of people have made polling cheaper and quicker than ever before. Some witnesses suggested that the emergence of internet polls had allowed more polls to be conducted and published.76

65.YouGov suggested that using large panels of online volunteers had become an increasingly viable approach to sampling. It added that:

“The movement to online polling offered several advantages in terms of accuracy. Using quota sampling from a panel of volunteers who we already hold extensive demographic data upon allowed for more detailed quotas to be set on who was interviewed, ensuring greater representativeness on more variables. Online interviewing also reduces or removes the interviewer effect (that is, people being embarrassed to give answers seen as socially undesirable to a live interviewer) and addresses the issues of false recall when using past vote weighting, which is now standard across almost the whole industry.”77

66.Professor Curtice, however, felt that the internet’s influence on polling was nuanced:

“Clearly, there are arguments about how internet polling should be conducted and about its relative merits as compared with telephone interviewing, but the advent of the internet has radically changed the polling industry’s business model. It has been very successful at reducing costs. To that extent, at least, doing polls has become much easier for the industry than it was 25 or 30 years ago, although it is not clear whether that equates to doing polls well.”78

Voter volatility

67.Voter volatility was cited frequently as presenting a difficult and relatively unprecedented challenge for the polling industry. The polling organisations themselves told us that voter volatility was having a profound impact on the context in which polling was conducted, stating that “voter dynamics in the UK are more complex and fluid than at any time any of us can recall.”79 After the 2015 General Election, Dr Jonathan Mellon, University of Manchester, concluded that the election was the latest in a long-running trend of increasing individual voter volatility.80

68.Dr Anstead argued that although recent polling failures had a diverse range of causes, what unified them was “the backdrop of growing political instability, which is making measuring public opinion harder than it is during times of political stability.” Dr Anstead asked: “How does polling work if politics is fundamentally more unstable and volatile from election to election?”81

69.Professor Jennings indicated that the instability of voter intentions was one of the reasons behind the polling errors in 2017, stating that:

“The other thing to note about the difference between the two elections is that, in 2015, the vote intentions were relatively stable during the campaign. There was a systematic miss, but there did not seem to be a lot of movement during the campaign, whereas in 2017 we saw a surge in support for one party, Labour, that was historically exceptional since 1945. There has never been an election and a campaign in which there was such a large shift in vote intention. That is the crucial distinction about 2017 that made it a really difficult election to survey.”82

70.There was no clear consensus on how the issue of voter volatility might be addressed. We have no doubt that polling organisations are committed to producing accurate findings, but there are aspects of the modern electorate and the current political climate that are making polling harder to do. Our concern was that polling may now have reached a tipping point and that, from now on, it might produce results which are less accurate than in the past. This is especially worrying given the other key concern raised through the evidence—the impact of polling on the democratic process.

The impact of voting intention polling

71.There are concerns that inaccurate voting intention polling has a negative impact on the conduct of elections due to its influence on voters, the media, and political parties. Throughout our inquiry, we have tried to assess the extent to which such influence is evident.

Impact on voters

72.The question of whether polls influence voting behaviour has been the subject of a number of academic studies and is widely debated. The evidence we received was mixed. There are a number of theories about how polls influence voters. These include the ‘bandwagon’ effect, where it is suggested that information from polls can influence people to alter their opinion to accord with the majority view, and the ‘underdog’ effect, whereby some people may adopt a minority view out of sympathy.83 There is mixed but weak evidence for both effects.

73.It has also been suggested that voting intention polling can have an impact on turnout. This is because voters tend to be more engaged in close elections and may believe that their vote is more likely to make a difference to the outcome. Academics told us that there is “good evidence that turnout is higher in elections that are anticipated to be close.”84 Professor Ailsa Henderson, Professor of Political Science at the University of Edinburgh, reiterated this point about turnout and suggested that, as well as examining possible negative influences of polls, more attention should be paid “to the positive role that polls play in informing public debate and facilitating voter engagement.”85

74.Another area where it is claimed the polls affect voter behaviour is in the practice of tactical voting. David Jordan, Director of Editorial Policy and Standards at the BBC, acknowledged that there was an argument that “in some elections, some members of the electorate have used polls to vote tactically.”86 Dr Benjamin Lauderdale, Associate Professor in Research Methodology, London School of Economics, suggested:

“If one is going to make an argument for the value of electoral polling in advance of an election, it is that it is another kind of information available to the public as they make their decision. This matters in certain contexts for local tactical reasons: if you understand, in your constituency, that two particular parties are going to be very competitive and the others are not going to be competitive, that might shape how you make a decision there, so there is constituency-level relevance.”87

75.The polling companies, however, rejected the notion that polling is used to support tactical voting, stating that “voters explicitly reject the notion of polling influencing their vote.” They cited research that showed “the vast majority of the public (87%) reject the idea of tactical voting, with the corollary being that the influence of the sort of information necessary to make decisions about tactical voting, most notably polling, is negligible.”88

76.There was considerable scepticism expressed over whether voting intention polls affect voters’ decisions. Dr Lauderdale, along with the WAPOR and the MRS, argued that there was no strong evidence that the publication of polling information had a discernible impact on voters’ decision-making.89 Professor Henderson cited the results of her research into the influence of polls on voters during the 2016 Scottish Parliament election campaign, which suggested that “polls do not exert an undue influence on voters” and that “one would be hard pressed to say they exerted an influence at all.”90

77.While there may not be clear evidence that polling impacts on voter decision-making, the performance of the polls since 2015 seems likely to have had an impact on the levels of voter confidence in polling. The Royal Statistical Society neatly summarised the impact of recent polling performance on public and media confidence in polls:

“Following the outcome of the 2015 General Election, in which the Conservatives unexpectedly won an outright majority, there was considerable backlash from the media and the public regarding the polls which had largely predicted a hung Parliament. Many said polls should no longer be such a focus for reporting in election periods, with some newspapers saying they would stop using them altogether … With a further UK general election having taken place since then, as well as a referendum on the UK’s membership of the European Union, there remains much debate about the usefulness of polls.”91

78.Whether or not the results of the polls during the last three years constitute the beginning of a downward trend in accuracy, confidence in polls has been damaged. Even before the results of the 2017 General Election were known, scepticism was being expressed in the media—with the query ‘can we trust the polls?’ featuring prominently in election coverage well before the final result was announced.92

Impact on the media ‘narrative’

79.Voting intention polling plays a significant role in shaping the media coverage and therefore the ‘narrative’ of an election. The evidence we received was clear on this point.

80.Many of our witnesses highlighted the 2015 General Election as a particularly pertinent example of this narrative shaping. The run-up to that election was dominated by media coverage of the ‘race’ between the Conservative and Labour parties. Following the evidence of the polls, the dominant narrative was that the election was neck and neck between the Labour and the Conservative parties and that a coalition was the most likely electoral outcome. Many commentators plausibly suggested that this ‘false’ narrative was shaped, predominantly, by the voting intention polls.93

81.This observation was one that was included in the considerations of the Inquiry into the 2015 British general election opinion polls which stated that: “The poll-induced expectation of a dead heat undoubtedly informed party strategies and media coverage during both the short and the long campaigns and may ultimately have influenced the result itself, albeit in ways that are difficult to determine satisfactorily.”94

82.The notion that polls can significantly shape the narrative of an election, and can therefore prove misleading when they are wrong, was echoed by David Jordan, Director of Editorial Policy and Standards at the BBC, who told us:

“Our concern about the 2015 and 2017 general elections and the Scottish and EU referendums was the capacity of the polls to influence the journalistic narrative of those election campaigns. In particular, we were very concerned that, in the 2015 election, there was a huge focus on the possibility of a Labour-SNP coalition, which turned out to be fanciful, shall we say, in the context of the outcome, and really rather misleading.”95

This view was also supported by Dr Anstead’s assertion that: “Flawed polls can therefore lead to misdirected or irrelevant debates becoming central to media coverage, and the exclusion of other issues.”96

Impact on political parties

83.We also heard evidence that voting intention polling can influence the decision-making of political parties. In fact, the evidence was more certain that polling can have an impact on political parties than that it can affect voters’ behaviour.

84.It is well known that political parties take an interest in the results of the bigger and well-established newspaper polls. Private polling is also used by political parties to inform decision-making, and is commonly relied on more heavily than the results of publicly available polls. Deborah Mattinson, Co-Founder of BritainThinks, told the Committee: “Politicians I have worked with have paid a lot more attention to their own private polling than to published polling”.97

85.Lord Kinnock shared with us his experiences of polling, in particular those from the 1992 General Election campaign, which saw one of the worst UK polling failures in history. On private polling, Lord Kinnock told the Committee:

“However much you try to guard against it, your disposition will be to think that the private polling, conducted presumably in circumstances of slightly greater intimacy and with a degree of extra thoroughness, although both assumptions are probably wrong, will give you a closer indication of what is really going on.”98

86.This can even affect the timing of General Elections. According to his aides, Prime Minister Jim Callaghan’s decision in 1978 to defer a General Election was partly due to a private poll from MORI suggesting that Labour was doing less well in the marginal seats which it needed to win to achieve a majority. In fact, these estimates were not very statistically robust, with small sub-samples subject to large margins of error. However, Jim Callaghan understandably went along with his pollsters’ advice. The rest is history.99

87.Another much cited example of private polling that is thought to have influenced crucial strategic decision-making was in 2007, when Gordon Brown, having recently become Prime Minister, decided against calling a snap General Election. According to Damian McBride, Gordon Brown’s special adviser, the polls were “crucial” at this time. He stated, in an article for The Telegraph in 2012, that:

“It was argued that if even one Labour MP lost their seat, it would expose the early election as an act of vanity and folly on GB’s part and he would have to resign.

It seems madness now, but that became the consensus in the inner circle right up until October 5, when the final decision was made. And this is where the polls were indeed crucial. Every poll that we ever looked at in those weeks—private or public—said that Labour would win a clear majority. But the same polls, especially after the Tory conference, said he was going to shed at least a dozen South East (and Midlands) marginals …

People who had previously been arch proponents of the early election had started to play devil’s advocate more frequently and enthusiastically. GB’s pollsters were also—to cover their backs—starting to paint worst case scenarios, all of which ended up with him resigning after a drastic reduction in Labour’s majority.”100

Despite months of polling in his favour, Gordon Brown announced in October 2007 that he would not be calling a General Election.

88.Lord Kinnock and others also confirmed that public polling results inevitably had some impact on political parties’ perception of the election campaign. When asked about the influence of polls on politicians and their actions, Lord Kinnock said: “The existence of the polls of themselves, producing the results that they do day on day, week on week, means that there is information generally available that the human beings who are leaders cannot be expected to ignore.”101

89.The concern that polling results can have an undesirable impact on politics and political decision-making was highlighted by several of our witnesses. Referring to the events of the 2015 General Election, Ric Bailey, Chief Adviser, Politics at the BBC, told us:

“ … that narrative was accepted not just by the BBC and the media, but by the whole political establishment. We then had to report that. It was not just about what we were reporting, it was about the constituencies party leaders were choosing to campaign in, the subject areas they wanted to campaign on, and the interviews and who they gave them to. A whole series of political strategies by the parties themselves were dominated by that narrative. We were all in the same boat, as it were, and perhaps we should have stood back and said that.”102

90.Perhaps the most striking example of the impact of polls on political decision-making occurred during the Scottish independence referendum in 2014. During the later stages of the campaign, the polling averages suggested that the result would be relatively close. Then, a single YouGov poll suggested that the ‘Yes’ campaign might be in the lead (51%/49%).103 Shortly afterwards, the politicians in support of ‘No’ made what became known as ‘The Vow’, a promise of greater devolution of powers to Scotland if the Scottish people chose to stay in the UK. Some politicians and media commentators suggested that the polling industry was disproportionately powerful and had influenced the future of the country on the back of a single poll.104 Ric Bailey described the political activity that followed:

“… the cancellation of Prime Minister’s Question Time and three Westminster party leaders dashing on to the first plane north to start making vows to the Scottish Parliament. That may not have been due entirely to the publication of a single poll, but it was certainly influenced by it.”105

91.It is not, of course, purely the results of the polls which have the ability to influence political discourse—the way in which they are covered by the media is also an important consideration. Carl Miller, Research Director of the Centre for the Analysis of Social Media at Demos, said: “The vicious outcome is that the effect of a poll is obviously proportionate to the amount of coverage it receives in either conventional or social media. The amount of coverage a poll receives is itself proportionate to, or reflects, how sensational the outcome of the poll is.”106 In Chapter 3, we therefore consider in more detail the way in which the media covers voting intention polls.

Conclusion

92.We cannot say conclusively that polls impact directly on voters’ decision-making in any consistent way. But we found that voting intention polls play a hugely significant role in shaping the narrative around political events such as elections and referendums. Given the impact that they can have on political discourse, they will inevitably influence public behaviour and opinions, even if only indirectly. It is therefore vital that work continues in order to try to improve polling accuracy and that this is done as transparently as possible. The Royal Statistical Society noted that: “It is crucial that pollsters and independent parties conduct critical inquiries in public so that the causes of uncertainty can be better understood.”107

93.We expect that polling organisations will continue to seek to innovate, in order to improve the methodologies used in polling and to improve their suitability for estimating voter preferences. It is therefore important that every opportunity is taken to learn the lessons from recent elections. It is also crucial that polling companies and others conduct critical inquiries in public so that the causes of inaccuracy can be better understood, as was done after the 2015 General Election.

94.Analysis of political polls conducted since the 1940s does not show that polling has become more inaccurate over time. However, the three high-profile failures of polling in the UK in the last three years—covering two General Elections and the referendum on the UK’s membership of the EU—raise the possibility that things might have taken a turn for the worse. The internet has certainly made polling easier and cheaper to conduct. However, a combination of difficulties in persuading a representative range of members of the public to take part in polls, shifting demographic predictors of the vote, and an increasingly volatile electorate has, by common consent, made it more difficult to estimate political opinion accurately. It is entirely possible that polling failures will become more common in the future.

95.Amongst the methodological issues faced by polling companies, the changing utility of demographic variables for the weighting of samples, particularly the declining validity of weighting based on socio-economic class, is a significant challenge. Polling companies can no longer rely on traditional weighting variables, and so will need to continue to develop new ways to adapt their methodological approaches. Further work is needed to better understand the impact of newer variables such as voters’ educational level and age, and attitudes to policy issues such as the NHS, austerity and (currently) the UK’s relationship with the European Union.


20 The British Polling Council, ‘About the BPC’: http://www.britishpollingcouncil.org/ [accessed 20 March 2018]

21 Parliamentary Office of Science and Technology, Getting Opinion Polls ‘Right’, POSTnote 96, March 1997

22 Ipsos MORI, ‘Ipsos MORI Issues Index: June 2017’: https://www.slideshare.net/IpsosMORI/ipsos-mori-issues-index-june-2017 [accessed 20 March 2018]

23 NatCen, ‘British Social Attitudes’: http://natcen.ac.uk/our-research/research/british-social-attitudes/ [accessed 20 March 2018]

24 Written evidence from Sky News (PPD0005)

25 Joe Murphy, ‘General Election polls: Theresa May heading for clear majority with Jeremy Corbyn facing probably net loss of seats’, Evening Standard (8 June 2017): https://www.standard.co.uk/news/politics/uk-general-election-polls-theresa-may-heading-for-clear-majority-final-poll-of-campaign-reveals-a3559951.html [accessed 20 March 2018]

26 Joe Watts and John Rentoul, ‘Election poll latest: Theresa May will win biggest Tory landslide since Thatcher, final survey predicts’, Independent (7 June 2017): http://www.independent.co.uk/News/uk/politics/election-poll-latest-tory-win-results-corbyn-theresa-may-a7777781.html [accessed 20 March 2018]

27 Written evidence from YouGov (PPD0016)

28 BBC, ‘Editorial Guidelines: Surveys, Opinion Polls, Questionnaires, Votes and Straw Polls’, http://www.bbc.co.uk/editorialguidelines/guidance/surveys/guidance-full#heading-polls-at-election-times [accessed 20 March 2018]

29 Written evidence from Sky News (PPD0005)

30 Q 151 (Ben Page)

31 John Curtice, Stephen Fisher, Jouni Kuha and Jonathan Mellon, ‘Focus: on the 2017 exit poll—another surprise, another success’, Discover Society (46) (July 2017): http://eprints.lse.ac.uk/83556/1/Kuha_Focus%20%202017%20exit%20poll_2017.pdf [accessed 20 March 2018]

32 Ibid.

33 Q 17 (Dr Jouni Kuha)

34 Alan Travis, ‘Can we still trust opinion polls after 2015, Brexit and Trump’, The Guardian (8 May 2017): https://www.theguardian.com/politics/2017/may/08/opinion-polls-general-election [accessed 20 March 2018]

35 Ashley Kirk, ‘General Election exit poll: How accurate are exit polls usually?’ The Telegraph (9 June 2017): http://www.telegraph.co.uk/news/2017/06/08/general-election-exit-poll-accurate-could/ [accessed 20 March 2018]

36 ‘Should we trust the general election 2017 polls?’, New Statesman (7 May 2017): https://www.newstatesman.com/politics/june2017/2017/05/should-we-trust-general-election-2017-polls [accessed 20 March 2018]

37 Parliamentary Office of Science and Technology, Getting Opinion Polls ‘Right’, POSTnote 96, March 1997

38 Martin Boon, ‘Predicting Elections, A ‘Wisdom of Crowds’ Approach’, International Journal of Market Research, vol 54, (1 July 2012), p 465: http://www.mrs.org.uk/pdf/IJMR_54_(4)_Boon.pdf [accessed 20 March 2018]

39 The British Polling Council, ‘Accuracy of the final 2005 Polls’ (May 2005): http://www.britishpollingcouncil.org/accuracy-of-the-final-polls-2/ [accessed 20 March 2018]

40 The British Polling Council, ‘Accuracy of the Final 2010 Polls’ (May 2010): http://www.britishpollingcouncil.org/accuracy-of-the-final-polls/ [accessed 20 March 2018]

41 Written evidence from Professor Will Jennings (PPD0009)

42 Ibid.

43 Q 140 (Professor John Curtice)

44 Written evidence from Professor Will Jennings (PPD0009)

45 Ibid.

46 Will Jennings and Christopher Wlezien, ‘Election polling errors across time and space’, Nature Human Behaviour (published online 12 March 2018): https://www.nature.com/articles/s41562-018-0315-6 [accessed 20 March 2018]

47 Q 1 (Professor Will Jennings)

48 Professor Patrick Sturgis, Dr Nick Baker, Dr Mario Callegaro, Dr Stephen Fisher, Professor Jane Green, Professor Will Jennings, Dr Jouni Kuha, Dr Ben Lauderdale and Dr Patten Smith, op cit., p 2

49 Written evidence from Professor Will Jennings (PPD0009)

50 Professor Sturgis was appointed by this Committee to act as our Specialist Adviser.

51 British Polling Council, ‘Details of Opinion Poll Inquiry Announced’ (May 2015): http://www.britishpollingcouncil.org/details-of-opinion-poll-inquiry-announced/ [accessed 20 March 2018]

52 Professor Patrick Sturgis, Dr Nick Baker, Dr Mario Callegaro, Dr Stephen Fisher, Professor Jane Green, Professor Will Jennings, Dr Jouni Kuha, Dr Ben Lauderdale and Dr Patten Smith, op cit.

53 Professor Patrick Sturgis, Dr Nick Baker, Dr Mario Callegaro, Dr Stephen Fisher, Professor Jane Green, Professor Will Jennings, Dr Jouni Kuha, Dr Ben Lauderdale and Dr Patten Smith, op cit., p 4

54 Peter Barnes, ‘How wrong was the election polling?’, BBC News (13 June 2017): http://www.bbc.co.uk/news/election-2017-40265714 [accessed 20 March 2018]

55 Ibid.

56 Written evidence from Professor Will Jennings (PPD0009)

57 Ipsos MORI, ‘How Britain voted in the 2017 election’ (20 June 2017): https://www.ipsos.com/ipsos-mori/en-uk/how-britain-voted-2017-election [accessed 20 March 2018]. While turnout differences in people aged under 50 were a factor, claims of a ‘youthquake’, a significant increase in young people turning out to vote, have been questioned by the British Election Study, which found “no surge in youth turnout at the 2017 election”: Chris Prosser, Ed Fieldhouse, Jane Green, Jonathan Mellon, and Geoff Evans, ‘The myth of the 2017 youthquake election’, British Election Study: http://www.britishelectionstudy.com/bes-impact/the-myth-of-the-2017-youthquake-election/#.WqpjWa27Lct [accessed 20 March 2018]

58 Peter Kellner, ‘General Election polls 2017: How the pollsters got it wrong’, Evening Standard (9 June 2017): http://www.standard.co.uk/news/politics/general-election-polls-how-the-pollsters-got-it-wrong-a3560936.html [accessed 16 March 2018]

59 British Polling Council, ‘General Election: 8 June 2017’ (June 2017): http://www.britishpollingcouncil.org/general-election-8-june-2017/ [accessed 20 March 2018]

60 Daniel Dunford and Ashley Kirk, ‘How right or wrong were the polls about the EU referendum?’, The Telegraph (27 June 2016): http://www.telegraph.co.uk/news/2016/06/24/eu-referendum-how-right-or-wrong-were-the-polls/ [accessed 20 March 2018]

61 Q 140 (Professor John Curtice)

62 Written evidence from Dr Nick Anstead (PPD0018)

63 Written evidence from the World Association for Public Opinion Research (PPD0006)

64 British Polling Council, Press Release, ‘Performance of the polls in the EU referendum’ (24 June 2016): http://www.britishpollingcouncil.org/performance-of-the-polls-in-the-eu-referendum/ [accessed 20 March 2018]

65 Written evidence from YouGov plc (PPD0016)

66 Written evidence from the World Association for Public Opinion Research (PPD0006)

67 Jonathan Mellon, ‘Party Attachment in Great Britain: Five Decades of Dealignment’, Social Science Research Network (SSRN) (10 August 2017): https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2745654 [accessed 20 March 2018]

68 Q 3 (Nick Moon)

69 Q 140 (Professor John Curtice)

70 Q 1 (Nick Moon)

71 Written evidence from Anthony Wells (PPD0015)

72 Q 140 (Professor John Curtice)

73 Written evidence from Professor Richard Tait (PPD0013)

74 Q 140 (Professor John Curtice)

75 Written evidence from the Market Research Society (PPD0010)

76 Written evidence from the World Association for Public Opinion Research (PPD0006)

77 Written evidence from YouGov plc (PPD0016)

78 Q 140 (Professor John Curtice)

79 Written evidence from ComRes, Opinium, Ipsos MORI, Panelbase, LucidTalk, ORB International, BMG Research and Survation (PPD0014)

80 Jonathan Mellon, ‘Party Attachment in Great Britain: Five Decades of Dealignment’, Social Science Research Network SSRN (10 August 2017): https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2745654 [accessed 20 March 2018]

81 Written evidence from Dr Nick Anstead (PPD0018)

82 Q 1 (Professor Will Jennings)

83 Ian McAllister and Donley T. Studlar, ‘Bandwagon, Underdog, or Projection? Opinion Polls and Electoral Choice in Britain, 1979–1987’, The Journal of Politics, vol. 53, No. 3 (August 1991), p 720: http://s3.documentcloud.org/documents/612965/bandwagon-jofp-1991.pdf [accessed 20 March 2018]

84 Written evidence from Chris Hanretty, Oliver Heath and Michael Spagat (PPD0011)

85 Written evidence from Professor Ailsa Henderson (PPD0012)

86 Q 93 (David Jordan)

87 Q 18 (Dr Lauderdale)

88 Written evidence from ComRes, Opinium, Ipsos MORI, Panelbase, LucidTalk, ORB International, BMG Research and Survation (PPD0014)

89 Written evidence from Dr Benjamin Lauderdale (PPD0002), World Association for Public Opinion Research (PPD0006) and the Market Research Society (PPD0010)

90 Written evidence from Professor Ailsa Henderson (PPD0012)

91 Written evidence from the Royal Statistical Society (PPD0022)

92 Examples of the headlines described can be found in the following: ‘Should we trust the general election 2017 polls?’, New Statesman, (7 May 2017): https://www.newstatesman.com/politics/june2017/2017/05/should-we-trust-general-election-2017-polls [accessed 20 March 2018]; Martin Baxter, ‘Campaign Calculus 2017: Can we actually trust the polls?’, The Telegraph (1 May 2017): http://www.telegraph.co.uk/news/2017/05/01/general-election-polls-arent-meaningless-rubbish-foolish-trust/ [accessed 20 March 2018]; Alan Travis, ‘Can we still trust opinion polls after 2015, Brexit and Trump?’, The Guardian (8 May 2017): https://www.theguardian.com/politics/2017/may/08/opinion-polls-general-election [accessed 20 March 2018]

93 Dr Stephen Cushion and Professor Richard Sambrook, ‘The ‘horse-race’ contest dominated TV news election coverage’, UK Election Analysis 2015: Media, Voters and the Campaign (May 2015): http://www.electionanalysis.uk/uk-election-analysis-2015/section-1-media-reporting/the-horse-race-contest-dominated-tv-news-election-coverage/ [accessed 20 March 2018]

94 Professor Patrick Sturgis, Dr Nick Baker, Dr Mario Callegaro, Dr Stephen Fisher, Professor Jane Green, Professor Will Jennings, Dr Jouni Kuha, Dr Ben Lauderdale and Dr Patten Smith, op cit., p 7

95 Q 90 (David Jordan)

96 Written evidence from Dr Nick Anstead (PPD0018)

97 Q 46 (Deborah Mattinson)

98 Q 133 (Lord Kinnock)

99 David Lipsey, In the Corridors of Power (London: Biteback Publishing Ltd, 2012), pp 124-125

100 Damian McBride, ‘Gordon Brown and the 2007 election: why it never happened’, The Telegraph (5 October 2012): http://www.telegraph.co.uk/news/politics/gordon-brown/9589561/Gordon-Brown-and-the-2007-election-why-it-never-happened.html [accessed 20 March 2018]

101 Q 135 (Lord Kinnock)

102 Q 90 (Ric Bailey)

103 YouGov, ‘’Yes’ campaign lead at 2 in Scottish Referendum’ (6 September 2014): https://yougov.co.uk/news/2014/09/06/latest-scottish-referendum-poll-yes-lead/ [accessed 20 March 2018]

104 Edward Platt, ‘Living by numbers: YouGov and the power of the pollsters’, New Statesman (16 April 2015): https://www.newstatesman.com/politics/2015/04/living-numbers-yougov-and-power-pollsters [accessed 20 March 2018]

105 Q 89 (Ric Bailey)

106 Q 24 (Carl Miller)

107 Written evidence from the Royal Statistical Society (PPD0022)




© Parliamentary copyright 2018