The politics of polling

Chapter 6: Digital media

Introduction

249.We were appointed “to consider the effects of political polling and digital media on politics”.

250.It soon became clear to us that the impact of digital media on politics was far too large a topic to be covered adequately within our reporting timeframe. A wide range of digital media issues has emerged, spanning topics beyond concerns about political opinion polling. Most notably, these include the deliberate use of social media to spread misinformation about politics and political topics, the use of artificial intelligence to analyse online commentary and target political advertising, and related questions around the legal and regulatory status of social media platforms. These issues are fast-moving: at the point this report was agreed, several new stories about them were emerging every day. Various committees and organisations are already studying some of these questions. In particular, as mentioned in Chapter 1, the House of Commons Digital, Culture, Media and Sport Committee is conducting an inquiry into fake news, which is examining the impact that fake news is having on the public’s understanding of the world and its response to traditional journalism.217

251.It has been suggested that a key impact of digital media on politics is the use of algorithms that “curate”218 our news feeds, cherry-picking what we are most likely to be interested in, thereby creating ‘echo chamber’ effects. “Deliberate and concerted”219 attempts by foreign governments, amongst other actors, to manipulate the online information space have been cited. We were also aware of the complex arguments around the legal and regulatory status of social media platforms and the tensions that exist for platforms (and other digital media sites) between allowing freedom of expression, including the ability to post anonymous content, and the calls for greater transparency about how content, particularly content relating to political issues, is produced, by whom, and for what aim. Although these issues are not directly related to political polling, they are nonetheless relevant to considerations around how political information is promoted, shared and understood in a digital age.

252.As we explained in Chapter 1, it would not have been possible for us to have considered all these matters within the context of this inquiry and we have not therefore gathered the evidence in order to draw conclusions about these issues. In November 2017, we wrote to the House’s Liaison Committee to suggest that another ad hoc committee be appointed in 2018 in order to assess these wider issues in more detail (the letter is reprinted in Appendix 7). We appreciate that the Liaison Committee has not recommended this proposal for an ad hoc committee this year.220 However, the Liaison Committee is currently conducting a review of the House’s investigative and scrutiny committees, and we would strongly urge it to consider the establishment of such a committee in the future. We see this report as a first stage in the scrutiny of these issues and, if the House saw fit to establish a committee to consider the wider impact of digital media on politics, we hope that it would be able to build on our work so far.

253.For the purposes of our inquiry, we chose to focus most of our attention on the ways in which digital media impacts upon political polling, and vice versa.

Digital media and polling

254.Digital media refers to digitised content (text, graphic, audio and video) that can be created, viewed and distributed on digital electronic devices. It is a term which has evolved to cover a vast range of media products and technologies and often refers to a blend of technology and content.221

255.In the context of our inquiry, our focus was on digital news media and how digital media technologies have revolutionised the way in which we consume, interact with and share news and information. We wanted to understand what impact these changes have had on polling. Specifically, we asked whether the capability and demand for 24-hour news and an increasingly competitive media market had led to an increase in the number of polls and, if so, whether that had meant a decline in quality in both polls and the reporting of polling.

256.Although there was broad agreement that there had been an increased volume of polling in recent years, there was less agreement that this was the result of demand from the media (or the increase in digital media channels). YouGov told us that “the increased volume of polling has been driven by an increased number of polling companies, lower costs and lower barriers to entry to the industry, rather than demand from the media.”222 Others supported the suggestion that the increase in digital media channels and a broader online environment had opened up the industry and made the polling market easier to enter.223 However, the MRS, with reference to its own system of regulation and accreditation, warned us that: “The growth of digital media channels represents a challenge for researchers and the regulatory framework as there is greater proliferation of non-accredited individuals without a professional and/or ethical approach to research.”224

257.Echoing an issue that we touched on in Chapter 3, we also heard that the digital media environment encouraged a tendency to inflate polling stories to help compete in a crowded and competitive market, particularly during election campaigns. Jim Waterson, the Politics Editor at BuzzFeed, highlighted a telling example of this:

“One of the most viral poll stories of the entire general election campaign was published on 3 June in the Independent. It stated, ‘Labour ahead of Conservatives in unadjusted poll of voters’. Most members of the Committee understand that the reason you adjust polls is precisely to weight them. The top line said, ‘A new poll suggests Labour could be on course for a shock win at the general election—but only if all those considered least likely to vote turn out … on Thursday.’

That was shared 40,000 times on Facebook. You could probably put a substantial multiple on that for the number of people seeing it. To make a complete guess, you might be talking of 500,000 to 1 million people in the UK who saw the headline. If we had written up, ‘Conservatives still ahead’, it would not have been shared anything like as much as that. Given the state of online publishing, you have an enormous incentive to sensationalise, because that is the way in which you will get your headline shared and people reading your material.”225

258.The example provided by BuzzFeed alluded to one of the core themes highlighted by the evidence on digital media: the impact of social media on politics and the way in which people view, share and form opinions about political issues. Although this is in no way a recent or unstudied phenomenon—the impact of social media on society is being examined by a wide range of experts and academics—our inquiry did provide an opportunity to highlight some specific concerns around the impact that social media are having on the way people engage with political issues.

Social media

259.Social media are defined as websites and applications that enable users to create and share content or to participate in social networking.226

260.Our lines of inquiry relating to social media fell into three broad categories: the use of social media data for predicting elections; the spread of misinformation on social media; and the role of social media in political campaigning. We consider each in turn below.

The use of social media data for predicting elections

261.The data generated by social media presents a very significant opportunity to better understand public opinion on, amongst other topics, politics. One example of the way in which social media are being utilised to measure public opinion is through attempts to predict election results using data gathered from social media platforms. A number of different organisations have claimed to have accurately predicted recent elections using social media data.227 One example of this is the company BrandsEye, which claimed to have accurately predicted the outcome of the 2016 referendum on the UK’s membership of the EU and the 2016 presidential election in the United States of America.228

262.BrandsEye undertakes what it calls ‘opinion mining’, predominantly for corporate clients, effectively scouring the internet and online conversations for mentions of specific keywords. To do this it uses algorithms to evaluate mentions of a specific topic for relevance. A common difficulty with this type of practice is that algorithms struggle to measure sentiment accurately—for example, to detect sarcasm or irony. BrandsEye explained that it had attempted to address this problem by complementing the data generated by the algorithms with crowdsourcing—paying large teams of people to categorise comments and verify the sentiment and topic.229
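
The first stage of this process can be illustrated in outline. The following Python sketch shows simple keyword-based mention selection; the keyword list and matching rule are invented for illustration, and BrandsEye’s actual system additionally applies relevance algorithms that are not public.

```python
import re

# Minimal sketch of the keyword-selection stage of 'opinion mining'.
# The keyword list below is invented for illustration only.
TOPIC_KEYWORDS = ["brexit", "eu referendum", "vote leave", "vote remain"]

def find_mentions(posts, keywords=TOPIC_KEYWORDS):
    """Return the posts that mention at least one tracked keyword."""
    pattern = re.compile(
        r"\b(" + "|".join(re.escape(k) for k in keywords) + r")\b",
        re.IGNORECASE,
    )
    return [post for post in posts if pattern.search(post)]

posts = [
    "Brexit means Brexit, apparently.",
    "Lovely weather in Cardiff today.",
    "Still undecided ahead of the EU referendum...",
]
print(find_mentions(posts))  # keeps the first and third posts
```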

263.Jean Pierre Kloppers, Chief Executive Officer of BrandsEye, explained how BrandsEye trialled the use of social media data to predict the outcome of a political event, in this case, the referendum on the UK’s membership of the EU:

“We had half a million people speaking on social media in the week before the referendum. We took a statistically significant sample of that half a million conversation and put it through what we call the BrandsEye crowd—people who work on our platform …

When somebody mentions something about Brexit, we select it on a key-word basis. If people are speaking about Brexit, leave or remain—whatever language they are using—we use key words to find it. If an individual mention—whether a Facebook post, a tweet or a comment on a blog somewhere—is selected by the system as part of the sample, it is sent to multiple raters within our crowd. Those are people, like anybody here, who work on our platform and earn money by competing with other people to verify the sentiment of the author. If somebody says very simplistically, ‘I am going to vote remain’, it is easy to understand. If somebody says, ‘David Beckham is voting remain, so I will, too’, there is probably a bit of sarcasm. These people compete with one another to try to understand what the author meant. An algorithm cannot do that. We have gamified the way in which data is verified by people.”230
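
The aggregation step Mr Kloppers describes, in which multiple raters assess the same mention, can be sketched as a simple majority vote. The agreement threshold and the escalation rule below are assumptions for illustration; BrandsEye’s actual scoring and ‘gamified’ verification are more elaborate.

```python
from collections import Counter

def crowd_verdict(ratings, min_agreement=0.6):
    """Aggregate sentiment labels from multiple human raters.

    Returns the majority label when agreement reaches the threshold,
    otherwise None, signalling that the mention should be sent to
    further raters.
    """
    label, votes = Counter(ratings).most_common(1)[0]
    return label if votes / len(ratings) >= min_agreement else None

print(crowd_verdict(["leave", "leave", "remain"]))  # 'leave' (2/3 agree)
print(crowd_verdict(["leave", "remain"]))           # None -> escalate
```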

264.Mr Kloppers acknowledged, however, that these new techniques were supplementary to traditional polling, rather than a substitute for it. He stated that: “Social media give a view of how the public feel about an issue that is not captured by an opinion poll, in the same way that an opinion poll captures a view about an issue that is not captured by social media. The future lies in a combination of the two approaches.”231

265.Elsewhere in the evidence, there was considerable scepticism of the value of using social media data to predict the outcome of elections. Respondents identified a number of issues relating to the use of social media data for measuring public opinion. A key challenge is that social media users are often not representative of the general or voting populations. Dr Christopher Prosser and Dr Jonathan Mellon from the University of Manchester undertook a study using the 2015 British Election Study Face-to-Face survey to examine demographic and attitudinal differences between Facebook and Twitter users and non-users. They stated that:

“The short answer to the question of whether social media users are representative of the population in terms of their political attitudes is no, social media users are on average younger and better educated than non-users, and they are more liberal and pay more attention to politics. Despite paying more attention to politics, social media users are less likely to vote than non-users, but they are more likely [to] vote Labour party when they do.”232

266.Polling companies also alluded to issues relating to representativeness, suggesting that Labour supporters were more prominent on Twitter than Conservatives, which made “gauging public opinion over social media data cumbersome at best and extremely difficult at worst.”233

267.Dr Prosser and Dr Mellon did highlight that “these differences in political attitudes and behaviour arise due to the demographic composition of social media users” and that “with appropriate demographic adjustments, it might be possible to use social media users to gauge levels of political support.” However, they went on to clarify that: “It is important to note that we are addressing attitudinal and demographic differences of social media users in general. A further question remains about the representativeness of people who use social media to talk about politics. The likely answer to this question is that they are even less representative of the population than social media users in general.”234
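
The kind of demographic adjustment Dr Prosser and Dr Mellon allude to is, in its simplest form, post-stratification weighting. The sketch below uses invented figures purely for illustration, and it corrects only for demographic composition, not for the attitudinal differences that remain within each group.

```python
# Toy post-stratification: reweight a social media sample so that its
# age profile matches the population. All figures are invented.
population_share = {"18-34": 0.28, "35-54": 0.34, "55+": 0.38}
sample_share     = {"18-34": 0.55, "35-54": 0.30, "55+": 0.15}  # skews young

weights = {g: population_share[g] / sample_share[g] for g in population_share}
# e.g. an 18-34 respondent counts for ~0.51, a 55+ respondent for ~2.53

respondents = [("18-34", "Labour"), ("55+", "Conservative"),
               ("18-34", "Labour"), ("35-54", "Conservative")]
support = {}
for group, party in respondents:
    support[party] = support.get(party, 0.0) + weights[group]

total = sum(support.values())
print({party: round(w / total, 2) for party, w in support.items()})
```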

268.Professor Hanretty, Royal Holloway, University of London, highlighted further issues with the representativeness of social media users, stating that:

“The use of social media data in predicting elections is very fraught. We know from some of Professor Green’s colleagues on the British Election Study team that the Twitter population is not representative of the general UK population. We also know that the degree to which it is not representative is changing over time. As the population ages, there may be more people who have been brought up on Twitter. The age profile changes, so the character of Twitter changes.”235

269.Dr Mark Shepard and Dr Narisong Huhe from the University of Strathclyde echoed the point that social media data are susceptible to considerable shifts over time, reducing the extent to which they can be considered representative. They further explained that:

“Polls can be representative of the public (typically plus or minus 3% sample error accuracy) at any time. Social media is very different as it is a) rarely representative; and b) changes over time as different types of groups mobilise online at varied times … Consequently, if you take your social media data too soon, you might be overly capturing the views of activists, compounding any online biases that we know exist.”236

270.The Alan Turing Institute suggested that social media data are harder to understand and more open to interpretation—more “difficult to calibrate than is the case for other political polling data collection methods.” Because social media data are a newer form of data, the Institute also highlighted that “vulnerabilities in the data are less well understood and more difficult to detect and to correct for.”237

271.However, many respondents suggested that it was plausible that social media would have a place in future developments in political polling, following further research into the challenges outlined above. Dr Prosser and Dr Mellon noted that “more sophisticated techniques can be developed that are able to adjust for the compositional differences of social media users and could be used to predict election outcomes.” They noted, however: “Whether they are able to do so accurately will be a question for future empirical research.”238

272.It is also possible that analysis of social media data might be used more widely to analyse trends in political opinion. Dr Nick Anstead from the London School of Economics and Political Science suggested that, instead of trying to replicate the work of polling companies to predict elections, “researchers might think of this new tool as being a powerful aid to qualitative research, more akin to focus groups or even a twenty-first century version of the mass observation.”239

Spread of misinformation on social media

273.It is well established that social media are “changing the way that people participate in political and democratic debate.”240 The way in which people view, share and discuss political issues online has led some commentators to argue that it has revolutionised political engagement, facilitating connections to the causes people care about.241 Others have suggested that it actually limits political awareness, and that although social media provide a “mobilising force that builds passionate partisanship”242, this is often to the detriment of interactions between supporters of different parties.243 A number of witnesses referred to the idea that because social media platforms provide users with information that they agree with, and sometimes suppress the content they disagree with, a series of echo chambers is created. One commentator suggested that this creates “a strengthening of existing biases and political prejudices, and a narrowing of political, cultural and social awareness.”244 Anthony Wells, of YouGov and the UKPollingReport, said that:

“Like most other political news, the results of opinion polls are widely shared on social media. Often this reflects the same ‘echo chamber effect’ that is seen in much online political discourse. People are more likely to retweet or share poll results that they agree with or see as being ‘good’ for their side, less likely to retweet or share poll results they disagree with.”245

274.In addition, it is possible that the ‘personalisation’ of news stories online may be making political polarisation even worse. A report by Demos on the echo chamber effect suggested that there is a strong connection between a user’s ideology and the users and news sources they interact with. It added that “users with published support for political parties in the UK are more likely to share ideologically-aligned media, are more likely to keep within ideologically-aligned communities”.246
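
The kind of alignment the Demos report describes can be quantified in a rudimentary way, for example as the fraction of a user’s shared links that come from sources sharing the user’s own ideological label. The sources, labels and domains below are invented for illustration and do not reflect Demos’s methodology.

```python
# Illustrative 'alignment' measure: share of a user's links that come
# from sources tagged with the user's own ideological label.
SOURCE_LEANING = {
    "redwire.example": "left",
    "bluepost.example": "right",
    "midpage.example": "centre",
}

def alignment_score(user_leaning, shared_domains):
    matches = sum(1 for d in shared_domains
                  if SOURCE_LEANING.get(d) == user_leaning)
    return matches / len(shared_domains)

shares = ["redwire.example", "redwire.example", "midpage.example"]
print(alignment_score("left", shares))  # ~0.67: mostly aligned sharing
```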

275.As social media play an increasing role in political engagement around election campaigns, these issues have been thrown more sharply into focus. In the United States, the ongoing investigation into possible Russian interference in the 2016 presidential election has alerted the world to the way in which social media enable the deliberate spread of misinformation and the skewing of attitudes, for the purpose of influencing or furthering a political agenda. Although we were not able to examine this considerable topic in full, we were informed about a range of concerns relating to the way people access, share and discuss political information, how this can be manipulated and distorted, and (although not limited to election campaigns) what specific impact this might be having on the electoral process in the UK.

276.Professor Farida Vis, Professor of Digital Media at Manchester Metropolitan University, outlined what she saw as the key problem—”global information pollution”—which she said comprised three issues:

“The first is the spreading of misinformation—information that was not intended to cause harm by whoever shared it but that was misleading or false none the less. The second is the spreading of disinformation, where the intention is knowingly to cause harm. The third is the spreading of mal-information, where information that was previously thought to circulate only in private is leaked to the public.”247

277.We were also alerted to concerns around what Carl Miller of Demos termed the “capacity of social media to misinform and systematically manipulate.”248 Jim Waterson provided an example of where the results of a poll had been distorted to spread misinformation on social media:

“… a lot of dubious, unregulated polls are done with just a Facebook page or something like that. Those can go very viral on their own, and an unscrupulous site can post the results. I once went to the Russian embassy and was handed a printout of a Twitter poll that it had run on its own Twitter page asking, ‘Is UK criticism of Russian operations in Syria hypocrisy? Yes: 78%’. They handed that out to journalists as evidence that they had done polling of our people, who agreed that we were being hypocrites. If you stick a headline on examples like that and you are an unscrupulous site, you can spread them quite far.”249

278.This type of activity has been referred to as ‘computational propaganda’, which the Oxford Internet Institute defines as “the use of algorithms, automation, and human curation to purposefully distribute misleading information over social media networks.”250 This can be conducted through the use of bots. Bots are programs that perform simple, repetitive tasks. They can deliver content to people—retweeting fake news, for example—but they can also exploit social network algorithms to get a topic to trend. A single individual can use them to create the illusion of large-scale consensus.251 Professor Vis highlighted a deeply worrying example of the spread of false information for the purposes of influencing a political issue:

“A particular example that goes to the heart of some of this related to the Westminster attacks in March. Some of you may have seen a photograph that was shared on social media of a Muslim woman in a headscarf on her mobile phone, seemingly walking past one of the victims of the attack, who, in how it was framed, was dying on the bridge. What is problematic here is that the picture was real. This happened; there was nothing doctored about it. However, the fake account presented the information by framing it in a very anti-Islam, anti-Muslim way, essentially to suggest, ‘This is where the UK is headed if we go down this political trajectory’.

At the time, people may have picked up on the fact that this was a troll account, but it seems that they did not link it to Russia. What is even more problematic is that the account, and the information that it spread, went viral. The image was highly emotive and tapped into a national sentiment, and it was picked up in over 80 media reports in the UK. What we have here is a problem of mass amplification by a different agent that has not yet really been mentioned: the mainstream media. This is not a bot account. It is an account, sponsored by Russia, that is pretending to be a right-wing Texan citizen but that now seems to be meddling in UK politics.”252

279.Professor Vis told the Committee that the spreading of misleading and false information to shape political discourse was pushing us towards a “crisis point in terms of the threat to liberal democracies.”253

280.There have been efforts to tackle this issue—for example, the then Minister for Digital, the Rt Hon Matt Hancock MP, highlighted that “many institutions are now putting more effort into what are essentially fact-checking mechanisms or organisations” and that this included Google, the BBC and Channel 4.254 Will Moy, Director of Full Fact, an independent fact-checking charity, explained:

“We are trying to create technology that can automatically recognise the repeating of claims that we have checked and found to be wrong, and begin to check certain kinds of claims that can be checked automatically. There are many claims that cannot be checked automatically and that no realistic future technology will be able to check, but some types of claims—statistical claims, for example—are more susceptible to automated checking.”255
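
A minimal sketch of the repeated-claim recognition Mr Moy describes might compare new sentences against a database of previously checked claims using string similarity. The claim, verdict and threshold below are invented, and Full Fact’s production systems are considerably more sophisticated.

```python
import difflib

# Toy repeated-claim detector: match new sentences against previously
# checked claims by string similarity. Claim, verdict and threshold
# are invented for illustration.
CHECKED_CLAIMS = {
    "we send 350 million pounds a week to the eu": "checked: misleading",
}

def match_claim(sentence, threshold=0.8):
    for claim, verdict in CHECKED_CLAIMS.items():
        ratio = difflib.SequenceMatcher(None, sentence.lower(), claim).ratio()
        if ratio >= threshold:
            return verdict
    return None

print(match_claim("We send 350 million pounds a week to the EU!"))
```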

281.Professor Helen Margetts, Director of the Oxford Internet Institute, was also supportive of the use of fact-checking services, stating that: “We need public, political and, potentially, legal pressure to make sure that they carry on with the initial effort to employ fact-checkers and to block bogus accounts, which are responsible for disseminating false information—in some countries, to a huge extent.”256

282.A number of witnesses suggested that better education to support improved digital literacy amongst the population could help to tackle some of the issues associated with social media. Education, they argued, could remedy issues such as the spread of online misinformation by encouraging more critical thinking. Will Moy highlighted that:

“If you really want to think about the long term—where we want to be in 50 years—there are urgent questions about how we educate a generation that, for the first time, does not have dominant sources of news, is exposed to an absolute proliferation of information sources and has to make very difficult judgments very quickly between them.”257

283.Professor Margetts highlighted what she saw as the limitations of the current education system, in relation to digital literacy:

“The trouble is that our education system has not in any way adapted to that. Many children are blocked from using the internet and social media at school. With resources, they could be educated to understand what they look at and whether it is a fact or unreliable information—to look at the source and think about where it comes from. Building digital media into any sort of civic education, and ramping up civic education, would be one way of tackling that. It is definitely a place to put resources.”258

284.Similarly, Professor Vis spoke of the “enormous potential” of the “overhauling of the national curriculum so that we can teach young people, and all citizens, how to deal with information online”.259 Matt Hancock confirmed that the Government was also convinced of the importance of improving digital skills:

“The second thing that we can improve, and are improving, is how we teach young people to engage with this sort of information, and how they should think about their use of data online and the veracity and sources of news media. That is incredibly important, but it is a generational challenge to improve that sort of education.”260

285.Efforts to tackle the deliberate spread of misinformation on social media are not currently underpinned by any regulatory mechanism. Carl Miller told us: “We have seen the emergence of one of the most important arenas in political debate in this country with no rules around how it works.”261 He added:

“People can say whatever they want on there. Third-party campaigners can do whatever they want. People from any country can send whatever information they want into British political debate. Just as a matter of priority, putting some kind of enforceable regulatory system in place to begin to defend the integrity of online political discourse would be the thing I would spend political time and wherewithal to try to put in place.”262

286.The question of the feasibility and the desirability of further regulation of the social media sphere was a consistent theme, drawn into sharpest focus by the evidence we received on the impact of social media on political campaigning.

Social media and political campaigning

287.In recent years, political campaigns have been exploring the potential of advanced data analysis tools to help win votes.263 A combination of big data and social media is increasingly being used to attract support from particular voters through demographically targeted political messaging. The rapid growth of digital campaign techniques has included more sophisticated use of data to support direct targeting. Furthermore, the use of bots as campaign tools has allowed campaigners to reach more voters at a lower cost than before.

288.Social media have allowed parties to wage a different sort of election campaign. Digital electioneering, in which political parties buy adverts that target users of social media, was first used on a large scale in Barack Obama’s 2008 presidential bid.264 Since then it has grown. Dominic Cummings, who was campaign director for Vote Leave ahead of the Brexit referendum, has said that 98% of the campaign’s money was spent on digital advertising.265

289.Comprehensive spending reports are not published until a significant period of time has passed since the election or referendum concerned and only cover registered campaigners. However, according to a report by the Electoral Commission, identifiable social media spend on Facebook, YouTube and Twitter ahead of the 2015 General Election ranged from £1.21m by the Conservative party, to £160,000 by Labour, £22,245 by the Liberal Democrats and £5,466 by the Scottish National Party.266 For the 2017 General Election, it has been reported that during the 12 month period before the election the Conservative party spent around £2.1 million on Facebook advertising alone, while the Labour party spent just over £500,000 on Facebook advertising.267

290.Will Moy described the way in which social media have altered the level and type of access that political parties have to constituents:

“Fascinatingly, they now have the ability to communicate directly with the public in their millions. That communication used to be intermediated by the media and was at least open to challenge. The political parties put out claims that are, of course, tendentious—that is their job—and unscrutinised. The claims go directly to the public, backed by massive online advertising campaigns with highly targeted information, and with limited or no scrutiny or public visibility to people who are not targeted by those campaigns. That is a deeply concerning phenomenon, if you believe that an effective election campaign should be a debate between different people on different sides. If it is actually two conversations, in two different places without interaction, that is something to be worried about.”268

291.Similarly, Sue Inglish, Former Head of Political Programmes, Analysis and Research at the BBC, highlighted the lack of transparency of this type of campaigning. She asked:

“ … how do you control political advertising on Facebook? It seems to me—again, looking from the outside—that in the 2017 election political parties targeted very small groups of voters in key marginal constituencies, through Facebook, with political advertising that none of us saw, unless we happened to be part of that target group.”269

292.This type of political advertising activity can be supported through the use of thousands of automated accounts, or bots. Professor Tait, Professor of Journalism at Cardiff University, outlined how bots can lead to subversion of the democratic process:

“ … there is clear evidence that bots, some of which have come from outside this country, are being used to enhance one argument or another. That is potentially a very dangerous development. You have to distinguish between legitimate, targeted advertising, which people are entitled to do, and the use of bots to create an impression that your side is the winning side or to troll or attack people who disagree with you—which, in many ways, is diminishing the quality of British public and political discourse, frankly. Some of the attacks on journalists, for example, that one now sees on social media are very serious and need to be addressed.”270

293.A number of witnesses highlighted that this type of political campaigning through social media was posing challenges for governments and regulators, and indeed for democracy. It has become harder, and often impossible, to define what constitutes political advertising and, crucially, what falls under the category of campaign material. Professor Vis told us:

“When we think about advertising, we still think about messages that we can recognise as advertising. One of the things that came out of the congressional hearings in the States was that some of this sponsored content was about fake events, such as a rally of miners for Trump. How do you regulate against that? At an emotive level, the event is potentially highly persuasive. Here is a politician who is coming to my town and is doing something about an issue I care deeply about, but the event is entirely fake.”271

The Electoral Commission

294.The most pertinent of the concerns raised about political advertising came from the Electoral Commission, which highlighted the regulatory challenges associated with applying existing expenditure rules to campaign activity conducted through social media.

295.In its recent report, Political finance regulation at the June 2017 general election,272 the Electoral Commission acknowledged that more action was needed to ensure compliance with the political finance rules for future elections, due to:

296.The report noted that commentary and concern had been raised during and after the election about the use of enhanced direct targeting techniques and the use of bots as campaign tools. It observed that spending on the creation or use of such campaigning techniques to produce and disseminate election campaign material was covered by the existing expenditure rules, but acknowledged that online campaigning presented some specific regulatory challenges. Bob Posner, Director of Political Finance and Regulation and Legal Counsel at the Electoral Commission, articulated the concerns that have been raised by the rise in campaigning through social media:

“Campaigning has a wide definition, whether you are a party or a non-party campaigner. If you are seeking to influence voters for or against, it is campaigning, under our law. There is quite a low threshold of spending where our regulation comes in. For the referendum, for example, spending of upwards of £10,000 brought it within our regulatory remit, if you were campaigning. For the election, in parts of the UK, it was £10,000; in England, it was £20,000. We monitor that.

Bots are a form of amplification of a message. They are a very effective way of amplifying something, but it is still campaigning, at root. The challenge that you are raising is how you spot it. That goes into our live monitoring and into other people observing things and reporting them to us, but we recognise that it is a challenge. Part of what we are doing at the moment is talking to the main social media platform providers and looking forward, to see where there can be improvements.”273
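
The thresholds Mr Posner describes can be expressed as a simple check. The sketch below is a deliberate simplification of electoral law, with invented example figures.

```python
# Simplified sketch of the spending thresholds Bob Posner describes;
# real electoral law is considerably more detailed.
THRESHOLDS_GBP = {
    "referendum": 10_000,
    "election_england": 20_000,
    "election_rest_of_uk": 10_000,
}

def within_regulatory_remit(contest, spend_gbp):
    """True once campaign spending reaches the regulated threshold."""
    return spend_gbp >= THRESHOLDS_GBP[contest]

print(within_regulatory_remit("election_england", 25_000))    # True
print(within_regulatory_remit("election_rest_of_uk", 8_000))  # False
```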

297.In its 2017 report, the Electoral Commission called for improved transparency and the regulation of online campaign material, stating that:

“We want to see changes to electoral law to help improve transparency and the regulation of online campaigns at UK elections.

We first recommended in 2003 that online campaign material produced by political parties and non-party campaigners should—like its printed equivalent—be required to include an imprint stating who has published it. This would enable voters to identify who is spending money on trying to influence them at elections.

Our recommendation would require secondary legislation to be introduced by the UK Government and approved by the UK Parliament. It will also require secondary legislation to be made by the Scottish Parliament and National Assembly for Wales in relation to elections to those legislatures.

We have also highlighted how the rules could be improved to ensure that campaigners report more detailed breakdowns of spending, including on different types of advertising such as online and social media promotion.

The UK Government, the Scottish Government and the Welsh Government should take steps to amend electoral law so that these changes are in place ahead of the next major elections in 2021 and 2022.”274

298.The Electoral Commission also noted that there were wider questions about social media in elections that it felt went beyond its remit.275 It stated that:

Options for tackling the challenges posed by social media

299.Social media are having, and will continue to have, a considerable impact on the political process. The issues outlined in this Chapter are provoking serious and wholly warranted concerns. This has naturally prompted consideration of how they might be tackled from a regulatory perspective.

300.The evidence was in no way conclusive about what, if any, regulatory solutions should be considered. The Minister, Matt Hancock, confirmed that tackling these issues was difficult. He said:

“The challenge is right, but the questions are, ‘What is the role for government?’, and, ‘How do you get there?’ As we all saw five years ago, in a very different press environment, when the whole Leveson debate was going on, getting to an answer is extremely difficult.”277

301.The Minister highlighted that some action was already being taken:

“A number of things can be done. The first is that the big platforms themselves can take action, and in some cases are taking action, to ensure that people have to hand information about the veracity and the source of news and information, as well as the news itself. The moves in that direction by the big social media companies are welcome, but there is much more to do.

The second thing that we can improve, and are improving, is how we teach young people to engage with this sort of information, and how they should think about their use of data online and the veracity and sources of news media. That is incredibly important, but it is a generational challenge to improve that sort of education.”278

The Minister went on to confirm that: “Getting a handle on the unregulated space is very difficult, because we need to approach the solution to the problem in a way that does not undermine the very values by which we are trying to govern the country.”279

302.Professor Vis also suggested that further regulation might not provide a workable solution:

“Regulation is a very slow beast. By the time it has gone through all the checks and balances, it will be outdated; what we are regulating for will no longer be the current situation, so I am not highly optimistic about that route. That does not mean that it should not be discussed—it absolutely should be discussed, and exhausted as a potential solution—but I see more potential in a middle ground that tries to avoid regulation, to reshape the conversation with the platforms and to explore what is possible at a platform level. There are different ways in which inroads can be made very positively, and much more quickly.”280

303.There was also some discussion on the role and responsibilities of the platforms themselves. Professor Margetts told us: “The big internet corporations and social media platforms have to do something. They must stop saying that it is not their problem, which is what we have seen until now. We are beginning to see some movement on that.”281

304.Google told us that it was starting to take action:

“Google wants to make it easier for people to get their news from legitimate and verified sources to help tackle misinformation. We are also looking to tackle the issue of misinformation through a series of measures, including removing advertising from sites that misrepresent content, promoting trusted and vetted news sources, and supporting factchecking organisations that can provide independent verification of news items.”282

On political advertising, Google added that:

“Google believes it is important that people have platforms to communicate and make themselves heard, and election advertising has long served a positive and inclusive role in elections.

However, all political adverts are subject to our policies on advertising content and targeting practices, and we require all political ads and landing pages to comply with the local campaign and election laws.”283

305.Facebook told us about the action it was taking to ensure that advertising on its site was more transparent. It said:

“We are also going to require more thorough documentation from advertisers who want to run election-related ads. We are starting with federal elections in the US, and will progress from there to additional contests and elections in other countries and jurisdictions. As part of the documentation process, advertisers may be required to identify that they are running election-related advertising and verify both their entity and location.

Once verified, these advertisers will have to include a disclosure in their election-related ads, which reads: ‘Paid for by.’ When you click on the disclosure, you will be able to see details about the advertiser. Like other ads on Facebook, you will also be able to see an explanation of why you saw that particular ad.

For political advertisers that do not proactively disclose themselves, we are building machine learning tools that will help us find them and require them to verify their identity.”284
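
The “machine learning tools” Facebook refers to are not described in detail in its evidence. As a heavily simplified illustration of the general approach, the sketch below trains a toy text classifier to separate election-related adverts from commercial ones; the training examples are invented and far too small for real use.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy sketch of a classifier for spotting undeclared election-related
# adverts. A production system would learn from large volumes of
# labelled advertising; these examples are invented.
ads = [
    "Vote for real change on Thursday",
    "Back our candidate for MP",
    "50% off trainers this weekend",
    "New pizza place now open in town",
]
labels = [1, 1, 0, 0]  # 1 = election-related, 0 = commercial

vectoriser = TfidfVectorizer()
model = LogisticRegression().fit(vectoriser.fit_transform(ads), labels)

new_ads = ["Vote for our candidate", "Trainers now 50% off"]
print(model.predict(vectoriser.transform(new_ads)))  # [1 0] on this toy data
```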

306.Twitter also told us about the action it was taking to tackle the issue of malicious automation, stating that:

“For example, in December 2017, our systems identified and challenged more than 6.4 million suspicious accounts globally per week—a 60% increase in our detection rate from October 2017. We currently detect and block approximately 523,000 suspicious logins daily for being generated through automation. Furthermore, since June 2017, we’ve removed more than 220,000 applications in violation of our developer and API rules, collectively responsible for more than 2.2 billion low-quality Tweets.”285
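
Twitter’s detection systems are proprietary, but one family of heuristics can be illustrated simply: flagging accounts whose posting rate exceeds any plausible human rate. The threshold and data below are invented for illustration.

```python
from datetime import datetime, timedelta

# Crude illustration of one automation heuristic: flag accounts that
# post faster than a plausible human rate. The threshold is invented.
def looks_automated(post_times, max_posts_per_hour=60):
    if len(post_times) < 2:
        return False
    hours = (max(post_times) - min(post_times)).total_seconds() / 3600
    return len(post_times) / max(hours, 1e-6) > max_posts_per_hour

start = datetime(2017, 12, 1, 12, 0)
burst = [start + timedelta(seconds=5 * i) for i in range(200)]  # 1 post/5s
print(looks_automated(burst))  # True: roughly 720 posts per hour
```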

307.We have not been able to assess these issues in detail, but we do not believe that the steps taken so far by social media companies will satisfactorily address the ongoing public concerns about the possible threats to democracy.

308.The issue of whether social media companies should be defined as ‘publishers’ rather than ‘platforms’ was raised. The argument was made more recently in a debate in the House of Lords moved by Baroness Kidron, who highlighted that:

“In common with publishers and broadcasters, these companies use editorial content as bait for advertising. They aggregate and spread the news, and provide data points and key words: behaviours that determine what is most important, how widely it should be viewed and by whom. In common with news publishers, they offer a curated view of what is going on in the world.

The Silicon Valley companies are content creators, aggregators, editors, information cataloguers, broadcasters and publishers. Indeed, severally and together they publish far more media than any other publisher in any other context—but, in claiming to be ‘mere conduits’, they are ducking the responsibilities that the rest of the media ecosystem is charged with.”286

309.When giving evidence to us, Carl Miller explained some of the complications associated with regulating platforms as publishers:

“In doing that, we would make it legally impossible for those entities to exist. Legally, they cannot take responsibility for the content on their platforms; they never would be able to, and they would all shut down in a day. We need a new kind of legal settlement, seeing them not as publishers and not as completely objective, almost like utility companies. Something else has to happen. We need to be creative in the new legal fictions we create, in order partly to empower our own law enforcement and regulatory agencies to be more powerful online and partly to hold the large tech companies to their responsibilities.”287

310.The Government struck a more positive note about the approaches that could be taken internationally. Matt Hancock said: “I would strongly caution against the idea that, just because the global internet platform companies are global, we have no influence. That is not the attitude we take in the UK Government at all.” He pointed to the Government’s Digital Charter, which aimed to change people’s attitudes towards the internet, and to other areas of internet regulation where the Government had been working internationally. He stated that: “Do not think for one minute that we are powerless in the face of the big institutions. We are in fact leading the world in ensuring that the internet is ultimately a force for good in the world, rather than a free-for-all.”288

311.The Council of Europe has started to look into some of these issues. In 2016, it adopted an Internet Governance Strategy 2016–2019 which aims “to ensure that public policy for the Internet is people-centred, meaning that it should respect the core values of democracy, human rights and the rule of law. Its strategic objectives are to build democracy online, to protect Internet users, and to ensure respect and protection for human rights online.”289 The Council of Europe has commissioned a number of studies and reports on internet governance. It has also taken steps to establish a framework for a partnership for human rights, democracy and the rule of law between itself and internet companies, “with a view to creating a space for closer consultation with intermediaries on issues related to the exercise and enjoyment of human rights online. The Council of Europe thus also aims to promote dialogue between internet companies and other stakeholders.”290

312.The concerns outlined above will also need to be tackled alongside wider issues relating to digital media, artificial intelligence and their role in people’s daily lives. These matters are moving quickly up the political agenda. For example, we welcome the announcement that the Nuffield Foundation is to fund and establish a new institute, independent of government, to better understand and examine the impact of artificial intelligence, data, and algorithms on people and society.291 The purpose of the institute will be to ensure that the use of data—and the use of automated technologies that serve to augment it—is harnessed to promote social wellbeing, both for society as a whole and for different groups within it.

313.Collaborating with industry, civil society and other sectors, the new institute will promote and support ethical practice in the public interest through its convening role, by initiating research, and through deliberation and dialogue. The Foundation’s partners developing the new body include the existing and respected Nuffield Council on Bioethics, as well as The Alan Turing Institute, the Wellcome Trust, the Royal Statistical Society, techUK, the Royal Society, the British Academy and the Omidyar Network’s Governance and Citizen Engagement programme. It is expected that the institute will be established before the end of 2018, and that the terms of reference will be published soon. The institute is expected to complement existing regulatory frameworks, including that provided by the Information Commissioner’s Office, as well as the oversight provided by the Government’s Centre for Data Ethics and Innovation.

314.In January 2018, the Government published its policy paper for its Digital Charter.292 The introduction says:

“The internet is a powerful force for good. It serves humanity, spreads ideas and enhances freedom and opportunity across the world. Combined with new technologies such as artificial intelligence, it is set to change society perhaps more than any previous technological revolution—growing the economy, making us more productive, and raising living standards.

Alongside these new opportunities come new challenges and risks. The internet can be used to spread terrorist material; it can be a tool for abuse and bullying; and it can be used to undermine civil discourse, objective news and intellectual property. Citizens rightly want to know that they will be safe and secure online. Tackling these challenges in an effective and responsible way is critical for digital technology to thrive.

The Digital Charter is our response: a rolling programme of work to agree norms and rules for the online world and put them into practice. In some cases this will be through shifting expectations of behaviour; in some we will need to agree new standards; and in others we may need to update our laws and regulations. Our starting point will be that we will have the same rights and expect the same behaviour online as we do offline.

The Charter’s core purpose is to make the internet work for everyone—for citizens, businesses and society as a whole. It is based on liberal values that cherish freedom, but not the freedom to harm others. These are challenges with which every nation is grappling. The internet is a global network and we will work with other countries that share both our values and our determination to get this right.”

315.The Government has so far announced little detail about what the Digital Charter’s work will involve or its likely timescales. However, its work programme is intended to be “a broad, ongoing programme, which will evolve as technology changes”. Its current priorities include “online harms”, “liability” and “disinformation”. The Government also plans to develop the Charter in conjunction with the technology sector, businesses and civil society, convening the various stakeholders in order to find solutions.293 The recent allegations regarding Cambridge Analytica and Facebook, which we noted in Chapter 1, have placed the challenges relating to digital media under a new and glaring spotlight, making Government action on the Digital Charter an even more urgent priority.

316.Overall it appears that there is a good deal of uncertainty amongst governments, regulatory bodies and the platforms themselves about how these issues should be tackled. Professor Vis stressed that the issues we are facing “were not anticipated and are not well understood.” She highlighted that social media platforms have only existed for around 15 years and that they were not necessarily originally conceived as being “built for the purposes of furthering democratic principles and ideals.” Professor Vis concluded that: “We are only just starting to understand the breadth of that. Therefore, we can only just start to think about remedies.”294

317.The Minister, Matt Hancock, concluded: “It is the work of a generation to ensure that this amazing new technology allows for the flourishing of humanity rather than its undermining. It is no smaller than that.”295

318.The evidence received by the Committee on the use of social media to influence political debate adversely was deeply concerning. We appreciate the complexities of considering a regulatory solution to these issues. We are, however, acutely aware of the urgency of the situation, as many witnesses highlighted that governments, regulators and the platforms themselves are on the ‘back foot’ on many of these issues and have been too slow to address the spread of misinformation and the manipulation of political information on social media platforms. We believe that these issues warrant serious and concerted investigation, and recommend that the Government urgently conduct further research into them.

319.One way to combat the spread of misinformation online, and to limit its potential impact on democratic debate, is to ensure that people have critical literacy skills to match their digital skills, enabling them to assess and analyse the information they read online. The Department for Education must ensure that such skills are taught to people of all ages, including children and young people at schools and colleges, as well as adults in further education.

320.We were concerned to hear the issues raised by the Electoral Commission and support its calls for more transparency in online campaign material. The Electoral Commission has called for the Government to introduce secondary legislation to ensure that online campaign material must, like its printed equivalent, include an imprint stating who has published it. This will be crucial in helping to ensure that public confidence in the electoral system is maintained, and we endorse this recommendation. However, we recognise that this will do little to address the challenges posed by international actors who try to operate below the radar.

321.We have already recommended that the Electoral Commission should play a greater role in overseeing voting intention polling during election campaigns. In the light of the current challenges posed by digital media, and its ongoing work to ensure transparency relating to online campaign material, it is likely that the Electoral Commission will need to play an increasingly important role in helping to ensure that the democratic process in the UK is not subverted.

322.We welcome the Government’s announcement of the Digital Charter, which will agree new standards for online behaviour. As identified in this report, digital technologies pose some very serious challenges and risks for democracy, which require urgent attention and decisive action. The Government should, without further delay, outline the specific actions it will take to address the Charter’s priorities, including the legal liability of online platforms and limiting the spread and impact of disinformation, and publish the likely timescales for its programme of work.

323.The Government should also ensure that the Digital Charter’s work programme includes:

This work will clearly need to be conducted in close collaboration with, or even commissioned from, independent organisations including research bodies, businesses, civil society and other stakeholders. The challenges associated with digital media are fast-moving and the work outlined above should be pursued urgently.

324.We also recommend that the Government should initiate talks within the Council of Europe, the Organisation for Security and Co-operation in Europe, the Commonwealth, the Group of Eight (G8) and other international bodies, to discuss international approaches to tackling the problems posed to the democratic process by the rise of digital and social media.


217 House of Commons Digital, Culture, Media and Sport Committee, ‘What is ‘fake news’?’ (15 September 2017): https://www.parliament.uk/business/committees/committees-a-z/commons-select/digital-culture-media-and-sport-committee/news/fake-news-inquiry-launch-17-19/ [accessed 20 March 2018]

218 David Robert Grimes, ‘Echo chambers are dangerous—we must try to break free of our online bubbles’, The Guardian (4 December 2017): https://www.theguardian.com/science/blog/2017/dec/04/echo-chambers-are-dangerous-we-must-try-to-break-free-of-our-online-bubbles [accessed 20 March 2018]

219 Q 31 (Carl Miller)

220 Liaison Committee, New ad hoc Committees in 2018–19 (2nd Report, Session 2017–19, HL Paper 103)

221 The Centre for Digital Media, ‘What is digital media?’: https://thecdm.ca/program/digital-media [accessed 20 March 2018]

222 Written evidence from YouGov (PPD0016)

223 Ibid.

224 Written evidence from the Market Research Society (PPD0010)

225 Q 65 (Jim Waterson)

226 Oxford English Dictionary, ‘Definition of social media in English’: https://en.oxforddictionaries.com/definition/social_media [accessed 20 March 2018]

227 Patrick Evans, ‘Can social media be used to predict election results?’, BBC News (10 November 2016): http://www.bbc.co.uk/news/election-us-2016-37942842 [accessed 20 March 2018]

228 BrandsEye, ‘About us’: https://www.brandseye.com/about-us/ [accessed 20 March 2018]

229 BrandsEye, ‘How it works’: https://www.brandseye.com/how-it-works/ [accessed 20 March 2018]

230 Q 38 (Jean Pierre Kloppers)

231 Ibid.

232 Written evidence from Dr Christopher Prosser and Dr Jonathan Mellon (PPD0008)

233 Written evidence from ComRes, Opinium, Ipsos MORI, Panelbase, LucidTalk, ORB International, BMG Research and Survation (PPD0014)

234 Written evidence from Dr Christopher Prosser and Dr Jonathan Mellon (PPD0008)

235 Q 34 (Professor Chris Hanretty)

236 Written evidence from Dr Mark Shepard and Dr Narisong Huhe (PPD0003)

237 Written evidence from the Alan Turing Institute (PPD0019)

238 Written evidence from Dr Christopher Prosser and Dr Jonathan Mellon (PPD0008)

239 Written evidence from Dr Nick Anstead (PPD0018)

240 Carl Miller, ‘The Rise of Digital Politics’, Demos (October 2016): https://www.demos.co.uk/wp-content/uploads/2016/10/Demos-Rise-of-Digital-Politics.pdf [accessed 20 March 2018]

241 Pew Research Center, ‘The Political Environment on Social Media, 3. Social media and political engagement’ (25 October 2016): http://www.pewinternet.org/2016/10/25/political-engagement-and-social-media/ [accessed 20 March 2018]

242 Angela Phillips, ‘Social media is changing the face of politics—and it’s not good news’, The Conversation (February 2016): https://theconversation.com/social-media-is-changing-the-face-of-politics-and-its-not-good-news-54266 [accessed 20 March 2018]

243 Written evidence from ComRes, BMG Research, Ipsos MORI, LucidTalk, Opinium, ORB International, Panelbase and Survation (PPD0014)

244 Alex Krasodomski-Jones, ‘Talking to ourselves? Political Debate Online and the Echo Chamber Effect’, Demos (January 2017): https://www.demos.co.uk/wp-content/uploads/2017/02/Echo-Chambers-final-version.pdf [accessed 20 March 2018]

245 Written evidence from Anthony Wells (PPD0015)

246 Alex Krasodomski-Jones, ‘Talking to ourselves? Political Debate Online and the Echo Chamber Effect’, Demos (January 2017): https://www.demos.co.uk/wp-content/uploads/2017/02/Echo-Chambers-final-version.pdf [accessed 20 March 2018]

247 Q 123 (Professor Farida Vis)

248 Q 27 (Carl Miller)

249 Q 67 (Jim Waterson)

250 Oxford Internet Institute, Computational Propaganda Worldwide: Executive Summary (June 2017): http://comprop.oii.ox.ac.uk/wp-content/uploads/sites/89/2017/06/Casestudies-ExecutiveSummary.pdf [accessed 20 March 2018]

251 Tse Yin Lee, ‘Bots used to bias online political chats’, BBC News (21 June 2017): http://www.bbc.co.uk/news/technology-40344208 [accessed 20 March 2018]

252 Q 124 (Professor Farida Vis)

253 Q 123 (Professor Farida Vis)

254 Q 101 (Matt Hancock MP)

255 Q 54 (Will Moy)

256 Q 50 (Professor Helen Margetts)

257 Q 51 (Will Moy)

258 Q 52 (Professor Helen Margetts)

259 Q 125 (Professor Vis)

260 Q 172 (Matt Hancock MP)

261 Q 26 (Carl Miller)

262 Q 28 (Carl Miller)

263 Information Commissioner, ‘The Information Commissioner opens a formal investigation into the use of data analytics for political purposes’ (May 2017): https://iconewsblog.wordpress.com/2017/05/17/information-commissioner-elizabeth-denham-opens-a-formal-investigation-into-the-use-of-data-analytics-for-political-purposes/ [accessed 20 March 2018]

264 ‘How online campaigning is influencing Britain’s election’, The Economist (27 May 2017): https://www.economist.com/news/britain/21722690-social-media-allow-parties-target-voters-tailored-messagesand-cat-videos-how-online [accessed 20 March 2018]

265 Ibid.

266 The Electoral Commission, UK Parliamentary General Election 2015: Campaign spending report, p 29 (February 2016): http://www.electoralcommission.org.uk/__data/assets/pdf_file/0006/197907/UKPGE-Spending-Report-2015.pdf [accessed 20 March 2018]

267 Peter Walker, ‘Tories spent £18.5m on election that cost them majority’, The Guardian (19 March 2018): https://www.theguardian.com/politics/2018/mar/19/electoral-commission-conservatives-spent-lost-majority-2017-election [accessed 20 March 2018]

268 Q 48 (Will Moy)

269 Q 76 (Sue Inglish)

270 Q 76 (Professor Richard Tait CBE)

271 Q 125 (Professor Farida Vis)

272 Electoral Commission, Political finance regulation at the June 2017 UK general election (November 2017): https://www.electoralcommission.org.uk/__data/assets/pdf_file/0004/237550/Political-finance-regulation-at-the-June-2017-UK-general-election-PDF.pdf [accessed 20 March 2018]

273 Q 164 (Bob Posner)

274 Electoral Commission, Political finance regulation at the June 2017 UK general election (November 2017): https://www.electoralcommission.org.uk/__data/assets/pdf_file/0004/237550/Political-finance-regulation-at-the-June-2017-UK-general-election-PDF.pdf [accessed 20 March 2018]

275 Ibid.

276 Ibid.

277 Q 174 (Matt Hancock MP)

278 Q 172 (Matt Hancock MP)

279 Ibid.

280 Q 124 (Professor Farida Vis)

281 Q 50 (Professor Helen Margetts)

282 Written evidence from Google (PPD0029)

283 Ibid.

284 Written evidence from Facebook (PPD0030)

285 Written evidence from Twitter (PPD0031)

286 HL Deb, 11 January 2018, col 368

287 Q 30 (Carl Miller)

288 Q 179 (Matt Hancock MP)

289 Council of Europe, Internet Governance—Council of Europe Strategy 2016–2019 (September 2016): https://rm.coe.int/16806ad2a8 [accessed 20 March 2018]

290 Council of Europe, Internet Governance—Thematic Focus: https://rm.coe.int/leaflet-internet-governance-en/1680735bf6 [accessed 20 March 2018]

291 Nuffield Foundation, ‘Data Ethics and Artificial Intelligence’: http://www.nuffieldfoundation.org/data-ethics-and-artificial-intelligence [accessed 20 March 2018]

292 Department for Digital, Culture, Media & Sport, Policy paper: Digital Charter (25 January 2018): https://www.gov.uk/government/publications/digital-charter/digital-charter [accessed 20 March 2018]

293 Ibid.

294 Q 123 (Professor Farida Vis)

295 Q 179 (Matt Hancock MP)




© Parliamentary copyright 2018