Disinformation and 'fake news': Final Report

2 Regulation and the role, definition and legal liability of tech companies

Definitions

11.In our Interim Report, we disregarded the term ‘fake news’ as it had “taken on a variety of meanings, including a description of any statement that is not liked or agreed with by the reader” and instead recommended the terms ‘misinformation’ and ‘disinformation’. With those terms come “clear guidelines for companies, organisations and the Government to follow” linked with “a shared consistency of meaning across the platforms, which can be used as the basis of regulation and enforcement”.15

12.We were pleased that the Government accepted our view that the term ‘fake news’ is misleading, and instead sought to address the terms ‘disinformation’ and ‘misinformation’. In its response, the Government stated:

In our work we have defined disinformation as the deliberate creation and sharing of false and/or manipulated information that is intended to deceive and mislead audiences, either for the purposes of causing harm, or for political, personal or financial gain. ‘Misinformation’ refers to the inadvertent sharing of false information.16

13.We also recommended a new category of social media company, which tightens tech companies’ liabilities, and which is not necessarily either a ‘platform’ or a ‘publisher’. The Government did not respond at all to this recommendation, but Sharon White, Chief Executive of Ofcom, called this new category “very neat” because “platforms do have responsibility, even if they are not the content generator, for what they host on their platforms and what they advertise”.17

14.Social media companies cannot hide behind the claim of being merely a ‘platform’ and maintain that they have no responsibility themselves in regulating the content of their sites. We repeat the recommendation from our Interim Report that a new category of tech company is formulated, which tightens tech companies’ liabilities, and which is not necessarily either a ‘platform’ or a ‘publisher’. This approach would see the tech companies assume legal liability for content identified as harmful after it has been posted by users. We ask the Government to consider this new category of tech company in its forthcoming White Paper.

Online harms and regulation

15.Earlier in our inquiry, we heard evidence from Sandy Parakilas and Tristan Harris, who were both at that time involved in the US-based Center for Humane Technology. The Center has compiled a ‘Ledger of Harms’, which summarises the “negative impacts of technology that do not show up on the balance sheets of companies, but on the balance sheet of society”.18 These harms include loss of attention, mental health issues, confusion over personal relationships, risks to our democracies, and issues affecting children.19

16.This proliferation of online harms is made more dangerous by ‘micro-targeted messaging’, which focuses specific messages on individuals, often playing on and distorting people’s negative views of themselves and of others. This distortion is made even more extreme by the use of ‘deepfakes’: audio and video clips that look and sound like a real person saying something that that person has never said.20 As we said in our Interim Report, these examples will only become more complex and harder to spot the more sophisticated the software becomes.21

17.The Health Secretary, Rt Hon Matthew Hancock MP, recently warned tech companies, including Facebook, Google and Twitter, that they must remove inappropriate, harmful content, following the events surrounding the death of Molly Russell who, aged 14, took her own life in November 2017. Her Instagram account contained material connected with depression, self harm and suicide. Facebook, which owns Instagram, said that it was ‘deeply sorry’ over the case.22 The head of Instagram, Adam Mosseri, had a meeting with the Health Secretary in early February 2019, and said that Instagram was “not where we need to be on issues of self-harm and suicide” and that it was trying to balance “the need to act now and the need to act responsibly”.23

18.We also note that, in her speech on 5 February 2019, Margot James MP, the Minister for Digital at the Department for Digital, Culture, Media and Sport, expressed her concerns that:

For too long the response from many of the large platforms has fallen short. There have been no fewer than fifteen voluntary codes of practice agreed with platforms since 2008. Where we are now is an absolute indictment of a system that has relied far too little on the rule of law. The White Paper, which DCMS are producing with the Home Office, will be followed by a consultation over the summer and will set out new legislative measures to ensure that the platforms remove illegal content, and prioritise the protection of users, especially children, young people and vulnerable adults.24

The new Centre for Data Ethics and Innovation, and algorithms

19.As we said in our Interim Report, both social media companies and search engines use algorithms, or sequences of instructions, to personalise news and other content for users. The algorithms select content based on factors such as a user’s past online activity, social connections, and their location. The tech companies’ business models rely on revenue coming from the sale of adverts and, because the bottom line is profit, any form of content that increases profit will always be prioritised. Therefore, negative stories will always be prioritised by algorithms, as they are shared more frequently than positive stories.25
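
By way of illustration only, the sketch below shows how a purely engagement-driven ranking of this kind might work in principle. It is a hypothetical toy example, not a description of any platform's actual algorithm; the story names, engagement figures and revenue values are invented.

```python
# Illustrative sketch only: a toy, engagement-driven feed ranking.
# This is NOT any platform's real algorithm; the data below is invented.
from dataclasses import dataclass


@dataclass
class Story:
    headline: str
    expected_shares: float   # hypothetical prediction of how often the story is shared
    revenue_per_view: float  # hypothetical advertising revenue earned per view


def rank_feed(stories):
    """Order stories by the advertising revenue they are expected to generate."""
    return sorted(stories, key=lambda s: s.expected_shares * s.revenue_per_view, reverse=True)


feed = rank_feed([
    Story("Local charity opens new shelter", expected_shares=120, revenue_per_view=0.01),
    Story("Outrage as council 'covers up' scandal", expected_shares=900, revenue_per_view=0.01),
])
print([s.headline for s in feed])
# The more widely shared (and more negative) story is ranked first.
```

Because the ordering in this sketch depends only on predicted engagement and the revenue it brings, the story that is shared more, however negative, is placed first.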

20.Just as information about the tech companies themselves needs to be more transparent, so does information about their algorithms. These can carry inherent biases, as a result of the way that they are developed by engineers; these biases are then replicated, spread, and reinforced. Monika Bickert, from Facebook, admitted that Facebook was concerned about “any type of bias, whether gender bias, racial bias or other forms of bias that could affect the way that work is done at our company. That includes working on algorithms”. Facebook should be taking a more active and urgent role in tackling such inherent biases in algorithm development by engineers, to prevent these biases being replicated and reinforced.26

21.Following an announcement in the 2017 Budget, the new Centre for Data Ethics and Innovation was set up by the Government to advise on “how to enable and ensure ethical, safe and innovative uses of data, including for AI”. The Secretary of State described its role:

The Centre is a core component of the Government’s Digital Charter, which seeks to agree norms and rules for the online world. The Centre will enable the UK to lead the global debate about how data and AI can and should be used.27

22.The Centre will act as an advisory body to the Government and its core functions will include: analysing and anticipating gaps in governance and regulation; agreeing and articulating best practice, codes of conduct and standards in the use of Artificial Intelligence; and advising the Government on policy and regulatory actions needed in relation to innovative and ethical uses of data.28

23.The Government response to our Interim Report highlighted consultation responses that identified priorities for the Centre’s immediate attention, including “data monopolies, the use of predictive algorithms in policing, the use of data analytics in political campaigning, and the possibility of bias in automated recruitment decisions”. We welcome the introduction of the Centre and look forward to taking evidence from it in future inquiries.

Legislation in Germany and France

24.Other countries have legislated against harmful content on tech platforms. As we said in our Interim Report, tech companies in Germany were initially asked to remove hate speech within 24 hours. When this self-regulation did not work, the German Government passed the Network Enforcement Act, commonly known as NetzDG, which became law in January 2018. This legislation forces tech companies to remove hate speech from their sites within 24 hours, and fines them up to €20 million if it is not removed.29 As a result of this law, one in six of Facebook’s moderators now works in Germany, which is practical evidence that legislation can work.30

25.A new law in France, passed in November 2018, allows judges to order the immediate removal of online articles during election campaigns if they decide that those articles constitute disinformation. The law states that users must be provided with “information that is fair, clear and transparent” on how their personal data is being used, and that sites must disclose money they have been given to promote information. It also gives the French national broadcasting agency the power to suspend television channels controlled by or under the influence of a foreign state if they “deliberately disseminate false information likely to affect the sincerity of the ballot”. Sanctions for violating the law include one year in prison and a fine of €75,000.31

The UK

26.As the UK Information Commissioner, Elizabeth Denham, told us in November 2018, a tension exists between the social media companies’ business model, which is focused on advertising, and human rights, such as the protection of privacy: “That is where we are right now and it is a very big job for both the regulators and the policymakers to ensure that the right requirements, oversight and sanctions are in place.”32 She told us that Facebook, for example, should do more and should be “subject to stricter regulation and oversight”.33 Indeed, Facebook’s activities in the political sphere have been expanding; for instance, it has recently launched ‘Community Actions’, a News Feed petition feature that allows users to organise around local political issues by starting and supporting political petitions. It is hard to understand how Facebook will be able to self-regulate such a feature; the more controversial and contentious the local issue, the more engagement there will be on Facebook, with the accompanying revenue from adverts.34

Facebook and regulation

27.Despite all the apologies for past mistakes that Facebook has made, it still seems unwilling to be properly scrutinised. Several times throughout the oral evidence session at the ‘International Grand Committee’, Richard Allan, Vice President of Policy Solutions at Facebook, was asked about Facebook’s views on regulation, and each time he stated that Facebook was very open to the debate on regulation, and that working together with governments would be the best way forward:

I am pleased, personally, and the company is very much engaged, all the way up to our CEO—he has spoken about this in public—on the idea of getting the right kind of regulation so that we can stop being in this confrontational mode. It doesn’t serve us or our users well. Let us try to get to the right place, where you agree that we are doing a good enough job and you have powers to hold us to account if we are not, and we understand what the job is that we need to do. That is on the regulation piece.35

28.Ashkan Soltani, an independent researcher and consultant, and former Chief Technologist to the US Federal Trade Commission (FTC), called into question Facebook’s willingness to be regulated. When discussing Facebook’s internal culture, he said, “There is a contemptuousness—that ability to feel like the company knows more than all of you and all the policy makers”.36 He discussed the California Consumer Privacy Act, which Facebook supported in public but lobbied against behind the scenes.37

29.Facebook seems willing neither to be regulated nor scrutinised. It is considered common practice for foreign nationals to give evidence before committees. Indeed, in July 2011, the then Culture, Media and Sport Committee heard evidence from Rupert Murdoch, during the inquiry into phone hacking38 and the Treasury Committee has recently heard evidence from three foreign nationals.39 By choosing not to appear before the Committee and by choosing not to respond personally to any of our invitations, Mark Zuckerberg has shown contempt towards both the UK Parliament and the ‘International Grand Committee’, involving members from nine legislatures from around the world.

30.The management structure of Facebook is opaque to those outside the business and seems designed to conceal knowledge of, and responsibility for, specific decisions. Facebook’s strategy was to send witnesses who it said were the most appropriate representatives, yet who had not been properly briefed on crucial issues and could not, or chose not to, answer many of our questions. They then promised to follow up with letters, which—unsurprisingly—failed to address all of our questions. We are left in no doubt that this strategy was deliberate.

Existing UK regulators

31.In the UK, the main relevant regulators—Ofcom, the Advertising Standards Authority, the Information Commissioner’s Office, the Electoral Commission and the Competition and Markets Authority—have specific responsibilities around the use of content, data and conduct. When Sharon White, the Chief Executive of Ofcom, appeared before the Committee in October 2018, following the publication of our Interim Report, we asked her whether Ofcom’s experience as a broadcasting regulator could be of benefit when considering how to regulate online content. She said:

We have tried to look very carefully at where we think the synergies are. […] It struck us that there are two or three areas that might be applicable online. […] The fact that Parliament has set standards, set quite high level objectives, has felt to us very important but also very enduring with key objectives, whether that is around the protection of children or concerns about harm and offence. You can see that reading across to a democratic process about what are the harms that we believe as a society may be prevalent online. The other thing that is very important in the broadcasting code is that it sets out explicitly the fact that these things adapt over time as concerns about harm adapt and concerns among consumers adapt. It then delegates the job to an independent regulator to work through in practice how those so-called standards objectives are carried forward. There is transparency, the fact that we publish our decisions when we breach, and that is all very open to the public. There is scrutiny of our decisions and there is independence around the judgment.40

32.She added that the job of a regulator of online content could be to assess the effectiveness of the technology companies in acting against content which has been designated as harmful: “One approach would be to say do the companies have the systems and the processes and the governance in place with transparency that brings public accountability and accountability to Parliament, that the country could be satisfied of a duty of care or that the harms are being addressed in a consistent and effective manner”.41

33.However, should Ofcom be asked to take on the role of regulating the activities of social media companies, it would need to be given new investigatory powers. Sharon White told the Committee that “It would be absolutely fundamental to have statutory information-gathering powers on a broad area”.42

34.The UK Council for Internet Safety (UKCIS) is a new organisation, sponsored by the Department for Digital, Culture, Media and Sport, the Department for Education and the Home Office, bringing together more than 200 organisations with the intention of keeping children safe online. Its website states: “If it’s unacceptable offline, it’s unacceptable online”. Its focus will include online harms such as: cyberbullying and sexual exploitation; radicalisation and extremism; violence against women and girls; hate crime and hate speech; and forms of discrimination against groups protected under the Equality Act.43 Guy Parker, CEO of the Advertising Standards Authority, told us that the Government could decide to include advertising harms within their definition of online harms.44

35.We believe that the UK Council for Internet Safety should include within its remit “the risk to democracy” as identified in the Center for Humane Technology’s “Ledger of Harms”, particularly in relation to ‘deepfake’ films. We note that Facebook is included as a member of the UKCIS and, in view of its potential influence, understand why. However, given the conduct of Facebook in this inquiry, we have concerns about the good faith of the business and its capacity to participate in the work of UKCIS in the public interest, as opposed to its own interests.

36.When the Secretary of State for Digital, Culture, Media and Sport (DCMS), Rt Hon Jeremy Wright MP, was asked about formulating a spectrum of online harm, he gave a limited answer: “What we need to understand is the degree to which people are being misled or the degree to which elections are being improperly interfered with or influenced and, if they are […] we need to come up with appropriate responses and defences. It is part of a much more holistic landscape and I do not think it is right to try to segment it out”.45 However, having established the difficulties surrounding the definition, spread and responsibility of online harms, the Secretary of State was more forthcoming when asked about the regulation of social media companies, and said that the UK should be taking the lead:

My starting point is what are the harms, and what are the responsibilities that we can legitimately expect online entities to have for helping us to minimise, or preferably to eliminate, those harms. Then, once you have established those responsibilities, what systems should be in place to support the exercise of those responsibilities.46

We hope that the Government’s White Paper will set out its view both on the legislation needed to ensure proper, meaningful online safety and on the role expected of the UKCIS.

37.Our Interim Report recommended that clear legal liabilities should be established for tech companies to act against harmful or illegal content on their sites. There is now an urgent need to establish independent regulation. We believe that a compulsory Code of Ethics should be established, overseen by an independent regulator, setting out what constitutes harmful content. The independent regulator would have statutory powers to monitor relevant tech companies; this would create a regulatory system for online content that is as effective as that for offline content industries.

38.As we said in our Interim Report, such a Code of Ethics should be similar to the Broadcasting Code issued by Ofcom—which is based on the guidelines established in section 319 of the 2003 Communications Act. The Code of Ethics should be developed by technical experts and overseen by the independent regulator, in order to set down in writing what is and is not acceptable on social media. This should include harmful and illegal content that has been referred to the companies for removal by their users, or that should have been easy for tech companies themselves to identify.

39.The process should establish clear, legal liability for tech companies to act against agreed harmful and illegal content on their platform and such companies should have relevant systems in place to highlight and remove ‘types of harm’ and to ensure that cyber security structures are in place. If tech companies (including technical engineers involved in creating the software for the companies) are found to have failed to meet their obligations under such a Code, and not acted against the distribution of harmful and illegal content, the independent regulator should have the ability to launch legal proceedings against them, with the prospect of large fines being administered as the penalty for non-compliance with the Code.

40.This same public body should have statutory powers to obtain any information from social media companies that is relevant to its inquiries. This could include the capability to check what data is being held on an individual user, if a user requests such information. This body should also have access to tech companies’ security mechanisms and algorithms, to ensure they are operating responsibly. This public body should be accessible to the public and be able to take up complaints from members of the public about social media companies. We ask the Government to put forward these proposals in its forthcoming White Paper.

Use of personal and inferred data

41.When Mark Zuckerberg gave evidence to Congress in April 2018, in the wake of the Cambridge Analytica scandal, he made the following claim: “You should have complete control over your data […] If we’re not communicating this clearly, that’s a big thing we should work on”. When asked who owns “the virtual you”, Zuckerberg replied that people themselves own all the “content” they upload, and can delete it at will.47 However, the advertising profile that Facebook builds up about users cannot be accessed, controlled or deleted by those users. It is difficult to reconcile this fact with the assertion that users own all “the content” they upload.

42.In the UK, the protection of user data is covered by the General Data Protection Regulation (GDPR).48 However, ‘inferred’ data is not protected; this includes characteristics that may be inferred about a user not based on specific information they have shared, but through analysis of their data profile. This, for example, allows political parties to identify supporters on sites like Facebook, through data profile matching and the ‘lookalike audience’ advertising targeting tool. According to Facebook’s own description of ‘lookalike audiences’, advertisers have the advantage of reaching new people on Facebook “who are likely to be interested in their business because they are similar to their existing customers”.49
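
To illustrate the principle of profile matching described above, the following is a minimal, hypothetical sketch of how a ‘lookalike’ match might be computed from inferred interest profiles. It is not Facebook’s actual tool; the similarity measure, threshold and data are all invented for illustration.

```python
# Illustrative sketch only: a toy 'lookalike audience' matcher.
# This is NOT Facebook's real tool; the profiles and threshold are invented.
import math


def cosine(a, b):
    """Cosine similarity between two interest-profile vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def lookalike_audience(customers, prospects, threshold=0.9):
    """Return prospects whose inferred interest profile resembles an existing customer's."""
    matches = []
    for name, profile in prospects.items():
        if any(cosine(profile, customer) >= threshold for customer in customers):
            matches.append(name)
    return matches


# Hypothetical inferred interest vectors, e.g. [politics, sport, travel]
existing_customers = [[0.9, 0.1, 0.2], [0.8, 0.0, 0.3]]
prospects = {"user_a": [0.85, 0.05, 0.25], "user_b": [0.1, 0.9, 0.4]}
print(lookalike_audience(existing_customers, prospects))  # ['user_a']
```

In this sketch, user_a is added to the audience not because of anything they have disclosed to the advertiser, but solely because their inferred profile resembles those of existing customers.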

43.The ICO Report, published in July 2018, challenges the presumption, held by political parties, that inferred data is not personal information:

Our investigation found that political parties did not regard inferred data as personal information as it was not factual information. However, the ICO’s view is that as this information is based on assumptions about individuals’ interests and preferences and can be attributed to specific individuals, then it is personal information and the requirements of data protection law apply to it.50

44.Inferred data is therefore regarded by the ICO as personal data, which becomes a problem when users are told that they can own their own data, and that they have power over where that data goes and what it is used for. Protecting our data helps us secure the past, but protecting inferences and uses of Artificial Intelligence (AI) is what we will need to protect our future.

45.The Information Commissioner, Elizabeth Denham, raised her concerns about the use of inferred data in political campaigns when she gave evidence to the Committee in November 2018, stating that there has been:

A disturbing amount of disrespect for personal data of voters and prospective voters. What has happened here is that the model that is familiar to people in the commercial sector of behavioural targeting has been transferred—I think transformed—into the political arena. That is why I called for an ethical pause so that we can get this right. We do not want to use the same model that sells us holidays and shoes and cars to engage with people and voters. People expect more than that. This is a time for a pause to look at codes, to look at the practices of social media companies, to take action where they have broken the law. For us, the main purpose of this is to pull back the curtain and show the public what is happening with their personal data.51

46.With specific reference to the use of ‘lookalike audiences’ on Facebook, Elizabeth Denham told the Committee that they “should be made transparent to the individuals [users]. They would need to know that a political party or an MP is making use of lookalike audiences. The lack of transparency is problematic”.52 When we asked the Information Commissioner whether she felt that the use of ‘lookalike audiences’ was legal under GDPR, she replied: “We have to look at it in detail under the GDPR, but I am suggesting that the public is uncomfortable with lookalike audiences and it needs to be transparent”.53 People need to be made aware when information they give for a specific purpose is being used to infer information about them for another purpose.

47.The Secretary of State, Rt Hon Jeremy Wright MP, also told us that the ethical and regulatory framework surrounding AI should develop alongside the technology, not “run to catch up” with it, as has happened with other technologies in the past.54 We shall be exploring the issues surrounding AI in greater detail, in our inquiry into immersive and addictive technologies, which was launched in December 2018.55

48.We support the recommendation from the ICO that inferred data should be as protected under the law as personal information. Protections of privacy law should be extended beyond personal information to include models used to make inferences about an individual. We recommend that the Government studies the way in which the protections of privacy law can be expanded to include models that are used to make inferences about individuals, in particular during political campaigning. This will ensure that inferences about individuals are treated as importantly as individuals’ personal information.

Enhanced role of the ICO and a levy on tech companies

49.In our Interim Report, we called for the ICO to have greater capacity, both to be an effective “sheriff in the Wild West of the Internet” and to anticipate future technologies. The ICO needs to have at least the same level of technical expertise as the organisations under its scrutiny.56 We recommended that a levy could be placed on tech companies operating in the UK, to help pay for this work, in a similar vein to the way in which the banking sector pays for the operating costs of the Financial Conduct Authority.57

50.When the Secretary of State was asked his thoughts about a levy, he replied, with regard to Facebook specifically: “The Committee has my reassurance that if Facebook says it does not want to pay a levy, that will not be the answer to the question of whether or not we should have a levy.”58 He also told us that “neither I, nor, I think, frankly, does the ICO, believe that it is underfunded for the job it needs to do now. […] If we are going to carry out additional activity, whether that is because of additional regulation or because of additional education, for example, then it does have to be funded somehow. Therefore, I do think the levy is something that is worth considering”.59

51.In our Interim Report, we recommended a levy should be placed on tech companies operating in the UK to support the enhanced work of the ICO. We reiterate this recommendation. The Chancellor’s decision, in his 2018 Budget, to impose a new 2% digital services tax on UK revenues of big technology companies from April 2020, shows that the Government is open to the idea of a levy on tech companies. The Government’s response to our Interim Report implied that it would not be financially supporting the ICO any further, contrary to our recommendation. We urge the Government to reassess this position.

52.The new independent system and regulation that we recommend should be established must be adequately funded. We recommend that a levy is placed on tech companies operating in the UK to fund its work.


15 Disinformation and ‘fake news’: Interim Report, DCMS Committee, Fifth Report of Session 2017–19, HC 363, 29 July 2018, para 14.

18 Ledger of Harms, Center for Humane Technology, accessed 29 November 2018.

19 We will explore issues of addiction and digital health further in our immersive and addictive technologies inquiry in 2019.

20 Edward Lucas, Q881

21 Disinformation and ‘fake news’: Interim Report, DCMS Committee, Fifth Report of Session 2017–19, HC 363, 29 July 2018, para 12.

24 Margot James speech on Safer Internet Day, gov.uk, 5 February 2019.

25 Disinformation and ‘fake news’: Interim Report, DCMS Committee, Fifth Report of Session 2017–19, HC 363, 29 July 2018, para 70.

26 Disinformation and ‘fake news’: Interim Report, DCMS Committee, Fifth Report of Session 2017–19, HC 363, 29 July 2018, para 71.

27 Centre for Data Ethics and Innovation: Government response to consultation, November 2018.

28 As above.

30 Professor Lewandowsky, Q233

31 France passes controversial ‘fake news’ law, Michael-Ross Fiorentino, Euronews, November 2018.

38 Uncorrected transcript of oral evidence, CMS Committee inquiry into phone hacking, 19 July 2011. In reference to international witnesses giving evidence before committees, Erskine May states: “Foreign or Commonwealth nationals are often invited to attend to give evidence before committees. Commissioners or officials of the European Commission, irrespective of nationality, have regularly given evidence. Select committees frequently obtain written information from overseas persons or representative bodies.”

39 Anil Kashyap (who lives and works in Canada), External member of the Financial Policy Committee, Bank of England (16 January 2019); Benoit Rochet, Deputy CEO, Port of Calais (5 June 2018); and Joachim Coens, CEO, Port of Zeebrugge (5 June 2018).

43 UK Council for Internet Safety, gov.uk, July 2018.

46 Q229 Evidence session, 24 October 2018, The Work of the Department for Digital, Culture, Media and Sport.

47 Congress grills Facebook CEO over data misuse - as it happened, Julia Carrie Wong, The Guardian, 11 April 2018.

48 California Privacy Act homepage, accessed 18 December 2018.

50 Democracy disrupted? ICO Report, November 2018, para 3.8.2.

54 Q226, Oral evidence, 24 October 2018, Work of the Department for Digital, Culture, Media and Sport.

55 Immersive and addictive technologies inquiry website, DCMS Committee, launched 7 December 2018.

56 Disinformation and ‘fake news’: Interim Report, DCMS Committee, Fifth Report of Session 2017–19, HC 363, 29 July 2018, para 36.

57 Disinformation and ‘fake news’: Interim Report, DCMS Committee, Fifth Report of Session 2017–19, HC 363, 29 July 2018, para 36.




Published: 18 February 2019