1. Our predecessor Committee carried out a landmark inquiry into Disinformation and ‘fake news’ and produced two reports in 2018 and 2019. Recognising a problem that transcended national boundaries, the Committee established and convened the first ‘International Grand Committee on Disinformation’ (IGC) in Westminster in November 2018. The IGC has reconvened twice since and plans to meet next in Washington D.C. to hold tech companies1 to account. We take this opportunity to reaffirm our commitment to working with policymakers from across the globe. Our predecessor Committee also set up a Sub-Committee on Disinformation as Parliament’s ‘institutional home’ to continue this work. As a result of growing disquiet over the role of the internet, the Government subsequently published a White Paper in April 2019 to tackle ‘Online Harms’ such as disinformation. In February 2020, the current Government announced that it was “minded” to name Ofcom as a proposed new ‘Online Harms Regulator’.2
2. The current coronavirus crisis is not just a public health emergency; it has created the conditions that have exacerbated online harms before the machinery to deal with them has been put in place. On 2 February 2020, the World Health Organisation (WHO) warned that the then-epidemic had been accompanied by “a massive ‘infodemic’”, an overabundance of both accurate and false information that prevents people from accessing trustworthy, reliable guidance.3 This mix of information presented a problem for public health authorities, and the WHO accordingly focused on working with tech companies to clarify authoritative content. By March, the focus had shifted specifically to misinformation and disinformation. UN Secretary-General António Guterres warned about the “’infodemic’ of misinformation and cybercrime”.4 The UN identified several harms caused by the infodemic, ranging from false narratives and scams to indirect harms exacerbated by public health measures, such as increased instances of child exploitation and abuse.5 Months on from that warning, research has shown that a significant number of people still see misinformation about COVID-19 online each week, causing confusion, fear and mistrust.6
3. On 11 March, we re-established the Sub-Committee on Online Harms and Disinformation7 and wrote to the Rt Hon Oliver Dowden MP, Secretary of State for Digital, Culture, Media and Sport, expressing our growing concern about the Government’s delay in tackling COVID-19 disinformation and misinformation.8 Since March, we have responded to the crisis by questioning Facebook, Google and Twitter twice each on how they are tackling misinformation and disinformation. We were dissatisfied with the answers we received in the first session and were left with no option but to recall the companies, represented this time by US-based senior executives with accountability and responsibility for company policy. We also took evidence from academics, frontline health workers and Ofcom, to help us understand the causes and impact of the COVID-19 infodemic and how it can be tackled. Finally, we heard from Government ministers across several sessions, including Caroline Dinenage MP, Minister for Digital and Culture, as well as the Secretary of State. In addition to oral evidence, we also called on the public to submit examples of misinformation they had seen, and we thank all those who submitted written evidence.
4. The causes of the infodemic are multifaceted. Evidence we received consistently emphasised that loss of trust in institutions is both an aim of and an opportunity for hostile actors. Both state (Russia, China and Iran) and non-state (such as Daesh and the UK and US far right) campaigns have spread false news and malicious content.9 In addition, Heads of State, notably Donald Trump and Jair Bolsonaro, have deliberately spread false narratives regarding COVID-19. Professor Philip Howard, Director of the Oxford Internet Institute, told us that these actors aim “to degrade our trust in public institutions, collective leadership or public health officials”.10 Some of this content is spread through state-backed media agencies; such agencies, unlike tech companies’ platforms, are regulated by Ofcom.11 Evidence submitted by the Henry Jackson Society asserted that information disseminated by the Chinese state aimed to extol China’s role in managing the virus, to delegitimise factual reporting that reflects badly on the Chinese Communist Party and to create “doubt, confusion and fear” amongst target audiences whilst the world is distracted by the pandemic.12
5. Others have sought to gain financially. Several witnesses claimed that they had observed people attempting to exploit the crisis for financial gain, either through scams or quack cures.13 Dr. Claire Wardle of First Draft News told us “[w]e are seeing a huge increase in scams and hoaxes and people motivated by financial gain”, including elderberry supplements or testing kits falsely advertised as FDA- or CDC-approved.14 Even where the products themselves are not directly harmful, such scams may divert sick patients away from effective medical interventions and allow the virus to continue to spread. Another witness alleged that a registered nurse had used the “veneer of trust, which other nurses have deservedly earned, to manipulate the public” by mis-selling health products.15
6. While the reasons for sharing content are well understood, very significant gaps remain in our knowledge of the originators of these messages and their motivations, beyond those initiated by hostile foreign states and political extremists as mentioned above. Many people have shared misleading or false information with well-meaning intentions. Dr. Wardle provided insight into the psychological and social reasons why people may share misinformation, saying that “[l]arger proportions of the population are losing trust in institutions, feel hard done by, and conspiracies basically say, ‘you don’t know the truth. I’m telling you the truth’”.16 As a result, many people “are inadvertently sharing false information believing they are doing the right thing”.17
7. Throughout our inquiry, we have heard about harms caused by misinformation to individual and public health, critical national infrastructure and frontline workers. Early examples of misinformation during the pandemic often misled people about cures for, or measures to prevent, infection. Some people have mistakenly turned to unproven home remedies, stopped taking ibuprofen and prescribed medicine, or even ingested harmful chemicals such as disinfectant.18 Others have avoided hospital altogether. Dr. Megan Emma Smith, consultant anaesthetist at a leading London hospital and EveryDoctor member, told us that, “[a]t the point in time when they come through the doors of the hospital, because they did not want to come to hospital, they are so, so sick. They are unbelievably unwell”.19 This impact has been particularly drastic amongst specific British communities. A UK GP claimed in written evidence that this type of misinformation has caused particularly acute panic and confusion amongst British Asian communities, some of whom “feel adamant that doctors are actively trying to harm them or discharging them without treating them”.20
8. Whilst misinformation has encouraged some people to take drastic measures with their own health, it has also provoked action against others. Written evidence from BT stated that, between 23 March and 23 April alone, there were 30 separate attempts to sabotage the UK’s digital infrastructure, and that there had likely been 80 attacks across sites operated by all four mobile networks, with 19 occurring near critical infrastructure such as fire, police and ambulance stations.21 EE personnel and subcontractors alone have faced 70 separate incidents, including “threats to kill and vehicles driven directly at staff”.22 Mobile UK, the trade association for the UK’s four mobile network operators, was forced to issue a statement in April warning about the impact of the harassment of staff and damage to infrastructure on “the resilience and operational capacity of the networks to support mass home working and critical connectivity to the emergency services, vulnerable consumers and hospitals”.23
9. Misinformation has also directly and indirectly impacted health workers themselves. As one doctor wrote, medical staff are “battling two challenges: trying to save the lives of ICU patients succumbing to the virus and tackling the infodemic”.24 Thomas Knowles, an advanced paramedic practitioner, described the disparity in reach between authoritative NHS 111 information and misinformation spread through social media:
I can speak to one person for ten minutes and have an influence on that one person’s experience of healthcare. The Committee is probably familiar with the pandemic documentary that was circulating on YouTube, and one version of that had 40 million views within 48 hours. That is 25,000 people in ten minutes. I cannot speak to 25,000 people in ten minutes, so that level of exposure is why I think so many of us are so concerned that we need to take action to identify those clear harms that people are experiencing as a consequence.25
Conspiracy theories have also helped fuel targeted abuse and harassment online.26 Worryingly, a belief that ‘Asians carry the virus’ has also led to attacks and trolling, and one doctor, based in the USA, wrote to us that “[a]n Asian colleague […] has had people yell at her in stores […] and had patients refuse to allow her to treat them”.27 Whilst this might appear anecdotal, UK police statistics have registered a 20% increase in anti-Asian hate crimes, with more than 260 offences recorded in the UK since lockdown began.28
10. The causes and impacts of the infodemic are many and varied. Tackling such harms therefore requires a multifaceted approach. In its Online Harms White Paper, the Government stated its aim “to make Britain the safest place in the world to be online”.29 Legislation will take a “proportionate, risk-based response” by introducing “a new duty of care on companies and an independent regulator responsible for overseeing this framework”.30 These proposals satisfied two of the most important recommendations of our predecessor Committee’s Disinformation and ‘fake news’ inquiry. This approach can be contrasted with Section 230 of the Communications Decency Act in the US, which protects tech companies from being held liable for third-party content hosted on their sites and takes a self-regulatory approach,31 and with the Network Enforcement Act (‘NetzDG’) in Germany, which forces tech companies to remove hate speech from their sites within 24 hours or face fines of up to 20 million euros.32
11. Throughout our inquiry, we expressed concern to Ministers about the pace of legislation. Changes in leadership have not helped: there have been five Secretaries of State since the Internet Safety Strategy Green Paper was introduced.33 It has been more than a year since the White Paper consultation closed, yet a final consultation response has not been published, nor has there been a final decision on who should be the independent regulator.34 There is no definitive date for when a Bill will be published (draft or otherwise)35 and interim voluntary codes of practice for terrorist and child sexual exploitation and abuse content that were due in the spring have yet to materialise.36 Our letter to the Secretary of State on 11 March 2020 raised concerns about the Government’s delays in standing up the Counter Disinformation Unit despite the fact that false narratives were already spreading uncontrollably in January.37 The Minister for Digital initially assured us that legislation would be brought forward alongside the final consultation response this autumn,38 but in response to a subsequent written question she stated instead that legislation would follow the consultation response at some point during this parliamentary session.39 This lack of clarity at the heart of Government is deeply concerning.
12. We are pleased that the Government has listened to our predecessor Committee’s two headline recommendations, and that forthcoming legislation will introduce a duty of care and an independent regulator of online harms. However, we are very concerned about the pace of the legislation, which may not appear even in draft form until more than two years after the White Paper was published in April 2019. We recommend that the Government publish draft legislation, either in part or in full, alongside the full consultation response this autumn if a finalised Bill is not ready. Given our ongoing interest and expertise in this area, we plan to undertake pre-legislative scrutiny. We also remind the Government of our predecessor Committee’s recommendation that the DCMS Committee have a statutory veto over the appointment and dismissal of the regulator’s Chief Executive to ensure public confidence in their independence, similar to the Treasury Committee’s veto over senior appointments to the Office for Budget Responsibility, and urge the Government to include such provisions in the Bill.
13. Throughout our inquiry, we have emphasised that legislation must not be “light touch”.40 The White Paper initially set out an illustrative, non-exhaustive list of harms in scope of the statutory duty of care that included disinformation.41 The Government has since changed its approach. The initial consultation response subsequently clarified that there would be differentiated expectations for illegal content and so-called “harmful but legal” content.42 At the outset of our inquiry, the Secretary of State clarified that, beyond illegal or age-restricted content, “[t]he essence of online harms legislation is holding social media companies to what they have promised to do and to their own terms and conditions”.43 Following our second session with the companies, when the impact of COVID-19 disinformation was put to him, the Secretary of State reiterated that legislation will simply “hold social media companies to their own terms and conditions”.44 Ministers repeatedly cited the tension between online harms legislation and freedom of expression as the reason for this approach.45
14. We are aware of and appreciate concerns about freedom of speech. The campaign groups Global Partners Digital, Index on Censorship, Open Rights Group and Article 19, in a joint submission, warned against Government overreach in the context of coronavirus, arguing that “many governments are taking steps to restrict freedom of expression on the basis of the health crisis”.46 On the other hand, we are concerned about deferring responsibility for the scope of restrictions on speech to tech companies. In correspondence with Facebook, we questioned how the company defines ‘harmful misinformation’ and ‘imminent physical harm’ in the policies that underpin its action against online harms.47 We have also observed a lack of consistent standards across platforms throughout our inquiry, which we discuss further in the next chapter. Moreover, ongoing bilateral discussions between tech companies and public authorities lack transparency, scrutiny and an underlying legal framework. In their submission, the four campaign groups raised “serious concerns around informal government pressure, with no legal basis, for platforms to censor, filter or restrict content” in the name of tackling online harms.48
15. We concur with evidence we received that legislation to tackle online harms must comply with principles established in international human rights law. Articles 10 and 11 (the rights to freedom of expression and freedom of assembly) of the European Convention on Human Rights, given effect in UK law by the Human Rights Act 1998, are ‘qualified’ rights that may be restricted only where necessary and proportionate. The recent Civil Rights Audit of Facebook emphatically criticised the company’s attitude towards free speech, arguing that “the value of non-discrimination is equally important” as freedom of expression and that “the two need not be mutually exclusive”.49 Consultation responses to the White Paper from stakeholders such as the Internet Watch Foundation agree that harms should be set out in secondary legislation or codes of practice to provide parliamentary oversight and proportionality, and to prevent overly-broad interpretations of the duty of care.50 Despite Government proposals that the regulator should determine the scope of online harms,51 Ofcom, the preferred candidate, told us definitively that such scope should be a matter for Parliament.52 Full Fact suggested the super-affirmative procedure set out in the Legislative and Regulatory Reform Act 2006 as a mechanism for this.53 Several examples of ‘harmful but legal’ content raised during our inquiry that would require further consideration (and would need to be established in legislation) are given below:
Table 1: Online Harms

Harmful misinformation: Throughout our inquiry, companies recognised that spreading ‘harmful misinformation’ was against their policies, though (as will be discussed below) such policies often differed in their definition and breadth of what constitutes ‘misinformation’ and what might make it ‘harmful’.

Disinformation: Disinformation was a harm proposed by the Online Harms White Paper as a ‘harm with a less clear definition’. The White Paper stated that “[c]ompanies will need to take proportionate and proactive measures to help users understand the nature and reliability of the information they are receiving, to minimise the spread of misleading and harmful disinformation”.54

Hatred by sex (whether birth sex or acquired sex/gender): Written evidence from Glitch, a leading UK charity championing people’s right to be online safely without discrimination, called for the Government to include ‘hatred by sex’ in its definition of online harms. Their evidence observed that “[m]ultiple reports have shown that women and marginalised communities, who are at higher risk of facing online abuse, have been heavily impacted by the pandemic’s effect on online safety”.55

Incitement to self-harm: The Online Harms White Paper also proposed that ‘Advocacy of self-harm’ could be a harm in scope.56 Dame Melanie Dawes, CEO of Ofcom, noted that “images of self-harm can be hugely damaging, so platforms that have a younger audience will need to demonstrate that they understand the sorts of harms that might be happening on their platforms, that they have identified what those are, that they have researched the impact and that they have in place procedures to prevent, mitigate or deal with those sorts of harms when they come up”.57

Anonymous online abuse: The impact of anonymous online abuse was raised in our first session with tech companies and in a Home Affairs Select Committee session at which our Chair was a guest. In the latter session, the Minister for Digital said that “this kind of faceless attack can bully people away from engaging in social media and other platforms in which they might want to participate, so it is anti-democratic in many senses”.58
16. Online harms legislation must respect the principles established in international human rights law, with a clear and precise legal basis. Despite the Government’s intention that the regulator should decide what ‘harmful but legal’ content should be in scope, Ofcom has emphasised repeatedly that it believes this is a matter for Parliament. Parliamentary scrutiny is necessary to ensure online harms legislation has democratic legitimacy, and to ensure the scope is sufficiently well-delineated to protect freedom of expression. We strongly recommend that the Government bring forward a detailed process for deciding which harms are in scope for legislation. This process must always be evidence-led and subject to democratic oversight, rather than delegated entirely to the regulator. Legislation should also establish clearly the differentiated expectations of tech companies for illegal content and for ‘harmful but legal’ content.
17. These technologies, media and usage trends are fast-changing in nature. Whatever harms are specified in legislation, we welcome the inclusion alongside them of the wider duty of care, which will allow the regulator to consider issues outside the specified list (and allow for recourse through the courts). We reject the notion that the measures operators should take against online harms can be adequately defined simply by their own terms and conditions.
1 Consistent with the report of our predecessor Committee, we use the term ‘tech company’ to refer to the different types of social media and online service providers, including Facebook, Google, Twitter and TikTok. Facebook also owns Instagram and WhatsApp; Google and YouTube are owned by the parent company Alphabet.
2 Department for Digital, Culture, Media and Sport and Home Office, Online Harms White Paper - Initial consultation response, February 2020
3 World Health Organisation, Novel Coronavirus (2019-nCoV) Situation Report - 13 (2 February 2020), p 2
4 United Nations, ‘UN tackles ‘infodemic’ of misinformation and cybercrime in COVID-19 crisis,’ accessed 9 July 2020
5 Ibid
6 Ofcom, ‘COVID-19 news and information: consumption and attitudes,’ accessed 21 June 2020
7 Like all sub-committees, the predecessor Sub-Committee on Disinformation lapsed at the end of the last Parliament. The scope of the new Sub-Committee was broadened to reflect the Committee’s ongoing intention to scrutinise the Government’s online harms legislation.
8 Letter from Chair to Rt Hon Oliver Dowden MP, Secretary of State for DCMS, re Coronavirus disinformation, 11 March 2020
10 Q2
11 Ibid
13 Q34
14 Qq34, 40
15 Q127
16 Q37
17 Q46
19 Q116
22 Ibid
23 Mobile UK, ‘Statement: Mobile industry warns against the spread of baseless 5G Coronavirus (COVID-19) theories,’ accessed 21 June 2020
25 Q120
29 Department for Digital, Culture, Media and Sport and Home Office, Online Harms White Paper, CP 57, April 2019, p 5
30 Department for Digital, Culture, Media and Sport and Home Office, Online Harms White Paper, CP 57, April 2019, p 8
31 “DOJ takes aim at law that shields tech companies from lawsuits over material their users post”, CNBC, 17 June 2020
32 Digital, Culture, Media and Sport Committee, Fifth Report of Session 2017–19, Disinformation and ‘fake news’: Interim Report, HC 363, paras 54–5
33 Oral evidence taken before the Digital, Culture, Media and Sport Committee on 22 April 2020, HC (2019–21) 157, Q31
34 Oral evidence taken before the Home Affairs Committee on 13 May 2020, HC (2019–21) 232, Qq520–1
35 Oral evidence taken before the Digital, Culture, Media and Sport Committee on 9 June 2020, HC (2019–21) 291, Qq376–7
36 Department for Digital, Culture, Media and Sport and Home Office, Online Harms White Paper - Initial consultation response, February 2020
37 Letter from Chair to Rt Hon Oliver Dowden MP, Secretary of State for DCMS, re Coronavirus disinformation, 11 March 2020
38 Oral evidence taken before the Digital, Culture, Media and Sport Committee on 9 June 2020, HC (2019–21) 291, Q383
40 Oral evidence taken before the Home Affairs Committee on 13 May 2020, HC (2019–21) 232, Q520
41 Department for Digital, Culture, Media and Sport and Home Office, Online Harms White Paper, CP 57, April 2019, p 31
42 Department for Digital, Culture, Media and Sport and Home Office, Online Harms White Paper - Initial consultation response, February 2020
43 Oral evidence taken before the Digital, Culture, Media and Sport Committee on 22 April 2020, HC (2019–21) 157, Q20
45 Oral evidence taken before the Digital, Culture, Media and Sport Committee on 22 April 2020, HC (2019–21) 157, Q26; Oral evidence taken before the Home Affairs Committee on 13 May 2020, HC (2019–21) 232, Qq512–3, 528–9, 543; Oral evidence taken before the Digital, Culture, Media and Sport Committee on 9 June 2020, HC (2019–21) 291, Qq381, 383, 386
47 Letter from the Chair to Facebook, re Misinformation about the COVID-19 crisis supplementary, 7 May 2020
49 Laura W. Murphy, Megan Cacace et al, Facebook’s Civil Rights Audit – Final Report (July 2020), p 12
50 Internet Watch Foundation, Online Harms White Paper Response (April 2019), p 7
51 Oral evidence taken before the Digital, Culture, Media and Sport Committee on 9 June 2020, HC (2019–21) 291, Q380
52 Oral evidence taken before the Digital, Culture, Media and Sport Committee on 23 June 2020, HC (2019–21) 439, Q6
53 Full Fact, Full Fact response to the Online Harms White Paper, accessed 25 June 2020
54 Department for Digital, Culture, Media and Sport and Home Office, Online Harms White Paper - Initial consultation response, February 2020
56 Department for Digital, Culture, Media and Sport and Home Office, Online Harms White Paper - Initial consultation response, February 2020
57 Oral evidence taken before the Digital, Culture, Media and Sport Committee on 23 June 2020, HC (2019–21) 439, Q14
58 Oral evidence taken before the Home Affairs Committee on 13 May 2020, HC (2019–21) 232, Q529
Published: 21 July 2020