Misinformation in the COVID-19 Infodemic

Tech companies’ response

Monetising misinformation

The role of algorithms

18. The prevalence of misinformation online must be understood within the business context of tech companies. Tech companies generate revenue primarily through advertising targeted at users based on observed or perceived tastes and preferences, which is maximised by increasing the user base, data collection, average user time and user personalisation. We know that novelty and fear (along with anger and disgust) are factors which drive ‘engagement’ with social media posts; that in turn pushes posts with these features further up users’ newsfeeds—this is one reason why false news can travel so fast. This runs counter to the corporate social responsibility policies espoused by tech companies that rely on this business model. The more people engage with conspiracy theories and false news online, the more platforms are incentivised to continue surfacing similar content, which in theory encourages users to continue using the platform so that more data can be collected and more adverts can be displayed. This model, described as the attention economy, underpins the addictive59 features of social media.60 Stacie Hoffmann of Oxford Information Labs told us that misinformation and disinformation in particular “elicits a very strong reaction one way or the other but we do know that the algorithms are rewarding negative reactions” to this content.61 Thomas Knowles described how the social and financial costs of mitigating the impact of misinformation then fall to the public:

If somebody watches something that might be a bit flat-earthy, maybe they will be interested in something a bit homeopathic. It is effectively a monetisation of the content that is published on those services. When we are looking at monetisation of content that actively engenders a social harm, that is actively damaging to public trust and to public health, I think we are looking at a moral obligation. That cost cannot be borne by the public purse when we are looking at organisations that are turning over billions of pounds a year.62

The Minister for Digital has confirmed that, given that algorithms constitute a design choice, the online harms regulator will be empowered “to request explanations about the way an algorithm operates, and to look at the design choices that some companies have made and be able to call those into question”.63

19. Tech companies rejected this characterisation, citing ongoing efforts to promote authoritative information and demote misinformation, though we did not receive evidence to support this.64 Asked whether the fact that people engage with misinformation meant the companies had no incentive to remove it, Richard Earley of Facebook argued that the company’s aim is instead to drive “meaningful social interaction” with “content we think people are most likely to engage with”.65 YouTube, similarly, claimed that, “[o]n the recommendation-driven watch time of this type of borderline content, it reflects less than 1% of the totality of the watch time of content that is being recommended”.66 Despite these efforts, algorithms continue to recommend harmful material. Referring specifically to Google Search’s algorithms, Stacie Hoffmann told us that:

Google has tweaked its algorithms—and we know this from studies that we have done and that we have seen—since 2016 to help reduce the prominence of junk news or misinformation in their searches, but those also come back up. It takes about not even a year for the reach of those websites to go back up again.67

In further correspondence, YouTube did commit to “taking a closer look at how we can further reduce the spread of content that comes close to—but doesn’t quite cross the line of—violating our Community Guidelines and will continue to make necessary changes to improve the effectiveness of our efforts”.68 We welcome this commitment and request that the company report back to the Committee on its progress, in recognition of our concerns on the subject and in the expectation that this work will be undertaken.

20. The need to tackle online harms often runs counter to the financial incentives underpinned by the business model of tech companies. The role of algorithms in incentivising harmful content has been emphasised to us consistently by academia and by stakeholders. Tech companies cited difficulties in cases of ‘borderline content’ but did not fully explain what would constitute these cases. Given the central role of algorithms in surfacing content, and in the spread of online harms such as misinformation and disinformation in particular, it is right that the online harms regulator will be empowered to request transparency about tech companies’ algorithms. The Government should consider how algorithmic auditing can be done in practice and bring forward detailed proposals in the final consultation response to the White Paper.

Transparency in advertising

21. Oral evidence to our inquiry suggested that some companies have taken action against opportunistic advertisers,69 though inconsistencies, with some scammers slipping through the net, have also been observed.70 Advertising libraries, which provide an archive of the adverts promoted on a platform, are not standardised. This means that different tech companies offer different amounts of information on their ads, which makes oversight difficult. Twitter’s ‘Ads Transparency Centre’, for example, only provides an archive of adverts that appeared in the previous seven days, with no meaningful data on targeting, audience or advertising spend, unlike the ad libraries of Google and Facebook, which archive more ads and do provide this information.

Funding false narratives

22. Tech companies have also allowed spreaders of misinformation to monetise their content, to the benefit of both platform and publisher. Our inquiry found that YouTube, for example, has allowed actors to profit from peddling harmful misinformation.71

23. As well as selling advertising space on their own platforms, some tech companies, like Google and Amazon, provide adverts for third-party sites. Stacie Hoffmann noted that ad provider tech companies have often directly supplied advertising to sites that spread misinformation:

we found that Google and Amazon are the two biggest ad providers for junk news purveyors and those are the websites that are getting those click-throughs to try to gain money as part of a round ecosystem. We have known that there is a plethora of websites since 2016 that have been key purveyors of junk news.72

Research from the Global Disinformation Index has recently found that Google has provided adverts for almost 90% of sites spreading coronavirus-related conspiracies.73 When we first put this to Google, the company questioned the validity of this study, claiming that “it is hard to peer review its findings” and that “the revenue estimates also do not accurately represent how publishers earn money on our advertising platforms”.74 However, we observed that, despite providing general figures from 2019 (unrelated to misinformation and prior to the pandemic) and figures relating to takedowns of individual adverts, Google’s response conspicuously omitted the number of advertising accounts removed for coronavirus-related misinformation.75 When we challenged the company with corroborating studies in our second evidence session, Google reflected on the limitations of its proactive systems and policies, stating that “this has been a very fluid situation where we have been having to, in real time and 24/7, look at our policies, re-evaluate them and see how we can improve”.76

24. The current business model not only creates disincentives for tech companies to tackle misinformation but also allows others to monetise misinformation. To address these issues properly, the online harms regulator will need sight of comprehensive advertising libraries to see whether and how advertisers are spreading misinformation through paid advertising or are exploiting misinformation or other online harms for financial gain. Tech companies should also address the disparity in transparency regarding ad libraries by standardising the information they make publicly available. Legislation should also require advertising providers like Google to publish directories of the websites they supply advertising to, to allow for greater oversight of the monetisation of online harms by third parties.

Funding quality journalism

25. Quality journalism has often been cited as an effective counter to misinformation, though the traditional markets for news have been disrupted by the advent of new media. We were told that tech companies’ funding and support for quality journalism has increased during the pandemic, in recognition of the threat posed to the industry.77 Google in particular emphasised its record in funding journalism projects, which have included setting up a global Journalism Emergency Relief Fund and making a $1 million donation to the International Center for Journalists.78 As the biggest beneficiaries of traditional journalism, Google and YouTube were asked whether the current division of revenue between quality news organisations, who generate information and are cited as authoritative sources counterbalancing false news, and themselves, who simply deliver that information, was equitable. Google robustly and repeatedly declined to comment on the division of revenue. Our concerns have since been vindicated by the Competition and Markets Authority’s recent market study final report into online platforms and digital advertising, which concluded that weak competition in digital advertising caused by players such as Facebook and Google “undermines the ability of newspapers and others to produce valuable content, to the detriment of broader society”.79

26. Tech companies rely on quality journalism to provide authoritative information. They earn revenue both from users consuming this content on their platforms and (in the case of Google) from providing advertising on news websites, and news drives users to their services. We agree with the Competition and Markets Authority that features of the digital advertising market controlled by companies such as Facebook and Google must not undermine the ability of newspapers and others to produce quality content. Tech companies should be elevating authoritative journalistic sources to combat the spread of misinformation. This is an issue to which the Committee will no doubt return.

27. We are acutely conscious that disinformation around the public health issues of the COVID-19 crisis has been relatively easy for tech companies to deal with, as binary true/false judgements are often applicable. In normal times, dealing with the greater nuance of political claims, the prominence of quality news sources on platforms, and their financial viability, will be all the more important in tackling misinformation and disinformation.

Platform policies against misinformation

28. Tech companies’ policies, terms, conditions, guidelines and community standards set the rules for what is and is not acceptable when posting or behaving on their platforms. These often go beyond the requirements of the law, such as in the case of hate speech or graphically violent content.80 Throughout our inquiry, tech companies told us that their policies were their primary consideration when tackling misinformation, disinformation and other so-called ‘harmful but legal’ content online.81 The Government has said that the “essence”82 of online harms legislation will be to “hold social media companies to their own terms and conditions”.83

29. The tech companies were often criticised for having unclear policies and applying them inconsistently. Indeed, Facebook conceded during our inquiry that “our enforcement is not perfect” regarding online harms.84 Stacie Hoffmann explained that, whilst enforcement of policies had improved somewhat, such policies often do not set out how they will be applied in practice, particularly regarding ‘takedowns’, where content is removed outright.85 When we wrote to Facebook after our first session in April, we asked about several examples of misinformation to see whether they would violate company policies on “harmful misinformation” and “imminent physical harm”.86 These examples included posts containing ineffective or outright harmful medical advice falsely attributed to either Stanford or St. George’s Hospital, a video of several body bags falsely claiming to depict COVID-19 victims at St. Mary’s Hospital, and an image of a crowded mosque falsely purporting to have been taken during the lockdown period.87 Facebook’s response did not address these examples, saying that “[t]he content and context of specific posts are essential to determining whether a piece of content breaches our Community Standards”, despite their standards themselves describing several hypothetical examples.88 Beyond misinformation, we also raised with Facebook two instances of hate speech that had been reported to the company.
The first post incited violence against a minority community, threatening to “Bomb the Board of Deputies of British Jews”; the second racially caricatured and mocked the death of George Floyd.89 Though Facebook acknowledged to us that these examples did go against their policies, we were surprised to hear that both were initially described by Facebook moderators as “not [going] against any of our community standards” and that, in the first instance, moderators suggested that “you unfriend the person who posted it”.90 Our findings were supported by the Civil Rights Audit, which found that Facebook’s policy response to hateful content targeting Muslims and Black and Jewish people has been consistently inadequate.91

30. Prior to the pandemic, many of the tech companies did not have robust policies against harmful misinformation, and they were often slow to adapt their policies to combat it. Stacie Hoffmann told us that many tech companies “do not necessarily have a lot of terms specific to misinformation, disinformation or false news”, whilst those “that are directly related to misinformation or junk news tend to be very high level and confusing”.92 Only Facebook argued that tackling COVID-19 misinformation such as 5G conspiracies was a matter of enforcing existing policies around real world harm rather than introducing new ones.93 By contrast, American news website The Hill reported in February that Reddit, a sharing and discussion site, did not have any policies on health misinformation at all, leaving decisions to the discretion of volunteer ‘subreddit’ moderators; when asked if a policy against medical misinformation would help moderators, one reportedly replied “yes, full stop”.94 Similarly, TikTok’s policies at the beginning of the pandemic reportedly covered only scams and fake profiles,95 but have since been broadened to include medical misinformation, misinformation based on hate speech and misinformation likely to cause societal panic and real world harm.96

31. The lack of consistency in policy enforcement contrasts with the standards enforced on broadcasters. In an interview in April hosted on the London Real channel, David Icke made several false claims, linking coronavirus to 5G and asserting that a vaccine would contain “nanotechnology microchips”, which went unchallenged throughout the show.97 When asked if people should attack 5G masts, he responded that “people have to make a decision” about what to do as “[i]f 5G continues and reaches where they want to take it, human life as we know it is over”; several users called for further attacks on 5G towers in the comments that appeared alongside the feed.98 An expedited Ofcom investigation found the owner ESTV to be in breach of the broadcast code for “[failing] in its responsibility to ensure that viewers were adequately protected”.99 The next day, YouTube removed the video and announced that it would ban COVID-19 conspiracy theories, though the BBC reported that YouTube was aware of the video at the time it was livestreamed. Moreover, though Google argued at the time that it had donated its share of the revenue to charity,100 it later confirmed in correspondence to us that the hosts had been allowed to keep revenue generated from ‘Super Chats’.101 When we raised this with Google, the company told us that initial action against 5G misinformation was constrained by pre-existing policies and only became possible when these policies were updated:

At the time that we first viewed the video it was not against the policies we had at that time. That is why we understood that our policies needed to evolve.102

Google justified its approach, saying that “it was the first instance that we have seen where these kinds of 5G allegations were creating real-world harm and were being linked to the coronavirus in particular”.103 However, the company admitted that it was aware of trends regarding 5G misinformation prior to the London Real livestream.104

32. The Government has repeatedly stated that online harms legislation will simply hold platforms to their own policies and community standards. However, we discovered that these policies were not fit for purpose, a fact that was seemingly acknowledged by the companies. The Government must empower the new regulator to go beyond ensuring that tech companies enforce their own policies, community standards and terms of service. The regulator must ensure that these policies themselves are adequate in addressing the harms faced by society. It should have the power to standardise these policies across different platforms, ensuring minimum standards under the duty of care. The regulator should moreover be empowered to impose significant fines for non-compliance. It should also have the ability to disrupt the activities of businesses that are not complying, and ultimately to ensure that custodial sentences are available as a sanction where required.

33. Other lawmakers have taken steps to address disparities between platform responses to misinformation and disinformation, albeit through voluntary arrangements. The Election Commission of India, for example, last year developed a voluntary code of ethics with tech companies for all future elections, involving greater transparency in political advertising and a 48-hour silence period before the end of polling.105 The European Union has similarly agreed a voluntary code of practice on disinformation, requiring monthly reporting and certain principles of best practice in advertising, user reporting and fake accounts,106 though it has reportedly been criticised by some member states as “insufficient and unsuitable to serve as the basis for sustainably addressing disinformation on social platforms”.107 Finally, the Australian Government has asked digital platforms to develop voluntary codes of practice for online misinformation and the provision of quality journalism, which it expects to be in place by December 2020, and has tasked the Australian Communications and Media Authority with assessing the codes’ development and effectiveness.108

34. Alongside developing its voluntary codes of practice for child sexual exploitation and abuse and terrorist content, the Government should urgently work with tech companies to develop a voluntary code of practice to protect citizens from the harmful impacts of misinformation and disinformation, in concert with academics, civil society and regulators. A well-developed code of practice for misinformation and disinformation would be world-leading and would prepare the ground for legislation in this area.

Identifying and reporting misinformation

Automated flagging vs human reporting

35. The first step in effectively tackling false information about COVID-19 is to identify and flag misinformation and disinformation. There are two main ways of identifying harmful content online. First, companies can respond to harmful content reported by users. The Online Harms White Paper’s proposed duty of care would require companies to take “prompt, transparent and effective action following user reporting” of harms and to be transparent about “the number of reports received and how many of those reports led to action”.109 For disinformation specifically, the White Paper proposed “[r]eporting processes […] to ensure that users can easily flag content that they suspect or know to be false, and which enable users to understand what actions have been taken and why”.110 Second, companies can proactively use systems to identify and tackle harmful content themselves. This is done using a combination of automated systems, based on artificial intelligence, and human moderators, who do not proactively search for illegal or harmful content but review content that has been flagged to them. Tech companies like Facebook, Google and Twitter have previously been criticised for outsourcing moderation to users to minimise expenses,111 but have since moved towards greater investment in AI flagging and moderation.112

36. At the outset of our inquiry, written and oral evidence endorsed the need for more user reporting and better responses from tech companies, particularly for instances of misinformation. Evidence submitted by the Henry Jackson Society recommended “the creation of a new misinformation flag […], which would allow users to pinpoint content that is factually incorrect or harmful”.113 The Tony Blair Institute similarly observed a “[l]ack of clear reporting frameworks specifically for public health misinformation” and recommended that “[t]he trusted-flagger system needs to be explicitly extended to COVID-19 misinformation to ensure external experts can advise on false information”.114 The response from tech companies has been inconsistent. On the one hand, TikTok told us that they have implemented a granular reporting function for misinformation, allowing users to “select ‘Misleading information’ and then ‘COVID-19 misinformation’ as the reason for their report”.115 Facebook also acknowledged the value of user reporting, saying that misinformation linking 5G to coronavirus was initially raised both by “reports from our work with Government, the media, NGO partners and also as flagged by our users” and subsequently “started then removing it on the basis of where it was flagged to us by users or where others flagged it to us”.116 On the other hand, oral evidence from researchers called for more granular reporting on Google Search to similar standards as provided by YouTube to report and counteract junk news surfacing through algorithmic curation and feedback loops.117 We also saw evidence that companies were not responding efficiently to user reporting. Alongside the two examples of hate speech discussed above, we received written evidence from the Pirbright Institute, a research centre studying infectious diseases in farm animals, who detailed how, due to conspiracies linking Bill Gates to the virus outbreak, conspiracy theorists had begun harassing and doxxing (i.e. 
leaking personal or identifying information of) staff.118 Pirbright’s evidence, which was subsequently reported by BBC News Reality Check, claimed that trolls had created a false website to exacerbate these conspiracies, misleading other people into believing that the Institute was suppressing a vaccine for the virus.119 The Institute informed us that Google Business had consistently refused to take action on this website despite reports to them emphasising the reputational damage and personal harm being done to the Institute and its staff.120

37. Throughout our inquiry, tech companies consistently downplayed the role of user reporting. Google, when justifying the lack of granular user reporting in Search compared to user reporting on YouTube,121 wrote that “this kind of anecdotal reporting is not always the best way to address the important issues of low quality or misleading web pages in search results”.122 When later asked about this disparity directly, Google replied:

Search is a reflection of the web; it indexes the web. It is not content we directly have control over or are responsible for, but we certainly take action on illegal content when we are sent notices for removal. As I said at the outset, we work very diligently to raise up authoritative sources and down-rank things that are low quality or misleading. We rely on a range of different signals to do that well. We have on every search page a place for people to send feedback, and we then take that feedback into account.123

However, whilst Google asserted in this response that Search simply ‘reflects the web’, elsewhere it cited its curation of results, including on the basis of user feedback, as evidence of its action against misinformation, without acknowledging this contradiction.124 Twitter, meanwhile, which allows users to report “fake accounts” but not specific tweets as false,125 argued that “user reports can add a lot of noise to the system, slowing down response and enabling people to report Tweets with which they simply disagree—not because they break the rules”.126

38. Instead, tech companies consistently championed the efficiency of their own procedures in flagging and removing harmful content, particularly AI content moderation. These assertions often came in response to questions about, or in contrast to, user reporting,127 even though both user reporting and proactive systems are considered complementary within the Online Harms White Paper.128 In oral evidence, Facebook claimed that “[i]n the case of the child exploitative material […] that is well above 99% and has been for a number of years”.129 In correspondence, Google said that, on YouTube, “[w]e have removed thousands of videos promoting COVID-19 misinformation from our platform, and the majority of these videos were viewed 100 times or fewer”.130 Twitter, similarly, wrote that, during the 2019 general election, “the majority of Tweets we removed for breaking our rules on voter misinformation were detected proactively through our own systems, and that user reports were a far less effective indicator of urgency and priority”.131 Despite these claims, written evidence consistently emphasised the limitations of automated systems.
The charity Glitch wrote that “[i]ncreased reliance on artificial intelligence to filter out abusive harmful content on social media platforms during the pandemic can lead to erroneous content moderation decisions”.132 Glitch’s criticism of tech companies’ overreliance on AI moderation was evidenced by research recently published by the Internet Watch Foundation, which found that, as a result of COVID-19-related staffing constraints on tech company moderators and law enforcement, the number of URLs containing images of child sexual abuse taken down during the pandemic had fallen by 89%.133 Google did acknowledge the limitations of AI moderation, emphasising the need for human review: “[m]achines help us with scale and speed, whereas humans can bring judgement and can understand context”.134 In oral evidence, the company reiterated that AI moderation can be limited when identifying misinformation, as it is “not as good at identifying particular context, and that is often very important or always very important when it comes to speech issues”.135 Evidence from Facebook, which does allow users to report specific posts as “false news”,136 similarly recognised that, “due to the adversarial nature of the space we find ourselves in, sometimes people are able to get round our systems—our human reviewers or our automated systems—and content can appear on the platform for a short time”.137 Google further acknowledged that automated systems face “additional complexities” and can be less accurate when reviewing media such as images and video compared to text.138

39. Currently, tech companies emphasise the effectiveness of AI content moderation over user reporting and human content moderation. However, the evidence has shown that an overreliance on AI moderation has limitations, particularly as regards speech, but often with images and video too. We believe that both easy-to-use, transparent user reporting systems and robust proactive systems, combining AI moderation with human review, are needed to identify and respond to misinformation and other instances of harm. To fulfil their duty of care, tech companies must be required to have easy-to-use user reporting systems and the capacity to respond to these in a timely fashion. To provide transparency, they must produce clear and specific information to the public about how reports regarding content that breaches legislative standards, or a company’s own standards (where these go further than legislation), are dealt with, and what the response has been. The new regulator should also regularly test and audit each platform’s user reporting functions, centring the user experience from report to resolution in its considerations.

Bots and ‘blue ticks’

40. Our inquiry examined the role of two types of accounts during the COVID-19 infodemic: bots and influencers. We looked first at the impact of bots. Bots are autonomous programmes designed to carry out specific tasks; chatbots, for instance, are used to conduct online conversations, typically in customer service, request routing or information-gathering contexts.139 Bots can also be used to kickstart the spread of disinformation amongst people on social media. Professor Philip Howard of the Oxford Internet Institute told us that “[o]ne day they start waking up and spreading conspiracy stories about COVID-19 and that is how the content leaks into our social media feeds”.140 Professor Howard also noted that the use of bots and ‘cyborg’ accounts (which mix human and automated features)141 in online manipulation can be hard for the average user to identify, particularly when the account in question does not display the typical telltale signs, such as having no profile picture, history or followers.142 The Henry Jackson Society argued that China in particular has used “organised groups of online activists […] backed up by virtual-identity ‘bots’ and have spread disinformation about COVID-19”.143 Academic research has posited that there has been an upswell of bot activity on Twitter in particular to amplify disinformation.144

41. Throughout our inquiry, Twitter did not adequately engage with our concerns on the topic, claiming that it could not provide information on what proportion of accounts identified as spreading disinformation were bots or used extensive automation.145 Twitter emphasised that the use of bots and automated functions (such as scheduling) is not against company policies and told us that “accounts use a range of different automated measures and so it would be misleading to say a specific number”.146 Concurrently with our inquiry, Twitter’s Global Policy Director Nick Pickles, who also gave evidence on the subject, argued in a company blog post that “[w]e’ve seen innovative and creative uses of automation to enrich the Twitter experience—for example, accounts like @pentametron and @tinycarebot” (though it should be noted that these examples describe bots that are clearly labelled as such).147 In correspondence, the company argued that its system of “source labels”, which informs users whether content is published on a phone app or through third party software, was an adequate alternative approach to taking more proactive efforts to label bots.148

42. Research has consistently suggested that bots play an active role in spreading disinformation into users’ news feeds. Despite our several attempts to engage with Twitter about the extent of the use of bots in spreading disinformation on their platform, the company failed to provide us with the information we sought. Tech companies should be required to regularly report on the number of bots on their platform, particularly where research suggests these might contribute to the spread of disinformation. To provide transparency for platform users and to safeguard them where they may unknowingly interact with and be manipulated by bots, we also recommend that the regulator should require companies to label bots and uses of automation separately and clearly.

43.We also examined the role of prominent public figures in spreading misinformation, and the implications of verification of these accounts (often designated by a ‘blue tick’). Twitter acknowledged that although verification “was meant to authenticate identity and voice”, it has since also “been interpreted as an endorsement or an indicator of importance” by platforms.149 Professor Howard told us that influencer accounts can often act as a “gateway drug” for misinformation and exacerbate the impact of bots:

If a prominent Hollywood star or a prominent political figure says things that are not consistent with the science or the public health advice, some people will go looking for that stuff and they will spread it. That is how misinformation develops. Those human influencers are often the pivot point that takes a lie from something that bots just share with each other to something that passes in human networks.150

In oral evidence, we suggested that, given it amounts simply to a validation of identity, Twitter could offer verification to all users to prevent the blue tick being considered as an endorsement, as well as to tackle anonymous abuse.151 Research from Clean Up the Internet conducted during lockdown has demonstrated a clear link between anonymous Twitter accounts and the spread of 5G conspiracy theories about COVID-19.152 In subsequent correspondence, Twitter argued that in response to the ‘verification as endorsement’ misconception, it “closed all public submissions for verification in November 2017” and promised to keep us updated pending review of this process.153

44.Further, tech companies have not enforced policies, particularly around misinformation, as robustly or consistently for verified users as for the public. The independent Civil Rights Audit supports our findings in this regard. It states that, by not acting against the powerful (including powerful politicians), “a hierarchy of speech is created that privileges certain voices over less powerful voices”.154 In our first session with the companies, Twitter claimed that “[w]e have taken action against other world leaders around the globe, particularly in the past few weeks, when it comes to COVID-19 misinformation”, though did not confirm explicitly whether this had been applied to the President of the United States, Donald Trump.155 One week prior to our second session with the companies, Twitter labelled several tweets by President Trump for making misleading claims;156 Facebook, by contrast, left the same posts up, and was subsequently criticised by dozens of former employees in an open letter to CEO Mark Zuckerberg published in The New York Times.157 Monika Bickert, Facebook’s Head of Product Policy and Counterterrorism, surprisingly said she was unaware of the letter. When challenged about Facebook’s lack of action, she emphasised several times that the posts did not violate Facebook’s terms. When challenged specifically on one post, which is known to have originated from pro-segregationists during the civil rights movement,158 Ms. Bickert argued that “[o]ur policy is that we allow people to discuss Government use of force”.159 The Civil Rights Audit does note, however, that Facebook also failed to act on a series of posts that “labelled official, state-issued ballots or ballot applications ‘illegal’ and gave false information about how to obtain a ballot”, despite clear company policies against voter suppression.160 These inconsistencies are not exclusive to Facebook. Twitter was also criticised for locking one user out of a parody account, @SuspendThePres, which copied the President’s tweets word for word, for glorifying violence.161 In response, Twitter said that:

We said if an account breaks our rules but meets the criteria of being verified, having more than 100,000 followers and being operated by a public figure, we may take the option that, in the public interest, we want that tweet to be available. One of those accounts meets those criteria; one of them does not. […] This is the system working equally. Both tweets broke the rules; both tweets were actioned. One from a public figure was maintained to allow debate.162

45.The pandemic has demonstrated that misinformation and disinformation are often spread by influential and powerful people who seem to be held to a different standard to everyone else. Freedom of expression must be respected, but it must also be recognised that currently tech companies place greater conditions on the public’s freedom of expression than that of the powerful. The new regulator should be empowered to examine the role of user verification in the spread of misinformation and other online harms, and should look closely at the implications of how policies are applied to some accounts relative to others.

Stopping the spread: labelling and ‘correct the record’ tools

46.This crisis has demonstrated that some tech companies can use technological innovations to tackle online harms such as misinformation. Both Twitter and Facebook have begun to apply warning labels to content that has been independently fact-checked and debunked. Several contributors to our inquiry, including Professor Philip Howard, Dr. Claire Wardle of First Draft News, and the Tony Blair Institute, endorsed the use of warning labels to cover or contextualise misinformation and other harmful content, and direct users to authoritative sources of information, as a proportionate alternative to straightforward content takedowns.163 Alongside Twitter’s aforementioned labelling,164 Facebook similarly asserted in correspondence that “100% of those who see content already flagged as false by our fact-checkers” will see a warning screen, which was applied to 40 million pieces of content in March and 50 million pieces of content in April.165 Two weeks after our second evidence session, Google also announced that it would add warning labels to edited or decontextualised images.166 YouTube and TikTok have not rolled out similar functions, instead tagging all COVID-19-related videos to direct users to trusted information and prioritising takedowns of violative content.167 In written evidence, Dr. Harith Alani of the Open University called for investment in the development of tools and campaigns to raise awareness of people’s exposure to and consumption of COVID-19 misinformation.168
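The warning-screen approach described above amounts to intercepting flagged content at display time rather than removing it. A minimal sketch follows; the names and the in-memory set of flagged post IDs are hypothetical, and no platform's actual pipeline is implied.

```python
# Illustrative sketch: cover fact-checked content with a warning screen
# instead of taking it down. DEBUNKED stands in for a fact-checker feed.

DEBUNKED: set[str] = set()  # post IDs rated false by independent fact-checkers

def render_post(post_id: str, body: str) -> str:
    """Prepend a warning screen to debunked posts; show others unchanged."""
    if post_id in DEBUNKED:
        return ("[Warning: independent fact-checkers have rated this "
                "content as false. See authoritative sources.]\n" + body)
    return body
```

The design choice this illustrates is the one contributors endorsed: the content remains available (preserving freedom of expression), but every viewer is contextualised before seeing it.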

47.Facebook has gone further and also developed a ‘correct the record’ tool to retroactively provide authoritative information to some people who have encountered misinformation. This tool sends notifications in two circumstances:

(1) To users who have previously shared information that has since been debunked, with a link to a fact-checked article; and

(2) To users who have engaged with (i.e. reacted to, shared or commented on) content that Facebook has removed as harmful, with links to the World Health Organisation’s myth-busting page.169
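The two rules above can be sketched as a simple dispatch. Everything here is an assumption for illustration: the function name, parameters and message strings are not Facebook's implementation, and only the WHO myth-busting destination is taken from the evidence.

```python
# Illustrative sketch of the two 'correct the record' notification rules.
# Names and message formats are hypothetical, not Facebook's API.
from typing import Optional

WHO_MYTHBUSTERS = ("https://www.who.int/emergencies/diseases/"
                   "novel-coronavirus-2019/advice-for-public/myth-busters")

def correct_the_record(shared_since_debunked: bool,
                       engaged_with_removed: bool,
                       fact_check_url: Optional[str] = None) -> list[str]:
    """Return the notifications a user would receive under the two rules."""
    notifications = []
    if shared_since_debunked and fact_check_url:
        # Rule 1: user previously shared content since debunked;
        # link them to the fact-checked article.
        notifications.append(f"Fact check available: {fact_check_url}")
    if engaged_with_removed:
        # Rule 2: user reacted to, shared or commented on content
        # removed as harmful; link them to WHO myth-busting.
        notifications.append(f"See WHO myth-busting: {WHO_MYTHBUSTERS}")
    return notifications
```

Note what the dispatch deliberately omits, as paragraph 48 goes on to discuss: users who merely viewed the misinformation receive nothing under either rule.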

Medical professionals supported the concept of correct the record tools. Dr. Megan Emma Smith of EveryDoctor observed that such tools would strike a balance between providing authoritative information and protecting freedom of expression:

We are not saying clamp down and get rid of free speech, but you have to go back and correct it, and hopefully that will go at least some way towards helping those sorts of people who might be getting a bit of a kick out of putting themselves forward as a pseudo-expert, and/or hopefully it will at least flag up for those to whom they are proffering this misinformation that it is not accurate and is not true.170

Thomas Knowles, also of EveryDoctor, similarly argued that correcting the record could help discredit disreputable sources as well as provide authoritative information.171

48.By design, this tool does not provide notifications to every user who comes across examples of misinformation.172 Facebook justified this decision so as not “to draw attention to false narratives among people who may not have noticed them”. In a second round of correspondence, Facebook added that “it also risks diluting the impact of receiving a notification if it becomes too wide spread and commonplace, which is likely if it’s sent to everyone who may have seen this kind of content”.173 Despite these arguments, we noted several times that Facebook measures ‘linger time’ (i.e. the time a user spends looking at a post), and questioned the feasibility of introducing this feature for those who have spent enough time on misleading content or false news to have read it.174 Other tech companies did not commit to rolling out similar tools on their platforms. Twitter’s Nick Pickles, for example, rejected academic research in support of such tools, saying that “a number of studies around correct the record are not peer reviewed”.175 Mr. Pickles added that “[t]here was a paper in Science earlier this year looking at something similar in Brazil, using the World Health Organisation, and it did not work”.176 We note, however, that the article referenced found that, whilst corrective information did not work for myths about Zika virus, it did decrease false beliefs about yellow fever.177 Moreover, though the paper concluded that “providing accurate factual information does not always have the expected effect on public support for related policies or leaders”, it also recommended further research into different myth-busting sources and/or less neutral language about the myths themselves with a more representative sample.178 Written evidence from Dr. Alani said that though “the publication of fact-checks has a positive impact in reducing the spread of misinformation on Twitter”, there needs to be more “interdisciplinary research to assess the performance of current official fact-checks in halting the spread and acceptance of COVID-19 misinformation, and to establish more efficient and effective procedures and tools to boost this performance”.179
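The extension the Committee pressed for in paragraph 48, using measured linger time to decide who should receive a corrective notification, can be sketched as a single threshold rule. The threshold value and function names below are hypothetical assumptions, not a description of any Facebook system.

```python
# Sketch of the Committee's suggestion: extend corrective notifications
# to users whose measured 'linger time' on a debunked post suggests
# they actually read it. The threshold is an assumed value.

READ_THRESHOLD_SECONDS = 8.0  # assumed minimum dwell time to count as 'read'

def should_notify(linger_seconds: float, post_is_debunked: bool) -> bool:
    """Notify users who dwelt on content later shown to be false."""
    return post_is_debunked and linger_seconds >= READ_THRESHOLD_SECONDS
```

This addresses Facebook's stated objections directly: users who scrolled past without noticing the post fall under the threshold and are never alerted, so the tool draws no new attention to false narratives, while those known to have read the material are corrected.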

49.We recognise tech companies’ innovations in tackling misinformation, such as ‘correct the record’ tools and warning labels. We also applaud the role of independent fact-checking organisations, who have provided the basis for these tools. These contributions have shown what is possible in technological responses to misinformation, though we have observed that often these responses do not go far enough, with little to no explanation as to why such shortcomings cannot be addressed. Twitter’s labelling, for instance, has been inconsistent, while we are concerned that Facebook’s corrective tool overlooks many people who may be exposed to misinformation. For users who are known to have dwelt on material that has been disproved and may be harmful to their health, it strikes us that the burden of proof should be to show why they should not have this made known to them, rather than the other way around.

50.The new regulator needs to ensure that research is carried out into the best way of mitigating harms and, in the case of misinformation, increasing the circulation and impact of authoritative fact-checks. It should also be able to support the development of new tools by independent researchers to tackle harms proactively and be given power to require that, where practical, those methods found to be effective are deployed across the industry in a consistent way. We call on the Government to bring forward proposals in response to this report, to give us the opportunity to engage with the research and regulatory communities and to scrutinise whether the proposals are adequate.

59 Our predecessor also published a report into ‘Immersive and addictive technologies’, examining these issues in more detail; the Government agreed to our predecessor’s most significant recommendations into loot boxes and the need for greater research in its response.

60 Dr. Nejra van Zalk (DIS0020)

61 Q29

62 Q128

63 Oral evidence taken before the Home Affairs Committee on 13 May 2020, HC (2019–21) 232, Q561

64 Q67 [Katy Minshall]; Qq91–2, 94 [Richard Earley]; Qq99–100, 109 [Alina Dimofte]; Qq135, 139, 145 [Derek Slater]; Qq141, 147–8 [Leslie Miller]

65 Q92 [Richard Earley]

66 Q141 [Leslie Miller]

67 Q25

68 Letter from Rebecca Stimson, Facebook, re evidence follow-up, 26 June 2020

69 Q22 [Stacie Hoffmann]

70 Qq34, 40 [Dr. Claire Wardle]; Q127 [Thomas Knowles]

71 Letter from Alina Dimofte, Google, re evidence follow-up, 11 May 2020

72 Q22

73 Qq103–4 [Philip Davies MP]; Letter from the Chair to Google, re Misinformation about the COVID-19 crisis, 4 May 2020

74 Letter from Alina Dimofte, Google, re evidence follow-up, 11 May 2020

75 Ibid

76 Q161 [Derek Slater]

77 Q28

78 Letter from Alina Dimofte, Google, re evidence follow-up, 11 May 2020

79 Competition and Markets Authority, Online Platforms and Digital Advertising (July 2020), p 5

80 Home Affairs Committee, Fourteenth Report of the Session 2016–17, Hate crime: abuse, hate crime and extremism online, HC 609 para 38

81 Q69 [Katy Minshall]; Q85 [Richard Earley]; Qq103–5, 107, 109 [Alina Dimofte]; Qq141, 143, 153–4, 158–161 [Derek Slater, Leslie Miller]; Qq165, 168, 170–2, 183, 191 [Monika Bickert]; Qq198, 203–4, 206, 217, 220, 225 [Nick Pickles]

82 Oral evidence taken before the Digital, Culture, Media and Sport Committee on 22 April 2020, HC (2019–21) 157, Q20

83 HC Deb, 4 June 2020, col 984 [Commons Chamber]

84 Qq186–190 [John Nicolson MP, Monika Bickert]

85 Q21

86 Letter from the Chair to Facebook, re Misinformation about the COVID-19 crisis supplementary, 7 May 2020

87 We chose these examples as we considered that each could, directly or indirectly, cause harm to its audience, whether to the recipient through incorrect medical advice or encouragement to stay away from hospitals, or through inciting damage against critical national infrastructure and employees or minority communities.

88 Letter from Richard Earley, Facebook, re evidence follow-up, 14 May 2020

89 Qq186–9 [John Nicolson MP]

90 Ibid

91 Laura W. Murphy, Megan Cacace et al, Facebook’s Civil Rights Audit – Final Report (July 2020), p 8

92 Qq18, 21

93 Q84

96 TikTok (DIS0018), p 3

98 Ibid

100 Qq110–2 [Clive Efford MP, Julian Knight MP, Alina Dimofte]

101 Letter from Alina Dimofte, Google, re evidence follow-up, 11 May 2020

102 Q109

103 Q107

104 Q108

106 European Commission, Code of Practice on Disinformation, accessed 9 July 2020

109 Department for Digital, Culture, Media and Sport and Home Office, Online Harms White Paper - Initial consultation response, February 2020

110 Ibid

111 Home Affairs Committee, Fourteenth Report of the Session 2016–17, Hate crime: abuse, hate crime and extremism online, HC 609 para 31

113 Henry Jackson Society (DIS0010) para 26

114 Tony Blair Institute (DIS0013) p 3

115 TikTok (DIS0018)

116 Q85 [Richard Earley]

117 Qq24–5 [Stacie Hoffmann]

118 The Pirbright Institute (DIS0009)

120 The Pirbright Institute (DIS0009)

121 By this, we observe that, on YouTube, users can report specific videos for, e.g., being misleading or featuring illegal or copyrighted content; in Google Search, users can only give feedback in a simple text box at the bottom of the page, rather than reporting specific results with options explaining why.

122 Letter from Alina Dimofte, Google, re evidence follow-up, 11 May 2020

123 Q139 [Derek Slater]

124 Q145[Derek Slater] (“again what we strive to do with Search is raise up authoritative sources and demote and down-rank low-quality, misleading information”)

125 Letter from the Chair to Twitter, re Misinformation about the COVID-19 crisis, 4 May 2020

126 Letter from Katy Minshall, Twitter, re evidence follow-up, 11 May 2020

127 Qq98–100 [Alina Dimofte]; Letter from Alina Dimofte, Google, re evidence follow-up, 11 May 2020; Letter from Katy Minshall, Twitter, re evidence follow-up, 11 May 2020; Q191 [Monika Bickert]

128 Department for Digital, Culture, Media and Sport and Home Office, Online Harms White Paper, CP 57, April 2019, p 44

129 Q87 [Richard Earley]

130 Letter from Alina Dimofte, Google, re evidence follow-up, 11 May 2020

131 Letter from Katy Minshall, Twitter, re evidence follow-up, 11 May 2020

132 Glitch (CVD0296) pp.8, 11, 22, 34

133 Glitch (CVD0296) pp.8, 11, 22, 34

134 Letter from Alina Dimofte, Google, re evidence follow-up, 11 May 2020

135 Q143 [Derek Slater]

136 Letter from the Chair to Twitter, re Misinformation about the COVID-19 crisis, 4 May 2020

137 Q81[Richard Earley]

138 Qq142, 144 [Giles Watling MP, Derek Slater]

139 Our predecessor Committee discussed the role of bots in more depth in the Interim Report of its inquiry into Disinformation and ‘Fake News’

140 Q4

141 Oxford Internet Institute, University of Oxford, The Global Disinformation Order: 2019 Global Inventory of Organised Social Media Manipulation (4 September 2019) p 11

142 Q4

143 Henry Jackson Society (DIS0010) para 12

145 Q53 [Katy Minshall]

146 Qq52–3 [Katy Minshall]; Letter from Katy Minshall, Twitter, re evidence follow-up, 11 May 2020

148 Letter from Katy Minshall, Twitter, re evidence follow-up, 11 May 2020

149 Letter from Katy Minshall, Twitter, re evidence follow-up, 11 May 2020

150 Q8

151 Qq54–8 [John Nicolson MP, Katy Minshall]

153 Letter from Katy Minshall, Twitter, re evidence follow-up, 11 May 2020

154 Laura W. Murphy, Megan Cacace et al., Facebook’s Civil Rights Audit – Final Report (July 2020), p 9

155 Qq68–9 [Katy Minshall]

159 Qq168–172 [Kevin Brennan MP, Monika Bickert]

160 Laura W. Murphy, Megan Cacace et al., Facebook’s Civil Rights Audit – Final Report (July 2020), p 37

162 Q225 [Nick Pickles]

163 Tony Blair Institute (DIS0013); Q8; Q43

165 Letter from Richard Earley, Facebook, re evidence follow-up, 14 May 2020

167 TikTok (DIS0018) p 1

168 Open University (CVD0489) pp.26 and 27

169 Letter from Richard Earley, Facebook, re evidence follow-up, 14 May 2020

170 Q130

171 Q131

172 Letter from Richard Earley, Facebook, re evidence follow-up, 14 May 2020; Qq183–4 [Damian Hinds MP, Monika Bickert]

173 Letter from Derek Slater, Google, and Leslie Miller, YouTube, re evidence follow-up, 19 June 2020

174 Ibid

175 Q207 [Nick Pickles]

176 Ibid

178 Ibid

179 Open University (CVD0489) pp.26 and 27

Published: 21 July 2020