Misinformation in the COVID-19 Infodemic

Conclusions and recommendations

Introduction

1.We are pleased that the Government has listened to our predecessor Committee’s two headline recommendations, and that it will introduce a duty of care and an independent regulator of online harms in forthcoming legislation. However, we are very concerned about the pace of the legislation, which may not appear even in draft form until more than two years after the White Paper was published in April 2019. We recommend that the Government publish draft legislation, either in part or in full, alongside the full consultation response this autumn if a finalised Bill is not ready. Given our ongoing interest and expertise in this area, we plan to undertake pre-legislative scrutiny. We also remind the Government of our predecessor Committee’s recommendation that the DCMS Committee have a statutory veto over the appointment and dismissal of the regulator’s Chief Executive to ensure public confidence in their independence, similar to the Treasury Committee’s veto over senior appointments to the Office for Budget Responsibility, and urge the Government to include such provisions in the Bill. (Paragraph 12)

2.Online harms legislation must respect the principles established in international human rights law, with a clear and precise legal basis. Despite the Government’s intention that the regulator should decide what ‘harmful but legal’ content should be in scope, Ofcom has emphasised repeatedly that it believes this is a matter for Parliament. Parliamentary scrutiny is necessary to ensure online harms legislation has democratic legitimacy, and to ensure the scope is sufficiently well-delineated to protect freedom of expression. We strongly recommend that the Government bring forward a detailed process for deciding which harms are in scope for legislation. This process must always be evidence-led and subject to democratic oversight, rather than delegated entirely to the regulator. Legislation should also establish clearly the differentiated expectations of tech companies for illegal content and for content that is ‘harmful but legal’. (Paragraph 16)

3.The technologies, media and usage trends involved are fast-changing in nature. Whatever harms are specified in legislation, we welcome the inclusion alongside them of the wider duty of care, which will allow the regulator to consider issues outside the specified list (and allow for recourse through the courts). The Committee rejects the notion that the anti-online harms measures to which operators should be subject can be defined adequately as simply those stated in their own terms and conditions. (Paragraph 17)

Tech companies’ response

4.The need to tackle online harms is often at odds with the financial incentives created by tech companies’ business models. The role of algorithms in incentivising harmful content has been emphasised to us consistently by academics and other stakeholders. Tech companies cited difficulties in cases of ‘borderline content’ but did not fully explain what would constitute such cases. Given the central role of algorithms in surfacing content, and in the spread of online harms such as misinformation and disinformation in particular, it is right that the online harms regulator will be empowered to request transparency about tech companies’ algorithms. The Government should consider how algorithmic auditing can be done in practice and bring forward detailed proposals in the final consultation response to the White Paper. (Paragraph 20)
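By way of illustration only, the sketch below shows one way an algorithmic audit could be framed in practice: comparing how much extra exposure a platform’s recommender gives to independently fact-checked misinformation relative to other content. The class names, fields and figures are hypothetical assumptions for the purpose of example and do not describe any platform’s actual systems or data.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ContentItem:
    item_id: str
    flagged_misinformation: bool   # label supplied by independent fact-checkers (hypothetical)
    organic_impressions: int       # views from direct follows, search and shares
    recommended_impressions: int   # additional views driven by the ranking algorithm

def amplification_factor(items: List[ContentItem]) -> float:
    """Ratio of algorithm-driven views to organic views for a group of items."""
    organic = sum(i.organic_impressions for i in items)
    recommended = sum(i.recommended_impressions for i in items)
    return recommended / organic if organic else float("nan")

def audit(items: List[ContentItem]) -> dict:
    """Compare amplification of fact-checked misinformation with all other content."""
    flagged = [i for i in items if i.flagged_misinformation]
    other = [i for i in items if not i.flagged_misinformation]
    return {
        "flagged_amplification": amplification_factor(flagged),
        "baseline_amplification": amplification_factor(other),
    }

if __name__ == "__main__":
    sample = [
        ContentItem("a", True, 1_000, 4_000),
        ContentItem("b", False, 1_000, 1_500),
    ]
    print(audit(sample))
    # {'flagged_amplification': 4.0, 'baseline_amplification': 1.5}
```

A regulator-led audit would, of course, need agreed definitions of ‘organic’ versus ‘recommended’ exposure and trusted labelling; the point of the sketch is only that the comparison itself is computationally straightforward once such data are disclosed.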

5.The current business model not only creates disincentives for tech companies to tackle misinformation, but also allows others to monetise misinformation. To address these issues properly, the online harms regulator will need sight of comprehensive advertising libraries to see whether and how advertisers are spreading misinformation through paid advertising or are exploiting misinformation or other online harms for financial gain. Tech companies should also address the disparity in transparency between their ad libraries by standardising the information they make publicly available. Legislation should also require advertising providers such as Google to publish directories of the websites for which they provide advertising, to allow for greater oversight of the monetisation of online harms by third parties. (Paragraph 24)
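As an illustration of what a standardised, machine-readable ad library entry might look like, the sketch below defines a single hypothetical record. The field names and the banding of spend and impressions are assumptions made for the purpose of example, not any platform’s existing schema.

```python
import json
from dataclasses import dataclass, asdict, field
from typing import List

@dataclass
class AdLibraryEntry:
    # Field names are illustrative only; no platform's actual schema is implied.
    ad_id: str
    advertiser: str
    first_shown: str                  # ISO 8601 date
    last_shown: str
    spend_range_gbp: str              # banded, e.g. "1000-4999"
    impressions_range: str
    targeting_criteria: List[str] = field(default_factory=list)
    landing_page: str = ""

entry = AdLibraryEntry(
    ad_id="example-0001",
    advertiser="Example Health Products Ltd",
    first_shown="2020-04-01",
    last_shown="2020-04-14",
    spend_range_gbp="1000-4999",
    impressions_range="50000-99999",
    targeting_criteria=["UK", "age 45+", "interest: wellness"],
    landing_page="https://example.com/landing",
)

print(json.dumps(asdict(entry), indent=2))
```

If every platform published entries with a common core of fields along these lines, a regulator or researcher could query ad libraries across services in the same way, which is the substance of the standardisation recommended above.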

6.Tech companies rely on quality journalism to provide authoritative information. They earn revenue both from users consuming this content on their platforms and (in the case of Google) from providing advertising on news websites; news also drives users to their services. We agree with the Competition and Markets Authority that features of the digital advertising market controlled by companies such as Facebook and Google must not undermine the ability of newspapers and others to produce quality content. Tech companies should be elevating authoritative journalistic sources to combat the spread of misinformation. This is an issue to which the Committee will no doubt return. (Paragraph 26)

7.We are acutely conscious that disinformation around the public health issues of the COVID-19 crisis has been relatively easy for tech companies to deal with, as binary true/false judgements are often applicable. In normal times, dealing with the greater nuance of political claims, and with the prominence and financial viability of quality news sources on platforms, will be all the more important in tackling misinformation and disinformation. (Paragraph 27)

8.The Government has repeatedly stated that online harms legislation will simply hold platforms to their own policies and community standards. However, we discovered that these policies were not fit for purpose, a fact that was seemingly acknowledged by the companies. The Government must empower the new regulator to go beyond ensuring that tech companies enforce their own policies, community standards and terms of service. The regulator must ensure that these policies themselves are adequate in addressing the harms faced by society. It should have the power to standardise these policies across different platforms, ensuring minimum standards under the duty of care. The regulator should moreover be empowered to impose significant fines for non-compliance. It should also have the ability to disrupt the activities of businesses that are not complying, and ultimately to ensure that custodial sentences are available as a sanction where required. (Paragraph 32)

9.Alongside developing its voluntary codes of practice for child sexual exploitation and abuse and terrorist content, the Government should urgently work with tech companies to develop a voluntary code of practice to protect citizens from the harmful impacts of misinformation and disinformation, in concert with academics, civil society and regulators. A well-developed code of practice for misinformation and disinformation would be world-leading and would prepare the ground for legislation in this area. (Paragraph 34)

10.Currently, tech companies emphasise the effectiveness of AI content moderation over user reporting and human content moderation. However, the evidence has shown that overreliance on AI moderation has limitations, particularly as regards speech, but often with images and video as well. We believe that both easy-to-use, transparent user reporting systems and robust proactive systems, which combine AI moderation with human review, are needed to identify and respond to misinformation and other instances of harm. To fulfil their duty of care, tech companies must be required to have easy-to-use user reporting systems and the capacity to respond to these in a timely fashion. To provide transparency, they must produce clear and specific information to the public about how reports regarding content that breaches legislative standards, or a company’s own standards (where these go further than legislation), are dealt with, and what the response has been. The new regulator should also regularly test and audit each platform’s user reporting functions, centring the user experience from report to resolution in its considerations. (Paragraph 39)
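The sketch below illustrates, in schematic form, the kind of triage such a combined system implies: high-confidence automated decisions are actioned directly, while uncertain items and user-reported content are routed to human reviewers. The classifier, thresholds and field names are hypothetical and deliberately simplified; they are not drawn from any company’s actual moderation pipeline.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Post:
    post_id: str
    text: str
    user_reports: int = 0   # number of times users have reported the post

@dataclass
class Queues:
    auto_actioned: List[str] = field(default_factory=list)   # high-confidence AI decisions
    human_review: List[str] = field(default_factory=list)    # uncertain or user-flagged items
    no_action: List[str] = field(default_factory=list)

def classifier_score(post: Post) -> float:
    """Stand-in for an ML model returning an estimated probability the post is harmful."""
    return 0.9 if "miracle cure" in post.text.lower() else 0.1

def triage(posts: List[Post], auto_threshold: float = 0.95,
           review_threshold: float = 0.5, report_threshold: int = 3) -> Queues:
    queues = Queues()
    for post in posts:
        score = classifier_score(post)
        if score >= auto_threshold:
            queues.auto_actioned.append(post.post_id)
        elif score >= review_threshold or post.user_reports >= report_threshold:
            queues.human_review.append(post.post_id)
        else:
            queues.no_action.append(post.post_id)
    return queues

if __name__ == "__main__":
    posts = [
        Post("p1", "This miracle cure kills the virus", user_reports=1),
        Post("p2", "Stay home, save lives"),
        Post("p3", "Ordinary post", user_reports=5),
    ]
    print(triage(posts))  # p1 and p3 go to human review; p2 needs no action
```

The design point is simply that user reports and automated detection feed the same review process, so the timeliness and transparency requirements above can be measured end to end, from report to resolution.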

11.Research has consistently suggested that bots play an active role in spreading disinformation into users’ news feeds. Despite our several attempts to engage with Twitter about the extent of the use of bots in spreading disinformation on their platform, the company failed to provide us with the information we sought. Tech companies should be required to regularly report on the number of bots on their platform, particularly where research suggests these might contribute to the spread of disinformation. To provide transparency for platform users and to safeguard them where they may unknowingly interact with and be manipulated by bots, we also recommend that the regulator should require companies to label bots and uses of automation separately and clearly. (Paragraph 42)

12.The pandemic has demonstrated that misinformation and disinformation are often spread by influential and powerful people who seem to be held to a different standard from everyone else. Freedom of expression must be respected, but it must also be recognised that tech companies currently place greater conditions on the public’s freedom of expression than on that of the powerful. The new regulator should be empowered to examine the role of user verification in the spread of misinformation and other online harms, and should look closely at the implications of how policies are applied to some accounts relative to others. (Paragraph 45)

13.We recognise tech companies’ innovations in tackling misinformation, such as ‘correct the record’ tools and warning labels. We also applaud the role of independent fact-checking organisations, which have provided the basis for these tools. These contributions have shown what is possible in technological responses to misinformation, though we have observed that these responses often do not go far enough, with little to no explanation as to why such shortcomings cannot be addressed. Twitter’s labelling, for instance, has been inconsistent, while we are concerned that Facebook’s corrective tool overlooks many people who may be exposed to misinformation. For users who are known to have dwelt on material that has been disproved and may be harmful to their health, it strikes us that the burden of proof should be on showing why they should not have this made known to them, rather than the other way around. (Paragraph 49)

14.The new regulator needs to ensure that research is carried out into the best way of mitigating harms and, in the case of misinformation, increasing the circulation and impact of authoritative fact-checks. It should also be able to support the development of new tools by independent researchers to tackle harms proactively and be given power to require that, where practical, those methods found to be effective are deployed across the industry in a consistent way. We call on the Government to bring forward proposals in response to this report, to give us the opportunity to engage with the research and regulatory communities and to scrutinise whether the proposals are adequate. (Paragraph 50)

Public sector response

15.Research has shown that during the COVID-19 crisis the public has turned away from tech companies’ platforms as a source of trusted news and towards public service broadcasting, demonstrating a lack of trust in social media. The Government must take account of this as it develops online harms legislation over the coming months. It has already committed to naming an independent regulator; it should also look to the ‘clear set of requirements’ and ‘detailed content standards’ in broadcasting as a benchmark for quantifying and measuring the range of harms in scope of legislation. (Paragraph 54)

16.Resources developed by public service broadcasters, such as the Trusted News Initiative (TNI), show huge potential as a framework in which the public and private sectors can come together to ensure verified, quality news provision. However, we are concerned that tech companies’ engagement in the initiative is limited. Facebook, for example, has chosen not to provide TNI partners with accounts on WhatsApp, which could otherwise provide an independent and robust source of information on Government and public health advice. The Government should support the BBC in being more assertive in deepening private sector involvement, such as by adapting the TNI to changes in the social media ecosystem, including the emergence of TikTok and other new platforms. The Government and online harms regulator should use the TNI to ‘join up’ approaches to public media literacy and benefit from shared learning regarding misinformation and disinformation. It should do this in a way that respects the independence from Government and the expertise of the group’s members, and should not impose a top-down approach. (Paragraph 58)

17.The Government should reconsider how the various teams submitting information to the Counter Disinformation Unit can best add value to tackling the infodemic. Factchecking 70 instances of misinformation a week duplicates the work of other organisations with professional expertise in the area. Instead, the Government should focus on opening up channels with organisations that verify information, through a ‘Factchecking Forum’ convened by the Counter Disinformation Unit, and should share instances flagged by these organisations with its stakeholders, including and especially public health organisations and all NHS trusts, key and frontline workers and essential businesses, to prepare them for what they may face as a direct result of misinformation and allow them to take appropriate precautions. (Paragraph 63)

18.We recommend that the Government also empower the new online harms regulator to commission research into platforms’ actions and to ensure that companies pass on the necessary data to independent researchers and academics with rights of access to social media platform data. It should also engage with the Information Commissioner’s Office to ensure this is done in accordance with data protection law and data privacy. In the long term, the regulator should require tech companies to maintain ‘takedown libraries’, provide information on content takedown requests, and work with researchers and regulators to ensure this information is comprehensive and accessible. Proposals for oversight of takedowns, including redress mechanisms, should be revisited to ensure freedom of expression is safeguarded. (Paragraph 64)
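To illustrate what a ‘takedown library’ entry might contain, the sketch below defines one hypothetical record. The fields shown (policy cited, detection route, appeal status, content hash) are assumptions about what researchers and regulators could need for oversight, not a description of any existing scheme.

```python
import json
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class TakedownRecord:
    # Illustrative fields only; no specific platform or regulator schema is implied.
    record_id: str
    platform: str
    date_removed: str              # ISO 8601 date
    policy_cited: str              # e.g. a COVID-19 misinformation policy
    detection_route: str           # "automated", "user_report" or "trusted_flagger"
    appeal_lodged: bool
    appeal_outcome: Optional[str]  # "upheld", "reinstated", or None while pending
    content_hash: str              # lets researchers study removals without re-hosting the content

record = TakedownRecord(
    record_id="td-2020-000123",
    platform="ExamplePlatform",
    date_removed="2020-05-02",
    policy_cited="Harmful health misinformation",
    detection_route="user_report",
    appeal_lodged=True,
    appeal_outcome=None,
    content_hash="sha256:0f3a...",
)

print(json.dumps(asdict(record), indent=2))
```

Recording the appeal fields alongside each removal is what would allow the redress and freedom of expression safeguards recommended above to be audited rather than merely asserted.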

19.In order to demonstrate best practice regarding tech companies’ advertising libraries, the Government should create its own ad archive, independent of the archives made available by tech companies, to provide transparency, oversight and scrutiny of how these ad credits are being used and what information is being disseminated to the public. (Paragraph 67)

20.The Government had committed to publishing a media literacy strategy this summer. We understand the pressures caused by the crisis, but believe such a strategy would be a key step in mitigating the impact of misinformation, including in the current pandemic. We urge the Government to publish its media literacy strategy at the latest by the time it responds to this Report in September. We welcome the non-statutory guidance from the Department for Education on ‘Teaching online safety in school’ (June 2019), which brings together the computing, citizenship, health and relationships curricula and, among other things, covers disinformation and misinformation. We ask that the Government report on adoption of this material before the end of the academic year 2020/21. (Paragraph 69)

21.The Government should set out a comprehensive list of harms in scope for online harms legislation, rather than allowing companies to do so themselves or to set what they deem acceptable through their terms and conditions. The regulator should instead have the power to judge where companies’ policies are inadequate in addressing these harms and to make recommendations accordingly. (Paragraph 72)

22.We are pleased that the Government has taken up our predecessor Committee’s recommendation to appoint an independent regulator. The regulator must be named immediately to give it enough time to take on this critical remit. Any continued delay in naming an online harms regulator will bring into question how seriously the Government is taking this crucial policy area. We note Ofcom’s track record of research, and its expedited work on misinformation in other areas of its remit during this time of crisis, as arguments in its favour. We urge the Government to confirm the regulator in its response to this Report. Alongside this decision, the Government should also make proposals regarding the powers Ofcom would need to deliver its remit, including the power to regulate disinformation. We reiterate our predecessor Committee’s calls for criminal sanctions where there has been criminal wrongdoing. We also believe that the regulator should facilitate independent researchers ‘road testing’ new features against the harms in scope, to assure the regulator that companies have designed these features ethically before they are released to the public. (Paragraph 76)

23.The Government should also consider how regulators can work together to address any gaps between existing regulation and online harms. It should do this in consultation with the Digital Regulation Cooperation Forum, the creation of which we note as a proactive step by the regulatory community in addressing these gaps. We believe that other regulatory bodies should be able to bring super-complaints to the new online harms regulator. (Paragraph 78)

Published: 21 July 2020