In February 2020, the World Health Organisation warned that, alongside the outbreak of COVID-19, the world faced an ‘infodemic’: an unprecedented overabundance of information, both accurate and false, that prevented people from accessing authoritative, reliable guidance about the virus. The infodemic has allowed harmful misinformation, disinformation, scams and cybercrime to spread. False narratives have led people to harm themselves by resorting to dangerous hoax cures or forgoing medical treatment altogether, and alarmist conspiracy theories have prompted attacks on frontline workers and critical national infrastructure.
The UK Government is currently developing proposals for ‘online harms’ legislation that would impose a duty of care on tech companies. Whilst not a silver bullet for addressing harmful content, this legislation is expected to give a new online harms regulator the power to investigate and sanction tech companies. Even so, the legislation has been delayed: the Government has yet to produce the final response to its consultation (which closed over a year ago), the voluntary interim codes of practice, or a media literacy strategy. Moreover, there are concerns that the proposed legislation will not address the harms caused by misinformation and disinformation, and will not contain the sanctions necessary for tech companies that fail in their duty of care.
We have conducted an inquiry into the impact of misinformation about COVID-19 and the efforts of tech companies and relevant public sector bodies to tackle it. This has presented an opportunity to scrutinise how online harms proposals might work in practice. Whilst tech companies have introduced new ways of tackling misinformation, such as warning labels and tools to correct the record, these innovations have been applied inconsistently, particularly in the case of high-profile accounts. Platform policies have also been too slow to adapt, while reliance on automated content moderation at the expense of human review and user reporting has had limited effectiveness. The business models of tech companies themselves disincentivise action against misinformation while affording bad actors opportunities to monetise misleading content. At least until well-drafted, robust legislation is brought forward, the public is reliant on the goodwill of tech companies, or the bad press they attract, to compel them to act.
During the crisis, the public have turned to public service broadcasting as their main and most trusted source of information. Beyond broadcasting, public service broadcasters (PSBs) have contributed through fact-checking and media literacy initiatives and through engagement with tech companies. The Government has also acted against misinformation, reforming its Counter Disinformation Unit to co-ordinate its response and tasking its Rapid Response Unit with refuting seventy pieces of misinformation a week. We have raised concerns, however, that the Government has been duplicating the efforts of other organisations in this field and could have taken a more active role in resourcing an offline, digital literacy-focused response. Finally, we have considered the work of Ofcom, the Government’s current preferred candidate for online harms regulator, as part of our discussion of the online harms proposals. We call on the Government to make a final decision on the online harms regulator now, so that it can begin laying the groundwork for the legislation to come into effect.
Published: 21 July 2020