Online Safety (Re-committed Clauses and Schedules) Bill

Written evidence submitted by Full Fact (OSB105)

Written evidence to the Online Safety Bill Public Bill Committee (re-committed Clauses and Schedules)

Full Fact and our interest in the Online Safety Bill

1. Full Fact fights bad information. We’re a team of independent fact checkers, technologists, researchers, and policy specialists who find, expose and counter the harm it does.

2. Bad information ruins lives. It promotes hate, damages people’s health, and it hurts democracy. So we tackle it in four ways. We check claims made by politicians and public institutions, in the media and online, and we ask people to correct the record where possible to reduce the spread of specific claims. We campaign for system changes to help make bad information rarer and less harmful, and we advocate for higher standards.

3. Full Fact is a registered charity. We’re funded by individual donations, charitable trusts and other funders. We receive funding from both Facebook and Google. Details of our funding can be found on our website.

4. Full Fact’s expertise covers online misinformation and public debate. Some areas of the Online Safety Bill, including specific harms, fall beyond our areas of expertise (e.g. illegal harms around protecting children and relating to terrorism) and our written evidence reflects this.

Introduction

5. This short submission focuses on the changes that the Government is proposing to make to the re-committed Clauses and Schedules at Committee Stage. It relates purely to provisions about non-criminal content that is harmful to adults, not to illegal content or content harmful to children. Full Fact’s wider positions on the Bill were set out in more detail in our first written evidence to the Committee in May [1].

6. In summary, Full Fact believes that these changes are misguided and should be resisted by the Committee:

The requirement for companies to undertake adult risk assessments must be retained in the Bill. Along with the transparency requirements, risk assessments are essential for ensuring platforms can identify harm on their platform and set out clearly what their policy on those risks is in their terms of service.

The Government amendments will leave it up to platforms to decide what their terms of service have to cover when it comes to content harmful to adults. This puts the power in the hands of the platforms rather than Parliament and the independent regulator, and could incentivise a ‘race to the bottom’ on platforms’ terms of service.

The Government has reneged on its promise to include protections for health misinformation in the Online Safety Bill. This must be addressed so that platforms are required to have a clear policy on harmful health misinformation in their terms of service.

The removal of the adult risk assessment and safety duties undermines the central purpose of the Bill

7. Government amendments 6 and 7 (and associated amendments to the rest of the Bill) will result in the removal of the risk assessment obligations in Clause 12 and the transparency obligations found in Clause 13. This means that the Government has completely dropped the need for platforms to:

do a risk assessment of the potential (non-criminal) content that presents a material risk of significant harm to adults on their platforms,

transparently explain the findings of those risk assessments to their users, and

set out in their terms of service what, if anything, they will do in relation to such content where it is designated as priority harmful content.

8. It is now very unclear how we can ensure that platforms are protecting or empowering their users effectively if neither they nor the regulator knows what is happening on the service. Moreover, the Government’s abandonment of its previously stated goal of making platforms transparently explain how they will deal with certain types of harmful content on their platform leaves users unable to make informed decisions about their use of a platform.

9. The Government must reinstate the requirements for companies to do adult risk assessments to identify potential harm on their platform, explain those risks, and then set out clearly what their policy on those risks is in their terms of service. It is essential that this includes harmful false and misleading health information (see further below).

The new provisions on terms of service will not protect freedom of expression

10. Government amendments NC3 and NC4 (and associated new clauses and amendments) will introduce new duties around terms of service. The centrepiece of these is a ‘duty not to act against users except in accordance with terms of service’. This is supported by further amendments designed to ensure that platforms then apply those terms consistently. One of the key differences between these changes and the existing adult protection provisions in the Bill is that the legislation will no longer stipulate which types of harmful content those terms of service should cover.

11. The Government amendments will instead leave it up to platforms to decide what their terms of service cover (unless the content is criminal), holding them accountable only for how they treat the things they choose to include. This puts the power, and decisions about our freedom of expression, in the hands of platforms rather than Parliament and the independent regulator.

12. There is also a risk that this incentivises a ‘race to the bottom’ on the terms of service as platforms seek to give themselves maximum flexibility and minimise their risk of breach. This could allow harmful misinformation to flourish, particularly if platforms remove the provisions that deal with it.

13. The principle that platforms should not take down content unless it breaches their terms of service is right. But the Government’s approach to achieving this is deeply flawed, and risks creating a binary situation where the only regulatory consideration becomes whether or not a platform can remove or restrict a particular type of content on its service. This mistaken approach seems to stem from the emergence of a polarised and often false debate about what the Bill currently requires when it comes to non-criminal harmful content. Despite regular assertions to the contrary, the Bill has never required platforms to take down ‘legal but harmful’ content unless that content posed a risk to children. The requirement was to transparently set out how the platform planned to treat it, and then apply that approach consistently.

14. It should be Parliament that takes the lead in setting out the key harms that platforms should transparently address in their terms of service. Platforms can then be held accountable for acting consistently with those terms of service, including not removing content unless it is clearly prohibited.

15. Such an approach does not need to be reduced to decisions about whether or not to censor content. When it comes to harmful misinformation and disinformation there are a growing number of resources and methods available that protect users’ freedom of speech, and that mean restricting or removing content should rarely be necessary. For example:

Ensuring that reliable information from authoritative sources is available on platforms.

Proactive provision of such information (such as the Covid-19 information centres Facebook and others established).

Friction-inducing initiatives (for example, ‘read-before-you-share’ prompts).

Labelling and fact checking to more clearly surface false information.

Better user control over the curation of information, and better human moderation.

Increasing the resilience of a platform’s users by taking steps to improve their media literacy.

16. Instead of leaving it to platforms, which already remove legal content at scale, the Bill should be amended to set out the need for these sorts of proportionate responses more clearly. This could be supported by an Ofcom code of practice on proportionately reducing harm from misinformation and disinformation.

The Government has abandoned its commitment on harmful false health content

17. These changes mean that the Government has effectively reneged on its promise to protect the public from health misinformation in the Online Safety Bill. Harmful and demonstrably false health content had previously been included in the Government’s indicative list of priority harmful content [2] that companies would have been required to address in their terms of service under the adult safety duties. However, the consequence of the Government’s proposed amendments will be that platforms will no longer be obliged to explain how they treat harmful health misinformation and disinformation on their service.

18. The proposed amendments to the user empowerment duties (amendments 8 to 17) do not even purport to address this new gap: harmful false health content is not covered by the changes.

19. The false communication offence in Clause 156 is also not the answer, as it requires the sending of a communication which the person knows to be false with the intention of causing psychological or physical harm (the example given by the Government is trying to harm people through knowingly false messages encouraging them to drink bleach as a cure for Covid [3]). The need to establish both knowledge of falsehood and intent to cause harm to a criminal standard means that it will likely exclude most harmful health misinformation online, and that it is unsuitable to be applied at internet scale without significant risk of over-moderation.

20. Health-related misinformation and disinformation undermine public health, and the Government has not learned the lessons of the last three years, when misinformation and disinformation had a devastating impact during the Covid-19 pandemic.

21. Platforms must be required to have a clear policy on dealing with harmful, false and misleading health information in their terms of service. As we have set out above, there are a growing number of resources and methods available that protect users’ freedom of speech by tackling harmful misinformation without restricting or removing content.

December 2022


[1] Full Fact written evidence to the Online Safety Bill Public Bill Committee (May 2022). https://publications.parliament.uk/pa/cm5803/cmpublic/OnlineSafetyBill/memo/OSB28.htm

[2] Ministerial Statement (7 July 2022). https://questions-statements.parliament.uk/written-statements/detail/2022-07-07/hcws194

[3] Online Safety Bill: communications offences fact sheet (updated April 2022). https://www.gov.uk/government/publications/online-safety-bill-supporting-documents/online-safety-bill-communications-offences-factsheet

