Online Safety Bill

Written evidence submitted to the Public Bill Committee by Kir Nuthi, Senior Policy Analyst at the Center for Data Innovation, on the Online Safety Bill (OSB43)

1. The Center for Data Innovation is part of the Information Technology & Innovation Foundation (ITIF), a nonpartisan, nonprofit think tank leading the world in science and technology policy. The Center for Data Innovation formulates, promotes, and analyses pragmatic public policy at the intersection of data and technology. The Center has offices in Washington, London, and Brussels and works on issues such as data protection, content moderation, platform regulation, cybersecurity, and technology adoption. This submission, authored by Senior Policy Analyst Kir Nuthi, responds to the Public Bill Committee’s call for written evidence regarding the Online Safety Bill.

2. This submission makes a case for amending the Online Safety Bill significantly to protect United Kingdom (UK) users online. User safety is of the utmost importance online, but any regulatory intervention must preserve users’ rights to legal free expression and privacy in the digital space. Unfortunately, the Online Safety Bill will exacerbate privacy and content moderation concerns online, making the UK less safe for many online communities.

3. The submission will discuss two main conclusions:

3.1. The Online Safety Bill will incentivise businesses to over-moderate online speech at the expense of legal free expression to avoid penalties.

3.2. The Online Safety Bill undermines encryption and anonymity online, which will erode online privacy and jeopardise user welfare.

4. We hope to work with Ofcom, the Public Bill Committee, members of Parliament, the Secretary of State, and the Department for Digital, Culture, Media and Sport to further discuss the policy debate surrounding the Online Safety Bill. We recommend the following adjustments to the proposal as a first step, though not a comprehensive list of solutions, to maximise online safety, privacy, and free expression:

4.1. Do not impose general content monitoring obligations on online services to protect UK users’ legal free speech.

4.2. Do not require the use of "proactive technology" to monitor user-created content.

4.3. Remove recommendations of age assurance or age verification from the proposal and include specific parameters to prevent Ofcom from prescribing such "proactive technology" in the future.

4.4. Exclude messaging services from the covered user-to-user services.

5. The Online Safety Bill will incentivise businesses to over-moderate online speech at the expense of legal free expression to avoid penalties.

5.1. The Online Safety Bill uses unclear and subjective definitions that allow policymakers to restrict online content arbitrarily. The resulting uncertainty about what speech the government intends to regulate will cause online platforms to over-moderate content to avoid penalties, stifling legal free expression online. The Online Safety Bill empowers the UK government, through the discretion of Ofcom and the Secretary of State, to decide what content is harmful to children or harmful to adults. Content is harmful to either group when the Secretary of State deems it so, when it is designated as part of the bill-defined category of priority illegal content, or when it presents a risk of significant harm to a hard-to-define "appreciable number" of children or adults in the United Kingdom. The content scanned for, removed, or fenced off from children behind age assurance walls or other verification mechanisms will largely be decided after the fact, at the discretion of the Secretary of State or through a government interpretation of what constitutes a "material risk of significant harm to an appreciable number" of children in the UK. Online services also risk hefty fines for noncompliance with the Online Safety Bill’s duties of care. The combination of uncertainty about what content must be removed and the threat of penalties will lead online services to over-remove content rather than risk legal battles or fines over context-dependent or vaguely defined speech that is legal but might be considered harmful.

5.2. Similarly, the legislative text does not define what "legal but harmful" content online services must scan for and remove. Instead, the Online Safety Bill puts Parliament, during legislative discussions of the proposal, in charge of approving what types of content fall into this category. [1] This lack of clarity over "legal but harmful" content only further muddles what online content services will have to prevent, scan for, and monitor. And once again, in the face of hefty fines for noncompliance, online services will be incentivised to over-moderate, removing more legal expression by UK users. The duty of care model pushes companies to be overprotective in their moderation because they will have to comply with changing definitions from Ofcom and the Secretary of State on what content to monitor and moderate. Worse, allowing the government to prescribe which harmful but not illegal content platforms must remove implies that the government will be restricting lawful speech. Allowing the government this much discretion over what types of otherwise lawful content are or are not allowed online is a broad overreach and will stifle legal free speech regardless of whether online services over-moderate.

6. Recommendations to mitigate speech issues and instead foster legal free expression online

6.1. The explanatory notes acknowledge that the Online Safety Bill runs counter to the European Union’s e-Commerce Directive, a law that provided the foundation for transparency requirements and intermediary liability for e-commerce in Europe. [2] The e-Commerce Directive prevented member states of the European Union from creating a general content monitoring obligation for online services. It is crucial that online services have intermediary liability protections that do not hold them responsible for the speech and conduct of their users. Only with sufficient intermediary liability protections can online services strike a balance between online safety and free expression. We recommend codifying intermediary liability protections that do not obligate online service providers to actively monitor users and user-created content, including by requiring the use of "proactive technology." We also recommend revising the Online Safety Bill to narrow its scope, focus on unlawful speech, and clarify the definitions used to delineate what content services must remove or protect users from. One way to accomplish this would be to move types of content from the lawful to the unlawful category, similar to how the Online Safety Bill criminalises cyberflashing. Likewise, we recommend eliminating the three provisions that create a subjective standard for what content is subject to removal: content deemed harmful to children by the Secretary of State; content that falls into the overbroad definition of "significant harm to an appreciable number of" children or adults in the United Kingdom; and content that meets a post-proposal parliamentary definition of "legal but harmful" content. Doing so will further minimise the potential over-moderation of legal free speech online.

7. The Online Safety Bill undermines encryption and anonymity online, which will erode online privacy and jeopardise user welfare.

7.1. The bill’s recommendation that online services use systems like age assurance measures to protect children will weaken privacy protections. [3] While the bill is careful not to prescribe the exact processes for age verification, it still directs Ofcom to incentivise the collection of people’s personal data to prevent children from reaching harmful content, i.e., pornographic content, content deemed harmful to children by the Secretary of State, or content that presents a risk of significant harm to a hard-to-define "appreciable number of children in the United Kingdom." Under the Online Safety Bill, Ofcom could require more invasive age assurance measures, such as users’ passport numbers or driving licence scans, to view online services and content the government deems unsuitable for children. Collecting and using personal information to fence off content links the personally identifiable information of UK users to content they could previously view anonymously or semi-anonymously. If this information is linked to pornographic or adult-only content, bad actors could use it for extortion, identity theft, and other nefarious activities. Linking a user’s online history to personal information could also make it harder for marginalised communities, such as dissidents, human rights activists, and abuse survivors, to rely on online anonymity to stay safe, chilling their rights to privacy and safe expression. [4]

7.2. Messaging services fall within the Online Safety Bill’s definition of the user-to-user services it regulates. Many of these services encrypt communications to ensure that service providers and other third parties cannot read private messages. Duties of care to prevent users from accessing "priority illegal" content, such as child sexual abuse material, will require online services to scan formerly secure communications platforms. [5] Scanning for priority illegal content could require online services to create a way to read encrypted messages or risk fines for noncompliance. So without explicitly saying it targets encryption, the Online Safety Bill weakens secure communications. [6] A general moderation obligation that includes online messaging incentivises online services to begin client-side scanning, create a backdoor that can be exploited in cyber attacks, or weaken or remove the encryption these services use to protect content. [7] In this sense, moderating offensive content on private communications requires removing the very methods services use to protect users from criminals, adversarial foreign intelligence agencies, and other bad actors.

8. Recommendations to protect the privacy of UK users

8.1. We recommend both removing age assurance and age verification recommendations from the proposal and including specific parameters to prevent Ofcom from prescribing such "proactive technology." This will ensure that UK adults do not have to disclose personally identifiable information to access content, such as legal online pornography, that they could previously access anonymously or semi-anonymously. It would also protect UK users from increased vulnerability to extortion and identity theft. Similarly, excluding messaging services from the list of covered user-to-user services will help ensure UK users have a right to private communications online. The proposal should explicitly note within the legislative text that nothing in the Online Safety Bill discourages online services from using end-to-end encryption or requires client-side scanning.

May 2022

[1] Department for Digital, Culture, Media & Sport and the Rt Hon Nadine Dorries MP, "World-first online safety laws introduced in Parliament," 17 March 2022.

[2] European Parliament and European Council, "Directive 2000/31/EC on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market ('Directive on electronic commerce')," 8 June 2000.

[3] Local Government Association, "Online Safety Bill, Second Reading, House of Commons," 19 April 2022.

[4] Janus Kopstein, "Social movements need anonymity, but corporations are taking it away," 1 July 2015.

[5] Department for Digital, Culture, Media & Sport, "Online Safety Bill: factsheet," 19 April 2022.

[6] Sam Ashworth-Hayes, "With the Online Safety Bill, Britain is going from Nanny State to Granny State," 20 April 2020.

[7] Global Encryption Coalition, "45 organizations and cybersecurity experts sign open letter expressing concerns with UK’s Online Safety Bill," 14 April 2022.


Prepared 8th June 2022