Draft Online Safety Bill Contents

8Role of the regulator

The suitability of Ofcom as regulator

308.The draft Bill contains provision about the regulation of certain internet services by Ofcom (Clause 1(1)). In the White Paper response, the Government says it chose Ofcom to enforce the draft Bill because it is: “a well-established independent regulator with a strong reputation internationally and deep experience of balancing prevention of harm with freedom of speech considerations”, as well as due to its role regulating video-sharing platforms.543 Ofcom also carries out research on “market trends, online habits and attitudes” and will be able to “draw on strong relationships with industry, policymakers, academic experts, charities and other regulators”.544

309.Witnesses generally agreed that Ofcom was the right choice of regulator.545 Those who did oppose Ofcom’s designation generally did so due to concerns about its expertise or that it might simply replicate the broadcasting model of regulation.546 A third option arose during discussions, under which Ofcom would act as one of a series of co-regulating bodies with shared duties. For example, the Information Commissioner made the case that her Office (the ICO) should determine issues of data protection and privacy, whilst the IWF argued that they should be co-designated for content relating to the sexual abuse and exploitation of children.547

The powers of the regulator

310.Some of our witnesses expressed concerns that large tech companies might treat Ofcom with contempt, citing both the historic example of how bankers treated the Financial Services Authority in the 1990s and more recent examples of how tech executives have “repeatedly shown contempt for elected officials and regulators”.548 In 2019, Facebook agreed to pay a £500,000 fine imposed by the ICO in relation to the processing and sharing of its users’ personal data by Cambridge Analytica, but only as part of a settlement deal after an appeal to the First-tier Tribunal; the settlement included no admission of liability on Facebook’s part.549 In October 2021, the CMA fined Facebook £50.5 million for breaching an initial enforcement order relating to its merger with Giphy.550 Joel Bamford, Senior Director of Mergers at the CMA, said: “We warned Facebook that its refusal to provide us with important information was a breach of the order but, even after losing its appeal in two separate courts, Facebook continued to disregard its legal obligations.”551 On 30th November 2021, the CMA ordered Facebook to sell Giphy. Facebook intends to appeal the ruling.552

311.When Dame Melanie appeared before the Committee, she acknowledged that “this is a really challenging task”, but asked: “more generally across the Bill, do we feel that we have what we need to act, and act quickly when we need to? The answer is broadly yes … The Bill gives us, broadly, the right overall things that we need.”553 Ofcom has suggested some small improvements to the Bill, including in safety duties and in the use of technology reports, but is largely content with the powers that have been extended to it in the Bill as it is currently drafted.

312.Robust regulatory oversight is critical to ensuring the ambition of the Online Safety Bill is fully met. Tech companies must not be allowed to snub the regulator, to act with impunity, to continue to rely on self-regulation, or to abdicate responsibility for the harms which occur through the operation of their services or because of their governance structures. In turn, Ofcom must be able to move at pace to hold providers to account authoritatively, to issue substantial fines, and to assist the appropriate authorities with criminal prosecutions. The Bill extends substantial powers to the Regulator, but there are improvements to be made if the Government is to ensure the Bill is enforced effectively.

International co-operation

313.Dame Melanie told us: “co-operation with international regulators is where we think the Bill could be slightly improved. We might find we need the ability to share with international regulators in some circumstances.”554

314.Ms Denham told us: “we need co-operation at a domestic level … but we also need the ability to collaborate and to cooperate at international level.”555

315.Ofcom should have the power on the face of the Bill to share information and to co-operate with international regulators at its discretion.

Risk Assessments

Naming the risk assessments

316.The draft Bill requires Ofcom to conduct an overall assessment of the types of risk across the range of online services, and requires service providers to undertake their own risk assessments. During oral evidence, it was not always easy to differentiate between the Ofcom risk assessment and the service providers’ own risk assessments. Dame Melanie said: “if we could have different names for our risk assessment and the platforms’ risk assessment, it would be quite helpful.”556

317.To help differentiate between the risk assessment undertaken by the regulator and that undertaken by the service providers, Ofcom’s risk assessment should be renamed the “Ofcom register of risks of regulated services” (henceforth, register of risks). Ofcom should begin working on this immediately so that it is ready to be actioned when the Bill becomes law.

Establishing risk profiles for companies of different kinds

318.Clause 61(3) states that: “Ofcom must develop risk profiles for different kinds of regulated services, categorising the services as Ofcom consider appropriate, taking into account (a) the characteristics of the services, and (b) the risk levels and other matters identified in the risk assessment”. These characteristics include “the functionalities of the service, its user base, business model, governance and other systems and processes” (Clause 61(6)). In turn, service providers must consider the relevant risk profile when completing their risk assessments.

319.Although the Bill contains provisions for Ofcom to develop risk profiles based on the characteristics of services, some witnesses felt that this should be more central to the Bill, and that further clarity was needed on which characteristics Ofcom should consider. 5Rights argued that Ofcom should develop risk profiles for different kinds of regulated services, which should consider several factors when assessing the risk posed by a service, including the characteristics of the service, platform design, risk level, and the service’s business model and its overall corporate aim.557 The UK Interactive Entertainment Association argued:

“ … a proportionate approach should be taken to the extent of requirements for transparency and risk assessment on different services. For instance, online services with minimal user-to-user interaction should not be expected to bear the same burdens as full social media platforms or other online services where user-to-user interaction is core to the service’s offering.”558

Similarly, Match Group argued that as its business model relies on user subscriptions, rather than “revenue streams like advertisements and data harvesting”, it should be grouped with other similar businesses for the purposes of risk assessment.559

320.A more holistic approach to risk assessment will allow Ofcom to meet the challenges of regulating emerging technologies and their associated risks. We were challenged by Dr Edina Harbinja, Senior Lecturer in Law at Aston University, to consider, for example, how the Bill will manage “virtual reality, augmented reality, and the Metaverse that Facebook is now building”, alongside “deep fakes, chatbots of us after we die, or before”.560 Both the Bill and its regulatory duties need to be ‘future proof’ and able to manage the emergence of future technologies and platforms; allowing Ofcom to produce risk profiles for similar businesses will give it more flexibility than the current system.

321.According to the Government’s April 2021 Impact Assessment, 81 per cent of businesses in scope of the Bill are likely to be microbusinesses.561 The Coalition for a Digital Economy argues that “smaller businesses should not be burdened with the same obligations as their larger counterparts”, warning that “the proposed framework could have a significant and disproportionately negative financial impact on start-ups” with “a chilling impact on digital competition”.562 A study commissioned by DCMS found that smaller services with video-sharing capabilities were spending over £45 per user to protect them from content that creates a risk of harm, compared to the biggest services, which spent £0.25–£0.50 per user.563

322.We heard that a service’s size or number of employees is not necessarily an indicator of its risk, and thus a small service could potentially be a very harmful one. Characteristics of a service suggested as likely to influence which risk profile a company will come under included:

a)Risks created by algorithms, including the promotion of divisive content and “out-group animosity” [and] “rewarding hostility online with virality”564; and “pushing” people towards extremist content and groups.565

b)Risks created by a reliance on AI moderation, including risks to freedom of expression, for example, the over-zealous moderation of LGBTQ+ groups or women.566 The Lords Communications and Digital Committee heard that content is twice as likely to be deleted if it is in Arabic or Urdu as if it is in English, whilst Legal to Say, Legal to Type reported that leading AI models for detecting hate speech are 2.2 times more likely to flag tweets written in African American English.567 We also heard evidence from the Board of Deputies of British Jews asking for in-country teams monitoring suspected breaches of community guidelines, as they “will be more likely to have political, cultural, and linguistic context for cases”.568

c)Risks caused by unlimited, “one-click” sharing, leading, for example, to the viral spread of false or illegal content, especially on end-to-end encrypted services. One 2018 study found that false stories were 70 per cent more likely to be retweeted than accurate stories.569

d)Risks caused by “designed addiction”, including infinite scrolling pages and automatic, frictionless recommender tools which maximise “engagement” e.g. time spent on the service, which then allows the service provider to generate saleable user data and advertising revenue. This type of design can, in some cases, take people down “rabbit holes that lead to a warren of conspiracy” and normalise content that creates a risk of harm or sensationalist content.570 As 5Rights observes, “pro-suicide, self-harm, or eating disorder content is far more dangerous when served up automatically, proactively, and repeatedly by the recommender systems of platforms popular with young people”.571

e)Risk of unsupervised contact between adults and children which may create circumstances where children can be “groomed” for abuse online or offline.572 The NSPCC reports that “when children are contacted [online] by someone they don’t know in person, in nearly three quarters (74 per cent) of cases, this contact initially takes place by private message.”573

f)Risks caused by surveillance advertising (also called targeted or microtargeted advertising).574 Surveillance advertising “requires the large-scale collection, profiling and sharing” of user data, which “can be harvested for behavioural profiling and recommender algorithms which maximise ‘engagement’ … to the detriment of all other considerations”.575 5Rights states that “there is not a single online harm or socio-digital problem that is not made worse by micro-targeting”.576 Surveillance advertising can be particularly harmful to children. The End Surveillance Advertising to Kids Coalition suggests that “based on average online time, a third of 14-year-olds could be exposed to 1,332 adverts a day—ten to twenty times as many adverts as children see on TV alone”.577 “Surveillance advertising frequently enables children to be targeted with harmful products”.578 A 2021 study found that it was possible to target 13–17 year olds on Facebook with adverts “based on interests such as alcohol, smoking and vaping, gambling, extreme weight loss, fast foods and online dating services”.579 Research published by New York University’s Cyber Security for Democracy research team and imec-DistriNet at KU Leuven in Belgium has also highlighted the failings of Facebook’s monitoring of political adverts. Between July 2020 and February 2021, globally, Facebook made the wrong decision for 83 per cent of the adverts that had not been declared as political by their advertisers, both overcounting and undercounting political advertisements in this group, and missing a higher proportion of political advertisements outside the United States. Facebook also allowed more than 70,000 political adverts to run during its moratorium on political advertising around the 2020 US elections.580

g)Risks caused by features designed to enhance reach or to maximise ‘network effect’, such as live-streaming, the ability to create ‘groups’, or the ability to add multiple contacts/users at the same time.

h)Such other risks as Ofcom identifies in its overall register of risks.

323.The Bill’s provision that Ofcom should develop risk profiles based on the characteristics of services should be strengthened. Ofcom should begin drawing up risk profiles immediately so that they are ready to be actioned when the Bill becomes law. Risk profiles should reflect differences in the characteristics of the service. These could include (but are not limited to) risks created by algorithms; risks created by a reliance on artificial intelligence moderation; risks created by unlimited ‘one-click’ sharing; risks caused by “engagement” maximising design features; risk of unsupervised contact between adults and children which may give rise to grooming; risks caused by surveillance advertising; and such other risks as Ofcom identifies in its overall register of risks; as well as platform design, risk level, end-to-end encryption, algorithmic design, safety by design measures, and the service’s business model and overall corporate aim. Ofcom should also be able to take into account whether a company has been the subject of a super-complaint, other legal proceedings or publicly documented evidence of poor performance, such as independent research, a poor monitoring report under the EU’s Code of Conduct on Countering Illegal Hate Speech Online, or whistleblowers’ evidence.
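Purely by way of illustration, the kind of mapping from service characteristics to risk profiles described above could be expressed in a structured, machine-readable form. The sketch below uses hypothetical characteristic names and thresholds; it is not Ofcom’s methodology, and is included only to show how functionality, rather than size, could drive the profile a service falls into.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskLevel(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class ServiceCharacteristics:
    """Hypothetical characteristics a regulator might record for a service."""
    algorithmic_recommendation: bool
    ai_only_moderation: bool
    one_click_sharing: bool
    engagement_maximising_design: bool
    unsupervised_adult_child_contact: bool
    surveillance_advertising: bool
    reach_enhancing_features: bool      # e.g. live-streaming, group creation
    end_to_end_encrypted: bool


@dataclass
class RiskProfile:
    name: str
    level: RiskLevel
    notes: list[str] = field(default_factory=list)


def assign_risk_profile(c: ServiceCharacteristics) -> RiskProfile:
    """Illustrative only: size is deliberately ignored; the profile is driven by
    the functionality-based characteristics listed in the report."""
    triggers = [
        ("algorithmic amplification", c.algorithmic_recommendation),
        ("AI-only moderation", c.ai_only_moderation),
        ("frictionless viral sharing", c.one_click_sharing),
        ("engagement-maximising design", c.engagement_maximising_design),
        ("unsupervised adult-child contact", c.unsupervised_adult_child_contact),
        ("surveillance advertising", c.surveillance_advertising),
        ("reach-enhancing features", c.reach_enhancing_features),
        ("end-to-end encryption limits detection", c.end_to_end_encrypted),
    ]
    notes = [label for label, present in triggers if present]
    # Hypothetical thresholds chosen only for the sake of the example.
    if len(notes) >= 4:
        level = RiskLevel.HIGH
    elif len(notes) >= 2:
        level = RiskLevel.MEDIUM
    else:
        level = RiskLevel.LOW
    return RiskProfile(name=f"profile-{level.name.lower()}", level=level, notes=notes)
```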

Enforcement against the safety duties

324.Ofcom has expressed concerns that they might struggle to build up enough momentum for enforcement action if they are compelled to keep going back to the start of the process whenever they identify an issue. Dame Melanie told the Committee:

“We think there is a slight risk that a service may not identify a risk, and then not be required under the safety duties to address that risk. Our concern is that, if we did then identify one of those problems, we would have to go all the way back to the risk assessments and get them to do it again before we were able to engage the safety duties for any kind of enforcement action.”581

325.The Bill should be amended to clarify that Ofcom is able to take enforcement action if it identifies a breach of the safety duties, without requiring a provider to redo a risk assessment.

Establishing minimum quality standards for risk assessments

326.Under Clause 62, the Regulator will prepare guidance for providers of regulated services to assist them in complying with their duties to carry out risk assessments. However, the Bill as drafted does not specify minimum quality standards for the providers’ risk assessments or require companies to take the risk profile produced by the Regulator into account when producing their own risk assessments. This was a source of widespread concern among witnesses, who argued that a lack of quality standards could give service providers an incentive to underplay, or simply not to look for, the risks that their services might cause.582 Ms Haugen told us:

“I believe that, if Facebook does not have standards for those risk assessments, it will give you a bad risk assessment, because Facebook has established over and over again that when asked for information it misleads the public. I do not have any expectation that it will give you a good risk assessment unless you articulate what a good one looks like.”583

327.Ofcom agreed that the Bill would benefit from stronger provisions relating to minimum quality standards for risk assessments. Dame Melanie told us: “the way the Bill is drawn on risk assessments is good in large part … we are broadly there, but with the gap of adequacy in standards not being quite strong enough at the moment”584 and “I certainly think it should be clearer in the Bill that risk assessments need to be of a certain standard”.585 Ofcom’s written evidence elaborated, stating that whilst the duties on providers to complete a risk assessment were clear, it would be harder for them to take enforcement action against a provider for deliberately or negligently understating risk.586

328.Ofcom also suggested that the concept of “reasonable foreseeability” should be introduced into the risk assessment, meaning that references to “risk” or “level of risk” should mean risks or levels of risk that are reasonably foreseeable.587 The idea that “companies should take reasonable steps to prevent reasonably foreseeable harms that occur through the operation of their services” was mooted by Carnegie UK Trust and supported by the NSPCC; a duty to address reasonably foreseeable harms was also proposed by the Antisemitism Policy Trust.588 Facebook challenged the use of the term “reasonably foreseeable” in the Bill and asked for it to be further defined.589 We note that reasonable foreseeability is both an objective standard and an established principle in law, and use it here to mean that a reasonable person could reasonably foresee that a given risk would occur on a service.

329.During oral evidence, Mr Philp stated:

“ … we have constructed this so that there is no wiggle room for platforms that may try to fudge their risk assessment in relation to children. Ofcom will do its own sector risk assessment first, and the companies’ own risk assessments will be measured against that … We will make sure that they cannot get themselves some sort of get out of jail free card by fudging or diluting their risk assessment. That will not be acceptable at all”.590

330.We were told there is no shortage of models for minimum standards of risk assessment. Regulators in the financial sector implement minimum standards, with both clearly laid out threshold conditions—minimum standards that regulated entities must meet at all times in order to be permitted to carry on the regulated activities in which they are engaged—and high-level fundamental rules that express the general objective of promoting the safety of regulated entities.591 The ICO sets minimum standards for the Data Protection regime, and the language of the “minimum standards” framework for a Data Protection Impact Assessment is well understood among regulated entities. Such a framework of minimum standards for risk assessments would be adaptable and allow for scalability, so that even the smallest service providers could design and implement one.

331.The Government told us:

“In line with the risk-based and proportionate approach to regulation, the Bill does not additionally seek to set a specific standard to determine what needs to be done to comply with [risk assessment] obligations. In this case companies will need to refer to the guidance about compliance with their assessment duties which Ofcom is required to publish under Clause 62, which should include risk profiles to establish the standards expected of them”.592

332.A service provider should not be able to underestimate the level of risk on its service without fear of sanction. If Ofcom suspects such a breach, it should have the power to investigate, and, if necessary, to take swift action. We are not convinced that the draft Bill as it currently stands achieves this.

333.Ofcom should be required to set binding minimum standards for the accuracy and completeness of risk assessments. Ofcom must be able to require a provider who returns a poor or incomplete risk assessment to redo that risk assessment. In response to the Online Safety Act, service providers should carry out risk assessments before new products and services are rolled out and during the design process of new features, and should keep those assessments up to date as the features are implemented.

334.The required content of service providers’ risk assessments should follow the risk profiles developed by Ofcom, which in turn should be based on the differences in the characteristics of the service, platform design, risk level, and the service’s business model and overall corporate aim. For example, a provider that does not have an engagement-based service would not need to address irrelevant risks associated with virality, whilst a site containing adult content would have to address the higher level of risks associated with children accessing the site.

335.The Bill should be amended to clarify that risk assessments should be directed to “reasonably foreseeable” risks, to allow Ofcom greater leeway to take enforcement action against a company that conducts an inadequate risk assessment.

336.Ofcom should look to the Data Protection Impact Assessment as it develops its own guidance on minimum standards for risk assessments of regulated services.
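As an illustration of how a DPIA-style “minimum standards” framework might translate into a completeness check on providers’ risk assessments, the sketch below uses hypothetical section names; any real standard would be for Ofcom to specify in its guidance under Clause 62.

```python
# Illustrative sketch only: a DPIA-style completeness check for a provider's risk
# assessment, expressed as a minimal machine-checkable checklist. The section names
# are assumptions and do not reflect any standard Ofcom has set.

REQUIRED_SECTIONS = {
    "service_description",              # functionalities, user base, business model
    "user_base_profile",                # including likelihood of child users
    "illegal_content_risks",            # reasonably foreseeable illegal content risks
    "harm_to_children_risks",
    "mitigations_and_safety_by_design",
    "residual_risk_and_review_schedule",
}


def completeness_gaps(submitted_assessment: dict) -> set[str]:
    """Return the required sections that are missing or left empty."""
    return {
        section
        for section in REQUIRED_SECTIONS
        if not submitted_assessment.get(section)
    }


if __name__ == "__main__":
    draft = {
        "service_description": "Video-sharing service with a recommender feed.",
        "illegal_content_risks": "CSEA, terrorism, fraud ...",
    }
    print("Missing sections:", sorted(completeness_gaps(draft)))
```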

Powers of audit

337.Algorithms can both increase and reduce the spread of content that creates a risk of harm. As Full Fact put it: “content moderation algorithms can do real good if they work well, and if they malfunction, they can cause real harm”, yet “the safety consequences of deploying a certain content moderation algorithm are not always obvious”.593 Throughout the inquiry, we heard from witnesses who were concerned that Ofcom’s powers of audit, particularly with regard to algorithms, did not go far enough, and who called, in the words of the Ada Lovelace Institute, for Ofcom to be given the power to “perform technical audits, assessments, and monitoring of platform behaviour, including algorithmic behaviour, whenever Ofcom deems appropriate.”594 As Mr Ahmed told the Committee: “[The Bill] needs independent auditing powers and the ability to go in and get other bodies, not just self-reporting. You cannot ask Facebook to mark their own homework. That is why we are where we are. Self-regulation is over. It has to be over.”595

338.There is something of a discrepancy between the ICO’s sense of what additional powers Ofcom needs “to be able to look under the bonnet” of the tech companies and what Ofcom feels the draft Bill already empowers it to do. When asked whether Ofcom’s auditing powers were as strong as those held by the ICO, Ms Denham stated she “would like to see stronger powers of compulsory audit” given to Ofcom by the Bill.596 However, Dame Melanie stated that she considers Ofcom’s power to ask for a skilled person’s report to be “the same sort of thing as, for example, the Information Commissioner’s Office is able to use to get under the bonnet when it needs to”.597 Ofcom clarified in writing that they consider they “have broadly similar investigative and information gathering powers under the draft Online Safety Bill to those ICO has to carry out audits.”598

Box 1: The audit powers of the ICO

1)The Information Commissioner wrote to us detailing the ICO’s audit powers, which are “either consensual or compulsory, and may be deployed in an ex-post and ex-ante manner”.

2)The Information Commissioner explained:

  • “From an ex-post perspective … the ICO can seek its own assurances that an enforcement notice has been complied with by directly auditing the current practices in an organisation.”
  • “The majority of the ICO’s audit activity however takes place where we have concerns about ongoing data processing … but the threshold for taking immediate enforcement action has not been reached. In such cases, we can undertake an audit … if our audit raises concerns then this may lead to a subsequent enforcement notice.”
  • “The deployment of audit powers as a check against compliance with a notice to improve data practices, is an important examination tool for the ICO; whilst proactive audits based on concerns also provide a level of consistent assurance for the public that improvements have been made by an organisation to the extent expected by the independent Regulator.”

3)Ofcom stated that they “consider that Ofcom would have broadly similar investigative and information gathering powers under the draft Online Safety Bill to those ICO has to carry out audits”.

Source: Information Commissioner’s Office (OSB0210) 4.7–4.10

339.In bringing forward the final Bill, we recommend the Government publish an assessment of the audit powers given to Ofcom and a comparison to those held by the Information Commissioner’s Office and the Financial Conduct Authority. Parliament should be reassured that the Bill will give Ofcom a suite of powers to match those of similar regulators. Within six months of the Act becoming law, Ofcom should report to Parliament on how it has used those powers.

340.We recommend that the largest and highest-risk providers should be placed under a statutory responsibility to commission annual, independent third-party audits of the effects of their algorithms, and of their risk assessments and transparency reports. Ofcom should be given the explicit power to review these audits and to undertake its own audit of these or any other regulated services when it considers this necessary. Ofcom should develop a framework for the effective regulation of algorithms based on the requirement for, and auditing of, risk assessments.
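As a purely illustrative sketch of what a technical audit of algorithmic behaviour might involve, the example below computes an “amplification ratio” for a flagged content category from synthetic data. The category labels and figures are assumptions; a real audit would rely on logged impressions obtained from the provider under Ofcom’s information powers.

```python
import random
from collections import Counter

# Illustrative sketch only: one way an auditor might test whether a recommender
# system amplifies a flagged content category relative to its share of the
# underlying content pool. All data here is synthetic.


def amplification_ratio(recommended_items, catalogue_items, category):
    """Share of `category` among recommendations divided by its share of the
    catalogue. A ratio well above 1.0 suggests the recommender amplifies it."""
    rec_share = Counter(recommended_items)[category] / len(recommended_items)
    cat_share = Counter(catalogue_items)[category] / len(catalogue_items)
    return rec_share / cat_share


if __name__ == "__main__":
    random.seed(0)
    # Synthetic catalogue: 5% of items carry a hypothetical "borderline" label.
    catalogue = random.choices(["benign", "borderline"], weights=[95, 5], k=10_000)
    # Synthetic recommendation log in which borderline content is over-served.
    recommendations = random.choices(["benign", "borderline"], weights=[85, 15], k=10_000)
    ratio = amplification_ratio(recommendations, catalogue, "borderline")
    print(f"Amplification ratio: {ratio:.2f}")
```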

Coregulation

341.In their “Response to the Consultation on the Online Harms White Paper”, the Government stated that it would “work with Ofcom to ensure that the regulator is able to work effectively with a range of organisations. This will be delivered through a range of means including co-designation powers, memorandums of understanding, forums, and networks” (our emphasis).599

342.The current draft Bill does not explicitly mention co-regulation (with other regulators) or co-designation (with third parties) powers or give any detail on how the Government or Ofcom intends to achieve this. Ofcom has stated that it has delegation arrangements in place in other situations through the Deregulation and Contracting Out Act 1994 and the Communications Act 2003 and has suggested that it could delegate functions in this manner without adding additional provisions on the face of the Bill.600 However, the IWF suggests, and we agree, that “it would have been beneficial to see more information published alongside the Bill about how such co-designation might be achieved or even a timeline on when such decisions will be taken”. This would have helped such bodies prepare.601

343.During oral evidence with financial service regulators, the Committee heard that there was no objection to a cooperation duty, and a great appetite, in the words of Mark Steward, Executive Director of Enforcement and Market Oversight for the FCA, for “allowing information and intelligence to be shared between all the regulators on a mutual basis”.602 It was felt that this was important to prevent “things falling between the cracks”. Michael Grenfell, Executive Director for Enforcement at the CMA, for example, suggested that Ofcom was unlikely to prioritise smaller consumer protection breaches, and so it might be prudent to “give parallel concurrent powers to other regulators too … to enforce those bits.”

344.On 1st July 2020, Ofcom, the ICO, and the CMA came together to form the Digital Regulation Cooperation Forum (DRCF), which “aims to strengthen existing collaboration and coordination between the three regulators by harnessing their collective expertise when data, privacy, competition, communications, and content interact.”603  The Financial Conduct Authority (FCA) joined the DRCF as a full member in April 2021 (having previously been an observer member). The Committee welcomes the foundation of the DRCF, which is, in the words of the Information Commissioner, “a pathfinder in the areas of safety online and data protection and competition [which is] setting international norms now”.604 Ofcom, however, told us: “we and our fellow regulators could do with a little more by way of legislative support to be able to work together. I am thinking of things such as information powers and requirements to consult each other.”605 In their recent report Digital regulation: joined-up and accountable, the House of Lords Communications and Digital Committee identified that for the DRCF to operate effectively, cooperation between its members needs to be extended and formalised. Regulators within the DRCF need to be subject to statutory requirements to cooperate and consult with one another and to share information. This would allow them to share their powers and would facilitate joint regulation. Placing the DRCF on a statutory footing with the power to resolve conflicts by directing its members would further support its functions.606

345.The ICO also holds the position that legislative support would aid cooperation between regulators. Ms Denham called for the regulators to be given “duties to respect the other [regulators’] regulatory objectives as well as information sharing between the regulators”.607 Ofcom similarly wrote to us to say: “we need to ensure that we are able to share information as needed, subject to appropriate safeguard, and that we are able to consult the ICO on privacy matters.”608 We note that Ofcom and the ICO already have a Memorandum of Understanding but agree that further clarity on the bounds of their respective remits and a greater emphasis on cooperation and sharing would provide clarity for both regulators and regulated services.

346.In taking on its responsibilities under the Bill, Ofcom will be working with a network of other regulators and third parties already working in the digital world. We recommend that the Bill provide a framework for how these bodies will work together including when and how they will share powers, take joint action, and conduct joint investigations.

347.We reiterate the recommendations by the House of Lords Communications and Digital Committee in their Digital Regulation report: that regulators in the Digital Regulation Cooperation Forum should be under a statutory requirement to cooperate and consult with one another, such that they must respect one another’s objectives, share information, share powers, take joint action, and conduct joint investigations; and that to further support coordination and cooperation between digital regulators including Ofcom, the Digital Regulation Cooperation Forum should be placed on a statutory footing with the power to resolve conflicts by directing its members.

348.The draft Bill does not give Ofcom co-designatory powers. Ofcom is confident that it will be able to co-designate through other means. The Government must ensure that Ofcom has the power to co-designate efficiently and effectively, and if it does not, this power should be established on the face of the Bill.

The regulation of child sexual exploitation and abuse material

349.Some concerns have been raised about whether Ofcom is the correct regulator to deal with CSEA material. Dr Dimitris Xenos, Lecturer in Law at Cardiff Metropolitan University, argued that Ofcom is a “soft moderator”, and that “some types of harm, especially those relating to extreme and child pornography, torture and serious violence should be organised under a different regulatory framework with different legal obligations (criminal liability) and more robust monitoring bodies, such as the CPS.”609 In turn, the CPS “supports Ofcom as the chosen appointed regulator for the draft Online Safety Bill” but is concerned that the Bill “lacks detail about how Ofcom will interact with law enforcement and the CPS as a regulator of illegal content”, and recommends that “Ofcom should establish internal mechanisms for reporting any indecent or illegal material they receive directly to law enforcement and/or the IWF.”610 The IWF made a persuasive case that they should be co-designated by Ofcom to regulate CSEA content, an argument supported by the CPS and by TalkTalk.611 Ofcom mentioned the need to have a “strong partnership” with “third-sector organisations like the IWF” but no formal arrangement has been made, reflecting the general lack of clarity on co-designation discussed above.612

350.The IWF specialises in tackling online CSEA material hosted anywhere in the world and non-photographic CSEA images hosted in the UK. In 2020, the UK’s Independent Inquiry into Child Sexual Abuse described it as a “genuine success story” which “deserves to be publicly acknowledged as a vital part of how and why comparatively little child sexual abuse is hosted in the UK”.613 Many UK Internet Service Providers are members of the IWF or use its watch list to block CSEA content via third parties.614

351.The IWF operates under a Memorandum of Understanding between the CPS and the National Police Chiefs’ Council, which “ensures immunity from prosecution for our analysts and recognises our role as ‘the appropriate authority’ for the issuing of Takedown Notices in the UK”.615 The CPS was concerned that Ofcom might “receive unsolicited illegal material from the public through their public complaints procedures, and some of this material could relate to CSEA material. Receiving this material would constitute an offence under section 1 of the Protection of Children Act 1978”.616

352.During the course of its duties, Ofcom will be required to investigate companies for a range of breaches, some of which will relate to suspected or known child sexual exploitation and abuse material. As child sexual exploitation and abuse investigations lie so far outside Ofcom’s normal duties, we expect Ofcom to work closely with experts like the Internet Watch Foundation: to develop and update the child sexual exploitation and abuse Code of Practice; to monitor providers to ensure compliance with that Code; and during investigations relating to child sexual exploitation and abuse content.

353.Ofcom may receive unsolicited child sexual exploitation and abuse material which would constitute an offence under Section 1 of the Protection of Children Act 1978. The Bill should be amended to provide Ofcom with a specific defence in law to allow it to perform its duties in this area without inadvertently committing an offence.

Codes of Practice

354.During oral evidence, the Secretary of State was adamant that this Bill “has to be watertight. That includes the codes of practice and the terms and conditions.”617 The Bill requires Ofcom to prepare Codes of Practice for providers of regulated services describing recommended steps for the purposes of compliance with the duties relating to terrorism content and CSEA content, and with the other relevant duties. Currently, whilst safety duties under the Bill are binding, Codes of Practice are not. A service provider can demonstrate compliance with a safety duty by taking steps set out in a Code of Practice, or in another way, which would be assessed by Ofcom having regard to the online safety objectives and protections for freedom of speech and privacy.618 As such, there are multiple routes that service providers can take to fulfil their safety duties.

355.Ofcom describe this approach as “leaning towards flexibility” but acknowledge that it “will make it harder for Ofcom to judge compliance with safety duties and ultimately to enforce against any breaches, particularly if the safety duties themselves are specified at a high level.”619 When asked whether she thought Ofcom’s Codes of Practice should be binding, Dame Melanie replied:

“They are statutory codes, but the way platforms are able to discharge their duties, particularly their safety duties, means that they can choose another route. … At some point it is right that there is flexibility for services to determine how they address the safety duties. At the same time, that makes it potentially harder for us to prove a breach against those duties, because it leaves open so many different options through which they could be addressed.”620

356.Richard Wronka, Director for Online Harms at Ofcom, clarified that Ofcom can “make a requirement on services to take specific steps where we have identified that they have breached their safety duties”, but they cannot set out “binding requirements before the event through codes of practice.”621 Reset advocated for “minimum standards for compliance with the safety duties, perhaps through binding codes of practice.”622

357.During oral evidence, there was a lack of clarity about whether amendments to the Codes of Practice would be subject to affirmative or negative parliamentary procedure.623 Amendments to the Codes of Practice require only the negative procedure, which Carnegie UK Trust has argued “gives the executive too much power on matters of free expression”; it advocated instead for giving Parliament “more influence at the outset and more flexibility for the Regulator downstream.”624

Box 2: Indicative list of Codes of Practice

  • Terrorism (interim code should be updated)
  • CSEA (interim code should be updated)
  • Regulated content and activity for adults
  • Child online safety
  • Safety by design
  • Age assurance
  • Freedom of speech (including content in the public interest)
  • Moderation, reporting, complaints, and redress
  • Accessibility and consistency of terms and conditions (including Online Safety Policies)
  • Transparency reporting
  • Digital literacy
  • Risk assessment
  • Any other Codes of Practice the Regulator deems necessary

358.The Bill should be amended to make clear that Codes of Practice should be binding on providers. Any flexibility should be entirely in the hands of and at the discretion of the Regulator, which should have the power to set minimum standards expected of providers. They should be subject to affirmative procedure in all cases.

359.Ofcom should start working on Codes of Practice immediately, so they are ready for enforcement as soon as the Bill becomes law. A provisional list of Codes of Practice, including, but not necessarily limited to, those listed in Box 2 above should be included on the face of the Bill. Some of the Codes should be delegated to co-designated bodies with relevant expertise, which would allow work on multiple Codes to happen simultaneously and thus the entire endeavour to be completed more quickly. Once the Codes of Practice are completed, they should be published.

Criminal liability

360.The draft Bill provides for criminal liability for senior managers who fail to comply with the information notice provisions. This provision can only come into force after the two-year review of the legislation required under Clause 115.625 Throughout this inquiry, there has been debate about when criminal liability should come into force. The CCDH said that a two-year delay “would be a grave mistake” as “tech executives have repeatedly shown contempt for elected officials and regulators”.626 When she appeared before the Committee, the Secretary of State told us: “I say to the platforms, ‘Take note now. It will not be two years. We are looking at truncating that to a very much shorter timeframe. … I am looking at three to six months for criminal liability.’”627 We saw recently how Facebook criticised the CMA in respect of its £50.5 million fine for “consciously refusing” to supply all the required information under an Initial Enforcement Order.628 Facebook described the fine as “grossly unreasonable and disproportionate” and questioned the CMA’s authority to enforce it.629

361.We welcome the introduction of criminal sanctions as a demonstration of the seriousness with which the Government is taking the matter of holding tech executives to account. However, as it stands, a named senior manager can only be held liable for the following offences: failure to comply with an information notice; deliberately or recklessly providing or publishing false information; providing or publishing encrypted information with the intention of preventing the Regulator from understanding such information. As the NSPCC has pointed out, these criminal sanctions “would not apply in respect of actual product or safety decisions.”630 Ms Pelham told us she considered criminal responsibility for failure to ensure online safety was the single most impactful thing that the Bill could do.631

362.Governance structures were a key issue that came up in our evidence. Ms Haugen told us: “I think there is a real problem with the left hand not speaking to the right hand at Facebook”, describing it as “a world that is too flat, where no one is really responsible”. She put it plainly: “the organisational choices of Facebook are introducing systemic risk.”632

363.Our sessions with the major service providers did little to allay our concerns about their governance structures. Antigone Davis, the Global Head of Safety at Facebook, does not report to Facebook’s Audit and Risk Oversight Committee and could not tell us whether papers had been submitted to that Committee detailing the online harms discussed at our session, nor who would be submitting the risk assessment when the Bill becomes law.633 In a subsequent letter, Facebook told us that the Committee reviews Community Standards and Safety Issues “at least annually” and is generally briefed “twice a year”.634 Leslie Miller, Vice President of Government Affairs and Public Policy at YouTube, assured us that the YouTube risk assessment “will certainly have a review by executives” but could not be more specific; Markham C. Erickson, who holds the same position at Google, could only say “it will be reviewed at the appropriate level” at Google.635 On the other hand, Twitter assured the Committee that they produce a range of risk assessments, including a corporate governance risk document which their board sees “several times a year”.636

364.The case against senior management liability was presented by the Open Rights Group, who argued that it would dissuade people from taking up jobs where they might be held criminally liable, that the offence targets a small handful of “specific and high-profile individuals, all of whom are American”, that it will “create a culture of fear which results in a phenomenon known as ‘collateral censorship’” where “vast swathes” of “perfectly harmless” content is taken down, and that it “sets a very poor global example”.637 The Open Rights Group argued that personal criminal liability for senior managers and company directors will “provide inspiration to authoritarian nations who look up to the UK as an example to follow: if the UK arrests company employees for the political speech carried on their platforms, why shouldn’t they?”638 Any enforcement process must be compatible with due process commitments under both the Human Rights Act and the common law.

365.On the other hand, Ms Pelham told us “the key to securing good regulation is the provisions on personal responsibility for senior managers in the social media companies. I have been in a position myself where I was personally responsible and, my goodness, it focuses your mind.”639

366.The Government confirmed that Ofcom can only take action under the draft Bill against senior managers on failures to supply information. They told us that they have “targeted sanctions in this area as it is vital that Ofcom gets the information it needs to regulate the sector.” They expect criminal sanctions will “instil strong engagement and cooperation with the regime among tech executives, and are satisfied that Ofcom’s suite of enforcement powers will push strong compliance across the board.”640

367.The Bill should require that companies’ risk assessments be reported at Board level, to ensure that senior management know and can be held accountable for the risks present on the service, and the actions being taken to mitigate those risks.

368.We recommend that a senior manager at board level or reporting to the board should be designated the “Safety Controller” and made liable for a new offence: the failure to comply with their obligations as regulated service providers when there is clear evidence of repeated and systemic failings that result in a significant risk of serious harm to users. We believe that this would be a proportionate last resort for the Regulator. Like any offence, it should only be initiated and provable at the end of an exhaustive legal process.

369.The Committee welcomes the Secretary of State’s commitment to introduce criminal liability within three to six months of Royal Assent and strongly recommends that criminal sanctions for failures to comply with information notices are introduced within three months of Royal Assent.

Secretary of State powers

370.When she appeared before the Committee, the Secretary of State described her powers under the draft Bill as “novel”.641 Reset described them as “unprecedented, not only in the UK but also as compared to other online safety regulations” and said “they undermine the independence of the UK’s regime and cause unnecessary uncertainty for companies in scope”.642

Box 3: Criticism of the powers of the Secretary of State

  • Carnegie UK Trust states that the draft Bill “allows the Secretary of State to interfere with Ofcom’s independence on content matters in four [principal] areas”:
  • The draft Bill “gives the Secretary of State relatively unconstrained powers to:
    • Set strategic priorities which OFCOM must take into account (109 and 57)
    • Set priority content in relation to each of the safety duties (41 and 47)
    • Direct OFCOM to make amendments to their codes to reflect Government policy (33)
    • Give guidance to OFCOM on the exercise of their functions and powers (113).”

Source: https://www.carnegieuktrust.org.uk/blog-posts/secretary-of-states-powers-and-the-draft-online-safety-bill/

371.Clause 33(1) of the draft Bill empowers the Secretary of State to direct Ofcom to modify a code of practice submitted under Clause 32(1) where the Secretary of State believes that modifications are required (a) to ensure that the code of practice reflects government policy or (b) in respect of CSEA and/or terrorism content, for reasons of national security or public safety.

372.We heard from many witnesses who were concerned that the proposed powers of the Secretary of State to modify a code of practice so that it reflects government policy may undermine Ofcom’s independence.643 The IWF summarised the issue: “The possibility of too much central government constraint on Ofcom could undermine Ofcom’s independence as a regulator and its ability to draft, implement and enforce mandatory Codes of Practice in a politically neutral way.”644 Prof Wilson told the Committee that it would be “better in the long run to grant Ofcom more independence and authority than the Bill does, because that will give the exercise of its regulatory powers more legitimacy.”645

373.There is a case for retaining the Secretary of State’s power to direct Ofcom in matters relating to CSEA and/or terrorism as far as they pertain to national security and public safety. As Ofcom put it: “there will clearly be some issues where the Government has access to expertise or information that the regulator does not, such as national security.”646 However, these powers should not be exercised without oversight or scrutiny. We note that the Secretary of State’s power to direct Ofcom to amend codes of practice so that they reflect government policy is likely to be incompatible with best practice in other regulatory fields, for example the Council of Europe’s Regulatory Best Practice Code.647

374.Clause 113 states that the Secretary of State may give guidance to Ofcom about the exercise of their functions under this Act; under section 1(3) of the Communications Act to carry out research in connection with online safety matters or to arrange for others to carry out research; and about the exercise of their media literacy functions under section 11 of the Communications Act. Dr Damian Tambini, Distinguished Policy Fellow and Associate Professor at LSE, described this power as “closer to authoritarian than to liberal democratic standards even with the safeguards”.648

375.The Government told the Committee that: “the Secretary of State’s powers under Clause 109 (to publish a statement of strategic priorities in relation to online safety matters) [cater] for long term changes in the digital and regulatory landscape” and that “a similar power already exists (under section 2A of the Communications Act 2003) for telecommunications, the management of the radio spectrum, and postal services”.649 Furthermore, “it is not the Government’s intention that such a statement will be in place, or be needed, at the outset of the regime”.650 They said “these powers are part of the overall approach of balancing the need for regulatory independence with appropriate roles for parliament and government.”651

376.The power for the Secretary of State to exempt services from regulation should be clarified to ensure that it does not apply to individual services.

377.The powers for the Secretary of State to a) modify Codes of Practice to reflect Government policy and b) give guidance to Ofcom allow too much interference with Ofcom’s independence and should be removed.

378.Exercise of the Secretary of State’s powers relating to national security and public safety in respect of terrorism and child sexual exploitation and abuse content should be subject to review by the Joint Committee we propose later in this report.

Media Literacy

Minimum standards for media literacy initiatives

379.The draft Bill places a duty on Ofcom to improve the media literacy of the public, building on the duty given to Ofcom in the Communications Act 2003 to promote media literacy. This duty largely involves improving awareness around how technology works and how to protect oneself online. We recognise that improved media literacy plays an important role in keeping people safe online, and as such welcome the publication of the Government’s Online Media Literacy Strategy, which is designed to complement the media literacy duties in the Online Safety Bill by exploring how, in practice, the duty can be met.652 We also welcome that the draft Bill expands on the Communications Act 2003 to give greater detail on Ofcom’s duties in relation to media literacy.

380.We heard from 5Rights that, under the draft Bill, Ofcom does not have to set minimum standards for what an initiative aimed at improving media literacy must include.653 The content is left to the initiative provider. The Government’s Online Media Literacy Strategy encourages a landscape where initiative providers can be anyone from a civil society organisation, to a news provider, to a service provider of an online platform.654 We heard the concern that leaving these organisations to produce media literacy initiatives without oversight and guidance from Ofcom could allow them to distribute an “educational” resource that is biased, self-serving or factually incorrect. 5Rights gave the example of Google and Facebook, who both offer educational resources to schools around the world “but teach children to accept certain service design elements as ‘unavoidable’ risks when in fact they could and should be tackled at a design level by those very same companies.”655 Mr Steyer told us that: “The idea that the industry will do high-quality media literacy or digital literacy and citizenship is crazy.”656

381.If the Government wishes to improve the UK’s media literacy to reduce online harms, there must be provisions in the Bill to ensure media literacy initiatives are of a high standard. The Bill should empower Ofcom to set minimum standards for media literacy initiatives that both guide providers and ensure the information they are disseminating aligns with the goal of reducing online harm.

382.We recommend that Ofcom is made responsible for setting minimum standards for media literacy initiatives. Clause 103(4) should be amended to include “(d) about minimum standards that media literacy initiatives must meet.”

Ofcom’s duty to improve media literacy

383.Under the draft Bill, Ofcom alone is given a duty to improve the media literacy of members of the public, though this can be undertaken through organisations other than itself. The LSE questioned whether, given that the “scope of the regulator’s role and power … [is] to determine business practice”, it is “ideally placed to take responsibility for public education in relation to media literacy.”657 We received evidence that organisations other than Ofcom that play an important role in media literacy should be given a duty to improve the media literacy of certain groups. For example, the APPG Coalition suggested that “given the focus on children in the draft Bill, there is surprisingly little insight into the role of teachers, Ofsted, and the Department of Education in developing and delivering a media literacy programme in schools.”658 Mr Steyer made a similar observation and thought that media literacy training should “[belong] in the Education Department.”659 The House of Lords Communications and Digital Committee has recommended that Ofcom should be a co-ordinating body, bringing together the work of Government, civil society, the private sector and academia, and has set out detailed recommendations on what a cross-government digital literacy programme might look like.660

384.We heard that service providers might also play a useful role in improving media literacy, given that they have direct access to and engagement with people who use their services. Carnegie UK Trust argued that media literacy should be built into risk assessments as a mitigation measure, which would compel service providers to ensure it is delivered to users.661

385.We recommend that the Bill reflects that media literacy should be subject to a “whole of Government” approach, involving current and future initiatives of the Department for Education in relation to the school curriculum, as well as Ofcom and service providers. We have heard throughout this inquiry about the real dangers that some online content and activity poses to children. Ofsted already assesses how schools manage online safety as part of their safeguarding policies. We recommend that Ofsted, in conjunction with Ofcom, update the school inspection framework to extend the safeguarding duties of schools to include making reasonable efforts to educate children to be safe online.

386.Ofcom should require that media literacy is built into risk assessments as a mitigation measure and require service providers to provide evidence of taking this mitigation measure where relevant.

Media literacy and a focus on individual rather than societal harms

387.The draft Bill gives Ofcom the duty to improve the media literacy of “members of the public”. This is a change in wording from the Communications Act 2003, where the duty was to improve “public” awareness of the media. This seems to reflect the draft Bill’s focus on individual rather than societal harms. For example, the definition of media literacy involves an understanding of how material is published and how accurate it is, how personal information may be protected, and how someone might control what material they receive.662 Prof Edwards said: “If media literacy is deployed only as a mode of self-protection from exploitation or harm, its potential for supporting our deliberative and democratic capacities could be severely weakened.”663 Glitch told us that media literacy needed to involve “digital citizenship”, which “is respecting and championing the human rights of all individuals online”, to reduce cases of online abuse.664

388.We recommend that Clause 103(11) is amended to state that Ofcom’s media literacy duties relate to “the public” rather than “members of the public”, and that the definition of media literacy is updated to incorporate learning about being a good digital citizen and about platform design, data collection and the business models and operation of digital services more broadly.

Use of technology warning notices

389.The draft Bill does not require service providers to use technology to identify and remove CSEA or terrorism content. However, under Clause 63, Ofcom can issue a use of technology warning notice if they have reasonable grounds to believe that a service provider is failing to comply with their safety duties relating to CSEA and/or terrorism content. The purpose of the warning notice is to alert a service provider that Ofcom is considering requiring it to use the technology specified in the notice to identify and remove terrorist content on public channels and/or CSEA content on private and/or public channels.

390.The Bill allows Ofcom to compel a service to use technology to detect terrorism content on public channels and CSEA content on public and private communication channels, and to “swiftly take down that content” (Clause 64(4)(b)). Some children’s advocacy groups have welcomed this clause.665 We heard concerns from others that “these steps would significantly undermine individual privacy and be incompatible with end-to-end encrypted services.”666 Facebook pointed out that the safeguards around these provisions were limited when compared to the powers established in other regimes: there is no judicial oversight, no ability to appeal, and no “explicit requirement to consider the privacy impact of any use of technology notice, including in the public interest in the integrity and security of the underlying services.”667

391.The ability to require the use of automated moderation technology has received considerable criticism, not least because of the inability of automated technology to understand images in context.668 CSEA content is always illegal, but images or videos used by extremists in one context may be used for educational, journalistic or other legitimate purposes elsewhere.669 There were also concerns about the inability of mandated technology to keep up with technological advances (e.g. livestreaming), and the risk that it might “lock providers into using tools that have been ‘gamed’ by bad actors”.670

392.Another issue arising in this area is how Ofcom can gather enough evidence to justify mandating the use of technology. There are challenges in gathering evidence from private channels and from end-to-end encrypted channels, and where a service is not already using CSEA detection technology or is using it ineffectively. Short of responding to a user report, there is currently no way for a service provider to detect CSEA content on an end-to-end encrypted channel without compromising encryption.671 Ofcom stated:

“the bar for Ofcom to be able to require the use of these technologies should be high. But if we are given these powers, we will need to be able to use them effectively. In this regard, we need to avoid a catch-22 whereby it is only through the deployment of these technologies that we are able to generate a threshold of evidence that justifies our requiring their use.”672

393.In the current drafting, Ofcom may issue a use of technology warning notice based on evidence demonstrating “the prevalence” and “the persistent prevalence” of terrorism and/or CSEA content on a service. This evidence could come from independent investigations, news reports, civil society, whistle-blowing, or academic studies. The Children’s Charities Coalition on Internet Safety called this wording “ambiguous”; International Justice Mission said that “any amount of CSEA content is unacceptable” and that if “a regulated service is not actively trying to detect and prevent CSEA, there should still be consequences if this Bill is to truly hold services accountable”.673 They questioned whether only “prevalent” and “persistent” CSEA content should mark the threshold for triggering enforcement powers, and suggested instead that there should be multiple thresholds for triggering Ofcom’s enforcement powers.674

394.The highest-risk services, as assessed by Ofcom, should have to report quarterly data to Ofcom on the results of the tools, rules, and systems they have deployed to prevent and remove child sexual exploitation and abuse content (e.g. number and rates of illegal images blocked at upload stage, number and rates of abusive livestreams terminated, number and rates of first- and second-generation images and videos detected and removed).

395.Ofcom should have the power to request research and independent evaluation into services where it believes the risk factors for child sexual exploitation and abuse are high.

396.Ofcom should move towards a risk factors approach to the regulation of child sexual exploitation and abuse material. It should be able to issue a Use of Technology notice if it believes that there is a serious risk of harm from child sexual exploitation and abuse or terrorism content and that not enough is being done by a service to mitigate those risks. The Bill should be amended to clarify that Ofcom is able to consider a wider range of risk factors when deciding whether to issue a Use of Technology notice or take enforcement action. Risk factors should include:

a)The prevalence or the persistent prevalence of child sexual exploitation and abuse material on a service, or distributed by a service;

b)A service’s failure to provide and maintain adequate tools, rules, and systems to proactively prevent the spread of child sexual exploitation and abuse content, and to provide information on those tools, rules, and systems to Ofcom when requested;

c)A service’s failure to provide adequate data to Ofcom on the results of those tools, rules, and systems (e.g. number and rates of illegal images blocked at upload stage, number and rates of abusive livestreams terminated, number and rates of first- and second-generation images and videos detected and removed);

d)The nature of a service and its functionalities;

e)The user base of a service;

f)The risk of harm to UK individuals (and the severity of that harm) if the relevant technology is not used by the service;

g)The degree of interference with users’ rights to freedom of expression and privacy posed by the use of the relevant technology; and

h)The safety by design mechanisms that have been implemented.


543 Department for Digital, Culture, Media and Sport and The Home Office, Online Harms Consultation: Full Government Response to the consultation, CP 354, December 2020, p 60: https://www.gov.uk/government/consultations/online-harms-white-paper/outcome/online-harms-white-paper-full-government-response [accessed 17 November 2021]

544 Ofcom, ‘Ofcom to regulate harmful content online’: https://www.ofcom.org.uk/about-ofcom/latest/features-and-news/ofcom-to-regulate-harmful-content-online [accessed 9 December 2021]

545 See, for example, written evidence from: Crown Prosecution Service (OSB0179), point 20; techUK (OSB0098), 5.1

546 Written evidence from: British & Irish Law, Education & Technology Association (OSB0073), 7.1–7.2; Dr Kim Barker (Senior Lecturer in Law at Open University) and Dr Olga Jurasz (Senior Lecturer in Law at Open University) (OSB0071), 11.1–11.4; Dr Edina Harbinja (Senior Lecturer in Law at Aston University, Aston Law School) (OSB0145), para 28–31; Dr Dimitris Xenos (Lecturer in Law at Cardiff Metropolitan University) (OSB0157)

547 Q 86; Written evidence from Internet Watch Foundation (IWF) (OSB0110), 7.1

548 Q 14; Written evidence from Center for Countering Digital Hate (OSB0009), p 9.

549 Hunton Andrews Kurth, ‘Facebook reaches settlement with ICO over £500,000 data protection fine’: https://www.huntonprivacyblog.com/2019/11/05/uk-ico-imposes-maximum-fine-on-facebook-for-compromising-user-data/ [accessed 1 December 2021]

550 Competition and Markets Authority, ‘CMA fines Facebook over enforcement order breach’: https://www.gov.uk/government/news/cma-fines-facebook-over-enforcement-order-breach [accessed 1 December 2021]

551 Competition and Markets Authority, ‘CMA fines Facebook over enforcement order breach’: https://www.gov.uk/government/news/cma-fines-facebook-over-enforcement-order-breach [accessed 1 December 2021]

552 Competition and Markets Authority, ‘CMA directs Facebook to sell Giphy’: https://www.gov.uk/government/news/cma-directs-facebook-to-sell-giphy [accessed 1 December 2021]

557 Written evidence from 5Rights Foundation (OSB0096), point 5; see also Match Group (OSB0053), 56c.

558 Written evidence from UK Interactive Entertainment (OSB0080), para 24.

559 Written evidence from Match Group (OSB0053). See also written evidence from Microsoft (OSB0076)

560 Q 77. See also Q 271.

561 Written evidence from Coadec (OSB0029), XXXII.

562 Written evidence from: Coadec (OSB0029), XXVIII; VI; see also: Snap Inc. (OSB0012), pp 6–7; British & Irish Law, Education & Technology Association (OSB0073), 6.3.1.

563 Written evidence from Coadec (OSB0029), XXXI

564 Written evidence from HOPE not hate (OSB0048), 6.17.

566 See written evidence from: LGBT Foundation (OSB0191); ibid.; Legal to Say, Legal to Type (OSB0049), p 3; Reddit, Inc. (OSB0058), p 7. We heard from Mumsnet that their pages are repeatedly blacklisted as ‘obscene’ by algorithms used by programmatic advertising agencies “because our users post about breasts (in the context of breastfeeding, or in discussions of clothing shapes) and vulvas and vaginas (in the context of discussions of their health and wellbeing). Trained on databases of largely male speech, algorithms are simply unable to interpret non-pornographic discussions of female anatomy”. Mumsnet (OSB0031).

567 Q 44; Written evidence from Legal to Say, Legal to Type (OSB0049), p 3

568 Written evidence from Board of Deputies of British Jews (OSB0043), p 3. See also Q 154.

569 Written evidence from RSA (Royal Society for the Encouragement of Arts, Manufactures and Commerce) (OSB0070), 7.III.

570 Written evidence from Center for Countering Digital Hate (OSB0009), p 1.

571 Written evidence from Global Action Plan, on behalf of the End Surveillance Advertising to Kids coalition, The Mission and Public Affairs Council of the Church of England, Global Witness, New Economics Foundation, Foxglove Legal, Fairplay, 5Rights Foundation, Andrew Simms, New Weather Institute, Dr Elly Hanson, Avaaz (OSB0150), p 2.

572 Written evidence from: Parent Zone (OSB0124), p 2; Yoti (OSB0130), p 8; Reset (OSB0138), appendix 1; The Arise Foundation (OSB0198), p 5; Barnardo’s (OSB0017), p 1.

573 Written evidence from NSPCC (OSB0109), p 4.

574 Written evidence from Global Action Plan (OSB0027)

575 Written evidence from Global Action Plan, on behalf of the End Surveillance Advertising to Kids coalition, The Mission and Public Affairs Council of the Church of England, Global Witness, New Economics Foundation, Foxglove Legal, Fairplay, 5Rights Foundation, Andrew Simms, New Weather Institute, Dr Elly Hanson, Avaaz (OSB0150)

576 Written evidence from Global Action Plan, on behalf of the End Surveillance Advertising to Kids coalition, The Mission and Public Affairs Council of the Church of England, Global Witness, New Economics Foundation, Foxglove Legal, Fairplay, 5Rights Foundation, Andrew Simms, New Weather Institute, Dr Elly Hanson, Avaaz (OSB0150)

577 Written evidence from Global Action Plan, on behalf of the End Surveillance Advertising to Kids coalition, The Mission and Public Affairs Council of the Church of England, Global Witness, New Economics Foundation, Foxglove Legal, Fairplay, 5Rights Foundation, Andrew Simms, New Weather Institute, Dr Elly Hanson, Avaaz (OSB0150)

578 Written evidence from Global Action Plan (OSB0027)

579 Written evidence from Global Action Plan, on behalf of the End Surveillance Advertising to Kids coalition, The Mission and Public Affairs Council of the Church of England, Global Witness, New Economics Foundation, Foxglove Legal, Fairplay, 5Rights Foundation, Andrew Simms, New Weather Institute, Dr Elly Hanson, Avaaz (OSB0150)

580 Research published on 9 December 2021. Researchers with imec-DistriNet at KU Leuven in Belgium and New York University’s Cybersecurity for Democracy conducted a comprehensive audit of Facebook’s political advertisement detection and policy enforcement. The researchers examined 33.8 million Facebook ads that ran between July 2020 and February 2021—a timeframe that included elections in both the U.S. and Brazil. This is the first known study to quantify the performance of Facebook’s political ad policy enforcement system at a large and representative scale.

582 Written evidence from: Centenary Action Group, Glitch, Antisemitism Policy Trust, Stonewall, Women’s Aid, Compassion in Politics, End Violence Against Women Coalition, Imkaan, Inclusion London, The Traveller Movement (OSB0047), p 9; Reset (OSB0138); See also Compassion in Politics (OSB0050), p 1; Carnegie UK (OSB0095), p 7: “Without regulation, internal risk assessments would then underplay the probability of harm, lack rigour or be quashed at a senior level.”; and NSPCC (OSB0109), p 3: “the legislation introduces a risk of moral hazard for online services to overlook the more risk-inducing or complex aspects of their services.”

586 Written evidence from Ofcom (OSB0223)

587 Written evidence from Ofcom (OSB0223)

588 Written evidence from: Carnegie UK (OSB0095), p 1; NSPCC (OSB0109), p 2; Antisemitism Policy Trust (OSB0005), p 2. See also Mr John Carr (Secretary at Children’s Charities’ Coalition on Internet Safety) (OSB0167), para 29.

589 Written evidence from Facebook (OSB0147), p 14. See also written evidence from Dame Margaret Hodge (Member of Parliament for Barking and Dagenham at House of Commons) (OSB0201), pp 13–14.

591 Written evidence from: NSPCC (OSB0109); Antisemitism Policy Trust (OSB0005), point 4.

592 Written evidence from Department of Digital, Culture, Media & Sport (OSB0243), 20

593 Written evidence from Full Fact (OSB0056), p 5

594 Written evidence from Ada Lovelace Institute (OSB0101). See also written evidence from: Reset (OSB0203); Center for Countering Digital Hate (OSB0009); Common Sense (OSB0018), p 2; Full Fact (OSB0056), p 1; APPG Coalition (OSB0202), p 3; Glitch (OSB0097), p 9; Demos (OSB0159), p 4; LSE Department of Media and Communications—Anonymity & Age Verification Roundtable (OSB0236), p 2

598 Written evidence from Ofcom (OSB0223)

599 Department for Digital, Culture, Media and Sport and The Home Office, Online Harms Consultation: Full Government Response to the consultation, CP 354, December 2020, p 60: https://www.gov.uk/government/consultations/online-harms-white-paper/outcome/online-harms-white-paper-full-government-response [accessed 17 November 2021]

600 Written evidence from Ofcom (OSB0288)

601 Written evidence from Internet Watch Foundation (IWF) (OSB0110), 6.3, 6.4. See also written evidence from TalkTalk (OSB0200), pp 8–9.

603 Information Commissioner’s Office, ‘UK regulators join forces to ensure online services work well for consumers and businesses’: https://ico.org.uk/about-the-ico/news-and-events/news-and-blogs/2020/07/uk-regulators-join-forces-to-ensure-online-services-work-well-for-consumers-and-businesses/ [accessed 1 December 2021]

606 Communications and Digital Committee, Digital regulation: joined-up and accountable (3rd Report, Session 2021–22, HL Paper 126)

608 Written evidence from Ofcom (OSB0223)

609 Written evidence from Dr Dimitris Xenos (Lecturer in Law at Cardiff Metropolitan University) (OSB0157), pp 4–5

610 Written evidence from Crown Prosecution Service (OSB0179), para 20, 23, 24.

611 Written evidence from: Internet Watch Foundation (IWF) (OSB0110), para 1.7; TalkTalk (OSB0200), p 7; Crown Prosecution Service (OSB0179), para 23, 24.

612 Written evidence from Ofcom (OSB0021)

613 Written evidence from Internet Watch Foundation (IWF) (OSB0110), para 1.4

614 Written evidence from ISPA (The Internet Service Provider Association) (OSB0059), p 1.

615 Written evidence from Internet Watch Foundation (IWF) (OSB0110), para 3.2–3.3

616 Written evidence from Crown Prosecution Service (OSB0179), para 23.

618 Draft Online Safety Bill, CP 405, May 2021, Clause 36

619 Written evidence from Ofcom (OSB0021)

622 Written evidence from Reset (OSB0138)

624 Written evidence from Carnegie UK (OSB0095)

625 The offence is in Draft Online Safety Bill, CP 405, May 2021, Clause 73; the requirement for commencement is in Clause 140(4b). Written evidence from Department of Digital, Culture, Media & Sport (OSB0243), Q 19

626 Written evidence from Center for Countering Digital Hate (OSB0009)

628 Competition and Markets Authority, CMA fines Facebook over enforcement order breach (October 2021): https://www.gov.uk/government/news/cma-fines-facebook-over-enforcement-order-breach [accessed 1 December 2021]

629 ‘Facebook criticises UK competition watchdog’s concern over Giphy takeover’, Evening Standard (8 September 2021): https://www.standard.co.uk/news/uk/facebook-giphy-b954378.html [accessed 17 November 2021]

630 Written evidence from NSPCC (OSB0109)

631 Q 62; see also: Antisemitism Policy Trust (OSB0005); APPG Coalition (OSB0202); Center for Countering Digital Hate (OSB0009); Refuge (OSB0084); NSPCC (OSB0109)

634 Written evidence from Meta (Facebook) (OSB0224)

637 Written evidence from Open Rights Group (OSB0118). See also written evidence from Internet Association (OSB0132) and Big Brother Watch (OSB0136)

638 Written evidence from Open Rights Group (OSB0118). See also written evidence from Big Brother Watch (OSB0136)

640 Written evidence from Department of Digital, Culture, Media & Sport (OSB0243); Q 19

642 Written evidence from Reset (OSB0203). See also written evidence from Coadec (OSB0029).

643 See, for example, written evidence from: Carnegie UK (OSB0095); Professor Damian Tambini (Distinguished Policy Fellow and Associate Professor at London School of Economics and Political Science) (OSB0066); Barbora Bukovská (Senior Director, Law and Policy, Article 19) (Q 138); TalkTalk (OSB0200); LSE Department of Media and Communications (OSB0001); Snap Inc. (OSB0012); Full Fact (OSB0056); ISPA (The Internet Service Provider Association) (OSB0059); Dr Martin Moore (Senior Lecturer at King’s College London) (OSB0063); LSE Department of Media and Communications—Freedom of Expression Roundtable (OSB0247)

644 Written evidence from Internet Watch Foundation (IWF) (OSB0110)

646 Written evidence from Ofcom (OSB0021)

647 See Council of Europe: Committee of Ministers, ‘Recommendation Rec (2000)23 of the Committee of Ministers to member states on the independence and functions of regulatory authorities for the broadcasting sector’: https://rm.coe.int/16804e0322 [accessed 1 December 2021], which states that the rules governing regulatory authorities for the broadcasting sector are a key element of their independence and should be defined to protect them against any interference, in particular by political forces or economic interests. Specific rules should be avoided which place regulatory authorities under the influence of political power.

648 Written evidence from Dr Damian Tambini (Distinguished Policy Fellow and Associate Professor at London School of Economics and Political Science) (OSB0066)

649 Written evidence from Department of Digital, Culture, Media & Sport (OSB0243); Q 16

650 Written evidence from Department of Digital, Culture, Media & Sport (OSB0243); Q 16

651 Written evidence from Department of Digital, Culture, Media & Sport (OSB0243); Q 16

652 Department for Digital, Culture, Media and Sport, Online Media Literacy Strategy (July 2021): https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1004233/DCMS_Media_Literacy_Report_Roll_Out_Accessible_PDF.pdf [accessed 15 November 2021]

653 Written evidence from 5Rights Foundation (OSB0096)

654 Department for Digital, Culture, Media and Sport, Online Media Literacy Strategy (July 2021), p 5: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1004233/DCMS_Media_Literacy_Report_Roll_Out_Accessible_PDF.pdf [accessed 15 November 2021]

655 Written evidence from 5Rights Foundation (OSB0096)

657 Written evidence from LSE Department of Media and Communications (OSB0001)

658 Written evidence from APPG Coalition (OSB0202)

660 Communications and Digital Committee, Free for all? Freedom of expression in the digital age (1st Report, Session 2021–22, HL Paper 54), para 293–296; Communications and Digital Committee, Breaking News? The Future of UK Journalism (1st Report, Session 2019–21, HL Paper 176), para 87–89

661 Written evidence from Carnegie UK (OSB0095)

662 See Draft Online Safety Bill, CP 405, May 2021, Clause 103

663 Lee Edwards, ‘Media literacy in the Online Safety Bill: Sacrificing citizenship for resilience?’: https://blogs.lse.ac.uk/medialse/2021/11/09/media-literacy-in-the-online-safety-bill-sacrificing-citizenship-for-resilience/ [accessed 16 November 2021]

664 Written evidence from Glitch (OSB0097)

665 Written evidence from: the Office of the Children’s Commissioner (OSB0019), p 9; NSPCC (OSB0109) p 4

666 Written evidence from Facebook (OSB0147), p 23; See also Tech Against Terrorism (OSB0052), p 66

667 Written evidence from: Facebook (OSB0147); Microsoft (OSB0076)

668 Written evidence from Ms Daphne Keller (Director, Program on Platform Regulation at Stanford Cyber Policy Center) (OSB0057). See also British & Irish Law, Education & Technology Association (OSB0073), para 6.3.2

669 Written evidence from Ms Daphne Keller (Director, Program on Platform Regulation at Stanford Cyber Policy Center) (OSB0057).

670 Written evidence from: International Justice Mission (OSB0025); Google (OSB0175)

671 Written evidence from: Internet Watch Foundation (IWF) (OSB0110); Facebook (OSB0147)

672 Written evidence from: Ofcom (OSB0021); see also NSPCC (OSB0109)

673 Written evidence from International Justice Mission (OSB0025).

674 Written evidence from: Mr John Carr (Secretary at Children’s Charities’ Coalition on Internet Safety) (OSB0167); International Justice Mission (OSB0025)




© Parliamentary copyright 2021