Draft Online Safety Bill

2 Objectives of the Online Safety Bill

13.All the service providers we heard from were taking measures to reduce activity that creates a risk of harm and illegal activity on their platforms.16 These measures were wide-ranging and included explicit content filters on search results;17 manual curation of content on public-facing areas of the service;18 user voting which affects the visibility of content;19 and, following the introduction of the Age Appropriate Design Code, default privacy settings for children who use their platforms.20

14.Nevertheless, we heard that illegal and harmful activity remains prevalent online. Throughout our inquiry, we have heard about the failures of self-regulation by online service providers. Witnesses have told us that the current system of self-regulation is akin to allowing service providers to mark their own homework, and that this has made the online world more dangerous.21 This has real-world implications: during the short timescale of our inquiry, illegal and harmful activity online has been linked to the suicide of 15-year-old Frankie Thomas22 and the kidnap, rape, and murder of Sarah Everard.23 To give just a few examples of events that occurred in the months and years immediately preceding our inquiry:

15.In this Chapter we give an overview of the content and activity that creates risks of harm for different groups of people online. We then discuss the relationship between people’s experiences of online risks, their prevalence online, and the systems that underpin most large online platforms. Finally, we draw conclusions about the objectives that we believe the Bill should pursue.

Harms affecting children

16.Research by DCMS has shown that “80 per cent of six to 12 year-olds have experienced some kind of harmful content online”, whilst half of 13 to 17 year-olds believe they have seen something in the last three months that constitutes illegal content.35 Children can be vulnerable to a wide range of online harms.36 Izzy Wick, Director of Policy at 5Rights, told us:

“We know from speaking with children and young people that the harms they experience online are extensive and wide ranging. They can be extreme, from exposure to self-harm and suicide content, violent sexual pornography and unsolicited contact with adults they do not know, right the way through to more insidious harms that might build up over time.”37

17.The National Society for the Prevention of Cruelty to Children (NSPCC) reported that 10,391 child sex crimes were recorded by police forces across the UK in 2019/20, an increase of 16 per cent.38 Since 2017/18, Sexual Communication with a Child offences have increased by 70 per cent, reaching a record high of 5,441 recorded crimes between April 2020 and March 2021. Three quarters of these offences involved the use of Instagram, WhatsApp, Facebook Messenger, and Snapchat.39

18.Intentional access and accidental exposure to pornography are both increasing among children. The Office of the Children’s Commissioner told us that over half of 11–13-year-olds have seen pornography online.40 Witnesses explained that pornography can distort children’s understanding of healthy relationships, sex, and consent by, for example, normalising violence during sexual activity.41 It has also been linked to addiction.42

19.Ian Russell, founder of the Molly Rose Foundation, told us that, in 26 per cent of cases where young people present to hospital with self-harm injuries and suicide attempts, those young people have accessed related content online.43 The Samaritans reported that children as young as 12 have accessed suicide and self-harm material online.44 We have heard that, while children and young people are particularly at risk, adults can also be led to suicide and self-harm as a consequence of online content and activity.45

20.We heard that Ofsted’s review of sexual abuse in schools and colleges found 88 per cent of girls and 49 per cent of boys surveyed said that being sent pictures that they did not want to see happened “a lot” or “sometimes”.46 We also heard that children feel pressured by what they see online. This makes them feel insecure about their body image and can have significant impacts on their health, confidence, and self-esteem.47 Frances Haugen noted: “When kids describe their usage of Instagram, Facebook’s own research describes it as an addict’s narrative. The kids say, ‘This makes me unhappy. I feel like I don’t have the ability to control my usage of it. And I feel that if I left, I’d be ostracized.’”48

Harms affecting adults

21.In their pilot Online Harms Survey, Ofcom found that three quarters of adult respondents reported having been exposed to at least one incidence of content or activity that creates a risk of harm in the previous month.49 Some groups are disproportionately affected: a number of individuals, online dating services, LGBTQ+ and disability rights groups, and campaigners against racism and antisemitism gave details and statistics relating to this imbalance, and we discuss them further below.50 Adults can be harmed online in a range of different ways,51 including by fraud and scams (discussed in Chapter 4).52

Racist abuse

22.In many cases, the harm that people in these groups face is direct abuse, exacerbated or amplified by system design. In professional football, for example, an analysis by Signify funded by the Professional Footballers’ Association found that there was a 48 per cent increase in racist online abuse in the 2020–21 football season, with racist abuse peaking in May 2021 (excluding the Euro 2020 final).53 Rio Ferdinand, former professional footballer, told us about his experiences of receiving racist abuse online. He said that experiencing racist abuse online can affect mental health and self-esteem, and that it can have severe impacts on an individual’s friends and family. He had personal experience of family members “disintegrating” because of online abuse targeted at him.54 We were told that the prevalence of racist abuse directed at football players is so great that the Football Association (FA) have had to provide guidance to their players on how to filter it from their social media feeds.55

23.We were very aware that the experiences of such high-profile people reflect much wider patterns of abuse and harm. Imran Ahmed, CEO and Founder of the Center for Countering Digital Hate (CCDH), told us:

“When it comes to racism against footballers, the point that I have made to their representatives and to others is that the abuse of Marcus Rashford matters not because he is a wealthy footballer, but because if they can call Marcus Rashford the N-word, imagine what they would call me or my mum or anyone else from a minority, a woman, a gay person, anyone else.”56

Abuse against LGBTQ+ people

24.When asked by Stonewall about their experiences online, one in ten LGBTQ+ people reported having experienced online abuse directed specifically at them within the preceding month.57 We heard about the serious real-world impacts that online harms can have for LGBTQ+ people who, for example, have been “outed”, resulting in the loss of their homes and jobs.58 The LGBT Foundation told us that LGBTQ+ people are also at risk of being harmed by the actions of platforms themselves, with LGBTQ+ content being erroneously blocked or removed at greater rates than other types of content.59

Misogynistic abuse and violence against women and girls

25.Women are disproportionately affected by online abuse and harassment.60 They are 27 times more likely to be harassed online than men.61 Thirty-six per cent of women report having been a victim of online abuse and harassment, rising to 62 per cent among women aged 18–34.62 Abuse and harassment are not only directed towards adults: in 2020–21, half of 11–16-year-old girls experienced hate speech online and a quarter were harassed or threatened.63 Nina Jankowicz, Author and Global Fellow at the Wilson Center, told us:

“Being a woman online is an inherently dangerous act. That is the long and short of it. It does not matter what you do. You are opening yourself up to criticism from every angle … Many women are changing what they write, what they speak about, what careers they choose to pursue because of that understanding that it is part and parcel of existing as a woman on the internet.”64

26.Violence against women and girls (VAWG) “is increasingly perpetrated online” and online VAWG “should be understood as part of a continuum of abuse which is often taking place offline too.”65 Professor Clare McGlynn QC, Durham Law School, described an “epidemic of online violence against women and girls”.66 Online VAWG “includes but is not limited to, intimate image abuse, online harassment, the sending of unsolicited explicit images, coercive ‘sexting’, and the creation and sharing of ‘deepfake’ pornography.”67

27.Cyberflashing, the unsolicited sending of images of genitalia,68 is a particularly prevalent form of online VAWG. Seventy-six per cent of girls aged 12–18 and 41 per cent of all women reported having been sent unsolicited penis images. Regardless of the intention(s) behind it, cyberflashing can violate, humiliate, and frighten victims, and limit women’s participation in online spaces.69 The use of deepfake pornography in online VAWG is also becoming increasingly prevalent and is of great concern; it was debated in the House of Commons on 2 December 2021.70

Religious hate and antisemitism

28.Antisemitism online is a cause of great concern;71 it comprises approximately 40 per cent of all antisemitic incidents recorded in the UK.72 The Community Security Trust recorded 355 incidents of online antisemitism in the first six months of 2021, primarily through Twitter (35 per cent) and instant messaging services (22 per cent).73 Danny Stone MBE, Director of the Antisemitism Policy Trust, told us about the impacts of antisemitism online:

“There are a range of impacts. I do not post pictures of my children online often, because … there is a chance that someone will try to hurt my children … That is an individual impact.

… There was a video on BitChute about the Antisemitism Policy Trust, my organisation. That has impacts on my board and what they consider about their own safety and what that means. …

Also, on Jews in public life, Luciana Berger was in this House and faced an onslaught of antisemitic abuse. …

There are all these impacts. There are many different impacts.”74

29.Hate crime offences against Muslims constituted 45 per cent of recorded religious hate crimes in 2020–21,75 with reports of online Islamophobia rising by 40 per cent during the first UK COVID-19 lockdown.76 Islamophobic online material has real consequences: the attackers in both the Finsbury Park Mosque attack in 2017 and the 2019 Christchurch Mosque attack were thought to have been at least in part radicalised online, with the Finsbury Park Mosque attacker said to have become “obsessed” with Muslims.77 Reset told us that, currently, “widely debunked far-right conspiracy theories about Islam run rife on social media sites/blogs”, ranging from “claims of ‘No Go Zones’ in Western nations which are run by Sharia Law and bar non-Muslims and police” to claims about “a plot by Islamic nations to take over Europe to create ‘Eurabia’”.78

Abuse against disabled people

30.In its report Online abuse and the experience of disabled people in January 2019, the House of Commons Petitions Committee found that, despite the importance of social media for many disabled people’s lives, many felt that the online environment was toxic for them. The harms faced by disabled people online include direct abuse, problems with accessibility, and exploitation by malicious actors.79 Matt Harrison, Public Affairs and Parliamentary Manager at the Royal Mencap Society, told us that negative attitudes and stigma towards disabled people expressed online can “unravel those threads of work that lots of people with learning disabilities themselves have been doing on social media” to move in a positive direction.80

31.We have also heard about the unique risk that online platforms can present to individuals with photosensitive epilepsy. Clare Pelham, Chief Executive of the Epilepsy Society, told us that people with photosensitive epilepsy are “regularly” targeted with flashing images that are intended to cause a seizure.81 She told us that, beyond the severe physical harm that can be caused by having a seizure, this can cause isolation as individuals are “driven off” social media.82

Impact on freedom of speech

32.Compassion in Politics described the “current climate of hostility, toxicity, and abuse online” and told us that this “prevents many people from joining social media sites”. Their polling found that this can infringe on individuals’ freedom of expression, with “1 in 4 … scared of voicing an opinion online because they expect to receive abuse if they do so.”83 Mr Ahmed illustrated this:

“You do not have free speech if you are a black footballer and 100 racist people jump down your throat every time you post. In fact … this vital tool for promoting your brand and for transacting business is taken away from you.”84

33.The freedom of social media and search engines to make their own decisions on censoring and recommending content without accountability or oversight was also raised. For example, DMG Media told us:

“We believe it is incompatible with freedom of expression and media plurality for legitimate, responsible news content to be subject to blocking and take-down by a commercial organisation which is open to business pressures such as advertising boycotts, operates without due process, and has no authority to make judgments about the value of journalism.”85

Societal harms

34.The harms resulting from activity online are not limited to individuals. For example, online disinformation (the intentional spreading of factually incorrect information) and online misinformation (the unknowing spreading of factually incorrect information) harm society more broadly.86 We heard that the prevalence of disinformation during the COVID-19 pandemic has resulted in vaccine hesitancy and vaccine refusal.87 This has been linked to higher death rates in certain groups.88 Vaccine-hesitant individuals have had their health severely impacted by contracting COVID-19 or, in the worst cases, have died. In the UK, this has created pressure on the NHS.89 COVID-19 misinformation has led individuals to engage in risky behaviour such as using ineffective drugs as home remedies90 or drinking poisonous disinfectant.91

35.We heard that disinformation has the potential to harm democracy and national security.92 Ms Ressa told us that disinformation can affect the integrity of elections: “we will not have integrity of elections if we do not have integrity of facts.”93 Disinformation relating to democratic processes can affect social cohesion,94 with societal divides having been exploited by malicious foreign actors to undermine democratic processes in the US and the UK.95 We have heard that inauthentic accounts created by real people can give fake legitimacy to political candidates or spread mistrust.96 Meanwhile, the creation and sharing of manipulated videos and messages, such as deepfakes, can be used to target political candidates.97 We received evidence that service providers are aware of these threats, including statements from service providers themselves.98

Factors exacerbating harms: business models and system design

36.Many service providers collect data, for commercial benefit, about the people who use their platforms.99 We heard that service providers are incentivised to maximise users’ engagement so that they can collect more data about them and show them more and better targeted adverts.100 Facebook’s most recent quarterly report showed 99 per cent of their income was from advertising.101 Quarterly reports from Alphabet Inc.102 and Twitter showed that 92 per cent and 88 per cent of their respective income came from advertising.103

37.Metrics concerning time spent on the platform and interaction with content form the basis of key performance indicators (KPIs).104 We heard that KPIs focused on engagement are maximised regardless of the nature of that engagement or quality of the content that is being engaged with.105 This can be problematic, as Guillaume Chaslot, ex-YouTube employee and founder of AlgoTransparency, told us:

“You have cases where engagement is good for the user. When I listen to music, the longer I listen, the better it is for me. [But] When there was a problem with paedophile content on YouTube, they spent a lot of time on the platform, so the algorithm was trying to maximise the amount of paedophile content that was shown to users.”106

38.We heard evidence from a range of sources that content that creates a risk of harm or factually inaccurate content is many times more engaging than innocuous or accurate content.107 By making design choices that maximise engagement, service providers therefore exacerbate the presence, spread, and effect of harms.108 Algorithmic design choices have been heavily implicated in the evidence we have received. The Anti-Defamation League told us:

“When a user interacts with a piece of content, algorithmic systems recognise signals, like popularity, and then amplify that content. If content is forwarded, commented on, or replied to, social media algorithms almost immediately show such content to more users, prompting increased user engagement, and thus increasing advertising revenue. Research shows that controversial, hateful, and polarizing information and misinformation are often more engaging than other types of content and, therefore, receive wider circulation.”109

39.Multiple witnesses told us that people who are not searching for misinformation, conspiracist content, and extremism will be recommended such content if their behaviour indicates they may be interested in it.110 For example, someone interested in wellness may be shown anti-vaccination content.111 If they interact with this, they could be recommended far-right conspiracist content or antisemitic content.112 Ms Haugen told us that service providers’ algorithms currently “[make] hate worse” because of the way they amplify and recommend hateful content.113

40.People, including children, can be vulnerable to being targeted with content that creates a risk of harm, as algorithms collect data about their interests and serve them with progressively more extreme content to keep them engaged.114 For example, a Wall Street Journal investigation of TikTok’s algorithms found that, within 40 minutes of using the platform, 93 per cent of the videos recommended to a user who had shown an interest in videos about depression and anxiety were depression-related.115 Targeting users with content in this way can reinforce addictive behaviour, where people feel compelled to use the platform even though they may not enjoy doing so.116
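
The feedback loop described above can be illustrated with a deliberately simplified sketch. The catalogue mix, engagement probabilities and scoring rule below are invented for illustration and do not describe any platform’s actual system; the sketch only shows how ranking purely by predicted engagement concentrates a feed around whatever a user has previously dwelt on.

```python
import random

random.seed(0)

# Hypothetical catalogue: 5 per cent of items concern depression, the rest are neutral.
CATALOGUE = ["depression"] * 50 + ["neutral"] * 950

def build_feed(interest, feed_length=20):
    """Rank every item by the user's estimated interest in its topic plus a little
    noise, then keep the top items. Nothing in the ranking asks whether a topic
    creates a risk of harm."""
    ranked = sorted(CATALOGUE,
                    key=lambda topic: interest[topic] + random.uniform(0, 0.5),
                    reverse=True)
    return ranked[:feed_length]

def run_sessions(sessions=5):
    # Start with no meaningful signal: both topics look equally interesting.
    shown = {"depression": 2, "neutral": 2}
    engaged = {"depression": 1, "neutral": 1}
    # Assumed behaviour: this user lingers on depression-related items far more often.
    engage_prob = {"depression": 0.8, "neutral": 0.3}
    for session in range(sessions):
        interest = {topic: engaged[topic] / shown[topic] for topic in shown}
        feed = build_feed(interest)
        share = feed.count("depression") / len(feed)
        print(f"session {session}: {share:.0%} of the feed is depression-related")
        for topic in feed:
            shown[topic] += 1
            if random.random() < engage_prob[topic]:
                engaged[topic] += 1  # each interaction shapes the next feed's ranking

run_sessions()
```

In this toy model the depression-related share of the feed rises from roughly the catalogue average to most of the feed within a few sessions, because every interaction raises the estimated interest that drives the next ranking; the user does not need to search for the content for it to dominate.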

41.Some people are also served disproportionately high amounts of content that creates a risk of harm because of their personal characteristics, as inferred by platforms’ algorithms.117 The Information Commissioner, Elizabeth Denham CBE, told us that “inferred data is personal data”. She had concerns about the way platforms use inferred data to direct content to people using their platforms, and questioned whether this was compliant with data protection law.118

The “Prevalence Paradox”

42.All the evidence so far would suggest that a high proportion of online material is hateful, false or creates a risk of harm. Yet academic research that has systematically examined the prevalence of online content that creates a risk of harm consistently finds that its prevalence is low. Abusive content, for example, made up less than one per cent of overall content online according to a 2019 study.119 However, 13 per cent of adult respondents to Ofcom’s pilot harms survey had experienced trolling in the previous month; six per cent of those respondents had experienced bullying, abusive behaviour, or threats;120 and 46 per cent of women and non-binary people surveyed by Glitch reported experiencing online abuse during the COVID-19 pandemic.121 In football, 71 per cent of fans reported having seen racist comments on social media122 despite only 0.03 per cent of posts being identified as discriminatory abuse.123 In other words, abusive posts make up a small minority of content, yet they are seen by a vastly disproportionate number of people.
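
A short worked example shows why low overall prevalence and high exposure are not contradictory. The figures below (the share of posts that are abusive, the amplification factor and the number of posts a user scrolls past) are illustrative assumptions, not measurements from any of the studies cited above; the point is only the arithmetic.

```python
# Illustrative assumptions, not measured values.
abusive_share_posted = 0.0003   # 0.03 per cent of all posts are abusive
posts_seen_per_user = 400       # posts an average user scrolls past in the period

for amplification in (1, 10):   # how much more often abusive posts are shown than average
    abusive_share_seen = abusive_share_posted * amplification
    # Probability that a user encounters at least one abusive post in the period.
    p_at_least_one = 1 - (1 - abusive_share_seen) ** posts_seen_per_user
    print(f"amplification x{amplification}: {p_at_least_one:.0%} of users "
          f"see at least one abusive post")
```

With these assumed numbers, a ten-fold boost in how often abusive posts are shown takes the share of users who encounter them from around one in ten to around seven in ten, even though abusive posts remain a tiny fraction of everything posted.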

43.Some of these differences may result from inconsistency between methodological approaches. It is, however, improbable this would account for all, or even most, of the gap. Other explanations are:

a)It may be a consequence of the easy dissemination and algorithmic promotion of content that creates a risk of harm, the “boosting” effect we discuss above.

b)The studies may not be using comparable definitions of harm, for example some reports focus on abuse specifically,124 whereas others may include content which is discriminatory but not directly abusive.125

c)There may be an element of reporting bias or self-selection bias in polling studies.

d)Some groups are more likely to receive online abuse than others, so that whilst overall prevalence of content that creates a risk of harm may be low, people in these groups will report experiencing proportionately more harmful material.126

e)Activity on engagement-based platforms can often “snowball”, meaning that people can be targeted for abuse by large groups of other users. Where individuals are the focus of such a “pile-on” attack they are the singular target of vast quantities of abusive material.

The “black box”

44.One of the challenges in establishing exactly why content and activity that is abusive, false or creates a risk of harm is so overexposed is that the systems underlying platforms are like a “black box”. Users, researchers, and regulators often have limited understanding of their internal workings or the risks posed by them.127 Researchers currently do not have access to high-quality data from service providers that would allow them to conduct systematic, longitudinal, trustworthy research, despite requests for it, as we heard from many witnesses.128 We discuss this further in Chapter 9.

45.For people using service providers’ platforms, a lack of transparency can lead to frustration with systems when they do not appear to be working, for example when activity that creates a risk of harm or is abusive is reported but not addressed.129 For researchers and civil society, a lack of transparency around data and the algorithms that platforms use is a barrier to understanding and tackling content that creates a risk of harm and illegal content online.130 A lack of transparency also means that service providers do not have any accountability; as one provider put it: “Without transparency, there can be no accountability.”131 We heard repeatedly that service providers’ lack of transparency is a key issue for online harms and must be addressed.132

46.Some companies now regularly produce transparency reports detailing information about content and activity that is illegal, creates a risk of harm or is against their terms of service.133 We heard, however, that the information provided in some of these reports can be misleading. Certain metrics can imply high rates of success or low levels of content and activity that create a risk of harm, when that may not be an accurate reflection of what is occurring on platforms. For example, knowing that 90 per cent of policy-violating content that is removed from a service is identified and removed by algorithms, rather than due to user reports, does not give an indication of the overall proportion of policy-violating content that is successfully identified and removed by those algorithms. If only 1 per cent of policy-violating content is ultimately identified, the algorithms would be removing 0.9 per cent of the total amount of policy-violating content that is present on the service.134 These metrics are therefore insufficient for achieving the transparency and accountability that is needed to understand and mitigate the presence and spread of online content and activity that creates a risk of harm.
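
The distinction in the example above is worth setting out explicitly. The sketch below simply restates the hypothetical figures used in the paragraph (90 per cent of removals made by algorithms, only 1 per cent of violating content ever identified); it is not based on any provider’s reported data.

```python
# Hypothetical figures from the paragraph above, not any platform's real statistics.
violating_items = 100_000                     # policy-violating items on the service
identified = violating_items * 0.01           # only 1 per cent is ever identified
removed_by_algorithms = identified * 0.90     # 90 per cent of removals are algorithmic

share_of_removals = removed_by_algorithms / identified             # the headline metric
share_of_all_violations = removed_by_algorithms / violating_items  # the missing metric

print(f"{share_of_removals:.0%} of removed items were found by algorithms")        # 90%
print(f"{share_of_all_violations:.1%} of all violating items were removed by them")  # 0.9%
```

The headline metric can remain impressive however poor the second figure is, which is why transparency reporting needs to capture the overall proportion of violating content that is identified and removed.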

47.These metrics also hide the human impact of content and activity that creates a risk of harm. Statistics about the prevalence of policy-violating content do not capture the people who end up in hospital in a serious condition after taking fake medical cures. They do not show the children who have harmed themselves and who suffer from severe mental health difficulties because of what they have experienced online, or the enduring impact on people who have lost their life savings to online scams.

The draft Bill

48.The draft Bill introduced by the Government aims to make the UK “the safest place in the world to be online”. To achieve this aim, it proposes a new regulatory regime with Ofcom as an independent regulator for providers of online user-to-user and search services.135

49.Online service providers are broadly supportive of the Government introducing regulation that aims to enhance online safety.136 Facebook themselves have said that they feel they are currently making societal decisions that are better made by Government and regulators.137

50.We have, however, heard numerous concerns about the Online Safety Bill as currently drafted.138 Briefly:

a)The Bill is overly complex, which has the potential to create legislative gaps and loopholes.139 The “Duty(ies) of Care” framework is particularly complex and confusing.140

b)The Bill lacks clarity around several key aspects, making it more at risk of legal challenge:

i)What would constitute content that is harmful, and associated definitions such as a “person of ordinary sensibilities”;141

ii)The definitions of “journalistic content” and “content of democratic importance”, and how service providers would be expected to identify these types of content;142

iii)Which types of content would be designated “priority harms”;143

iv)Which service providers would be in scope of the “Category 1” requirements;144 and

v)Some of the requirements of the Bill would undermine, conflict with, or be misaligned with the standards in the Age Appropriate Design Code and existing regulation.145

c)The provisions in the draft Bill on content that is harmful to adults could have a “chilling effect” on freedom of expression and give too much power to service providers.146

d)Ofcom’s powers may not be sufficient for them to achieve success in their role as a regulator.147

e)The transparency requirements placed on service providers may not go far enough.148

f)The Secretary of State’s powers are extensive and may undermine Ofcom’s independence with no effective accountability to Parliament.149

g)The Bill does not provide sufficient protections for children, including through its failure to capture all pornography sites.150

51.Self-regulation of online platforms has failed. Our recommendations will strengthen the Bill so that it can pass successfully into legislation. To achieve success, the Bill must be clear from the beginning about its objectives. These objectives must reflect the nature of the harm experienced online and the values of UK society. Online services are not neutral repositories for information. Most are advertising businesses. Service providers in scope of the Bill must be held liable for failure to take reasonable steps to combat reasonably foreseeable harm resulting from the operation of their services.

52.We recommend that the Bill be restructured. It should set out its core objectives clearly at the beginning. This will give users and regulators clarity about what the Bill is trying to achieve and inform the detailed duties set out later in the legislation. These objectives should be for Ofcom to improve online safety for UK citizens by ensuring that service providers:

a)comply with UK law and do not endanger public health or national security;

b)provide a higher level of protection for children than for adults;

c)identify and mitigate the risk of reasonably foreseeable harm arising from the operation and design of their platforms;

d)recognise and respond to the disproportionate level of harms experienced by people on the basis of protected characteristics;

e)apply the overarching principle that systems should be safe by design whilst complying with the Bill;

f)safeguard freedom of expression and privacy; and

g)operate with transparency and accountability in respect of online safety.

An overarching duty of care?

53.The 2019 White Paper promised to introduce a “new duty of care” for service providers towards the people using their platforms.151 This language and proposal drew on the work of Professor Lorna Woods and William Perrin OBE at Carnegie UK Trust. They proposed that service providers should be held responsible for a public space in the same way that property owners are responsible for physical spaces, and that service providers should have a duty of care in respect of the people using their platforms. Prof Woods and Mr Perrin also argued that a statutory duty of care would be “simple, broadly based and largely future-proof”, much like the long-enduring Health and Safety at Work Act 1974.152 The language of a duty of care for service providers has persisted, with the draft Bill setting out several “duties of care” and “safety duties”. These “duties of care”, however, operate in a fundamentally different way to the duty of care laid out in the Health and Safety at Work Act 1974.

54.Some submissions we received noted that the draft Bill had moved away from the White Paper in the duties it places on service providers. The draft Bill places duties on service providers to do particular things: to undertake risk assessments; to comply with safety duties in respect of illegal content, content that is harmful to children and content that is harmful to adults; and to comply with other duties, for example in respect of journalistic content. It does not propose “a [singular] new duty of care” as set out in the Government’s response to its White Paper.153 Nor do these new duties constitute a duty of care in the legal sense. They are things that providers are required to do to satisfy the regulator. They are not duties to people who use their platforms, and they are not designed to create new grounds for individuals to take providers to court.

55.For children’s rights charities and Carnegie UK Trust themselves, this was a significant step backwards. They were concerned that the lack of an overarching duty to address “foreseeable risks” might lead to emerging issues falling between the cracks of the various duties in the legislation.154 The complexity of the interlocking series of duties was also a common theme in evidence, cutting across many of the different groups we took evidence from.155

56.The Government explained that the structure adopted in the draft Bill seeks to cover the same scope as the duty of care envisaged in the White Paper, but the move to “more specific duties will give companies and Ofcom greater legal certainty and direction about the regime. In turn this will make it easier for Ofcom to effectively enforce against non-compliance.”156 The Secretary of State was more explicit. She told us that a single duty of care:

“ … does not work … The definitions within that duty of care are huge, onerous and difficult legally to make tight and applicable. I am not going to tell you what to do, but I would probably put your efforts into other parts of the Bill, because we have already been there and we know that it would be almost impossible to get that into the Bill. … That is why the Bill is so long. It is a technical, long Bill, but in order to meet the criteria of watertight it has to be.”157

57.The Secretary of State’s concerns about the workability of a duty of care approach aligned with those of a few of our witnesses. Mr Ahmed told us that he wanted to see as much as possible defined on the face of the Bill, because: “The less clarity there is, the harder it is on those companies to do the right thing, and the more wriggle room there is for them to escape from it.”158 Gavin Millar QC, specialist in media law at Matrix Chambers, told us that the draft Bill as it stood was open to legal challenge. He saw a fundamental problem in transposing a duty of care approach into the regulation of online platforms: the law of negligence imposes an unqualified duty, whereas duties on service providers involve balancing the fundamental rights of different groups of people against each other.159 He, along with other witnesses who had similar concerns, wanted to see the Bill go further in specifying exactly which risks of harm it intends to address and what service providers should be doing about them.160

58.Towards the end of our inquiry, Carnegie UK Trust produced a series of revisions to the draft Bill. They proposed a restructuring with an overarching set of objectives, underpinned by a “Foundation duty”, in turn underpinned by specific duties along the lines of those found in the draft Bill.161 The Financial Conduct Authority’s (FCA’s) current consultation on a “consumer principle” offers a similar structure: an overarching principle (that firms act in the best interests of consumers), followed by a set of cross-cutting rules and then detailed rules and guidance. In the same way, we envisage a Bill with an overarching set of core safety objectives on Ofcom, and a series of statutory requirements on providers to implement detailed mandatory Codes of Practice.162

59.The draft Bill creates an entirely new regulatory structure and deals with difficult issues around rights and safety. In seeking to regulate large multinational companies with the resources to undertake legal challenges, it has to be comprehensive and robust. At the same time, a common theme in the evidence we received is that the draft Bill is too complex, and this may harm public acceptance and make it harder for those service providers who are willing to comply to do so.

60.We recommend that the Bill be restructured to contain a clear statement of its core safety objectives, as recommended in paragraph 52. Everything flows from these: the requirement for Ofcom to meet those objectives; its power to produce mandatory codes of practice and minimum quality standards for risk assessments in order to do so; and the requirements on service providers to address and mitigate reasonably foreseeable risks, follow those codes of practice and meet those minimum standards. Together, these measures amount to a robust framework of enforceable requirements that can leave no doubt that the intentions of the Bill will be secured.

61.We believe there is a need to clarify that providers are required to comply with all mandatory Codes of Practice as well as the requirement to include reasonably foreseeable risks in their risk assessments. Combined with the requirements for system design we discuss in the next chapter, these measures will ensure that regulated services continue to comply with the overall objectives of the Bill—and that the Regulator is afforded maximum flexibility to respond to a rapidly changing online world.

Figure 1: how the Online Safety Bill will work under our recommendations

Safety objectives → Requirement for Ofcom to achieve the objectives → Ofcom produces mandatory codes of practice and risk assessment standards → Service providers meet the requirements → Improved user safety


16 Oral evidence taken on 28 October 2021 (Session 2021–2022), QQ 200-222, QQ 223-232, QQ 233-249

17 Written evidence from Google (OSB0175)

18 Written evidence from Snap Inc (OSB0012)

19 Written evidence from Reddit (OSB0058)

20 Written evidence from TikTok (OSB0181); examples of announcements of safety measures made following the introduction of the Code included: from Microsoft, ‘Introducing Microsoft Edge Kids Mode, a safer space for your child to discover the web’: https://blogs.windows.com/windowsexperience/2021/04/15/introducing-microsoft-edge-kids-mode-a-safer-space-for-your-child-to-discover-the-web/ [accessed 9 December 2021]; TikTok, ‘Strengthening privacy and safety for youth on TikTok’: https://newsroom.tiktok.com/en-us/strengthening-privacy-and-safety-for-youth [accessed 9 December 2021]; Instagram, ‘Continuing to Make Instagram Safer for the Youngest Members of Our Community’: https://about.instagram.com/blog/announcements/continuing-to-make-instagram-safer-for-the-youngest-members-of-our-community [accessed 9 December 2021]

21 Q 67; 14, Q 3, Q 16, Q 31, Q 59, Q 178, Q 186, Written evidence from: Centre for Countering Digital Hate (OSB0009); Compassion in Politics (OSB0050); Full Fact (OSB0056); Dr Elly Hanson (OSB0078); Association of British Insurers (OSB0079); Dame Margaret Hodge MP (OSB0201)

22 BBC News, ‘Frankie Thomas: Coroner rules school failed teen who took own life’: https://www.bbc.co.uk/news/uk-england-surrey-58817821 [accessed 30 November 2021]

23 BBC News, Sarah Everard: ‘Gross misconduct probe into Couzens WhatsApp group’: https://www.bbc.co.uk/news/uk-58760933; [accessed 15 November 2021]; Care, ‘Everard Killer viewed ‘brutal pornography’: https://care.org.uk/news/2021/09/everard-killer-viewed-brutal-pornography [accessed 15 November 2021]

24 Internet Watch Foundation, Face the Facts: Annual Report 2020 (2020): https://www.iwf.org.uk/sites/default/files/inline-files/PDF%20of%20IWF%20Annual%20Report%202020%20FINAL%20reduced%20file%20size.pdf [accessed 15 November 2021]

25 5Rights, Pathways: How digital design puts children at risk (July 2021): https://5rightsfoundation.com/uploads/Pathways-how-digital-design-puts-children-at-risk.pdf [accessed 6 December 2021]

26 Channel 4 News, ‘Nearly 2,000 abusive tweets targeted Marcus Rashford, Jadon Sancho, Bukayo Saka and Raheem Stirling after Euro 2020 final, research shows’: https://www.channel4.com/news/nearly-2000-abusive-tweets-targeted-marcus-rashford-jadon-sancho-bukayo-saka-and-raheem-sterling-after-euro-2020-final-research-shows [accessed 15 November 2021]

27 Community Security Trust, The Month of Hate: Antisemitism and extremism during the Israel-Gaza conflict (2021): https://cst.org.uk/data/file/4/a/The_Month_of_Hate.1626263072.pdf [accessed 15 November 2021]

28 Ibid.

29 Facebook renamed itself to “Meta” during our inquiry, in fact on the very day that they gave oral evidence to us. We refer to the company as “Facebook” throughout this report as this is how they are referred to in most of the sources we cite.

30 BBC News, ‘UN: Facebook has turned into a beast in Myanmar’: https://www.bbc.co.uk/news/technology-43385677 [accessed 15 November 2021]

31 Digital, Culture, Media, and Sport Committee, Disinformation and ‘fake news’: Final Report (Eighth Report, Session 2017–19, HC 1791)

32 Intelligence and Security Committee, Russia (Report, Session 2021–22, HC 632)

33 United States Senate, Select Committee on Intelligence, Russian Active Measures Campaigns and Interference in the 2016 US Election, Volume 5: Counter Intelligence Threats and Vulnerabilities (2020): https://www.intelligence.senate.gov/sites/default/files/documents/report_volume5.pdf [accessed 15 November 2021]

34 Twitter ‘Permanent Suspension of @realDonaldTrump’: https://blog.twitter.com/en_us/topics/company/2020/suspension [accessed 15 November 2021]

36 Written evidence from Parent Kind (OSB0207)

38 NSPCC, ‘Police record over 10,000 online sex crimes in a year for the first time’: https://www.nspcc.org.uk/about-us/news-opinion/2020/2020–09-03-cybercrimes-during-lockdown/ [accessed 15 November 2021]

39 NSPCC, ‘Record high number of reported grooming crimes lead to calls for stronger online safety legislation’: https://www.nspcc.org.uk/about-us/news-opinion/2021/online-grooming-record-high/ [accessed 15 November 2021]

40 Written evidence from The Office of the Children’s Commissioner (OSB0019)

41 Written evidence from Barnardo’s (OSB0017); The Office of The Children’s Commissioner (OSB0019); Care (OSB0085)

42 Written evidence from Premier Christian Communications Ltd (OSB0093); COST Action - European Network for Problematic Usage of the Internet (OSB0038); CEASE (Centre to End All Sexual Exploitation) (OSB0104); Dignify (OSB0196)

44 Written evidence from The Samaritans (OSB0182)

45 Written evidence from SWGfL (OSB0054)

46 Ofsted, Review of sexual abuse in schools and colleges (June 2021): https://www.gov.uk/government/publications/review-of-sexual-abuse-in-schools-and-colleges/review-of-sexual-abuse-in-schools-and-colleges [accessed 15 November 2021]

47 Written evidence from Girlguiding (OSB0081)

49 Ofcom, ‘Online Nation 2021 Report’: https://www.ofcom.org.uk/research-and-data/internet-and-on-demand-research/online-nation [accessed 30 November 2021]

50 Written evidence from: Glitch (OSB0097); Centenary Action Group, Glitch, Antisemitism Policy Trust, Stonewall, Women’s Aid, Compassion in Politics, End Violence Against Women Coalition, Imkaan, Inclusion London, The Traveller Movement, Stonewall (OSB0047); Antisemitism Policy Trust (OSB0005); Mencap (OSB0075); and Royal Mencap Society oral evidence 13 September QQ 52–68 and Dame Margaret Hodge (Member of Parliament for Barking and Dagenham at House of Commons) (OSB0201)

51 The five most prevalent types of harms reported by adult users in Ofcom’s pilot Online Harms Survey were: spam emails, scams/fraud/phishing, misinformation, content encouraging gambling, and “alternative viewpoints”.

52 Q 110; Written evidence from: UK Finance (OSB0088); Match Group (OSB0053); Glitch (OSB0097)

53 Professional Footballers’ Association, ‘Online Abuse’ (2021): https://www.thepfa.com/news/2021/8/4/online-abuse-ai-research-study-season-2020–21 [accessed 16 November 2021]

57 Stonewall, LGBT Hate Crime in Britain: Hate and Discrimination (2017): https://www.stonewall.org.uk/system/files/lgbt_in_britain_hate_crime.pdf [accessed 16 November 2021]

59 Written evidence submitted by the LGBT Foundation (OSB0045); LGBT Foundation (OSB0046)

60 Written evidence from Dr Kim Barker and Dr Olga Jurasz (OSB0071)

61 Written evidence from Glitch (OSB0097)

62 Written evidence from Refuge (OSB0084)

63 Written evidence from Girlguiding (OSB0081)

65 Written evidence from Centenary Action Group (OSB0047)

67 Written evidence from Centenary Action Group (OSB0047); Refuge (OSB0084)

68 The Law Commission, ‘Modernising Communications Offences: A Final Report’, Law Com No 399, HC 547, July 2021: https://www.lawcom.gov.uk/project/reform-of-the-communications-offences [accessed 22 November 2021]

69 Written evidence from Professor Clare McGlynn (OSB0014)

70 HC Deb, 2 December 2021, col 1154–1162

71 Written evidence from The Antisemitism Policy Trust (OSB0005)

72 CST, ‘Antisemitic Incidents Report 2019’: https://cst.org.uk/news/blog/2020/02/06/antisemitic-incidents-report-2019 [accessed 22 November 2021]; Written evidence from the Board of Deputies of British Jews (OSB0043)

73 Community Security Trust, Antisemitic incidents January-June 2021 (2021): https://cst.org.uk/data/file/f/c/Incidents%20Report%20Jan-Jun%202021.1627901074.pdf [accessed 15 November 2021]

75 Home Office, Official Statistics: Hate Crime, England and Wales 2020 to 2021 (October 2021): https://www.gov.uk/government/statistics/hate-crime-england-and-wales-2020-to-2021/hate-crime-england-and-wales-2020-to-2021 [accessed 9 December 2021]

76 Newsweek, ‘Muslims Falsely Blamed for COVID-19 Spread as Hate Crime Increase’: https://www.newsweek.com/islam-muslims-coronavirus-islamophobia-social-media-twitter-facebook-1523346 [accessed 9 December 2021]

77 Antisemitism Policy Trust, Policy Briefing (August 2020): https://antisemitism.org.uk/wp-content/uploads/2020/08/Online-Harms-Offline-Harms-August-2020-V4.pdf [accessed 9 December 2021]

78 Written evidence from Reset (OSB0138)

79 House of Commons Petitions Committee, Online abuse and the experience of disabled people (First Report, Session 2017–19, HC 759)

83 Written evidence from Compassion in Politics (OSB0050)

85 Written evidence from DMG Media (OSB0133)

86 Written evidence from: LSE Department of Media and Communications (OSB0001); Conscious Advertising Network (OSB0180)

88 Brit Trogen and Liise-anne Pirofski, ‘Understanding Vaccine Hesitancy in COVID-19’ Elsevier Public Health Emergency Collection, vol.2, (2021), pp 498–501: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8030992/ [accessed 30 November 2021]

90 Written evidence from the Center for Countering Digital Hate (OSB0009)

91 Digital, Culture, Media and Sport Committee, Misinformation in the COVID-19 Infodemic (Second Report, Session 2019–21, HC 234)

92 Written evidence from Full Fact (OSB0056)

94 Written evidence from: IMPRESS (OSB0092); Polis Analysis (OSB0108); Mr Hadley Newman (OSB0125); Henry Jackson Society (OSB0028)

95 Q 56; Intelligence and Security Committee, Russia (Report, Session 2021–22, HC 632); United States Senate, Select Committee on Intelligence, Russian Active Measures Campaigns and Interference in the 2016 US Election, Volume 5: Counter Intelligence Threats and Vulnerabilities (2020): https://www.intelligence.senate.gov/sites/default/files/documents/report_volume5.pdf; written evidence from Reset (OSB0138)

97 Written evidence from Reset (OSB0138)

98 CBS News, ‘Whistleblower: Facebook is misleading the public on progress against hate speech, violence, misinformation’: https://www.cbsnews.com/news/facebook-whistleblower-frances-haugen-misinformation-public-60-minutes-2021-10-03/ [accessed 15 November 2021]; Twitter, ‘Permanent suspension of @realDonaldTrump’: https://blog.twitter.com/en_us/topics/company/2020/suspension [accessed 15 November 2021]; Q 105, Q 178, Q 207, Q 222, Q 249, written evidence from Reset (OSB0138)

101 Facebook, Earnings Presentation Q2 2021 (2021): https://s21.q4cdn.com/399680738/files/doc_financials/2021/q2/Q2-2021_Earnings-Presentation.pdf [accessed 16 November 2021]

102 Owners of Google and YouTube

103 Alphabet, ‘Alphabet Announces Second Quarter 2021 Results’: https://abc.xyz/investor/static/pdf/2021Q2_alphabet_earnings_release.pdf; [accessed 22 November 2021]; Twitter, Q2 2021: Letter to Shareholders (July 2021): https://s22.q4cdn.com/826641620/files/doc_financials/2021/q2/Q2’21-Shareholder-Letter.pdf [accessed 22 November 2021]

104 KPIs are a type of performance measurement. KPIs evaluate the success of an organisation or of a particular activity (such as projects, programmes, products and other initiatives) in which it engages [from https://en.wikipedia.org/wiki/Performance_indicator]; Q 92

109 Written evidence from Anti-Defamation League (ADL) (OSB0030)

111 Written evidence from the Center for Countering Digital Hate (OSB0009)

116 Q 150, Q 166, QQ 168–169; Written evidence from: COST Action CA16207 - European Network for Problematic Usage of the Internet (OSB0038); ITV (OSB0204); 5Rights Foundation, Key findings and recommendations from Pathways: How digital design puts children at risk (September 2021), p 12: https://5rightsfoundation.com/uploads/PathwaysSummary.pdf [accessed 9 December 2021]

119 The Alan Turing Institute, How much online abuse is there? A systematic review of evidence for the UK: Policy Briefing – Summary (2019): https://www.turing.ac.uk/sites/default/files/2019-11/online_abuse_prevalence_summary_24.11.2019_-_formatted_0.pdf [accessed 16 November 2021]

120 Ofcom, Pilot Online Harms Survey 2020/21 (2021): https://www.ofcom.org.uk/__data/assets/pdf_file/0014/220622/online-harms-survey-waves-1-4-2021.pdf [accessed 16 November 2021]

121 Glitch, The Ripple Effect: COVID-19 And The Epidemic Of Online Abuse (September 2020): https://glitchcharity.co.uk/wp-content/uploads/2021/04/Glitch-The-Ripple-Effect-Report-COVID-19-online-abuse.pdf [accessed 16 November 2021]

122 Kick It Out, ‘Reporting Statistics’: https://www.kickitout.org/Pages/FAQs/Category/reporting-statistics [accessed 16 November 2021]

124 The Alan Turing Institute, How much online abuse is there? A systematic review of evidence for the UK: Policy Briefing – Summary (2021): https://www.turing.ac.uk/sites/default/files/2019-11/online_abuse_prevalence_summary_24.11.2019_-_formatted_0.pdf [accessed 16 November 2021]

125 Kick It Out, ‘Reporting Statistics’: https://www.kickitout.org/Pages/FAQs/Category/reporting-statistics [accessed 16 November 2021]

126 Glitch UK and End Violence Against Women Coalition, The Ripple Effect: COVID-19 and the Epidemic of Online Abuse (2020): https://glitchcharity.co.uk/wp-content/uploads/2021/04/Glitch-The-Ripple-Effect-Report-COVID-19-online-abuse.pdf [accessed 16 November 2021]

127 Q 136, Q 146; Written evidence from: the Ada Lovelace Institute (OSB0101); ITV (OSB0204); Q 72

128 Written evidence from Dr Amy Orben (College Research Fellow at Emmanuel College, University of Cambridge) (OSB0131)

130 Written evidence from Dr Amy Orben (College Research Fellow at Emmanuel College, University of Cambridge) (OSB0131)

131 Q 178; Twitter, Protecting The Open Internet: Regulatory principles for policy makers: https://cdn.cms-twdigitalassets.com/content/dam/about-twitter/en/our-priorities/open-internet.pdf [accessed 16 November 2021]

133 For example: Twitter, ‘Transparency Reports’: https://transparency.twitter.com/en/reports.html [accessed 16 November 2021]; Meta, ‘Community Standards Enforcement Report’: https://transparency.fb.com/data/community-standards-enforcement/ [accessed 16 November 2021]

135 Written evidence from the Department of Digital, Culture, Media and Sport and Home Office (OSB0011)

136 Written evidence from: Snap Inc. (OSB0012); Mumsnet (OSB0031); Match Group (OSB0053); Bumble Inc. (OSB0055); Twitter (OSB0072); Microsoft (OSB0076); Patreon Inc. (OSB0123); Facebook (OSB0147); Google (OSB0175); TikTok (OSB0181)

137 ‘Opinion: Mark Zuckerberg: The Internet needs new rules. Let’s start in these four areas.’ The Washington Post (30 March 2019): https://www.washingtonpost.com/opinions/mark-zuckerberg-the-internet-needs-new-rules-lets-start-in-these-four-areas/2019/03/29/9e6f0504–521a-11e9-a3f7-78b7525a8d5f_story.html [accessed 16 November 2021]

138 Please note that this is not an exhaustive list

140 Written evidence from Snap Inc. (OSB0012)

141 Written evidence from Gavin Millar QC (OSB0221)

142 Written evidence from Dr Martin Moore (Senior Lecturer at King’s College London) (OSB0063)

143 Written evidence from Care (OSB0085)

144 Written evidence from Barnardo’s (OSB0017)

145 Written evidence from: Common Sense Media (OSB0018), 5Rights Foundation (OSB0096)

146 Written evidence from Dr Edina Harbinja (Senior lecturer in law at Aston University, Aston Law School) (OSB0145)

149 Q 72; Written evidence from Ofcom (OSB0021)

150 Written evidence from: NSPCC (OSB0228), The Office of the Children’s Commissioner (OSB0019)

151 Department for Digital, Culture, Media and Sport and The Home Office, Online Harms White Paper, CP 57, April 2019, p 8: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/973939/Online_Harms_White_Paper_V2.pdf [accessed 7 December 2021]

152 Carnegie UK, Online harm reduction – a statutory duty of care and regulator (April 2019): https://d1ssu070pg2v9i.cloudfront.net/pex/pex_carnegie2021/2019/04/06084627/Online-harm-reduction-a-statutory-duty-of-care-and-regulator.pdf [accessed 9 December 2019]

153 Department for Digital, Culture, Media and Sport and The Home Office, Online Harms White Paper: Full government response to the consultation, CP 354, December 2020: https://www.gov.uk/government/consultations/online-harms-white-paper/outcome/online-harms-white-paper-full-government-response [accessed 12 November 2021]

154 For example, Q 66 (Izzy Wick), Q 69 (William Perrin), Q 70 (Professor Sonia Livingstone); Written evidence from: NSPCC (OSB0109); Mr John Carr (Secretary at Children’s Charities’ Coalition on Internet Safety) (OSB0167)

155 For example, written evidence from: Snap Inc. (OSB0012); Internet Watch Foundation (IWF) (OSB0110); Parent Zone (OSB0124); Dr Martin Moore (Senior Lecturer at King’s College London) (OSB0063); Damian Tambini (Distinguished Policy Fellow and Associate Professor at London School of Economics and Political Science) (OSB0066); Twitter (OSB0072); BBC (OSB0074); Care (OSB0085); Carnegie UK (OSB0095); techUK (OSB0098); NSPCC (OSB0109); Parent Zone (OSB0124); Facebook (OSB0147); Google (OSB0175); Confederation of British Industry (CBI) (OSB0186); TalkTalk (OSB0200)

156 Written evidence from Department of Digital, Culture, Media and Sport and Home Office (OSB0011)

157 Q 286 (Rt Hon Nadine Dorries MP)

158 Q 17 (Imran Ahmed)

159 Q 143 (Gavin Millar QC)

160 Q 143 (Gavin Millar QC); for example, Q 60 (Dr Edina Harbinger).

161 Carnegie UK, ‘Amendments and Explanatory Notes: Carnegie UK Revised Online Safety Bill - Nov 2021’: https://www.carnegieuktrust.org.uk/publications/amendments-explanatory-notes-carnegie-uk-revised-online-safety-bill-nov-2021/ [accessed 18 November 2021]

162 Financial Conduct Authority, ‘FCA proposes stronger protection for consumers in financial markets’: https://www.fca.org.uk/news/press-releases/fca-proposes-stronger-protection-consumers-financial-markets [accessed 18 November 2021]




© Parliamentary copyright 2021