Draft Online Safety Bill

3 Societal harm and the role of platform design

Content and activity

62.One of the most common criticisms we heard of the draft Bill was that it focused too heavily on content and not enough on system design or broader “activity”.163 The Government has said that the draft Bill is a “systems and processes” bill—aimed at addressing systemic issues with online platforms rather than seeking to regulate individual pieces of content. At the same time, it defines multiple types of content and specifies how platforms should address them. The Bill should be clear about which of these approaches it is taking.

63.Service providers can, and should, be held accountable for carelessly hosting content that creates a risk of harm. That requires clear definitions. At the same time, we heard that in many cases it is the virality, the aggregation, and the frictionless nature of sharing that determine how much harm is caused by any individual piece of content. As Jimmy Wales, founder of Wikipedia, put it:

“I do not have a crazy racist uncle, but we all know the stereotype, down at the pub spouting off nonsense to his mates. That is a problem, but it is not a problem requiring parliamentary scrutiny. When it becomes a problem is not that my crazy uncle posts his racist thoughts on Facebook, but that he ends up with 5,000 or 10,000 followers, because everyone in the family yells at him and the algorithm detects, “Ooh, engagement”, and chases after that, and begins to promote it. That is a problem, and it is a really serious problem that is new and different.”164

64.One of the changes that the Government made in the draft Bill, compared to the White Paper, was to replace references to “content and activity” with references solely to “content”. This has reinforced the sense among many of our witnesses that the draft Bill is concerned solely with content moderation. 5Rights called for a return to the “content and activity” language of the White Paper, arguing that “content” alone does not reflect the full range of risks that children are exposed to online.165 Interestingly, the Government’s own written evidence referred to “content and activity” when discussing provisions in the draft Bill.166

65.Activity that creates a risk of harm can take many forms and can originate from the people using platforms or from the platforms themselves. Examples of people’s activity that can create a risk of harm include: the mass reporting of individuals to platforms for spurious breaches of terms and conditions as a form of harassment; adults initiating unsupervised contact with children; the exclusion of individuals from online groups in order to harass them; and the control of technology in domestic abuse cases.

66.As discussed in Chapter 3, platforms’ activity can itself create a risk of harm, such as when unsafe content is promoted virally, when people are automatically invited to join groups167 which share extreme views, or when recommendation tools prioritise content that creates a risk of harm.

67.We are also concerned that “content” may prove too limiting in a rapidly developing online world. We heard during our inquiry about the need to ensure that the Bill keeps up with changes in the online world, the increasing use of virtual and augmented reality and, of course, Facebook’s launch of the “metaverse”.168

68.We recommend that references to harmful “content” in the Bill should be amended to “regulated content and activity”. This would better reflect the range of online risks people face and cover new forms of interaction that may emerge as technology advances. It also better reflects the fact that online safety is not just about moderating content. It is also about the design of platforms and the ways people interact with content and features on services and with one another online.

Algorithmic design

69.Platform design is central to what people see and experience on social media. Platforms do not present content neutrally. For most user-to-user platforms, algorithms are used to curate a unique, personalised environment for each user.169 To create these environments, algorithms use detailed information about the user, such as their behaviour on the platform (how long they have watched a certain video or what content they have interacted with) and their geographical location.170 As Laura Edelson, a researcher at New York University, said:

“In any Category 1 platform that I know of … there is no action that a user can take in the public news feed or in a public Twitter feed that will guarantee that another user will see that piece of content. Every action that you take in an interaction only feeds into the likelihood that a recommendation algorithm will then show that to another user.”171
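
To make the mechanism described above concrete, the following is a minimal, hypothetical sketch of engagement-driven ranking: candidate posts are scored from behavioural signals (such as predicted watch time and past interactions) and the feed is ordered by that score. The signal names, weights and scoring function are illustrative assumptions, not a description of any particular platform’s system.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    post_id: str
    predicted_watch_seconds: float      # e.g. how long this user tends to watch similar videos
    past_interactions_with_author: int  # prior likes/comments/shares involving the author
    prior_reshares: int                 # how widely the post is already circulating

# Purely illustrative weights; a production system would learn them from data.
WEIGHTS = {"watch": 0.5, "affinity": 2.0, "virality": 1.5}

def engagement_score(c: Candidate) -> float:
    """Score a candidate post by predicted engagement alone."""
    return (WEIGHTS["watch"] * c.predicted_watch_seconds
            + WEIGHTS["affinity"] * c.past_interactions_with_author
            + WEIGHTS["virality"] * c.prior_reshares)

def rank_feed(candidates: list[Candidate]) -> list[Candidate]:
    """Order a user's feed so the posts judged most engaging appear first,
    with no reference to what the content actually says."""
    return sorted(candidates, key=engagement_score, reverse=True)
```

Because a score of this kind rewards engagement alone, material that provokes strong reactions rises towards the top of the feed whether or not it creates a risk of harm, which is the dynamic witnesses described.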

70.Designing curated environments for individual people can give them content that they are interested in and want to engage with, enhancing their experience on the platform. The commercial imperative behind this is to hold people’s attention and maximise engagement.172 However, the choice to design platforms for engagement can be problematic:

“Engagement is maximised by (1) strong emotion, (2) rabbit holes that lead to a warren of conspiracy, (3) misinformation that gets engagement from detractors and supporters, and (4) … algorithmic reinforcement of prior beliefs.”173

71.Algorithms designed to maximise engagement can directly result in the amplification of content that creates a risk of harm.174 For example, the Center for Countering Digital Hate (CCDH) found that 714 posts manually identified as antisemitic across five social media platforms reached 7.3 million impressions over a six-week period.175 By maximising engagement, algorithms can also hyper-expose individual people to content that carries a high risk of harm.176 In showing people content that is engaging, algorithms can lead them down a “rabbit hole” in which content that creates a risk of harm becomes normalised and they are exposed to progressively more extreme material.177 As Mr Ahmed told us, people are more likely to believe things they see more often, and news feeds and recommendation tools are a powerful way to influence a person’s worldview.178 ITV told us: “show an interest in a topic, even one that is potentially harmful, and their core business model and algorithms will find more of it for you.”179

72.Ms Haugen explained how recommendation systems can be designed to continuously serve content to people: “Instead of you choosing what you want to engage with, [YouTube] Autoplay chooses for you, and it keeps you in … a flow, where it just keeps you going.”180 This can be dangerous181 as “the AI (artificial intelligence) isn’t built to help you get what you want—it’s built to get you addicted … ”182 and “there is no conscious action of continuing or picking things, or whether or not to stop. That is where the rabbit holes come from.”183 In 2018 YouTube said that over 70 per cent of videos were viewed in response to recommendations.184 In written evidence to us they said the figure “fluctuates” but remains a majority.185 Continually serving content can create a risk of addictive behaviours in some people, and we heard particular concerns from witnesses about the susceptibility of children to addictive behaviours and “problematic use”.186

Frictionless activity

73.Platforms are often designed to minimise friction for users, maximising their ability to interact with one another and to spread their communications across multiple services with minimal effort. Autoplay, discussed above, is an example of a friction-reducing design feature—making it easier for the person using the system to watch another piece of content chosen for them by the platform, rather than choosing their own content or switching off.187

74.We heard that “ … safety measures frequently come into conflict with the ‘maximise engagement, minimise friction’ incentives of the surveillance advertising business model.”188 Mr Ahmed told us that platforms as currently designed had allowed just twelve individuals to produce two thirds of the COVID-19 anti-vaccine disinformation that their organisation identified online.189 Witnesses told us about one case in which the ability to invite large volumes of people to Facebook groups with ease resulted in a single user sending invitations to 300,000 other users to join a group which proliferated extreme views.190

75.Where platform design allows communication between adults and children, frictionless interaction and movement between platforms means there is a particular risk of facilitating child sexual exploitation and abuse (CSEA). The NSPCC was particularly concerned about cross-platform abuse: “abusers exploit the design features of social networks to make effortless contact with children, before the process of coercion and control over them is migrated to encrypted messaging or live streaming sites.”191 Private spaces such as chat rooms, closed groups, and encrypted messages were of particular concern to our witnesses.192

Safety by design as a mitigation measure

76.In June 2021, DCMS published guidance on how service providers can mitigate the risk of harmful and illegal activity by integrating safety into the design of platforms. They describe safety by design as: “the process of designing an online platform to reduce the risk of harm to those who use it … It considers user safety throughout the development of a service, rather than in response to harms that have occurred.”193

77.Ms Haugen gave us an example of how non-content-based interventions can be effective in reducing the virality of content:

“Let us imagine that Alice posts something and Bob reshares it and Carol reshares it, and it lands in Dan’s news feed. If Dan had to copy and paste that to continue to share it, if the share button was greyed out, that is a two-hop reshare chain, and it has the same impact as the entire third party fact-checking system … ”194
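
By way of illustration only, the kind of reshare friction Ms Haugen describes might in outline look like the sketch below: the depth of the reshare chain is tracked, and once content is two hops from the original poster the one-click share control is disabled, so further sharing requires deliberately copying the content. The threshold and function names are assumptions made for the example, not a description of Facebook’s implementation.

```python
RESHARE_DEPTH_LIMIT = 2  # the "two-hop reshare chain" in Ms Haugen's example

def can_one_click_reshare(reshare_depth: int) -> bool:
    """Return True if the one-click share button should stay active.

    reshare_depth is 0 for the original post (Alice), 1 once Bob has
    reshared it, 2 once Carol has reshared Bob's reshare, and so on.
    """
    return reshare_depth < RESHARE_DEPTH_LIMIT

def share_control(reshare_depth: int) -> str:
    # Beyond the limit the content can still be shared, but only by
    # copying and pasting it, which adds friction and slows viral spread.
    if can_one_click_reshare(reshare_depth):
        return "one_click_share_enabled"
    return "share_button_greyed_out"
```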

78.We heard similar ideas from Renée DiResta, Technical Director at the Stanford Internet Observatory, who told us that you could implement a circuit-breaker whereby “when content reaches a particular threshold of velocity or virality” you could send it to “the relevant teams within the platform so that they can assess what is happening”.195 We heard that you could also “throttle its distribution while that is happening if it falls into a particular type of content that has the potential to harm.”196 Ms DiResta explained that this concept is like one already used in financial markets.197
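
A virality “circuit breaker” of the kind Ms DiResta describes might, in its simplest form, resemble the sketch below: shares within a sliding time window serve as a velocity measure, and content that crosses a threshold is throttled and escalated to the relevant review teams. The threshold, window and function names are purely illustrative assumptions.

```python
import time
from collections import defaultdict, deque
from typing import Deque, Dict, Optional

SHARE_VELOCITY_THRESHOLD = 1000  # shares per window that trip the breaker (illustrative)
WINDOW_SECONDS = 600             # ten-minute sliding window (illustrative)

_share_times: Dict[str, Deque[float]] = defaultdict(deque)

def record_share(content_id: str, now: Optional[float] = None) -> str:
    """Record one share of a piece of content and decide whether the
    virality circuit breaker should trip."""
    now = time.time() if now is None else now
    times = _share_times[content_id]
    times.append(now)
    # Keep only the shares that fall inside the sliding window.
    while times and times[0] < now - WINDOW_SECONDS:
        times.popleft()
    if len(times) >= SHARE_VELOCITY_THRESHOLD:
        # Slow distribution and send the content to a human review team.
        return "throttle_and_escalate"
    return "distribute_normally"
```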

79.Some service providers are already implementing some safety by design measures on their platforms. In response to the Age Appropriate Design Code, YouTube recently changed the settings for Autoplay so that it is turned off by default for people using their platform who are aged 13–17.198 Twitter told us that they had introduced a “nudge” for people to read articles before they share them,199 and Snap Inc. told us that Snapchat has no open news feeds where “unvetted publishers or individuals have an opportunity to broadcast hate or misinformation, and [that it doesn’t] offer public comments that may amplify harmful behaviour.”200

80.Reset argued that rather than asking companies to write rules for content, the Online Safety Bill “should require them to improve their systems and designs: mandating practical solutions to minimise the spread of harmful material by focusing on preventative measures such as reduced amplification, demonetisation and strict limits on targeting.” They outlined how incorporating safety by design principles could affect people’s experiences of using platforms:

“… abusive tweets sent in the heat of the moment to a footballer who had a bad game aren’t promoted to other disappointed fans, causing an abusive pile on. Before telling a Love Island heartthrob who has fallen from grace to kill themselves, users are asked to think twice. Having the option to delay when your comment is posted, becomes the norm. Being directed to authoritative, fact-checked sites about climate change or coronavirus before you watch a conspiracy theory video might give pause for thought.” 201

81.We heard throughout our inquiry that there are design features specific to online services that create and exacerbate risks of harm. Those risks are always present, regardless of the content involved, but only materialise when the content concerned is harmful. For example, the same system that allows a joke to go viral in a matter of minutes also does the same for disinformation about drinking bleach as a cure for COVID-19. An algorithm that constantly recommends pictures of cats to a cat-lover is the same algorithm that might constantly recommend pictures of self-harm to a vulnerable teenager. Tackling these design risks is more effective than just trying to take down individual pieces of content (though that is necessary in the worst cases). Online services should be identifying these design risks and putting in place systems and processes to mitigate them before people are harmed. The Bill should recognise this. Where online services are not tackling these design risks, the regulator should be able to take that into account in enforcement action.

82.We recommend that the Bill includes a specific responsibility on service providers to have in place systems and processes to identify reasonably foreseeable risks of harm arising from the design of their platforms and take proportionate steps to mitigate those risks of harm. The Bill should set out a non-exhaustive list of design features and the risks associated with them, to provide clarity to service providers and the regulator; this list could be amended by Parliament in response to the development of new technologies. Ofcom should be required to produce a mandatory Safety by Design Code of Practice, setting out the steps providers will need to take to properly consider and mitigate these risks. We envisage that the risks, features and mitigations might include (but not be limited to):

a)Risks created by algorithms that create “rabbit holes”, with possible mitigations including transparent information about the nature of recommendation algorithms and user control over the priorities they set, measures to introduce diversity of content and approach into recommendations, and measures to allow people to deactivate recommendations from users they have not chosen to engage with;

b)Risks created by auto-playing content, mitigated through limits on auto-play and auto-recommendation;

c)Risks created by frictionless cross-platform activity, with mitigations including warnings before following a link to another platform and ensuring consistent minimum standards for age assurance;

d)Risks created through data collection and the microtargeting of adverts, mitigated through minimum requirements for transparency around the placement and content of such adverts;

e)Risks created by virality and the frictionless sharing of content at scale, mitigated by measures to create friction, slow down sharing whilst viral content is moderated, require active moderation in groups over a certain size, limit the number of times content can be shared on a “one click” basis (especially on encrypted platforms), and have in place special arrangements during periods of heightened risk (such as elections, major sporting events or terrorist attacks); and

f)Risks created by default settings on geolocation, photo identification/sharing and other functionality leading to victims of domestic violence or of violence against women and girls (VAWG) being locatable by their abusers, mitigated through strong default privacy settings and accessible guidance for victims of abuse on how to secure their devices and online services.

83.We recommend that the Bill includes a requirement for service providers to co-operate to address cross-platform risks and on the regulator to facilitate such co-operation.

Anonymity and traceability

84.A common design feature across many user-to-user services is allowing anonymous and pseudonymous accounts, where users are either publicly unidentifiable or only partly identifiable (e.g. identifiable by their first name only). The role of anonymity in facilitating abuse was a key theme in the evidence we received from sporting bodies. Mr Ferdinand told us that “the fact that you can be anonymous online is an absolute problem for everybody in society.”202 We also heard about anonymous abuse, or abuse from fake or disposable accounts, directed against politicians, Jewish people, women, victims of domestic abuse, and journalists both in repressive regimes and in the UK, as well as the use of such accounts to organise extremist activity and threats.203 During our roundtable on age assurance and anonymity we heard about the “disinhibition” effect associated with posting online, and particularly with posting anonymously. Most studies suggest that anonymity is likely to lead to more abusive or “uncivil” behaviour—though it is important to note that there is at least one well-known study that points the other way.204

85.Ms Ressa noted that the ease of creation and disposal of anonymous online accounts made them key tools in the disinformation and harassment campaign against her. She told us that the exponential attacks she had experienced “came from anonymous accounts because they are easy to make and easy to throw out”. She said that, as a consequence of activity from anonymous accounts, she has “watched [her] credibility get whittled away. You cannot respond. If you are the journalist or if you are a government official, your hands are tied. You are responding to a no-name account. You just do not do things like that. The attacks are horrendous.”205

86.Ms Haugen and Mr Perrin both questioned whether ending anonymity would be an effective or proportionate means of achieving the desired outcome of ending online abuse.206 We heard that being identifiable did not prevent abuse: much of the misogynistic abuse Ms Jankowicz and other prominent women received came from identifiable accounts, and she had received abuse on LinkedIn, where the abuser’s employer or prospective employer may see it.207 We also heard of the importance of anonymity to marginalised groups, victims of violence, whistleblowers, and children.208 Nancy Kelley, Chief Executive of Stonewall, explained:

“I know people have suggested things like names being visible. Even in progressive countries that are accepting, we know that will expose LGBTQ people to harm. …

If we look at that in the global context, we know from research that Article 19 has done that almost 90 per cent of LGBTQ users in Egypt, Lebanon and Iran said they are incredibly frightened of mentioning even their name in any kind of private messaging online. We know that over 50 per cent of the men charged in Egypt in recent years with homosexual ‘offences’—because it is indeed illegal to be gay there, as it still is in 71 countries around the world—were the subject of online stings.”209

87.One suggestion we heard to address the risks posed by anonymity would be to require people to provide an “anchor” to their real-world identity when creating an account, so that people can be held accountable. Such an approach has also been recommended by Siobhan Baillie MP in her Social Media Platforms (Identity Verification) Bill, and in her written evidence to the Committee.210 This would provide traceability in the event of their posting illegal content or engaging in illegal activity, but without requiring them to post under their real names.211 Crucially, we heard that traceability would have to meet minimum standards on quality in order to be effective. The FA told us that “the ability to trace back to an IP address or a location does not provide proof on the person operating behind the account” and that there are many tools that can be used “to cloud traceability”.212

88.We heard, however, that service providers already have the ability to trace people online. Ms Haugen told us: “Platforms have far more information about accounts than I think people are aware of … It is a question of Facebook’s willingness to act to protect people more than a question of whether those people are anonymous on Facebook.”213 Other witnesses, including ministers, agreed that platforms and law enforcement often do have the information and powers to identify people who act illegally online.214 The House of Commons Petitions Committee noted that the capacity of law enforcement bodies to act was a major factor.215

89.Some witnesses raised the possibility that verification did not have to be a mandatory process, and suggested numerous system design features that could address the risks posed by anonymous accounts. Clean Up the Internet argued that all users should have the option to verify their account and the option to control, on a sliding scale, the level of interaction they have with unverified accounts.216 HOPE not hate argued that anonymity did not have to mean a lack of accountability, noting that anonymous accounts can be banned just as identifiable ones can. They called for measures to introduce “friction” into the process of creating and removing accounts, requiring accounts to build up evidence of adherence to the rules before being able to access the full functionality of a platform.217 Ms Ressa agreed that the mass creation of new accounts was a core part of the problem. She and Ms Haugen both noted that, for an engagement and advertising-based business model, there was a financial incentive for platforms to facilitate the mass creation of duplicate and disposable accounts and to conceal the scale of it.218

90.Responding to the evidence we heard, the Secretary of State said the first priority of the draft Bill was to end all online abuse—not just that from anonymous accounts. She recognised the concerns about, and the importance of, anonymity for groups like whistleblowers and domestic abuse victims. She indicated that she was looking into proposals along the lines of those put forward by Clean Up the Internet, giving people the option to limit their interaction with anonymous or non-verified accounts. Finally, she talked about the importance of traceability in the context of her own experiences of online abuse, noting, as mentioned above, that platforms often do have access to the information required by law enforcement.219

91.Anonymous abuse online is a serious area of concern that the Bill needs to do more to address. The core safety objectives apply to anonymous accounts as much as identifiable ones. At the same time, anonymity and pseudonymity are crucial to online safety for marginalised groups, for whistleblowers, and for victims of domestic abuse and other forms of offline violence. Anonymity and pseudonymity themselves are not the problem and ending them would not be a proportionate response. The problems are a lack of traceability by law enforcement, the frictionless creation and disposal of accounts at scale, a lack of user control over the types of accounts they engage with and a failure of online platforms to deal comprehensively with abuse on their platforms.

92.We recommend that platforms that allow anonymous and pseudonymous accounts should be required to include the resulting risks as a specific category in the risk assessment on safety by design. In particular, we would expect them to cover, where appropriate: the risk of regulated activity taking place on their platform without law enforcement being able to tie it to a perpetrator, the risk of ‘disposable’ accounts being created for the purpose of undertaking illegal or harmful activity, and the risk of increased online abuse due to the disinhibition effect.

93.We recommend that Ofcom be required to include proportionate steps to mitigate these risks as part of the mandatory Code of Practice required to support the safety by design requirement we recommended in paragraph 82. It would be for them to decide what steps would be suitable for each of the risk profiles for online services. Options they could consider might include (but would not be limited to):

a)Design measures to rapidly identify patterns of large quantities of identical content being posted from anonymous accounts, or of large numbers of posts being directed at a single account from anonymous accounts (a simple illustrative sketch of such detection follows this list);

b)A clear governance process to ensure such patterns are quickly escalated to a human moderator, and for the swift resolution of properly authorised requests from UK law enforcement for identifying information relating to suspected illegal activity conducted through the platform, within timescales agreed with the regulator;

c)A requirement for the largest and highest risk platforms to offer the choice of verified or unverified status and user options on how they interact with accounts in either category;

d)Measures to prevent individuals who have been previously banned or suspended for breaches of terms and conditions from creating new accounts; and

e)Measures to limit the speed with which new accounts can be created and achieve full functionality on the platform.
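
By way of illustration only, the detection measure in option (a), with escalation as in option (b), might in its simplest form resemble the sketch below: recent posts from unverified accounts are grouped by a hash of their text, and any burst of identical content above a threshold within a time window is flagged for escalation to a human moderator. All names and thresholds are assumptions made for the example.

```python
import hashlib
import time
from collections import defaultdict, deque
from typing import Deque, Dict, Optional

BURST_THRESHOLD = 50   # identical posts from anonymous accounts before flagging (illustrative)
WINDOW_SECONDS = 3600  # one-hour window (illustrative)

_recent_posts: Dict[str, Deque[float]] = defaultdict(deque)  # content hash -> timestamps

def should_escalate(post_text: str, account_verified: bool,
                    now: Optional[float] = None) -> bool:
    """Return True if this post completes a burst of identical content from
    unverified accounts large enough to warrant escalation to a human moderator."""
    if account_verified:
        return False
    now = time.time() if now is None else now
    key = hashlib.sha256(post_text.strip().lower().encode("utf-8")).hexdigest()
    times = _recent_posts[key]
    times.append(now)
    # Discard posts that fall outside the time window.
    while times and times[0] < now - WINDOW_SECONDS:
        times.popleft()
    return len(times) >= BURST_THRESHOLD
```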

94.We recommend that the Code of Practice also sets out clear minimum standards to ensure identification processes used for verification protect people’s privacy—including from repressive regimes or those that outlaw homosexuality. These should be developed in conjunction with the Information Commissioner’s Office and following consultation with groups including representatives of the LGBTQ+ community, victims of domestic abuse, journalists, and freedom of expression organisations. Enforcement of people’s data privacy and data rights would remain with the Information Commissioner’s Office, with clarity on information sharing and responsibilities.

Societal harm and the role of safety by design

95.As set out in Chapter 3, the spread of disinformation online has been associated with extensive real-world harm, from mass killings to riots and unnecessary deaths during the COVID-19 pandemic. Ms Ressa told us of research which illustrates the risk that inauthentic content poses to society: “cheap armies on social media are rolling back democracy in 81 countries around the world.”220

96.Disinformation can inflict harm on individuals, as well as on groups (known as “collective harms”) and on wider society (known as “societal harms”). Collective and societal harms were frequently discussed in relation to disinformation, but they can also refer to the cumulative effect of many instances of other forms of undesirable online content and activity.

97.Examples include persistent racism or misogyny online, where the cumulative effect and frequency of attacks can make people feel less safe. Glitch told us that “the current status quo is driving women and particularly marginalised and racialised women and non-binary people to censor themselves online or remove themselves completely.”221 They also highlighted how this abuse has implications for democracy, giving abuse as one reason why many women MPs choose not to run for re-election.222 This intersects with race: research by Amnesty International analysed tweets that mentioned women MPs in the run-up to the 2017 General Election and found that the 20 Black, Asian and Minority Ethnic MPs received 41 per cent of the abuse, despite making up less than 12 per cent of those in the study.223 Ms Jankowicz told us:

“[Misogynistic abuse online] is not just a democratic concern; it is a national security concern, which should make it of interest to everybody in government. It is not just about hurt feelings. It really affects the way our countries operate.”224

98.The White Paper identified disinformation, misinformation and online manipulation as harms that were “threats to our way of life” and proposed a regulatory regime that focused on the “harms that have the greatest impact on individuals or wider society.”225 By the time the draft Bill was published, however, the Government had noted concerns from stakeholders about the impact this might have on freedom of expression, and it was made clear that the Bill would “cover content and activity that could cause harm to individuals rather than harms to society more broadly.”226 Calls remain to include collective or societal harms in the scope of the Bill, with Reset noting that “the impact of disinformation is absolutely collective in nature.”227 Others have noted the difficulty of defining and attributing harm, particularly with misinformation, and supported the Government’s decision to remove societal harm.228

99.BT Group described in their submission the impact on their staff and subcontractors of the 5G conspiracy theories which led to arson attacks on infrastructure.229 While the results of these attacks are covered by existing offline offences, it is less clear whether simply sharing an article hypothesising a link between 5G and COVID-19 would meet the threshold of harm to an individual; it may also be genuinely believed by the person sharing it, meaning it may not meet the threshold for a criminal offence. The Government gave the example of people with genuine concerns about vaccines, saying we “have good answers to those questions, and should educate people rather than silencing them, as some have called for in trying to legislate against vaccine misinformation.”230 We heard in one session that removing content could also have the unwanted effect of stoking conspiracies, adding a “censorship dynamic”, when it may be better to reduce the content’s reach and inform users.231

100.Later in this report we discuss new offences proposed by the Law Commission around harm-based or knowingly false communications. These may be helpful in some instances in tackling disinformation, but they also have limitations. The harm-based offence relates specifically to psychological harm, so may not be applicable to vaccine disinformation, and knowingly false means just that—the person sending the communication must know it is untrue. It is also unclear whether the latter offence would assist in cases of disinformation intended to disrupt elections, as the offence is framed around psychological or physical harm to individuals, rather than harm to an institution, process, state, or society.232 The Elections Bill, which is currently making its way through Parliament with the intention “to strengthen the integrity of the electoral process”233, should address the issue of disinformation which aims to disrupt elections.

101.We asked the Government why they had chosen not to include societal harms and were told “this could incentivise excessive takedown of legal material due to the lack of consensus about what might result in societal harm.”234 In our final session the Secretary of State said:

“If we put societal harms into the Bill, I am afraid we would not be able to make it work. We have looked at it. We have explored it. We have probed it. Legally, it is just a non-starter, I am afraid.”235

102.The Government instead aims to tackle the problem of disinformation through strengthened media literacy, which we consider in Chapter 8 on the role of the regulator. The draft Bill also includes a requirement for Ofcom to establish an advisory committee.236 It will include representatives of service providers, experts, and platform users, and will provide advice and oversee Ofcom’s exercise of their media literacy duties. The Government also established the Cross-Whitehall Counter Disinformation Unit (CDU) at the start of the pandemic and indicated in evidence that it would continue. Carnegie UK Trust expressed concerns about a lack of accountability for the CDU and called for it to be put on a statutory footing.237

103.Although we have been unable to see the Government’s view on the draft Bill’s compliance with the European Convention on Human Rights (ECHR) during our inquiry, we have heard in evidence that it may be open to legal challenge, including on the grounds of interference with people’s right to freedom of expression.238 The inclusion of societal, as well as individual, harms would likely raise further concerns about the extent of this interference. As Reset note in their evidence, however, the European Union’s Digital Services Act goes further than the draft Bill, “by recognising that the use of ‘VLOPs’ (Very Large Online Platforms) poses ‘systemic risks’ to individuals and to societies.”239

104.As with other types of content, much of the risk of harm from disinformation lies not in the individual pieces of content but in their amplification, in the cumulative effect of large numbers of people seeing them, and in individual people being repeatedly exposed. Mr Chaslot told us that “algorithms create filter bubbles, where some people get to see the same type of content all the time”, and that by being exposed to disinformation “over and over again” these people “[get] very disinformed” without realising it.240

105.Ms Haugen said that to tackle disinformation and the harm it risks causing, the systems that allow virality and encourage amplification need to be addressed, rather than focusing on individual pieces of content, which is when you “run into freedom of speech issues”.241 Sophie Zhang, another former Facebook employee, made a similar point, describing how fact-checking could be of only limited value as it often took place only once content had already been shared widely. She continued:

“… fundamentally, companies cannot adjudicate every piece of content … that is why my proposals and suggestions have fallen more along the lines of reducing virality in general by reducing reshares—for instance, by requiring people to go to the initial post to reshare a piece of content rather than its being reshared and resharing it again and going to chronological news feed rankings. The problem at hand is not that the content is being made in the first place, but that it is being seen and widely distributed, and people have an incentive to make potentially sensationalist claims.”242

106.We recognise the difficulties with legislating for societal harms in the abstract. At the same time, the draft Bill’s focus on individuals potentially means some content and activity that is illegal may not be regulated. We discuss this further in Chapter 4.

107.The viral spread of misinformation and disinformation poses a serious threat to societies around the world. Media literacy is not a standalone solution. We have heard how small numbers of people are able to leverage online services’ functionality to spread disinformation virally and use recommendation tools to attract people to ever more extreme behaviour. This has resulted in large-scale harm, including deaths from COVID-19, from fake medical cures, and from violence. We recommend content-neutral safety by design requirements, set out as minimum standards in mandatory codes of practice. These will be a vital part of tackling regulated content and activity that creates a risk of societal harm, especially the spread of disinformation. For example, we heard that a simple change, introducing more friction into sharing on Facebook, would have the same effect on the spread of mis- and disinformation as the entire third-party fact-checking system.

108.Later in this report we also recommend far greater transparency around system design, and particularly around automated content recommendation. This will ensure that the regulator and researchers can see what the platforms are doing and assess its impact, and that users can make informed decisions about how they use platforms. Requiring online services to publish data on the most viral pieces of content on their platform would be a powerful transparency tool, as it would rapidly highlight platforms where misinformation and disinformation are drowning out other content.
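
A transparency report of this kind could, at its most basic, be produced along the lines of the sketch below, which aggregates view events per piece of content over a reporting period and publishes the most-viewed items. The data structure and field names are illustrative assumptions rather than any platform’s actual reporting format.

```python
from collections import Counter
from typing import Dict, List, Tuple

def most_viral_content(view_events: List[Tuple[str, str]], top_n: int = 20) -> List[Dict]:
    """Aggregate (content_id, content_url) view events over a reporting period
    and return the most-viewed items in a form suitable for publication."""
    counts = Counter(content_id for content_id, _ in view_events)
    urls = {content_id: url for content_id, url in view_events}
    return [{"content_id": cid, "url": urls[cid], "views": n}
            for cid, n in counts.most_common(top_n)]

# Example: a regulator or researcher could inspect which items dominated
# distribution during the period.
events = [("post-1", "https://example.invalid/post-1"),
          ("post-2", "https://example.invalid/post-2"),
          ("post-1", "https://example.invalid/post-1")]
print(most_viral_content(events, top_n=2))
```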

109.Many online services have terms and conditions about disinformation, though they are often inconsistently applied. We recommend later a statutory requirement on service providers to apply their terms and conditions consistently, and to produce a clear and concise online safety policy. Later, we identify two areas of disinformation—public health and election administration—which are or will soon be covered in the criminal law and that we believe should be tackled directly by the Bill.

110.As a result of recommendations made in this report, regulation by Ofcom should reduce misinformation and disinformation by:

The Joint Committee that we recommend later in this report should take forward work to define and make recommendations on how to address other areas of disinformation and emerging threats.

111.Disinformation and misinformation surrounding elections are a risk to democracy. Disinformation which aims to disrupt elections must be addressed by legislation. If the Government decides that the Online Safety Bill is not the appropriate place to do so, then it should use the Elections Bill which is currently making its way through Parliament.

112.The Information Commissioner, Elizabeth Denham, has stated that the use of inferred data relating to users’ special characteristics as defined in data protection legislation, including data relating to sexual orientation and to religious and political beliefs, would not be compliant with the law. This would include, for example, a social media company deciding to allow users to be targeted with content based on these special characteristics without their knowledge or consent. Data profiling plays an important part in building audiences for disinformation, but it also has legitimate and valuable uses. Ofcom should consult with the Information Commissioner’s Office to determine the best course of action to investigate this and to make recommendations on its legality.


163 For example, written evidence from: Reset (OSB0138); Dr Edina Harbinja (Senior lecturer in law at Aston University, Aston Law School) (OSB0145); LGBT Foundation (OSB0191)

164 Q 80 (Jimmy Wales)

165 Written evidence from 5Rights Foundation (OSB0096); although not explicitly discussed in the evidence we heard, many children’s groups use the “four C’s” of content, contact, contract and conduct to describe online risks. See for example UK Safer Internet Centre, ‘What are the issues?’: https://saferinternet.org.uk/guide-and-resource/what-are-the-issues [accessed 15 November 2021]

166 Written evidence from the Department of Digital, Culture, Media and Sport and Home Office (OSB0011)

167 A 2016 report from Facebook showed that 64% of the time when Facebook users joined extremist groups, the groups had been recommended by the site’s algorithms. Study: ‘Facebook Allows And Recommends White Supremacist, Anti-Semitic And QAnon Groups With Thousands Of Members’ Forbes (4 August 2020): https://www.forbes.com/sites/jemimamcevoy/2020/08/04/study-facebook-allows-and-recommends-white-supremacist-anti-semitic-and-qanon-groups-with-thousands-of-members [accessed 9 December 2021]

168 Q 77 (Dr Edina Harbinja); Q 271 (Dame Melanie Dawes)

170 For example, Q 93; Q 92; for an example of the sorts of information used see written evidence from Elizabeth Kanter (Director of Government Relations at TikTok) (OSB0219)

172 Q 136; Q 92, Q 95, Q 101, Q 102; Written evidence from Global Action Plan (OSB0027)

173 Written evidence from Center for Countering Digital Hate (OSB0009)

174 Written evidence from: Center for Countering Digital Hate (OSB0009); 5Rights Foundation (OSB0096); Q 156; Common Sense (OSB0018); Anti-Defamation League (ADL) (OSB0030); Glitch (OSB0097); 5Rights Foundation (OSB0206)

175 Center for Countering Digital Hate, Failure to Protect: How tech giants fail to act on user reports of antisemitism (2021): https://252f2edd-1c8b-49f5-9bb2-cb57bb47e4ba.filesusr.com/ugd/f4d9b9_cac47c87633247869bda54fb35399668.pdf [accessed 16 November 2021]

179 Written evidence from ITV (OSB0204)

181 Written evidence from COST Action CA16207 - European Network for Problematic Usage of the Internet (OSB0038)

182 The Next Web News, ‘”YouTube recommendations are toxic” says dev who worked on the algorithm’: https://thenextweb.com/news/youtube-recommendations-toxic-algorithm-google-ai [accessed 9 December 2021]; Written evidence from ITV (OSB0204)

184 Cnet, ‘YouTube’s AI is the puppet master over most of what you watch’: https://www.cnet.com/news/YouTube-ces-2018-neal-mohan/ [accessed 30 November 2021]; Q 92

185 Written evidence from Google UK Limited (OSB0218)

187 The Age Appropriate Design Code encourages services to introduce wellbeing enhancing behaviours such as taking breaks, many of which have now been introduced by some services.

188 Written evidence from Global Action Plan (OSB0027)

190 Q 153; Written evidence from Google UK Limited (OSB0218)

191 Written evidence from NSPCC (OSB0109)

192 Written evidence from: NSPCC (OSB0109); Barnardo’s (OSB0017); Mrs Gina Miller (OSB0112); Dame Margaret Hodge (Member of Parliament for Barking and Dagenham at House of Commons) (OSB0201); Information Commissioner’s Office (OSB0211)

193 Department for Digital, Culture, Media and Sport, Principles of safer online platform design (June 2021): https://www.gov.uk/guidance/principles-of-safer-online-platform-design [accessed 19 November 2021]

198 Vox, ‘YouTube’s kids app has a rabbit hole problem’: https://www.vox.com/recode/22412232/youtube-kids-autoplay [accessed 22 November 2021]; Fatherly, ‘YouTube Finally Turns off Autoplay for Kids. Here’s the Catch’: https://www.fatherly.com/news/youtube-autoplay-kids [accessed 22 November 2021]; Alphabet Inc., ‘YouTube Help: Autoplay Videos’: https://support.google.com/youtube/answer/6327615?hl=en [accessed 22 November 2021]

200 Written evidence from Snap Inc. (OSB0012)

201 Written evidence from Reset (OSB0138)

202 Q 19 (Rio Ferdinand); see also for example written evidence from: The Football Association, The Premier League, EFL, Kick It Out (OSB0007); Sport and Recreation Alliance (OSB0090); 5 Sports: The Football Association, England and Wales Cricket Board, Rugby Football Union, Rugby Football League and Lawn Tennis Association, The FA (OSB0111)

203 For example, written evidence from: Compassion in Politics (OSB0050); Dame Margaret Hodge (Member of Parliament for Barking and Dagenham at House of Commons) (OSB0201); Antisemitism Policy Trust (OSB0005); Centenary Action Group, Glitch, Antisemitism Policy Trust, Stonewall, Women’s Aid, Compassion in Politics, End Violence Against Women Coalition, Imkaan, Inclusion London, The Traveller Movement (OSB0047); Refuge (OSB0084); Q 194 (Maria Ressa); The National Union of Journalists (NUJ) (OSB0166); HOPE not hate (OSB0048); Mrs Gina Miller (OSB0112)

204 For a summary of some of the key research discussed see, Clean Up the Internet, ‘Academic Research about online disinhibition, anonymity and online harms’: https://www.cleanuptheinternet.org.uk/post/some-useful-scholarly-articles-about-online-disinhibition-anonymity-and-online-harms [accessed 18 November 2021]

205 Q 194 (Maria Ressa)

206 Q 73 (William Perrin); Q 171 (Frances Haugen)

207 Q 62 (Nina Jankowicz)

208 For example, Q 73 (Dr Edina Harbinja); written evidence from: Demos (OSB0159); Glassdoor (OSB0033); HOPE not hate (OSB0048)

209 Q 44 (Nancy Kelley)

210 Social Media Platforms (Identity Verification) Bill; written evidence from Siobhan Baillie (Member of Parliament for Stroud) (OSB0242)

211 Q 194 (Maria Ressa); written evidence from: Antisemitism Policy Trust (OSB0005); Sport and Recreation Alliance (OSB0090)

212 Written evidence from The Football Association, Kick It Out (OSB0234)

213 Q 171 (Frances Haugen)

214 Q 219 (Rt Hon Nadine Dorries MP, Chris Philp MP, Rt Hon Damian Hinds MP)

215 Written evidence from Mr John Carr (Secretary at Children’s Charities’ Coalition for Internet Safety) (OSB0216); see for example Petitions Committee, Online Abuse and the Experience of Disabled People, (First Report, Session 2017–19, HC 759) paras 123–137, which detailed the problems disabled people often have in getting law enforcement to investigate potentially illegal online abuse.

216 Written evidence from Clean up the Internet (OSB0026)

217 Written evidence from HOPE not hate (OSB0048)

218 QQ 193–194 (Maria Ressa); Q 129 (Frances Haugen)

219 Q 291 (Rt Hon Nadine Dorries MP)

221 Written evidence from Glitch (OSB0097)

222 Written evidence from Glitch (OSB0097)

223 Amnesty International, ‘Black and Asian women MPs abused more online’: https://www.amnesty.org.uk/online-violence-women-mps [accessed 30 November 2021]

225 Department for Digital, Culture, Media and Sport and The Home Office, Online Harms White Paper, CP 59, April 2019, p 54: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/973939/Online_Harms_White_Paper_V2.pdf [accessed 22 November 2021]

226 Department for Digital, Culture, Media and Sport and The Home Office, Impact Assessment, April 2021, p 116, p 123: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/985283/Draft_Online_Safety_Bill_-_Impact_Assessment_Web_Accessible.pdf [accessed 22 November 2021]

227 Written evidence from Reset (OSB0138)

228 Written evidence from UKRI Trustworthy Autonomous Systems Hub (OSB0060)

229 Written evidence from BT Group (OSB0163)

230 Written evidence from Department of Digital, Culture, Media and Sport and Home Office (OSB0011)

232 Law Commission, Modernising Communications Offences Law Com No 399, HC 547 (July 2021), pp 224–225: https://s3-eu-west-2.amazonaws.com/lawcom-prod-storage-11jsxou24uy7q/uploads/2021/07/Modernising-Communications-Offences-2021-Law-Com-No-399.pdf [accessed 22 November 2021]

233 Elections Bill [Bill 178 (2021–22)]

234 Written evidence from Department of Digital, Culture, Media and Sport and Home Office (OSB0011)

236 Draft Online Safety Bill, CP 405, May 2021, Clause 98

238 Written evidence from Gavin Millar QC (OSB0221)

239 Written evidence from Reset (OSB0138)




© Parliamentary copyright 2021