238.The draft Bill covers user-to-user services and search services.419 It includes messaging apps (such as Facebook Messenger, WhatsApp and Telegram) but excludes “one to one aural communication”, SMS messages and email, as well as particular types of content, discussed later.420
239.The draft Bill divides regulated services into three categories: Category 1 (likely to include the largest user-to-user services), Category 2A (search services) and Category 2B (user-to-user services that do not meet the threshold to be Category 1).421 It is unclear whether some user-to-user services and search services may fall below the threshold for Category 2 and therefore sit outside the regulatory regime entirely. The draft Bill requires Ofcom to maintain a register of services in each category, determined by thresholds set by the Secretary of State in accordance with Schedule 4. Certain duties in the draft Bill apply only to Category 1 services—for example, the safety duties for adults and the duties to protect content of democratic importance and journalistic content.422
240.As set out above, the draft Bill divides service providers into categories based on size and functionality. These categories restrict some duties to only the largest and highest-risk services—primarily the safety duty in respect of content that is harmful to adults and the duties to protect journalistic content and content of democratic importance. The Impact Assessment for the draft Bill states:
“ … the current estimate based on the policy intention is that only up to 20 of the largest and highest risk services will meet the Category 1 thresholds, likely to be large social media platforms and potentially some gaming platforms and online adult services.”423
241.The idea of categorisation was welcomed by some witnesses. Mr Ahmed told us: “There are rational reasons for splitting out the larger platforms from the smaller platforms. … Like The Disinformation Dozen, you should focus as much as possible on where the greatest quantum of harm is caused.”424
242.Whilst the draft Bill sets out the principle of thresholds, setting them is left to secondary legislation and the Government has given little indication of where they might be set, especially for the Category 2A and 2B thresholds. Uncertainty over categorisation was a significant theme in the evidence we received from service providers. TechUK noted that the regulation was likely to cover around 24,000 in-scope services, with purposes ranging from educational to professional to social.425 They called for clarity on the thresholds for Category 1 and 2 services and on how different services operated by the same company might be treated. They argued that it would be a real challenge for companies to prepare for the regulation without this information.426
243.The categorisation in the draft Bill has also been criticised for underestimating the impact of small, high-risk companies. Some providers, like Microsoft, argued for a more nuanced approach to categorisation, taking account of risk factors such as a provider taking specific actions to boost the virality of content, a history of illegal content on the service, or the ability to use it anonymously.427 Large services like Facebook argued that much of the activity that creates a risk of harm on their services originates on smaller services.428 Organisations like Hope not Hate, the Antisemitism Policy Trust, and Stonewall stressed the role of “alternative” services such as 4Chan and BitChute in hosting and spreading extremist content or misinformation.429 The Samaritans told us they are regularly contacted by members of the public, including the bereaved parents of children who have died by suicide, concerned about children as young as 12 accessing smaller services that promote and assist suicide.430
244.During our visit to Brussels, we had a useful discussion about the work of the Centre on Regulation in Europe and, in particular, the work of Dr Sally Broughton Micova (Lecturer in Communications Policy and Politics at the University of East Anglia) on the relationship between size and risk for online services. That work highlights that the “public” nature of a service, and the risk of it aggregating private harm into societal harm, does not depend solely on size. It also depends on the service’s interconnectedness with other services, its impact on media plurality and its impact within a relatively constrained geographic area, such as a Member State in an EU context.431
245.We put the idea of a more nuanced approach to categorisation to Ofcom. They agreed that the Bill should not lose sight of “smaller but extremely risky” services, though they reminded us that most of the draft Bill (including the illegal content and content harmful to children duties) applies to Category 2B providers.432
246.We recommend that the categorisation of services in the draft Bill be overhauled. It should adopt a more nuanced approach, based not just on size and high-level functionality, but on factors such as risk, reach, user base, safety performance, and business model. The draft Bill already has a mechanism to do this: the risk profiles that Ofcom is required to draw up. We make recommendations in Chapter 8 about how the role of the risk profiles could be enhanced. We recommend that the risk profiles replace the “categories” in the Bill as the main way to determine the statutory requirements that will fall on different online services. This will ensure that small but high-risk services are appropriately regulated, whilst guaranteeing that low-risk services, large or small, are not subject to unnecessary regulatory requirements.
247.As noted above, the draft Bill places search engines in a separate category—Category 2A—from user-to-user services. This means they cannot come into Category 1 and cannot be subject to the safety duties relating to adults or the protections for journalistic content and content of democratic importance. In many other respects, however, the duties and responsibilities that the draft Bill places on them are broadly similar to those for user-to-user services.
248.Search providers felt that their duties under the draft Bill were not sufficiently differentiated from the requirements on user-to-user services. Google told us:
“search services do not host user-generated content. They are an index of trillions of web pages. We provide users with the ability to find relevant information relative to those index pages. The broad definition of online harms would require us potentially to have to make contextual decisions about the nature of content that we do not host, which is on another service, and require us to monitor those billions of web pages in doing so.”433
249.Groups representing the targets of online hate rejected the idea of search engines as passive “indexes”. Mr Stone told us:
“The search companies are having a laugh at the Bill’s expense if they are not included in Category 1. Google was directing people to the search “Are Jews evil?” Microsoft Bing was directing people to “Jews are”, and then a rude word about Jews. Currently, if you were to search the word “goyim”—originally a Yiddish word for non-Jews, which is being used in a pejorative sense now—on Microsoft Bing, you will get an antisemitic website and a suspended Twitter account as the top search results. Alexa and Siri are completely outside the bounds of responsibility of this Bill. It would be a travesty if they are left out of Category 1. They should absolutely be in there.”434
250.For groups like these, the exclusion of search engines and smaller services from the duties in respect of content harmful to adults (in particular) leaves a significant gap in the draft Bill.
251.We recognise that search engines operate differently from social media and that the systems and processes required to meet the separate duties that the draft Bill places on them are different. The codes of practice drawn up by Ofcom will need to recognise the specific circumstances of search engines to meet Ofcom’s duties on proportionality. Search engines are more than passive indexes. They rely on algorithmic ranking and often include automatic design features, like autocomplete and voice-activated searches, that can steer people towards content that puts them or others at risk of harm. Most search engines already have systems and processes in place to address these risks and comply with other legislation. It is reasonable to expect them to come under the Bill’s requirements and, in particular, to conduct risk assessments of their system design to ensure it mitigates rather than exacerbates risks of harm. We anticipate that they will have their own risk profiles.
252.End-to-end encryption (E2EE) allows messages to be transferred securely between devices, preventing service providers and unassociated third parties from viewing the content of communications. A range of messaging services, including the Facebook-owned WhatsApp as well as Telegram and Signal, use encryption to protect the privacy of people’s private messages. However, this can present challenges for ensuring online safety: because services are technologically unable to access people’s content, they cannot apply their usual moderation processes.
253.Currently, most E2EE services identify activity that creates a risk of harm either by using metadata signals (such as the number of messages being sent) or by relying on people to submit reports. Both approaches have substantial limitations. For instance, metadata signals may help to identify fraud and inauthentic behaviour but are unlikely to identify hateful content or some forms of content that creates a risk of harm, for example content related to eating disorders. User reports are unlikely to work for interactions between like-minded individuals, such as terrorists or their sympathisers, child abusers or extremists. The risk presented by E2EE services that lack appropriate safeguards is particularly acute when communications involve more than the two individuals of a one-to-one service: some encrypted services allow the creation of groups involving hundreds or thousands of members. We heard concerns from those who believed that the draft Bill would undermine the privacy offered by E2EE.435 We also heard from other witnesses who drew attention to the use of encrypted services in illegal or harmful activity, including CSEA and fraud.436
254.In their evidence submission, the ICO described the choice between E2EE and online safety as “a false dichotomy”, emphasising that proportionate responses need to be developed which do not “unduly interfer[e]” with the benefits presented by E2EE.437 Balancing the benefits of E2EE, in terms of protecting individuals’ freedom and privacy, with online safety requirements is essential for the Bill to be effective. However, it is unclear how this will be achieved, and mature technical solutions that enable content on E2EE services to be moderated are not widely available. If the challenges presented by E2EE are not resolved then, in the extreme, there are two potential negative outcomes: (1) E2EE becomes infeasible because services cannot meet online safety requirements; or (2) E2EE is over-applied because it lets services avoid their regulatory obligations.
255.We heard during our visit to Brussels that there are ways that risks on E2EE services can be mitigated without breaking encryption or compromising the privacy of genuinely private communications. These include design features built into the services themselves, such as limiting the frictionless mass sharing of material; age assurance designed to prevent abusers from luring children into conversations on encrypted services; and media literacy campaigns promoting the message “report, don’t share” in respect of illegal content.
256.Concerns have been raised about the use of privacy-protecting and encrypted services to access the Internet, including proxies and VPNs. We heard from Mobile UK, the trade association for the UK’s mobile network operators, about Apple Private Relay.438 Introduced in “beta” mode in 2021, Private Relay routes traffic through two separate internet relays so that no single party can see both who is using the Internet and what they are looking at. Apple claims that, like a VPN, this means it is technologically unable to break the privacy protection. In contrast to a VPN, Private Relay is intended to be easier to use, cheaper and to add less of a time delay. It only works with Apple’s Safari browser, but because it is included with an iCloud+ subscription, Private Relay could become widely used. The use of privacy-protecting services to access the Internet could reduce the number of points at which users are protected from unsafe content. We heard from BT Group that such services “render ineffective the software that ISPs and mobile companies currently use to block images of child abuse and other extreme or illegal content.”439
257.The Government needs to provide more clarity on how providers with encrypted services should comply with the safety duties ahead of the Bill being introduced into Parliament.
258.We recommend that end-to-end encryption should be identified as a specific risk factor in risk profiles and risk assessments. Providers should be required to identify and address risks arising from the encrypted nature of their services under the Safety by Design requirements.
259.Paid-for adverts are not included in the scope of the draft Bill. We heard that this exclusion creates a gateway for various harms to be spread online. The FCA told us that: “the problem [of online fraud] is most manifest in the paid-for space, so it does not make sense for the Bill not to deal with the very heart of the problem, which is the paid-for advertising space.”440 Similarly, Which? noted: “Paid-for advertising on online platforms is a primary method used by criminals to target consumers and engage them in a [financial] scam, as it gives them instant access to large numbers of target audiences.”441
260.Several witnesses suggested that if paid-for advertising remained excluded from scope, criminals might switch to paying for fraudulent content to be disseminated.442 Which? also told us they had investigated how easy it was to advertise a scam by creating a fake drinking water and hydration service. They were able to have it advertised on Facebook and Google with minimum checks, as well as pay for it to appear above the NHS website in Google searches about hydration.443 The Advertising Standards Authority confirmed that it knew “from recent research that increasing concerns about scams are influencing the public’s trust in online ads.”444
261.The exclusion of paid-for adverts from the scope of the draft Bill leaves little incentive for operators to remove scam adverts. Regardless of their legitimacy, they generate revenue for platforms—and as we heard from Dame Margaret Hodge MP: “This will continue to benefit the fraudsters and the chief executives.” Dame Margaret added that some platforms currently benefit from the FCA paying them to place legitimate adverts above scam ones, making it “a ‘win-win’ for them.”445 For example, the FCA invested £600,000 in preventing scam adverts on Google in 2020 through public information advertising.446
262.We also received evidence of the risk of non-financial harms if paid-for advertisements were excluded from the scope of the Bill. The Center for Countering Digital Hate told us that: “Facebook routinely accepted money to advertise anti-vaccine messages to its users until announcing it would end the practice in October 2020. Even then, our research showed that Facebook continued to broadcast anti-vaccine adverts worth at least $10,000 to its users.”447 Who Targets Me called for paid-for advertising to be brought into scope so that political advertising would be subject to regulation. They hoped that this would mitigate problems such as foreign powers interfering in UK elections via advertising and the spread of harmful political disinformation.448
263.Ms Edelson told us that: “ad tech [advertising technology] can be really powerful in helping scammers to identify … vulnerable populations.”449 For example, she told us: “We saw anti-vax [anti-vaccine] content being promoted to pregnant women in the United States … as a means to sell their vaccine harm reduction supplements.”450 Global Action Plan highlighted to us that studies have “found that it was possible, using Facebook’s admanager, to target children on Facebook aged between 13 and 17 based on such interests as alcohol, smoking and vaping, gambling, extreme weight loss, fast foods and online dating services.”451
264.Most of the evidence we received called for paid-for advertising to be brought into scope of the Bill.452 We were also told that the public was supportive of such a move in relation to financial scams. Research by Aviva found that 87 per cent of people felt that the Government should introduce legislation to ensure that search engines and social media platforms do not promote financial scams through advertising.453
265.Google disagreed, telling us that:
“Advertising and financial fraud involve a complex and highly specialised set of issues and existing rules, and requires consideration of the implications of regulation for legitimate competition and innovation in the UK’s dynamic fintech sector. The Advertising Standards Authority and the Financial Conduct Authority are best placed to address these issues.”454
They also wrote that such an extension in scope: “would significantly add to the 24,000 companies the Government estimates will be affected by the Bill.”455 At the same time, Google are a good example of what can be achieved when a platform decides to co-operate with a regulator. By changing their terms and conditions to allow adverts for financial services only from FCA-regulated firms, they considerably reduced the number of scam adverts on their platform. Other service providers have not yet implemented this measure.456
266.We heard from Ofcom that if paid-for advertisements are brought into scope, “retaining the focus on systems and processes rather than individual fraudulent or other content” should remain their priority.457 This is consistent with the approach that Ofcom has already taken to the regulation of advertising on video-sharing platforms, where they have a duty to ensure that standards around advertising on those platforms are met. Here Ofcom has designated the ASA as co-regulator with day-to-day responsibility for advertising, while Ofcom acts as the statutory backstop regulator.458 They also underlined that “the onus and strategy [should] be clearly owned by the criminal enforcement agencies” when it comes to dealing with individual cases of fraud.459
267.Rt Hon Nadine Dorries MP, the Secretary of State for DCMS, told us that paid-for advertising and scams were being considered as part of DCMS’s online advertising programme rather than being included in the Online Safety Bill. She also told us that her legal advice is that extending the scope to include paid-for adverts: “would not work and it would extend the scope of the Bill in a way that would not be appropriate.”460 Nonetheless, she did invite us to examine “highly targeted” amendments.461 Mr Philp wrote to us that bringing paid-for advertising into scope would be difficult. Firstly, because “doing so would require a reconsideration of the services in scope of the Bill”, since online advertising can involve companies different from those the Government has aimed to regulate in the draft Bill.462 Secondly, he told us that:
“Safety duties on user-to-user and search services may not be appropriate, and in some cases not feasible, for application to advertisers given the very different way in which they operate. Advertising actors rely on very different contractual agreements to publish and disseminate content, in comparison to the user-to-user and search services in scope of the Online Safety Bill. A whole new set of duties would be required to comprehensively address the range of actors involved in the advertising market.”463
268.The exclusion of paid-for advertising from the scope of the Online Safety Bill would obstruct the Government’s stated aim of tackling online fraud and, more generally, activity that creates a risk of harm. It would leave service providers with little incentive to remove harmful adverts and would risk encouraging the further proliferation of such content.
269.We therefore recommend that clause 39(2) be amended to remove “(d) paid-for advertisements”, bringing such adverts into scope. Clause 39(7) and clause 134(5) would then also have to be removed.
270.Ofcom should be responsible for acting against service providers who consistently allow paid-for advertisements that create a risk of harm to be placed on their platform. However, we agree that regulating advertisers themselves (except insofar as they come under other provisions of the Bill), dealing with individual cases of illegal advertising, and pursuing the criminals behind illegal adverts should remain matters for the existing regulatory bodies and the police.
271.We recommend that the Bill make clear that Ofcom’s role will be to enforce the safety duties on providers covered by the online safety regulation, not to regulate the day-to-day content of adverts or the actions of advertisers. That is the role of the Advertising Standards Authority. The Bill should set out this division of regulatory responsibility.
272.Clause 41(6) excludes online consumer harms from the scope of the draft Bill, such as the sale of goods that are of an unsafe standard or the provision of services by someone not qualified to perform them. Clause 39(2)(d) also places reviews of products appearing on retailers’ websites out of scope. We heard opposition to this from Sky and other media and creative businesses, who argued that: “the exclusion of these harms on the face of the legislation stands in contrast to the Government’s stated ambition of a ‘coherent, single regulatory framework’ for online platforms.”464 These businesses were also concerned about the exclusion of intellectual property infringements from the draft Bill.465
273.In contrast, the British Retail Consortium supported the exclusion, claiming that its removal “could have an adverse impact on retail, not least the smaller retailers”466 because of the regulatory burden it would place on them. They believe that “as it stands customer reviews on products that are sold by a third party to a customer via a marketplace are in scope”,467 unlike reviews on products retailers sell themselves. They urged the Government to exclude all reviews from scope, both to avoid bringing more retail companies into the regime and because reviews help consumers make informed choices.
274.The CMA told us they “consider that existing consumer law requires platform operators to take reasonable and proportionate steps to effectively protect consumers from economically harmful illegal content.”468 As such, they have already acted to, for example, “tackle the trading of fake and misleading online reviews on Facebook and eBay”.469 They therefore argue that online consumer harms should remain excluded from scope and that the Government should “use an alternative or existing legislative initiative to ensure the necessary protections for consumers”.470 The CMA told us that bringing consumer harms into scope could mean:
“Many platform operators are likely to remain unclear about the full extent of their legal responsibilities in connection with economically harmful content … those operators may fail to take steps to implement appropriate systems and processes to effectively tackle such content until regulatory action is taken.”471
275.We recognise that economic harms other than fraud, such as those affecting consumers and the infringement of intellectual property rights, are online problems that must be tackled. However, the Online Safety Bill is not the best piece of legislation to achieve this. Economic harms should be addressed in the upcoming Digital Competition Bill. We urge the Government to ensure this legislation is brought forward as soon as possible.
419 Draft Online Safety Bill, CP 405, May 2021, Clause 2
420 Draft Online Safety Bill, CP 405, May 2021, Clause 39(2)
421 Draft Online Safety Bill, CP 405, May 2021, Clause 59
422 Draft Online Safety Bill, CP 405, May 2021, Clauses 11, 13, 14
423 Department for Digital, Culture, Media and Sport, The Online Safety Bill: Impact Assessment (May 2021) RPC-DCMS-4347(2), para 116: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/985283/Draft_Online_Safety_Bill_-_Impact_Assessment_Web_Accessible.pdf [accessed 18 November 2021]
426 Written evidence from: techUK (OSB0098); others who raised similar concerns included Mumsnet (OSB0031); and Glassdoor (OSB0033)
429 For example, written evidence from: HOPE not hate (OSB0048); Q 38 (Danny Stone); Q 38 (Nancy Kelley); Q 57 (Nina Jankowicz)
431 Centre on Regulation in Europe (CERRE), Issue Paper by Sally Broughton Micova, What is the Harm in Size? Very Large Online Platforms in the Digital Services Act (October 2021): https://cerre.eu/wp-content/uploads/2021/10/211019_CERRE_IP_What-is-the-harm-in-size_FINAL.pdf [accessed 18 November 2021]
435 Written evidence from: Tech Against Terrorism (OSB0052); British & Irish Law, Education & Technology Association (OSB0073); Sophie Zhang (Former Facebook Employee) (OSB0214)
438 Mobile UK, Digital, Culture, Media and Sport Sub-Committee: Call for Evidence on Online Safety and Online Harms (September 2021): https://uploads-ssl.webflow.com/5b7ab54b285deca6a63ee27b/61680f8d16cc3e792c4c2c1e_MobileUK_Online_safety_draft_bill_020921.pdf [accessed 9 December 2021]
442 This concern is raised in written evidence from: Reset (OSB0138) and Dame Margaret Hodge (Member of Parliament for Barking and Dagenham at House of Commons) (OSB0201), and oral evidence by the FCA (Q 120), Which? (Q 112 (Rocio Concha)), Martin Lewis (Q 112 (Martin Lewis)), and Ofcom (Q 263).
445 Written evidence from Dame Margaret Hodge (Member of Parliament for Barking and Dagenham at House of Commons) (OSB0201)
446 Written evidence from Dame Margaret Hodge (Member of Parliament for Barking and Dagenham at House of Commons) (OSB0201)
452 See written evidence from: Keoghs LLP (OSB0003); Somerset Bridge Group Ltd (OSB0004); Work and Pensions Committee (OSB0020); Quilter (OSB0024); Money and Mental Health Policy Institute (OSB0036); Aviva Plc (OSB0042); Financial Conduct Authority (OSB0044); CIFAS (OSB0051); Mr Mark Taber (Consumer Finance Expert, Campaigner & Media Contributor at Mark Taber) (OSB0077); Association of British Insurers (OSB0079); Direct Line Group (OSB0082); UK Finance (OSB0088); Barclays Bank (OSB0106); MoneySavingExpert (OSB0113); Which? (OSB0115); Innovate Finance (OSB0116); Revolut (OSB0117); Lloyds Banking Group plc (OSB0135); Office of the City Remembrancer, City of London Corporation (OSB0148); The Investment Association (OSB0162); BT Group (OSB0163); Paul Davis (Director of Fraud at TSB Bank Plc) (OSB0164); Sky, BT, Channel 4, COBA, ITV, NBC Universal, TalkTalk, Virgin Media O2, Warner Media (OSB0177); Rt Hon. Mel Stride MP (Chair at House of Commons Treasury Select Committee) (OSB0209); Dame Margaret Hodge (Member of Parliament for Barking and Dagenham at House of Commons) (OSB0201); Hargreaves Lansdown (OSB0197)
456 ‘Instagram favourite site for scammers’, The Times (26 November 2021): https://www.thetimes.co.uk/article/instagram-favourite-site-for-scammers-8qdmq0ffh [accessed 9 December 2021]; Q 223
458 Ofcom, The regulation of advertising on video-sharing platforms (7 December 2021): https://www.ofcom.org.uk/__data/assets/pdf_file/0022/229009/vsp-advertising-statement.pdf [accessed 9 December 2021]
465 For example, written evidence from Sky, BT, Channel 4, COBA, ITV, NBC Universal, TalkTalk, Virgin Media O2, Warner Media (OSB0177); Alliance for Intellectual Property (OSB0016)
467 Ibid.
469 Ibid.
470 Ibid.
471 Ibid.