Regulating in a digital world

Chapter 5: Online platforms

180.Intermediaries, including online platforms, are the gatekeepers to the internet. In this chapter we consider their role in mediating content-based online harms, including: bullying, threats and abusive language (including hate speech), economic harms (including fraud and intellectual property infringement), harms to national security (including violent extremism) and harms to democracy.

181.Although online content is subject to civil and criminal law, including the law of defamation and public order offences, there is no systematic regulation of content analogous to that which applies to the use of data and market competition.201 Mark Bunting, a partner at Communications Chambers, told us that content regulation represented “the most obvious gap” in the regulatory landscape.202 In particular, he felt that there was a gap in “regulatory capacity to engage with platforms’ role in managing access to … content”. The Government agreed that the lack of enforcement in the online environment had allowed some forms of unacceptable behaviour to flourish, and it is developing an Internet Safety Strategy to address these gaps.203

182.It must be stressed that not all online harms are illegal. For example, instances of bullying, online abuse and disseminating extremist content or political misinformation may not cross the threshold of illegality. In this chapter we first consider the existing model for regulating illegal content and then how online harms in general can be better regulated. Table 3 sets out the different categories of online content and our approach to regulating them.

Illegal content

183.The European e-Commerce Directive provides that online intermediaries are not liable for illegal content found on their services unless they have specific knowledge of it.204 If they become aware of such content, they must act expeditiously to remove it. This model is known as ‘notice and takedown’. It enables platforms to intermediate large volumes of content from different sources without scrutinising its legality before publication. As set out in box 4, the liability exemption applies only to specific types of activity, not to the platforms themselves.

184.Article 15 of the directive provides that member states may not impose a general responsibility on service providers to monitor content (see box 5). In practice service providers frequently monitor content, often using specially designed software, and they work with designated organisations (called ‘trusted flaggers’) to identify illegal content. They also rely heavily on users to report content.

Table 3: Categories of online content

Illegal
Examples: Terrorism-related, child sexual abuse material, threats of violence, infringement of intellectual property rights.
Current status: Governed by criminal and civil law and the e-Commerce Directive. The e-Commerce Directive is evolving through case law.
Recommendations: The Directive is under review and will require further consideration after the UK leaves the European Union.

Harmful
Examples: Content which is not illegal but is inappropriate for children, content which promotes violence or self-harm, and cyberbullying.
Current status: Subject to Terms of Service.
Recommendations: A duty of care should be imposed on platforms and regulated by Ofcom. Terms of service must be compatible with age policies.

Anti-social
Examples: Indecent, disturbing or misleading content and swearing.
Current status: Subject to Terms of Service.
Recommendations: Platforms should provide their terms of service in plain English and Ofcom should be empowered to ensure that they are upheld. Platforms should work with Ofcom to devise a classification framework. Platforms should invest in their moderation systems to remove content which breaks the law or community standards more quickly and to provide a fair means of challenging moderation decisions.

Box 4: The e-Commerce Directive: articles 12–14

The directive excludes liability for ‘Information Society Service Providers’ (which include internet service providers and most online platforms) where, in relation to the content in question, they are acting as:

  • Access providers (“mere conduits”) which transmit information automatically and transiently, or provide access to a communication network. To qualify for this exemption, an intermediary must not (1) initiate the transmission, (2) select the receiver of the transmission, or (3) select or modify the information contained in it. Any storage of the information must take place solely in order to carry out the transmission and must not last longer than is reasonably necessary for that purpose. Telecommunications operators such as mobile networks perform this function. (Article 12)
  • Cache providers which store transmitted information automatically and temporarily “for the sole purpose of making more efficient the information’s onward transmission to other recipients of the service upon their request”. The intermediary must not modify the information. Internet service providers, such as BT, Sky Broadband and Virgin Media, perform this function. (Article 13)
  • Hosting providers which store data specifically selected and uploaded by a user of the service, and intended to be stored (“hosted”) for an unlimited amount of time. Hosting providers can benefit from the liability exemption only when they are “not aware of facts or circumstances from which the illegal activity or information is apparent” (when it concerns civil claims for damages) or when they “do not have actual knowledge of illegal activity or information.” This can apply to some but not all activities of social media companies, search engines and other online platforms. (Article 14)

Source: Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market (‘Directive on electronic commerce’) (OJ L 178, 17 July 2000). The directive refers to ‘information society service providers’. It was transposed into UK law by The Electronic Commerce (EC Directive) Regulations 2002 (SI 2002/2013).

185.EU member states have applied different standards in implementing this rule. For example, Germany’s Network Enforcement Act (NetzDG) requires platforms with more than 2 million subscribers to remove “manifestly unlawful” content within 24 hours, with fines of up to €50 million for non-compliance. However, the law has been widely criticised for incentivising platforms to take down legal content.205 On the first day of the law’s coming into force, Twitter temporarily suspended the account of Beatrix von Storch, the deputy leader of Alternative für Deutschland (AfD), after she posted anti-Muslim tweets, and deleted tweets from other AfD politicians. It subsequently suspended for two days the account of Titanic, a German satirical magazine, for parodying von Storch. The German newspaper Bild said that the law made AfD “opinion martyrs” and called for it to be repealed.206 techUK warned that the chilling effect of overly strict take-down obligations “could have untold consequences on the availability of legitimate content.”207

Box 5: The e-Commerce Directive: article 15

Article 15 of the e-Commerce Directive prevents EU member states from imposing on intermediaries a general obligation to monitor information which they transmit or store, and provides that intermediaries cannot be generally obliged “actively to seek facts or circumstances indicating illegal activity”. Article 15 does not prevent member states from setting up reporting mechanisms which require intermediaries to report illegal content once they are made aware of it. Graham Smith, a partner in Bird & Bird, has called article 15 “a strong candidate for the most significant piece of internet law in the UK and continental Europe”.208

Whereas other provisions of the e-Commerce Directive were implemented by the Electronic Commerce (EC Directive) Regulations 2002, article 15 was not specifically implemented through UK domestic legislation. Under section 2 of the European Union (Withdrawal) Act 2018 directives are not in themselves “retained EU law”, only the domestic legislation made to implement them. However, under section 4 of the Act any prior obligations or restrictions of EU law which are “recognised and available in domestic law” will continue after Brexit. As article 15 has been recognised by domestic courts, including the Supreme Court in Cartier International AG and others v British Telecommunications Plc,209 it is likely to be considered retained law, but uncertainty may remain until the matter is tested by the courts.

186.Some witnesses argued that the directive, which is nearly 20 years old, is no longer adequate for today’s internet. They argued that it was created when service providers did not have a role in curating content for users, as many do now. The Children’s Media Foundation said: “Whether by accident or design, search engine algorithms are the de-facto curators for most people’s access to content online. The platforms are using this curation to drive their revenues.”210 It disputed that online platforms had a passive role, arguing that they should be considered publishers. All Rise agreed:

“The e-Commerce Directive was introduced in what now feels like a bygone era … One of the biggest winners … has been the online platforms. They can provide services to millions of people worldwide, harvest their data and make millions in revenue, and yet have zero responsibility for what their customers see and experience and the harm they suffer whilst under their care. Yes, the platforms have to remove illegal content once they are notified, but they have no obligation proactively to stop that content from reaching our eyes and ears, even if they know their sites are full of it.”211

187.The Northumbria Internet & Society Research Interest Group explained how liability might be extended beyond ‘notice and takedown’:

“A platform should be liable if it has knowledge of the unlawful content or it has the technical means and resources to ensure the legality of the activities carried out on the platform while striking a balance between the different interests involved, including freedom of expression. Platforms which de facto or de jure monitor users cannot invoke immunity.”212

188.By contrast Oath, a tech group, argued that the e-Commerce Directive framework was not “superannuated” but was “deliberately forward-looking and prescient by design”.213 According to Oath, it has several key strengths including that it is technology neutral and so can be adapted to “complex and fast-evolving business models”.

189.The courts already take account of the fact that the platforms of today are very different from the providers that existed when the directive was introduced. In L’Oréal v eBay the court found that if the operator of a marketplace platform optimised the presentation of offers for sale of trademark-infringing goods, or promoted those offers, it could not rely on the exemption from liability under article 14. In that case the operator would be considered not to have taken “a neutral position between the customer–seller concerned and potential buyers but to have played an active role of such a kind as to give it knowledge of, or control over, the data relating to those offers for sale”.214 This judgment means that platforms which curate content may find that the safe harbour is not available to them.

190.Global Partners Digital cautioned against the introduction of “inappropriate legislation” which:

“attaches liability to online platforms for content which is available on them, can lead to a ‘chilling effect’ in which platforms either become reluctant to host or otherwise make available content, or are overly zealous in removing content which might be harmful. It can also result in online platforms being forced to make decisions about the legality of content which they are ill-equipped to make, a problem exacerbated due to the minimal transparency that exists regarding online platforms’ decision-making, and the absence of due process, safeguards for affected users, and oversight.”215

191.The Internet Society warned that imposing specific legal liability might undermine competition, as “large platforms are more likely than smaller platforms to be able to invest in resources to (a) fight litigation, (b) develop tools and algorithms to police their platform and (c) actively employ people to police their platform”.216 The Internet Society also said that a change in the UK’s rules would probably cause hosting providers to move their servers based in the UK to “more lenient regulation regimes.”

192.Some have argued that the conditional exemption from liability should be abolished altogether. It has been suggested that using artificial intelligence to identify illegal content could allow companies to comply with strict liability. However, such technology is not capable of identifying illegal content accurately and can have a discriminatory effect. Imposing strict liability would therefore have a chilling effect on freedom of speech. These concerns would need to be addressed before the ‘safe harbour’ provisions of the e-Commerce Directive are repealed.

193.Online platforms have developed new services which were not envisaged when the e-Commerce Directive was introduced. They now play a key role in curating content for users, going beyond the role of a simple hosting platform. As such, they can facilitate the propagation of illegal content online. ‘Notice and takedown’ is not an adequate model for content regulation. Case law has already developed on situations where the conditional exemption from liability under the e-Commerce Directive should not apply. Nevertheless, the directive may need to be revised or replaced to reflect better its original purpose.

Harmful and anti-social content

194.Online platforms, especially social media platforms, are under fire for enabling other online harms which may not cross the threshold of illegality, as well as for doing too little to prevent people, including children, from accessing inappropriate content. The case of 14-year-old Molly Russell, who took her own life in 2017 after being exposed to graphic self-harm images on Instagram, has prompted a wider debate about the safety of young people online.217 Rebecca Stimson of Facebook told us that her company considered itself “responsible for the content of that platform to ensure that what people are seeing is not harmful, it is not hate speech, bullying and so forth, or containing fake adverts”.218

195.Many witnesses were concerned that content regulation would be detrimental to freedom of speech and expression. Dr Paul Bernal said:

“It is important to understand that it is a very slippery slope, and that there could easily be a chilling effect on freedom of speech if it is taken too far. A platform may be cautious about hosting, reducing the opportunities for people to find places to host their material, if it is in any way controversial.”219

196.However, All Rise Say No to Cyber Abuse painted a picture of widespread abuse and linked social media to increasing rates of anxiety and depression.220 It noted that, while freedom of speech was critical to modern society, it “is by no means freedom to abuse, nor does it mean freedom to harm—an inalienable right to say what you want with no constraint or accountability.”221 Cyber abuse was “killing free speech and itself bringing about the ‘chilling effect’ so often feared when we consider free speech. Voices are crushed and people stop speaking their truth, many too hurt and afraid even to be online.” This position was supported by the Committee on Standards in Public Life in its review Intimidation in Public Life, which found that technology had drastically increased the volume and frequency of abuse, while the brevity of messages and the lack of face-to-face contact made discussion more extreme.222 Women and individuals from LGBT and black and minority ethnic groups received a disproportionate volume and degree of abuse.

197.The Government is seeking to address problems of online safety through its Internet Safety Strategy. In May 2018 it published a response to its consultation on this subject and it is expected to publish a white paper soon.223 The main regulatory responses that it is considering are a code of practice for social media providers, required under the Digital Economy Act 2017, transparency reporting and a social media levy. The response included a draft of the code, which will be voluntary, and additional guidance.

A duty of care

198.Recital 48 of the e-Commerce Directive provides that the safe harbour provisions do not preclude member states from developing a duty of care. Professor Lorna Woods of the University of Essex and William Perrin of the Carnegie UK Trust have developed a detailed proposal to introduce a duty of care. Whereas debates around intermediary liability have been framed by the question of whether intermediaries should be treated as a publisher or a ‘mere conduit’, Professor Woods and Mr Perrin suggested that a better analogy would be to see them as a public space “like an office, bar or theme park”.224 Millions of users visit intermediary sites. In the offline world the owners of physical spaces owe a duty of care to visitors. In line with the parity principle which was considered in chapter 2, Professor Woods and Mr Perrin argue that owners of online services should also be required to “take reasonable measures to prevent harm”. Professor Woods told us that this approach avoided “some of the questions about making platforms liable for the content of others”.225 In particular, action against online service providers “should only be in respect of systemic failures” rather than individual instances of speech.226

199.Professor Woods and Mr Perrin recommended that a regulator should be established to act against online service providers for breach of duty of care. They argued that this was necessary to redress the inherent inequality of arms between an individual user and a large social media company.227 They envisaged that the regulator would promote a ‘harm reduction cycle’, whereby it would collaborate with the industry and with civil society to monitor harms and establish best practice. This would be an ongoing process that would be “transparent, proportionate, measurable and risk-based”.

200.In chapter 2 we argued that principles-based regulation which is focused on achieving the right outcomes is desirable in the fast-changing digital world. Duties of care are also “expressed in terms of what they want to achieve—a desired outcome (i.e. the prevention of harm) rather than necessarily regulating the steps—the process—of how to get there”.228 This generality allows the approach to work across different types of services and would be largely future-proof.

201.The duty of care approach emphasises the design of services. Professor Woods and Mr Perrin cited the concept of privacy by design in the GDPR and of safety by design in the Health and Safety at Work etc. Act 1974 as models for risk-based regulation which addresses all stages of product development. This approach draws on the “precautionary principle”, which is:

“applied in situations where there are reasonable grounds for concern that an activity is causing harm, but the scale and risk of these issues is unproven. The onus is then on organisations to prove that their practices are safe to a reasonable level.”229

202.Professor Woods and Mr Perrin suggested that the duty of care should apply to social media services of all sizes, and to messaging services which permit large or public groups.230 They noted that, in another context, food safety standards apply to all types of food producers, not just the largest. The issue of proportionality could be dealt with by a competent regulator. They left open the question of whether search engines, including YouTube, might also be covered.

203.There are many different types of online platform, of different sizes: “for example websites operated by sporting groups or from community interest, which will also operate their own moderation policies”. NINSO told us: “Given that such groups will rarely be able to benefit from the legal advice available to large corporations, a tailored approach to regulation or at least guidance for such groups would undoubtedly be helpful.”231

204.In Australia an e-Safety Commissioner regulates social media platforms (see box 6). It operates a system which is voluntary for smaller platforms but mandatory for the largest, which are designated ‘Tier 2’ services through the exercise of ministerial powers.232 The commissioner handles complaints if they are not dealt with by the companies themselves.

Box 6: Office of the e-Safety Commissioner of Australia

The Australian government established the Office of the e-Safety Commissioner in July 2015 to help keep children safe from online abuse. Two years later its remit was expanded to include the online safety of adults. The Office provides a mechanism through which Australians can report illegal or abusive content which social media companies have failed to remove within 48 hours. Companies can then be formally directed to remove content. In addition, the Office runs campaigns to educate citizens on the dangers of cyber abuse and to improve digital skills. One campaign, eSafetyWomen, helps women at risk of emotional abuse or violence to stay safe when using the internet. The Office also co-ordinates between organisations, including referring victims for counselling where appropriate.

The Office enjoys a range of discretionary powers, including power to impose civil penalties. It can impose fines of up to $21,000 a day for Tier 2 social media sites that do not comply with take-down notices. The Office notes that, while this may not be much for some of the big tech companies, the reputational impact on social media companies of being fined should not be underestimated. The Office has received full compliance from industry to date.

Source: Written evidence from the Office of the e-Safety Commissioner (IRN0016)

205.Technology companies provide venues for illegal content and other forms of online abuse, bullying and fake news. Although they acknowledge some responsibility, their responses are not commensurate with the scale of the problem. We recommend that a duty of care should be imposed on online services which host and curate content which can openly be uploaded and accessed by the public. This would aim to create a culture of risk management at all stages of the design and delivery of services.

206.To be effective, a duty of care would have to be upheld by a regulator with a full set of enforcement powers. Given the urgency of the need to address online harms, we believe that in the first instance the remit of Ofcom should be expanded to include responsibility for enforcing the duty of care. Ofcom has experience of surveying digital literacy and consumption, and experience in assessing inappropriate content and balancing it against other rights, including freedom of expression. It may be that in time a new regulator is required.

Moderation processes

207.The moderation of content is likely to be key to reducing online harm. This refers to the process of dealing with content which does not comply with ‘community standards’. Katie O’Donovan of Google told us that the company has community standards which go further than merely ensuring that users abide by the law: “We enforce those and people will be removed from our platform if they break them. It is difficult, complicated and resource intensive but for us it preserves the free internet.”233

208.Big Brother Watch told us platforms were “enforcing systems of governance that are constantly changing, unaccountable, and opaque”.234 Users cannot always easily find guidance about the policies online platforms use. Even when platforms provide an accessible policy it may not be helpful to the ordinary individual and may be considered misleading. As a result users may not know what they can expect and what is expected of them.235

209.Moderation mechanisms need to balance the interests of different parties. The Open Rights Group were clear that “all sides of a dispute need to have the ability to assert their rights”.236 However, McEvedys said that this was not the experience of many internet users. McEvedys concluded: “there are many issues with leaving the matter wholly to the private sector where they get to mark their own homework and/or are self-interested”, suggesting “that the lack of [an effective] remedy is the real issue.”237

210.There was also a lack of transparency about the moderation processes themselves. Matt Reynolds, a journalist with Wired UK, told us of his experience of reporting far-right content to Facebook. At first Facebook took no action against content which Mr Reynolds believed violated its community standards. He subsequently discovered that the content had been taken down after all. He found it “very hard to get an answer from Facebook” about its decision-making process.238 He argued that there was no transparency in Facebook’s moderation practices.

211.In December 2018 Facebook’s moderation guidelines were leaked by an employee concerned that the company was acting inconsistently and without proper oversight.239 The secrecy of the document highlighted the lack of transparency in Facebook’s content policies. It also appears that the content of the guidelines was inadequate. The document revealed inconsistencies and errors in Facebook’s approach, including factual errors on what was legal in different countries.

212.Jenny Afia, a partner at Schillings, agreed that moderation processes are not transparent: “You do not know if a human has made a decision on your complaint or it has just been determined by an algorithm.”240 She noted that: “Most platforms do not have dedicated ‘legal’ email addresses where complaints can be sent to or phone numbers to speak to people … The experience feels like dealing with a brick wall built by an algorithm.”241

213.We heard that platforms have not dedicated sufficient resources to moderation. Alex Hern of The Guardian thought it “slightly unbelievable that any platform with users measured in the billions can count its human moderators in the thousands. That seems to be a scale error.”242 Facebook has over 2 billion users and Instagram has around 1 billion, yet there are only around 30,000 moderators between the two sites.243 The volume of content each moderator must examine means that they often do not have sufficient understanding of particular contexts.

214.Facebook’s public guidance on community standards states that they “try to consider the language, context and details in order to distinguish casual statements from content that constitutes a credible threat to public or personal safety.”244 However, The Guardian revealed that Facebook’s internal guidance for moderators advised that ‘casual statements’ could include: “To snap a bitch’s neck, make sure to apply all your pressure to the middle of her throat” and “fuck off and die”.245 Facebook suggested that saying these things could be permissible because they were not regarded as credible threats.

215.In December 2018 a report from two Israeli NGOs revealed WhatsApp’s reliance on ineffective automated systems to police child sexual abuse material.246 The study showed how third-party apps for discovering WhatsApp groups allowed for the trading of images of child exploitation. WhatsApp responded by stating that it had a zero-tolerance approach to images of child exploitation and that its systems had banned a further 130,000 accounts in a recent 10-day period for violating this policy.

216.Many legal experts felt that too much power had been delegated to private companies to act in effect as censors. NINSO told us that platforms do not use this power consistently and “are often over-effective when it comes to intellectual property infringement and non-effective when it comes to other forms of content, for example in relation to terrorism”.247

217.In 2016 Facebook deleted a post by the Norwegian writer Tom Egeland that featured ‘The Terror of War’, a Pulitzer prize-winning photograph by Nick Ut showing children, including the naked nine-year-old Kim Phúc, running away from a napalm attack during the Vietnam war. Egeland’s post discussed “seven photographs that changed the history of warfare”, a group to which the “napalm girl” image certainly seemed to belong. Facebook deleted the image for being in breach of its rules on nudity. The image was reinstated after Norway’s largest newspaper published a front-page open letter to Mark Zuckerberg, the CEO of Facebook, criticising the company’s decision to censor the historic photograph. While this may have been a sensible outcome, few individual users can rely on a national newspaper to challenge a decision which goes against them.

218.Platforms’ role as gatekeepers to the internet can limit freedom of expression but is not accompanied by the safeguards for human rights which the UK is expected to observe.248 Dr Nicolo Zingales, lecturer in Competition and Information Law at the University of Sussex, told us the UN special rapporteur on freedom of expression was concerned about the lack of transparency. While platforms were taking small steps, he felt that “it would be good if they had specific procedures in place to show that they have accountability by design”.249

219.Mark Stephens, a partner at Howard Kennedy, suggested that the UN Guiding Principles on Business and Human Rights (‘Ruggie principles’) should be used to develop better moderation systems. These principles were designed to be used for businesses carrying out activities which affect human rights. Box 7 outlines principle 31, which applies to non-state actors operating a grievance mechanism.

Box 7: The Ruggie principles: principle 31

In order to ensure their effectiveness, non-judicial grievance mechanisms, both state-based and non-state-based, should be:

(a)Legitimate: enabling trust from the stakeholder groups for whose use they are intended, and being accountable for the fair conduct of grievance processes;

(b)Accessible: being known to all stakeholder groups for whose use they are intended, and providing adequate assistance for those who may face particular barriers to access;

(c)Predictable: providing a clear and known procedure with an indicative time frame for each stage, and clarity on the types of process and outcome available and means of monitoring implementation;

(d)Equitable: seeking to ensure that aggrieved parties have reasonable access to sources of information, advice and expertise necessary to engage in a grievance process on fair, informed and respectful terms;

(e)Transparent: keeping parties to a grievance informed about its progress, and providing sufficient information about the mechanism’s performance to build confidence in its effectiveness and meet any public interest at stake;

(f)Rights-compatible: ensuring that outcomes and remedies accord with internationally recognised human rights;

(g)A source of continuous learning: drawing on relevant measures to identify lessons for improving the mechanism and preventing future grievances and harms.

Operational-level mechanisms should also be:

(h)Based on engagement and dialogue: consulting the stakeholder groups for whose use they are intended on their design and performance, and focusing on dialogue as the means to address and resolve grievances.

Source: Guiding Principles on Business and Human Rights

220.The Internet Watch Foundation believed that moderation decisions should be “quality assured through a rigorous internal process and externally audited. Ultimately, any challenge to the legality of content should be subject to judicial review.”250

221.All Rise suggested establishing “an independent body to set the standard and ensure it is maintained, as well as to adjudicate on complex cases, undertake regular audits, preside over appeals and to provide transparency as to the state of play and progress”.251

222.Content moderation is often ineffective in removing content which is either illegal or breaks community standards. Major platforms have failed to invest in their moderation systems, leaving moderators overstretched and inadequately trained. There is little clarity about the expected standard of behaviour and little recourse for a user to seek to reverse a moderation decision against them. In cases where a user’s content is blocked or removed, this can impinge on their right to freedom of expression.

223.Community standards should be easily accessible to users and written in plain English. Ofcom should have power to investigate whether the standards are being upheld and to consider appeals against moderation decisions. Ofcom should be empowered to impose fines against a company if it finds that the company persistently breaches its terms of use.

224.The sector should collaborate with Ofcom to devise a labelling scheme for social media websites and apps. A classification framework similar to that of the British Board of Film Classification would help users to identify more quickly the risks of using a platform. This would allow sites which wish to allow unfettered conversation or legal adult material to do so. Users could then more easily choose between platforms with stricter or more relaxed community standards.

225.Community standards are not always consistent with platforms’ age policies. For example, Twitter says that it “allows some forms of graphic violence and/or adult content in Tweets marked as containing sensitive media.”252 However, a user does not have to state that they are over 18 or enter a date of birth to view such content. Indeed, although Twitter has a nominal minimum age of 13, entering a date of birth is optional.

226.Community standards and classifications should be consistent with a platform’s age policy.

201 There are certain exceptions such as ‘TV-like’ content which is regulated by Ofcom.

203 Written evidence from Her Majesty’s Government (IRN0109)

204 The e-Commerce Directive 2000/31/EC (OJ L 178, 17 July 2000)

205 Written evidence from Global Partners Digital (IRN0099)

206 Julian Reichelt, ‘Bitte keine Meinungspolizei’, Bild (3 January 2018): https://www.bild.de/politik/inland/gesetze/kommt-jetzt-die-meinungspolizei-54367844.bild.html [accessed 13 February 2019]

207 Written evidence from techUK (IRN0086)

208 Graham Smith, Time to speak up for Article 15 (21 May 2017): https://www.cyberleagle.com/2017/05/time-to-speak-up-for-article-15.html [accessed 26 February 2019]

209 Cartier International AG and others (Respondents) v British Telecommunications Plc and another (Appellants) [2018] UKSC 28

210 Written evidence from CMF (IRN0033)

211 Written evidence from All Rise Say No to Cyber Abuse (IRN0037)

212 Written evidence from NINSO (IRN0035)

213 Written evidence from Oath (IRN0107)

214 See paragraph 116

215 Written evidence from Global Partners Digital (IRN0099)

216 Written evidence from the Internet Society (IRN0076)

217 Richard Adams, ‘Social media urged to take “moment to reflect” after girl’s death’, The Guardian (30 January 2019): https://www.theguardian.com/media/2019/jan/30/social-media-urged-to-take-moment-to-reflect-after-girls-death [accessed 13 February 2019]

219 Written evidence from Dr Paul Bernal (IRN0019)

220 Written evidence from All Rise Say No to Cyber Abuse (IRN0037)

221 Ibid.

224 Written evidence from Professor Lorna Woods and William Perrin (IRN0047)

226 Written evidence from Professor Lorna Woods and William Perrin (IRN0047)

227 Written evidence from Professor Lorna Woods and William Perrin (IRN0047). In their updated proposal, published in January 2019, they suggest that enforcement of the duty of care should be the sole preserve of the regulator and that it should not give rise to an individual right of action: Professor Lorna Woods and William Perrin, ‘Internet Harms reduction: An updated proposal’, p 12, Carnegie UK Trust (January 2019).

228 Written evidence from Professor Lorna Woods and William Perrin (IRN0047)

229 Written evidence from Doteveryone (IRN0028)

230 Professor Lorna Woods and William Perrin, ‘Internet Harms reduction: An updated proposal’, Carnegie UK Trust (January 2019): https://www.carnegieuktrust.org.uk/blog/internet-harm-reduction-a-proposal/ [accessed 26 February 2019]

231 Written evidence from NINSO (IRN0035)

232 Written evidence from the e-Safety Commissioner (IRN0016)

234 Written evidence from Big Brother Watch (IRN0115)

235 Written evidence from NINSO (IRN0035)

236 Written Evidence from the Open Rights Group (IRN0090)

237 Written Evidence from McEvedys Solicitors & Attorneys Ltd (IRN0065)

239 Max Fisher, ‘Inside Facebook’s Secret Rulebook for Global Political Speech’, The New York Times (27 December 2018): https://www.nytimes.com/2018/12/27/world/facebook-moderators.html [accessed 14 February 2019]

241 Written evidence from Jenny Afia, Partner, Schillings (IRN0032)

243 Mike Wright, ‘Parental settings online failing to block violence, pornography and self-harm’, The Telegraph (18 February 2019): https://www.telegraph.co.uk/news/2019/02/18/parental-settings-online-failing-block-violence-pornography/ [accessed 22 February 2019]

244 Facebook, ‘Community Standards’: https://m.facebook.com/communitystandards/violence_criminal_behavior/ [accessed 26 February 2019]

245 Nick Hopkins, ‘Revealed: Facebook’s internal rulebook on sex, terrorism and violence’, The Guardian (21 May 2017): https://www.theguardian.com/news/2017/may/21/revealed-facebook-internal-rulebook-sex-terrorism-violence [accessed 15 February 2019]

246 Priya Pathak, ‘WhatsApp has a big child pornography problem, NGOs find many groups spreading it on chat app’, India Today (21 December 2018): https://www.indiatoday.in/technology/news/story/whatsapp-has-a-big-child-pornography-problem-ngos-find-details-of-many-groups-spreading-it-on-chat-app-1414326-2018-12-21 [accessed 14 February 2019]

247 Written evidence from NINSO (IRN0035)

250 Written evidence from Internet Watch Foundation (IRN0034)

251 Written evidence from All Rise Say No to Cyber Abuse (IRN0037)

252 Twitter, ‘Media Policy’: https://help.twitter.com/en/rules-and-policies/media-policy [accessed 26 February 2019]




© Parliamentary copyright 2019