166.Childhood, even without social media and the Internet, is not risk-free. While it is important that we teach children how to reduce risk, and be digitally literate and resilient, the overall ‘burden’ should not be placed on children. As far as possible, online risks must be managed, minimised and, ideally, prevented. Legislative and non-legislative responses—alongside possible technical solutions—to the harms we have heard about are set out in this Chapter. While we have focused throughout our Report on children, many of the proposals we make in this Chapter could equally apply to adults.
167.Throughout our inquiry, witnesses repeated the same general point that there was a lack of regulation covering social media sites.279 A 2018 report by Doteveryone, a think tank founded by Baroness Lane Fox, described the current “regulatory landscape” as one that had organically “evolved over time to cover aspects of digital technologies”. The report went on to stress that this evolution had “resulted in a patchwork of regulation and legislation, […] an inconsistent and fragmented system and […] some significant gaps in ensuring comprehensive oversight and accountability” where the Internet was concerned.280
168.Ofcom’s recent paper, Addressing Harmful Online Content, noted that while the “regulatory regime covering online content has evolved in recent years” there were “still significant disparities in whether and how online content is regulated”.281 With the exceptions of BBC online material and on-demand streaming services (like Amazon Prime and ITV Hub), “most online content is subject to little or no specific regulation”. Such disparities, we heard, had produced a “standards lottery”.282 Key areas that are not currently the subject of specific regulation, identified by Ofcom, are:
Without direction from Parliament, however, Ofcom cannot expand its remit to cover any of these areas.
169.The liability of social media companies (and others) for the content they host is currently limited by the 2000 European e-Commerce Directive.284 Under the Directive, ‘intermediaries’ (like social media companies) are exempt from liability for the content they host, so long as they “play a neutral, merely technical and passive role towards the hosted content”.285 Once they become aware of the illegal nature of any hosted content, the Directive states that “they need to remove it or disable access to it expeditiously”. An exact timeframe for removal is not specified.
170.For content that is not illegal but could be deemed inappropriate and harmful, platforms self-regulate, usually by making their content rules explicit through ‘community standards’ and ‘terms of use’ which users sign up to when joining a social media platform. Violation of those standards may result in the content being removed and/or access to the site being revoked, either temporarily or indefinitely. YoungMinds and The Children’s Society described the status quo as akin to social media companies “marking their own homework”.286 Others have likened the situation to the lawlessness of the “Wild West”.287 The NSPCC told us that:
Thirteen previous self-regulatory Codes of Practice and other self-regulatory approaches have failed to result in any meaningful reduction in the exposure of children to online harms, because there has been no mechanism to force companies to do more, nor to hold them publicly to account.288
171.Mark Bunting, a Partner at Communications Chambers, however, has stressed that even the “wild west had rules”. The problem, he went on to explain, was that “today’s online sheriffs are private firms whose policies, decision processes and enforcement actions can be opaque and subject to little external accountability”.289 For example, Tumblr was removed from Apple’s App Store in November 2018 because it let some users post “media featuring child sexual exploitation and abuse”.290 The Daily Telegraph reported, however, that the app was still available through Google’s app store.291 It is unclear whether the Android version of the app was unaffected, or whether Google applies different criteria to the apps it makes available. By December, the Tumblr iOS app was available again in the Apple App Store, though no details were provided on why it was reinstated or whether the problem had been resolved.
172.There is a growing consensus that the status quo is not working and that a new liability regime for social media companies, and the content on their sites, is required. How this should be achieved, however, is a subject of ongoing debate.292 The Children’s Commissioner for England told us how she had:
been pushing the tech companies for a couple of years now, with limited success, about them taking more responsibility for their platforms being a positive environment […] The notion that platforms need to take responsibility for content is much discussed. If it was an area of the community, there would be no doubt that that community needed some framework that protected but also enabled children within it.293
173.Much of the debate has been framed in terms of whether social media companies are publishers or platforms. Giving evidence to the DCMS Committee in October 2017, the then Chair of Ofcom, Dame Patricia Hodgson, stated that her “personal view” was that social media companies “are publishers”, though she stressed that this was “not an Ofcom view”.294
174.The evidence we received, however, has not advocated for social media companies to be treated as publishers. As the DCMS Committee put it in July 2018, social media is “significantly different” from the traditional model of a ‘publisher’, which commissions, pays for, edits and takes responsibility for the content it disseminates.295 The Government also stated, in its Response to the Internet Safety Strategy, that “applying publisher standards of liability to all online platforms could risk real damage to the digital economy, which would be to the detriment of the public who benefit from them”.296
175.The practicalities and ‘fit’ of the publisher model have similarly been called into question. Ofcom noted that the sheer scale of the material uploaded by users (e.g. 400 hours of video are uploaded to YouTube every 60 seconds) meant that the regulatory model used for traditional broadcasting could not readily be transferred, wholesale, and applied to social media sites.297 William Perrin, a trustee of Carnegie UK Trust, also emphasised that the publisher model was an “ill-fit” for current practice, while Mark Bunting, an expert in telecommunications and the law, argued that “shoehorning them [social media companies] into legal frameworks from another technological era [was] a mistake”.298
176.Witnesses did not agree, however, that social media companies ought to continue to be treated as “neutral” platforms. Speaking to the Lords Communications Committee, Mark Bunting highlighted how social media companies were not just a “conduit for content” but that they “actively” curated content: “I mean that they select which content is presented to users; they rank that content; they recommend content; and they moderate content. You cannot do that in a purely neutral way”.299
177.The notion of social media companies being ‘platforms’, in other words, is also an inadequate way of capturing their responsibilities. The DCMS Committee recommended that “a new category of tech company is formulated, which tightens tech companies’ liabilities, and which is not necessarily either a ‘platform’ or a ‘publisher’.”300 Dr Damian Tambini, from the Department of Media and Communications at the London School of Economics, has similarly argued that “the law needs to catch up in some way, and there needs to be an intermediate category between publishers and mere conduits”.301 He added, however, that this was “easier to say than it is to do”.302 The Government stated that it was:
working with our European and international partners, as well as the businesses themselves, to understand how we can make the existing frameworks and definitions work better, and what a liability regime of the future should look like.303
178.At present, the onus is on a user to identify and ‘report’ to the social media company any content that the user deems to be problematic. Sometimes there is a clear ‘button’ to click near the offending material while on other sites reporting takes more effort. Giving evidence to the House of Lords Communications Committee, Lorna Woods, Professor of Internet Law, University of Essex, explained that it is often “the victim” who has to “keep an eye out for problem content and then persuade the platform to do something about it. That is a problem […]. It is really hurtful to expect someone to have to monitor”.304
179.Becca, one of the young people who gave evidence to us, noted that even when a report is made, it does not guarantee the content will be removed: “I report things, things which are quite clearly completely inappropriate or go against all the guidelines. It often comes back saying, “We have not found it breaches any guidelines””.305 Orlaith, another young person who gave evidence to us, recalled that she knew of “people who have reported adults messaging young girls and have reported Nazis. None of the content gets removed”.306 Becca’s and Orlaith’s experiences are not isolated incidents. Sue Jones from Ditch the Label told us that they hear from young people “all the time” that “I reported, and nothing happened.”307 She added that sometimes young people had been trying for “weeks and months”, with no success, to get content removed.308 At the moment, there is no consistently produced, UK-focused data to quantify the scale of the problem, a point we examine further in paragraphs 194–196.
180.Some witnesses agreed that an “industry standard” for reporting content was needed.309 Witnesses also suggested that the reporting process should be demystified. Matt Blow from YoungMinds, a charity aimed at improving the mental health of children, explained that social media companies needed to improve their communications, “so that young people can understand what will happen if they report”.310 Dustin Hutchinson from the National Children’s Bureau also highlighted the lack of feedback after a report was logged: “Often young people say that they report something, but they do not know what happens as a result. There should be some feedback mechanism”.311
181.Some progress appears to have been made. Notably Facebook has introduced a ‘support inbox’, so that if a user has reported something for not following Facebook’s Community Standards, the status of the report can be viewed in the inbox.312 Similarly, Google highlighted how it had launched a “user dashboard” for YouTube where “if you make a report, you will now get information about what has happened to that report, which did not happen previously”.313
182.Social media companies have also stated their intention to be more proactive about identifying and removing inappropriate content. Jack Dorsey, CEO of Twitter, told a US Congressional Committee in September 2018 that:
we can’t place the burden on the victims and that means we need to build technology so that we are not waiting for reports [but rather] are actively looking for instances […] while we are making those changes and building that technology, we need to do a better job at prioritizing, especially any sort of violent or threatening information.314
Sinéad McSweeney from Twitter told us that the social media company would be:
the first to put our hands up—in fact, we have done so—to say that we did not do enough, particularly in the early years, to address users’ concerns about the ways in which people could report content and about people’s understanding of the rules. There was a lack of clarity and usability. I have seen a sea change, thankfully, in all that. Our rules are more accessible, it is far easier to report and we are much more transparent about how and when we action reports.315
183.When individuals have reported content and failed to achieve their desired response, some turn to organisations to help them further. We heard particularly about the Trusted Flagger Programme, in which volunteers (often organisations), who have been accepted through an application process, are given the authority to flag content that violates the terms and conditions of a social media platform. Users can highlight content to a trusted flagger, who will assess it, and then take it forward with the relevant social media company.
184.Sue Jones from Ditch the Label stated that the programme “really helps the platforms, because they are overwhelmed by reports”.316 She added that the programme often led to content being removed in “a couple of hours”.317 Claire Lilley from Google UK, which participates in the scheme, noted that “eighty-eight per cent of what they [flaggers] report will be taken down, compared with an overall rate of 32%”.318 According to Ms Lilley, there were currently “30 [trusted flaggers—specialist organisations] in the UK, including NSPCC ChildLine and the members of the UK Safer Internet Centre”.319
185.Assistant Commissioner Martin Hewitt explained that the Metropolitan Police was about to trial “some trusted flaggers from the police service”. He emphasised that the police had a role to play in:
translating, because something that appears innocuous may be a very direct threat between individuals or groups, but if you do not understand the language, the context and the names in an area it is really difficult.320
Both Facebook and Twitter stated that they worked with a range of organisations but did not have a trusted flagger programme per se.321
186.Barnardo’s and Catch22 (a social business that delivers a range of social services) told us that they were trusted flaggers, though they had different views on the resources needed to perform the role. Emily Cherry from Barnardo’s explained that Google offered a “voluntary grant” to trusted flaggers which “you have to ask for […] to take it up”.322 Google UK confirmed that the grant was for $10,000 and was available to organisations who were flaggers but not to individuals.323 Beth Murray, however, stated that while Catch22 was part of the trusted flagger programme, its:
1,300 frontline workers—teachers, social workers, youth workers, gang violence workers and prison workers—who are working incredibly hard […] do not have the time or resource to spend on doing the job of policing social media platforms […] We are happy to do it […] but there needs to be resourcing.324
187.In addition to the resourcing of the scheme, a further problem raised by Barnardo’s was the lack of feedback they received. Emily Cherry described the programme as “quite a one-way process”:
We will share in context intelligence on what is happening in individual cases. Aside from our knowing that action has been taken, there is very little coming back out of the companies. They are aggregating across the UK different harms, new trends and things that are happening to children, but they do not share that back with the trusted flagger community. We then have to play catch-up. New terms […] should be shared across all flaggers, so that they can look out for that kind of thing.325
188.In an attempt to increase the speed at which certain content is reviewed and potentially taken down, the Network Enforcement Law (NetzDG) has been introduced in Germany. The law came into full effect on 1 January 2018 and applies to social media platforms with over two million users. It enables Germany to fine social media companies up to €50 million if they do not delete posts contravening German hate speech law within 24 hours of receiving a complaint. Where the illegality is not obvious, the provider has up to seven days to decide on the case.
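The timetable the NetzDG imposes can be summarised as a simple rule: 24 hours for manifestly unlawful content, up to seven days where the illegality is not obvious. The sketch below is an illustration of that rule only; the function and names are our own and are not taken from the legislation or from any platform’s complaint-handling systems.

```python
from datetime import datetime, timedelta

# Illustrative only: the deadlines reflect the NetzDG timetable described
# above; the names and structure are hypothetical.
MANIFESTLY_UNLAWFUL = timedelta(hours=24)
COMPLEX_CASE = timedelta(days=7)

def removal_deadline(received: datetime, manifestly_unlawful: bool) -> datetime:
    """Latest time by which a platform should act on a NetzDG complaint."""
    return received + (MANIFESTLY_UNLAWFUL if manifestly_unlawful else COMPLEX_CASE)

# Example: a complaint about obviously unlawful content received at 09:00
# must be acted on by 09:00 the following day.
print(removal_deadline(datetime(2018, 1, 2, 9, 0), manifestly_unlawful=True))
```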
189.Commenting on the NetzDG law, Ofcom noted that “fines will not be applied for one-off infractions, only for “repeated neglect”, [such as] systemic failure, where the complaint system is not adequately established, managed or observed”.326 Enforcement action, meanwhile, is taken by the courts. Ofcom told us “that no cases have reached this stage yet, and therefore there have not been any fines”.327
190.Facebook’s transparency report, published in July 2018, showed that, in the period between 1 January 2018 and 30 June 2018, there were “886 NetzDG reports identifying a total of 1,704 pieces of content”, with “218 NetzDG reports” resulting in the deletion or blocking of content. This, Facebook noted, “amounted to a total of 362 deleted or blocked pieces of content” (since a single report may flag more than one piece of content).328 Twitter’s transparency report, covering the same period, indicated that it received a total of 264,818 complaints, of which “action” was taken on 28,645. “Action”, Twitter explained, involved either removing the content from the platform entirely, because it breached Twitter’s terms and conditions, or withdrawing it in Germany specifically because it breached the NetzDG law.329 Google, meanwhile, received reports relating to 214,827 ‘items’ on YouTube (where one item is a video or a comment posted beneath a video), of which 56,297 resulted in action, the item being either removed or blocked.330
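Working through the figures quoted above gives a rough sense of how often a complaint led to action under each company’s report; the short calculation below simply reproduces the implied percentages. The platforms count different units (‘reports’, ‘complaints’ and ‘items’), so the rates are indicative rather than directly comparable.

```python
# Action rates implied by the NetzDG transparency figures quoted above
# (January to June 2018). Units differ between companies, so the
# percentages are indicative only.
figures = {
    "Facebook (pieces of content)": (362, 1_704),
    "Twitter (complaints)": (28_645, 264_818),
    "YouTube (items)": (56_297, 214_827),
}

for platform, (actioned, received) in figures.items():
    print(f"{platform}: {actioned / received:.1%} actioned")
```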
191.Concerns have been raised by civil rights groups in Germany that the new law has ‘privatised’ law enforcement and that the courts, rather than social media companies, should continue to determine what speech contravenes German law.331 Karim Palant from Facebook told us that the German approach had a number of risks and that he did not think it would work in the UK:
Under the German legislation, there is a real risk of requiring companies that do not necessarily have the resources legally to review every piece of content to remove, on a precautionary principle, a huge amount of content that would be perfectly legitimate. It is not a regulatory model that I would say the UK is even looking at, for very understandable reasons, especially given that the UK hate speech laws are far less fitted to that kind of model.332
He added that “Germany has a very prescriptive set of things that are very clearly defined in law as constituting hate speech. That is not the way in which UK hate speech law works, so some of the downsides of the German law would be magnified here”.333
192.To address concerns about the current state of UK law, the Prime Minister announced in February 2018 that the Law Commission was to review the current law around abusive and offensive online communications. It was also asked to highlight any gaps in criminal law which cause problems in tackling this abuse. In its initial report, published on 1 November 2018, the Commission stated that they did “not consider there to be major gaps in the current state of the criminal law concerning abusive and offensive online communications”.334 The report went on to say, however, that “many of the applicable offences” were “not constructed and targeted in a way that adequately reflects the nature of offending behaviour in the online environment, and the degree of harm that it causes in certain contexts”.335 It concluded that “reform could help ensure that the most harmful conduct is punished appropriately, while maintaining and enhancing protection for freedom of expression”.336 It also recommended that:
As part of the reform of communications offences, the meaning of “obscene” and “indecent” should be reviewed, and further consideration should be given to the meaning of the terms “publish”, “display”, “possession” and “public place” under the applicable offences.337
193.Yih-Choung Teh from Ofcom was clear that the NetzDG law had shown that individual countries were able to take action to address illegal content and that “national law can make a difference”.338 When asked if the Government was considering adopting a similar approach in the UK, the Minister for Digital and Creative Industries, Margot James MP, replied “Yes, indeed, we are”.339 She explained that she was “very interested in the German approach” and, much like Ofcom, highlighted that it was “interesting to note that the German Government have been able to introduce this law […] and that has been deemed compliant with the European e-commerce directive”.340
194.Reported content may be reviewed by human moderators or by machine learning tools. In the past, social media companies have been reluctant to state how many human moderators they employ. To some extent, the NetzDG law appears to have prompted a degree of openness. Twitter stated that “more than 50 people work specifically on NetzDG”, while at Facebook there are “65 individuals […] who process reports submitted through the NetzDG reporting form”.341 Google meanwhile outlined in its NetzDG transparency report that:
Depending on the amount of incoming NetzDG requests, the number of content reviewers supporting the YouTube operation and the legal team can vary. Approximately 100 content reviewers for YouTube and Google+ only working on NetzDG complaints were employed by an external service provider.342
195.Twitter, however, told us that they “have not released figures around the number of moderators” they employ globally on the grounds that “as we use technology more and more, [a focus on moderators] is telling only half the story”.343 Tumblr told us that it had “recently increased the size of its content review team to ensure that it can continue to apply this level of scrutiny to incoming reports” though it too did not give us any figures.344
196.Facebook stressed that it was “the first major platform to confirm the number of reviewers who look at reports from users”.345 Karim Palant from Facebook told us that “by the end of 2017, [it] had increased the number from about 3,500 to 8,000”. He added that Facebook had “made a commitment this year to double overall the teams working on safety and security at Facebook, so that number is changing rapidly and upwards”.346 Google UK also told us that its goal was to “bring the total number of people across Google working to address content that might violate our policies to over 10,000 by next month”.347 Both sets of numbers referred to those employed globally (i.e. not only in the UK) to review content.
197.The failure of some social media companies to disclose the number of human moderators they employ is symptomatic of a broader lack of transparency around how they operate, and around the processes through which reported content is monitored, prioritised and, in some instances, removed. As the charities YoungMinds and The Children’s Society explained:
It is particularly difficult to assess the success rate of social media platforms in tackling […] digital harms, as companies do not consistently record and report on the nature, volume and outcomes of such complaints made within their systems. There is also poor transparency regarding moderation processes, including: details about the number of moderators, how decisions are made, their training and the tools available to them.348
198.Speaking to the House of Lords Communications Committee in July 2018, Adam Kinsley, Director of Policy at Sky, acknowledged that platforms already policed content, albeit to “differing extents”, but stressed that there was “no accountability”:
For example, how are they doing it? What is the split between moderators and AI? How are they doing it across different content classes? What does it look like when they are considering reports from children? None of that is transparent. Transparency is only available when the platforms decide to do it, on a global basis, at a time of their choosing.349
199.One proposal to address these problems was ‘transparency reporting’. In her speech on ‘Standards in Public Life’ in February 2018, the Prime Minister described social media as one of the “defining technologies of our age” and committed to establishing “a new Annual Internet Safety Transparency Report”. This, she explained, would “provide UK-level data on what offensive online content is being reported, how social media companies are responding to complaints, and what material is removed”.350 Further details have been provided in the Government Response to its Internet Safety Strategy Green Paper, which included a “draft transparency reporting template”. The template detailed:
the metrics that we [the Government] expect companies to report on. The template includes basic, but vital and hitherto unavailable, information on the total number of UK users, total number of UK posts and total number of reports, as well as what information companies signpost users to when they have reported an issue […] We are also seeking information about the company’s processes for handling reports, as well as specific information relating to the types of reports which are made and how quickly they are resolved.
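Read as a data specification, the draft template describes a small, structured return per company. The sketch below illustrates that reading using only the metrics named in the passage above; the field names are illustrative and are not the Government’s.

```python
from dataclasses import dataclass, field

@dataclass
class TransparencyReturn:
    """Illustrative structure for the metrics named in the draft template;
    field names are hypothetical, not taken from the Government's template."""
    total_uk_users: int
    total_uk_posts: int
    total_reports: int
    # What users are signposted to after they report an issue
    signposted_information: list[str] = field(default_factory=list)
    # The company's processes for handling reports
    report_handling_process: str = ""
    # Reports broken down by type, and how quickly each type is resolved (hours)
    reports_by_type: dict[str, int] = field(default_factory=dict)
    resolution_time_hours_by_type: dict[str, float] = field(default_factory=dict)
```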
200.This approach has been widely welcomed. As Carolyn Bunting from Internet Matters put it, “if you do not measure stuff, you cannot possibly manage this. The very first step in this is to get to grips with what is actually going on for UK children on social media” through transparency reporting.351 The NSPCC also stressed that:
Transparency reports must be a key part of any regulatory solution, allowing Parliament, civil society and users to fully understand industry processes and outcomes. As a minimum, regulatory reporting should set out how sites resource their moderation and reporting processes; and the specific outcomes that result from reports being made by children or in relation to child abuse.352
201.While the major social media companies have started to produce transparency reports, ahead of any formal requirement to do so, Duncan Stephenson from the Royal Society for Public Health pointed to a “lack of consistency in how different platforms [were] approaching and embracing this”,353 including how frequently such reports were published, and what information was, or was not, included. We heard that Twitter’s transparency report focused only on illegal content, with reports on cyberbullying, for example, not included.354 Since November 2018, in contrast, Facebook has included “Bullying and Harassment and Child Nudity and Sexual Exploitation of Children” in its transparency report.355 As Yih-Choung Teh from Ofcom also noted, “different platforms have different community standards”, with some offering “greater degrees of protection than others”.356 This, in turn, may affect what they choose to include in their transparency reports.
202.The Government has indicated that transparency reporting is one of the “potential areas where the Government will legislate”.357
203.The Government committed to introduce a code of practice for social media platforms under section 103 of the Digital Economy Act 2017.358 The Act requires that the code addresses conduct that involves bullying or insulting an individual online, or other behaviour likely to intimidate or humiliate the individual. The Government has since confirmed that the code will “apply to conduct directed at groups and businesses, as users can be upset by content even if it’s not directed towards them individually”.359
204.Details of the code were outlined in the Government’s Green Paper on the Internet Safety Strategy, as well as in the Government Response to the Green Paper. The Rt Hon Matt Hancock MP, then Secretary of State for Digital, Culture, Media and Sport, described the code of practice as providing “guidance to social media providers on appropriate reporting mechanisms and moderation processes to tackle abusive content”.360 The code is also identified as a “potential area” for legislation.361 According to the Government, the code will cover the following broad areas:
205.Emily Cherry from Barnardo’s told the Committee that she was pleased with the Government’s commitment to looking at regulation. She added that Barnardo’s:
has been calling for some time for a statutory code of practice, to apply to all social media sites, and an independent regulator with the teeth to hold social media companies to account. That means bringing them to the table, issuing fines if they are unable to comply with the code.363
206.At present, however, it is unclear whether, and how, the code of practice will be enforced. The Government stated in May 2018 that it “will encourage all social media platforms to sign up to our code of practice and transparency reporting” (our emphasis).364 Reflecting on the code, Professor Lorna Woods and William Perrin commented that:
while the Government has put forward a draft Code of Practice for social media companies, as required under the Digital Economy Act 2017, we believe that such a voluntary Code is now no longer sufficient on its own to pre-empt and reduce the current level of harms that can be experienced by users of social media.365
The NSPCC also stated that it was:
essential that the Government commits to statutory regulation of social networks. Since a voluntary Code of Practice was first proposed in the Byron Review ten years ago, social networks have consistently failed to prioritise child protection and safeguarding practices.366
The Royal Society for Public Health, in contrast, thought that industry “should be given the chance to regulate themselves in line with a voluntary code of practice”.367
207.When asked if the code would be voluntary or statutory, the Minister indicated that the Government’s thinking on the matter was evolving:
At the point of the Green Paper published last year, there was an expectation that, although rooted in the Digital Economy Act and, therefore, a statutory code, it would be undertaken on a voluntary basis. However, in our response to the consultation that followed from the Green Paper, which we published in May, we announced that we would work on a White Paper that would produce recommendations to enforce the code of conduct and transparency reporting by a means of legislative and non-legislative measures. Our thinking is developing towards the view that some level of statutory legal regulation will be required.368
208.Access to some content online is only available after the user has verified that they are over a certain age. Under section 14 of the Digital Economy Act 2017, there is a requirement to prevent access to Internet pornography “by persons under 18”.369 Though the Act received Royal Assent in April 2017, section 14 of the Act has yet to come fully into force. The British Board of Film Classification (BBFC) has, however, been appointed as the regulator.
209.Pornographic material is readily available through some social media platforms, which host accounts that promote the publishers and stars of pornography. The Digital Policy Alliance highlighted how there was:
still no clarity regarding the extent to which Virtual Private Networks (VPNs), search engines and social media platforms will be captured as ancillary service providers [under the Act] and held to account for pornography accessed by children via these paths.370
210.In October 2018, the Government produced the Online Pornography (Commercial Basis) Regulations.371 The Regulations defined the “Circumstances in which pornographic material is to be regarded as made available on a commercial basis”. The Regulations stated that they did “not apply in a case where it is reasonable for the age-verification regulator to assume that pornographic material makes up less than one-third of the content of the material made available on or via the internet site”. This has been interpreted to mean that social media platforms will not be captured by the age verification requirements of the Digital Economy Act.
211.The regulations were approved in December 2018, with the Minister anticipating that age verification would “be in force by Easter next year”.372 She added that the Government had “always said that we will permit the industry three months to get up to speed with the practicalities” of delivering the age verification.373 The Minister also acknowledged that the ‘one-third’ rule was a weakness in the regulations:
it is well known that certain social media platforms that many people use regularly have pornography freely available. We have decided to start with the commercial operations while we bring in the age verification techniques that have not been widely used to date. But we will keep a watching brief on how effective those age verification procedures turn out to be with commercial providers and will keep a close eye on how social media platforms develop in terms of the extent of pornographic material, particularly if they are platforms that appeal to children—not all are. You point to a legitimate weakness, on which we have a close eye.374
David Austin from the BBFC, however, pointed out that there was a legal obligation on the regulator to:
report back to the Government 12 months after implementation to say what has and has not worked well. If after 12 months social media are an issue in relation to pornography, we will certainly make that clear.375
212.Where non-pornographic content was concerned, the Government also acknowledged a more widespread lack of age verification for social media platforms, stating that it needed “to continue to tackle” the issue “head-on and evolve [its] work on online safety”.376 What this will involve is unclear. The Health Secretary, Rt Hon Matt Hancock MP, told The House Magazine that there “absolutely” should be a minimum and enforced legal age requirement to use social media sites. When asked what the age limits should be, he replied: “Well, the terms and conditions of the main social media sites are that you shouldn’t use it under the age of 13, but the companies do absolutely nothing to enforce against that. And they should, I think that should be a requirement”.377
213.Some of the evidence we received, however, was sceptical about the effectiveness of age verification technology. YMCA England and Wales explained that:
although age restrictions have been put in place on social media sites, young people are continuously evading these. Indeed, young people frequently spoke of having signed up to multiple social media accounts by the age of seven illustrating the protections being put in place to protect young people are currently failing to do so.378
According to Professor Przybylski, Director of Research at the Oxford Internet Institute, introducing age verification could lead to harmful, unintended consequences. He told the Committee that it would teach young people:
how to use proxies, VPN and other technologies. My legitimate concern […] is that many young people will wind up using insecure services to access mature material. They will wind up having viruses or other material infect the browser.379
214.Karim Palant from Facebook also told us that he was “not aware of anywhere where there is an age verification process for people in their teens that would compare with the BBFC process that is still being worked on for 18-year-olds for pornography here in the UK”.380 Both Facebook and Twitter pointed to “tensions”381 relating to the amount of data their companies held on under 18s, with Karim Palant questioning:
how much data [do] you want to keep on 13, 14 or 15-year-olds and how many younger people you wish to restrict from accessing internet products by requiring them to have access to a credit card or a photo ID.382
215.In contrast, both the Digital Policy Alliance and Yoti (an identity verification platform) told us that there were technological solutions to the problem.383 Yoti, for example, highlighted how:
school databases could be used by Government should they wish to allow identity companies to check on a yes/no basis that a child or young person is over 13, 13–17 or 18 and over.384
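The appeal of checks made “on a yes/no basis” is that the platform receives only the answer to the question “is this user old enough?”, not the underlying record. The sketch below shows that general shape; it is a hypothetical interface, not Yoti’s actual service or any school database schema.

```python
from datetime import date

class AgeAttributeChecker:
    """Hypothetical verifier that holds dates of birth (e.g. from a school
    database) and answers age questions on a yes/no basis only."""

    def __init__(self, dates_of_birth: dict[str, date]):
        self._dates_of_birth = dates_of_birth  # never shared with the platform

    def is_at_least(self, user_id: str, years: int, today: date) -> bool:
        dob = self._dates_of_birth[user_id]
        age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
        return age >= years

# A platform asks only "is this user 13 or over?" and receives True or False.
checker = AgeAttributeChecker({"pupil-001": date(2004, 6, 1)})
print(checker.is_at_least("pupil-001", 13, today=date(2019, 1, 31)))  # True
```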
David Austin from the BBFC also pointed to significant innovation taking place in this field:
A year ago, the industry was saying, “We can’t age-verify at a reasonable cost. It will cost us £1 to £1.50 each time we age-verify.” The progress it has made over the last 12 months means that now it is free or costs only a fraction of a penny to age-verify. We have seen massive technological innovation.385
216.The Minister for Digital and the Creative Industries indicated that establishing a digital identity for children, which confirms their age, may be on the horizon:
At the moment, we think we have a robust means by which to verify people’s age at 18; the challenge is to develop tools that can verify people’s age at a younger age, such as 13. Those techniques are not robust enough yet, but a lot of technological research is going on, and I am reasonably confident that, over the next few years, there will be robust means by which to identify age at younger than 18.386
217.Instead of looking at how to prevent children from accessing certain sites, some witnesses focused on how to ensure platforms were designed, from the outset, to be safe for children. Under section 123 of the Data Protection Act 2018, the Information Commissioner must prepare a “code of practice” which contains guidance on “standards of age-appropriate design of relevant information society services which are likely to be accessed by children”.387 During a debate on the Data Protection Bill in December 2017, the Government committed to supporting the Commissioner in her development of the Code by providing a list of “minimum standards to be taken into account when designing it”. According to the Parliamentary Under-Secretary of State in DCMS, Lord Ashton, the standards included:
default privacy settings, data minimisation standards, the presentation and language of terms and conditions and privacy notices, uses of geolocation technology, automated and semi-automated profiling, transparency of paid-for activity such as product placement and marketing, the sharing and resale of data, the strategies used to encourage extended user engagement, user reporting and resolution processes and systems, the ability to understand and activate a child’s right to erasure, rectification and restriction, the ability to access advice from independent, specialist advocates on all data rights, and any other aspect of design that the commissioner considers relevant.388
218.The Information Commissioner’s Office (ICO) consulted on the design code between June and September 2018.389 A response had not been published at the time of writing. Charities and NGOs broadly welcomed its development. Emily Cherry from Barnardo’s was clear that the UK:
absolutely needs to have […] safety-by-design principles in place […] nobody can launch a new shop children will go into or a new playground where children can play without having health and safety features in place. Why should the online world be any different?390
219.Internet Matters highlighted that some social media sites were currently designed to “keep people online for as long as possible”, adding that this was “the metric of success for many of these companies”. It stressed that currently there was “little to no regulation around this area—especially on apps or devices designed and targeted at children and young people”.391
220.A similar point was made by Dr James Williams, a former product designer at Google. In his book, Stand Out of Our Light, he wrote that “success” from the perspective of a major online tech company was typically defined in the form of low-level “engagement” goals which “include things like maximizing the amount of time you spend with their product, keeping you clicking or tapping or scrolling as much as possible, or showing you as many pages or ads as they can”. According to Dr Williams, he soon came to understand that companies like Google were focused on holding the attention of their users for as long as possible.392 Our colleagues on the Digital, Culture, Media and Sport Committee recently launched an inquiry on “Immersive and addictive technologies” and will be examining this aspect of social media in more detail.393
221.Some tech and social media companies, along with Internet Service Providers, have attempted to integrate ‘safety-by-design’ principles into their products. Claire Lilley from Google UK told us about YouTube Kids which she described as a “restricted version of YouTube for younger children” aged under 13. She explained that you “cannot make any comments on it or upload any content. You can turn the search function off completely”394 while algorithms are used “to curate the right kind of age-appropriate content”.395 The UK’s four large fixed-line ISPs (BT, Sky, TalkTalk and Virgin Media) also offer all new Internet customers a family-friendly network-level filtering service.
222.The need for these types of technical controls—including content filtering/blocking, privacy and location settings set, by default, to the strongest available for under 18s, and deactivating features designed to promote extended use—was emphasised by children’s NGOs including the NSPCC and Barnardo’s.396 The Government, in its Response to the Internet Safety Strategy, asserted that a fundamental shift in approach was required: one that moves “the burden away from consumers having to secure their devices and instead ensuring strong security is built into consumer […] products by design”.397
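One way to express the controls described above is as a set of account defaults applied whenever the user is under 18: strongest privacy settings on, geolocation off, and features designed to prolong engagement disabled. The sketch below is a minimal illustration of that idea; the setting names are hypothetical rather than any company’s actual configuration.

```python
# Hypothetical setting names; the values reflect the defaults described above.
STANDARD_DEFAULTS = {
    "profile_visibility": "public",
    "geolocation": True,
    "autoplay_next": True,
    "engagement_prompts": True,
    "content_filtering": "standard",
}

UNDER_18_OVERRIDES = {
    "profile_visibility": "private",  # strongest privacy setting by default
    "geolocation": False,             # location off by default
    "autoplay_next": False,           # features promoting extended use off
    "engagement_prompts": False,
    "content_filtering": "strict",
}

def default_settings(age: int) -> dict:
    """Return account defaults, applying the stricter overrides for under-18s."""
    settings = dict(STANDARD_DEFAULTS)
    if age < 18:
        settings.update(UNDER_18_OVERRIDES)
    return settings

print(default_settings(age=14))
```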
223.In Chapter 3 we recommended that the Government’s forthcoming White Paper on Online Harms should be underpinned by the principle that children must, as far as practicably possible, be protected from harm when accessing and using social media sites. It has been suggested that this could be translated into a statutory requirement for social media companies to have a ‘duty of care’ towards their users. A duty of care, applying both to individuals and to companies, has been defined as a requirement to:
take care in relation to a particular activity as it affects particular people or things. If that person does not take care, and someone comes to a harm identified in the relevant regime as a result, there are legal consequences, primarily through a regulatory scheme but also with the option of personal legal redress.398
224.The ‘duty of care’ approach, according to Lorna Woods, Professor of Internet Law, University of Essex, and William Perrin, was “essentially preventative” and aimed at “reducing adverse impact on users before it happens, rather than a system aimed at compensation/redress”. They added that “the categories of harm can be specified at a high level, by Parliament, in statute”, which is similar to the approach outlined by Ofcom.399 Building on this point, the NSPCC emphasised that there would need to be a regulator who could assess social media companies’ progress against “identified harms, and could instruct that additional measures are taken, or sanctions imposed, if platforms fail to appropriately resource or deliver harm reduction strategies”.400
225.Professor Woods and William Perrin similarly stated that a regulator would be needed to:
provide guidance on the meaning of harms; support best practice (including by recognising good practice in industry codes); gather evidence; encourage media literacy; monitor compliance; and take enforcement action where necessary.401
In January 2019, Professor Woods and William Perrin updated their ‘duty of care’ approach. Notably, they broadened the scope of their original proposals to apply to “all relevant service providers irrespective of size”. To strengthen the “enforcement mechanisms”, they also suggested that “directors should be liable to fines personally” for non-compliance with the regulatory regime, though added that this was “a preliminary view”.402 The Minister for Digital and Creative Industries, Margot James MP, told us that establishing a ‘duty of care’ was “one proposal that we [the Government] are looking at”.403
226.In February 2018, the Prime Minister described social media as one of the “defining technologies of our age”. Like many age-defining technologies, it has brought a raft of benefits to its users, together with a host of unintended consequences, a number of which have been particularly detrimental—and in some instances, dangerous—to the wellbeing of children. Currently, there is a patchwork of regulation and legislation in place, resulting in a “standards lottery” that does little to ensure that children are as safe online as they are offline. A plethora of public and private initiatives, from digital literacy training to technology ‘solutions’, have attempted to plug the gaps. While the majority of these are to be welcomed, they can only go so far. A comprehensive regulatory framework is urgently needed: one that clearly sets out the responsibilities of social media companies towards their users, alongside a regime for upholding those responsibilities. The Government’s forthcoming Online Harms White Paper, and subsequent legislation, presents a crucial opportunity to put a world-leading regulatory framework in place. Given the international nature of social media platforms, the Government should ideally work with those in other jurisdictions to develop an international approach. We are concerned, however, based on the Government Response to its Internet Safety Strategy Green Paper, that this framework may not be as coherent, and joined-up, as it needs to be. We recommend a package of measures in this Report to form the basis of a comprehensive regulatory framework.
227.To ensure that the boundaries of the law are clear, and that illegal content can be identified and removed, the Government must act on the Law Commission’s findings on Abusive and Offensive Online Communication. The Government should now ask the Law Commission to produce clear recommendations on how to reform existing laws dealing with communication offences so that there is precision and clarity regarding what constitutes illegal online content and behaviour. The scope for enforcing existing laws against those who are posting illegal content must be strengthened to enable appropriate punishment, while also protecting freedom of speech.
228.A principles-based regulatory regime for social media companies should be introduced in the forthcoming parliamentary session. The regime should apply to any site with registered UK users. One of the key principles of the regulatory regime must be to protect children from harm when accessing and using social media sites, while safeguarding freedom of speech (within the bounds of existing law). This principle should be enshrined in legislation as social media companies having a ‘duty of care’ towards their users under the age of 18, requiring companies to act with reasonable care to avoid identified harms. This duty should extend beyond the age of 18 for those groups who are particularly vulnerable, as determined by the Government.
229.While the Government should have the power to set the principles underpinning the new regulatory regime, and identify the harms to be minimised, flexibility should be built into the legislation so that it can straightforwardly adapt and evolve as trends change and new technologies emerge.
230.A statutory code of practice for social media companies, to provide consistency on content reporting practices and moderation mechanisms, must be introduced through new primary legislation, based on the template in the Government Response to its Internet Safety Strategy. The template must, however, be extended to include reports of, and responses to, child sexual abuse and exploitation.
231.A regulator should be appointed by the end of October 2019 to uphold the new regime. It must be incumbent upon the regulator to provide explanatory guidance on the meaning and nature of the harms to be minimised; to monitor compliance with the code of practice; to publish compliance data regularly; and to take enforcement action, when warranted. Enforcement actions must be backed up by a strong and effective sanctions regime, including consideration being given to the case for the personal liability of directors. The regulator must be given the necessary statutory information-gathering powers to enable it to monitor compliance effectively.
232.Those subject to the regulatory regime should be required to publish detailed Transparency Reports every six months. As a minimum, the reports must contain information on the number of registered UK users, the number of human moderators reviewing reports flagged in the UK, the volume of reports received from UK users broken down by age, what harms the reports relate to, the processes by which reports are handled—including information on how they are prioritised, the split between human and machine moderation and any reliance on third parties, such as Trusted Flaggers—the speed at which reports are resolved, data on how they were resolved, and information on how the resolution or response was fed back to the user.
233.The Government should consider implementing new legislation, similar to that introduced in Germany, such that when content that is potentially illegal under UK law is reported to a social media company, it should have to review the content, take a decision on whether to remove, block or flag that item (if appropriate) or take other actions, and relay that decision to the individual/organisation reporting it within 24 hours. Where the illegality of the content is unclear, the social media company should raise the case with the regulator, who has the authority to grant the social media company additional time to investigate further. The Government should consider whether the approach adopted in Germany of allowing an extra seven days, in the first instance, to review and investigate further should be introduced in the UK.
234.Given the emergence of new technologies such as “deep fake” videos, which cannot easily be identified by human moderators, social media companies should put in place artificial intelligence techniques to identify content that may be fake, and introduce ways to “flag” such content to users, or to remove it, as appropriate.
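A minimal sketch of the ‘flag rather than remove’ idea in this recommendation is set out below: a detection model’s score is compared against two thresholds, borderline content is labelled for users, and only high-confidence cases are escalated for removal. The thresholds and labels are assumptions for illustration; a real system would also need a trained detection model, human review and routes of appeal.

```python
# Hypothetical thresholds; a real deployment would tune these against a
# trained detection model and keep humans in the loop.
FLAG_THRESHOLD = 0.6
REMOVE_THRESHOLD = 0.95

def triage_suspected_fake(fake_probability: float) -> str:
    """Map a detection score to an action: no action, label, or escalate."""
    if fake_probability >= REMOVE_THRESHOLD:
        return "escalate_for_removal"    # high confidence: human review, then remove
    if fake_probability >= FLAG_THRESHOLD:
        return "label_as_possibly_fake"  # warn users rather than remove
    return "no_action"

print(triage_suspected_fake(0.72))  # label_as_possibly_fake
```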
235.Social media companies must put robust systems in place—that go beyond a simple ‘tick box’ or entering a date of birth—to verify the age of the user. Guidance should be provided, and monitoring undertaken, by the regulator. The Online Pornography (Commercial Basis) Regulations must be immediately revised so that making pornography available on, or via, social media platforms falls within the scope of the regulations.
236.Safety-by-design principles should be integrated into the accounts of those who are under 18 years of age. This includes ensuring strong security and privacy settings are switched on by default, while geo-location settings are turned off. Strategies to prolong user engagement should be prohibited and the Government should consider improvements to ways in which children are given recourse to data erasure where appropriate.
237.We believe that Ofcom, working closely alongside the Information Commissioner’s Office (ICO), is well-placed to perform the regulatory duties and recommend to the Government that it resource Ofcom, and where relevant, the ICO, accordingly to perform the additional functions outlined above.
280 Doteveryone, Regulating for Responsible Technology: Making the case for an Independent Internet Regulator, A Doteveryone Green Paper, May 2018, pp8–9
281 Ofcom, Addressing harmful online content: A perspective from broadcasting and on-demand standards regulation, September 2018, p3
282 Q584. We found a similar lack of regulatory consistency in our Energy Drinks and Children report regarding the age used to define a child when it comes to the marketing, sale and advertising of energy drinks
283 Ofcom, Addressing harmful online content: A perspective from broadcasting and on-demand standards regulation, September 2018, p16
284 Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market (‘Directive on electronic commerce’).
285 Transposed into UK law through the Electronic Commerce (EC Directive) Regulations 2002.
289 Mark Bunting, Keeping Consumers Safe Online, Legislating for platform accountability for online content, July 2018, p8
290 Tumblr Help Centre, November 16, 2018: Issues with the iOS app
291 Social network Tumblr removed from Apple’s App Store due to child pornography, The Daily Telegraph, 20 November 2018
292 See, for example, oral evidence taken before the House of Lords Communications Committee during its inquiry on The Internet: to regulate or not to regulate, 2018
294 Digital, Culture, Media and Sport Committee, Oral evidence: The Work of Ofcom, HC 407, Q50
295 Digital, Culture, Media and Sport Committee, Fifth Report of Session 2017–19, Disinformation and ‘fake news’: Interim Report, HC 363, para 57
296 HM Government, Government response to the Internet Safety Strategy Green Paper, May 2018, p14
297 Ofcom, Addressing harmful online content: A perspective from broadcasting and on-demand standards regulation, September 2018
298 William Perrin, Harm Reduction In Social Media – A Proposal, March 2018; Mark Bunting, Keeping Consumers Safe Online: Legislating for platform accountability for online content, July 2018
299 Oral evidence taken before the House of Lords Communications Committee, 1 May 2018, Q13 [Mark Bunting]
300 Digital, Culture, Media and Sport Committee, Fifth Report of Session 2017–19, Disinformation and ‘fake news’: Interim Report, HC 363, para 58
301 Oral evidence taken before the House of Lords Communications Committee, 1 May 2018, Q13 [Dr Damian Tambini]
302 ibid
303 HM Government, Government response to the Internet Safety Strategy Green Paper, May 2018, p 14
304 Oral evidence taken before the House of Lords Communications Committee, 24 April 2018, Q3 [Professor Lorna Woods]
314 US House of Representatives, Committee on Energy and Commerce, Twitter: transparency and accountability, Wednesday 5 September 2018
327 ibid
328 https://fbnewsroomus.files.wordpress.com/2018/07/facebook_netzdg_july_2018_english-1.pdf
329 https://cdn.cms-twdigitalassets.com/content/dam/transparency-twitter/data/download-netzdg-report/netzdg-jan-jun-2018.pdf
330 https://transparencyreport.google.com/netzdg/youtube?hl=en
331 Center for Democracy & Technology, “German Social Media Law Creates Strong Incentives for Censorship”, 7 July 2017
333 ibid
334 The Law Commission, Abusive and Offensive Online Communications: A Scoping Report, HC 1682, November 2018, p328
335 ibid
336 ibid
337 The Law Commission, Abusive and Offensive Online Communications: A Scoping Report, HC 1682, November 2018, p330
342 https://transparencyreport.google.com/netzdg/youtube?hl=en
349 Oral evidence taken before the House of Lords Communications Committee, 10 July 2018, Q106 [Adam Kinsley]
350 PM speech on standards in public life: 6 February 2018, gov.uk
355 https://transparency.facebook.com/community-standards-enforcement
357 HM Government, Government response to the Internet Safety Strategy Green Paper, May 2018, p15; Q646
358 Digital Economy Act 2017, section 103
359 HM Government, Government response to the Internet Safety Strategy Green Paper, May 2018, p24
360 HM Government, Government response to the Internet Safety Strategy Green Paper, May 2018, p2
361 HM Government, Government response to the Internet Safety Strategy Green Paper, May 2018, p15
362 HM Government, Government response to the Internet Safety Strategy Green Paper, May 2018, p23
363 Q274; the Anti-bullying Alliance and UK Safer Internet Centre also stated that they supported the code of practice, see Anti-Bullying Alliance (SMH0102); UK Safer Internet Centre (SMH0110)
364 HM Government, Government response to the Internet Safety Strategy Green Paper, May 2018, p20
369 Digital Economy Act 2017, section 14
370 Digital Policy Alliance, Age Verification & Internet Safety Working Group - Briefing, Online Pornography: Outstanding issues with implementation of the Digital Economy Act 2017, September 2018
373 ibid
376 HM Government, Government response to the Internet Safety Strategy Green Paper, May 2018, p15
377 “We are going to make it a joy to work in the NHS”, interview with Matt Hancock, The House Magazine, 25 October 2018
384 Yoti (SMH0177). Such databases include the centrally-held ‘National Pupil Database’, as well as individual ‘School Information Management System’ (SIMS) databases.
387 Data Protection Act 2018, section 123
389 The consultation results had not been published at the time of writing.
392 James Williams, Stand Out of Our Light: Freedom and Resistance in the Attention Economy (Cambridge, 2018)
393 https://www.parliament.uk/business/committees/committees-a-z/commons-select/digital-culture-media-and-sport-committee/inquiries/parliament-2017/immersive-technologies/
397 HM Government, Government response to the Internet Safety Strategy Green Paper, May 2018, p28
402 Carnegie Trust UK, Internet Harm Reduction, An updated proposal by Professor Lorna Woods and William Perrin, January 2019
Published: 31 January 2019