Digital Technology and the Resurrection of Trust

Chapter 3: Accountability

Accountability and the technology platforms

74.One of the defining aspects of a developed democracy is the multitude of ways that power is held accountable for its actions. Citizens expect that, when they grant a person or organisation power to act on their behalf, there will be mechanisms to hold them to account for the manner in which that power is used.132 This is why accountability is one of the seven principles of public life that we expect all individuals elected or appointed to public office to uphold.133 In this Chapter we set out the case that platforms have become so dominant that they must be accountable for the power they hold, and we consider possible accountability mechanisms.

75.It may not always be understood how much power technology platforms hold over our democratic discussions. As these discussions move online, it becomes increasingly important to look at how platforms mediate that debate. Much of this mediation is determined by the way in which we are categorised and profiled for advertising purposes. The algorithms that companies use, and regularly change, for the purposes of advertising have a profound effect on the amount, quality and variety of the news each individual sees. Dr Ana Langer of the University of Glasgow and Dr Luke Temple of the University of Sheffield told us that, as large platforms like Facebook and Google control a dominant share of the advertising market, minor changes in their algorithms can have a massive impact on how much news people consume, what news is seen, and whether a news organisation’s business model is commercially viable.134

76.According to eMarketer, Facebook and Google accounted for 68.5 per cent of the UK digital advertising market in 2019.135 This growth in the platforms’ revenue has been accompanied by a loss of advertising revenue for traditional publishers, which has affected their ability to create original journalism. James Mitchinson, Editor of the Yorkshire Post, argued that social media has shifted the business model to one that is increasingly homogenised and centred on what generates the most ‘clicks’, rather than on what is most in the public interest.136

77.Platforms have achieved a dominant market position. Dr Martin Moore, Director of the Centre for the Study of Media, Communication and Power at King’s College London, has drawn parallels between technology companies and the large monopolies that existed in the US in the late 19th and early 20th centuries. He notes that some of the technology platforms that exist today are in many respects bigger and more dominant than the commercial monopolies of the late nineteenth century.137

78.For organisations this powerful to be trusted there must be clear methods of accountability, as Microsoft explained:

“Just as today when consulting a doctor over a medical issue, or a lawyer over a legal challenge, we can seek a second opinion or redress when something goes wrong, in the world of algorithms knowing who is accountable when something goes wrong is equally important. Maintaining public trust will require clear line of sight over who is accountable … in the real world.”138

79.Vint Cerf, Vice-President and Chief Internet Evangelist at Google, told us that one force that holds platforms like his in check is competition. He argued that people can go to other search engines if they want to, and that Google does not force people to use its search engine.139 Whilst this argument is theoretically plausible in relation to Google Search, it fails to cover YouTube (which is owned by Google) or Facebook. YouTube has over two billion users, with one billion hours of video watched daily.140 It is used by 92 per cent of the online population in the UK, and Facebook is used by 89 per cent.141 People use Facebook partially because it is where their friends are. People upload videos to YouTube partially because that is where the viewers are; and viewers go to YouTube because it holds the content they are looking for. Whilst there are undoubtedly ways in which these platforms offer positive experiences to users, a user who wished to use another platform would be limited in their ability to do so. The Centre for Data Ethics and Innovation (CDEI) found that the public felt that there was not a real choice between online platforms or services, and that it was difficult to avoid the use of Google or Facebook without having a negative online experience.142 Polling from Doteveryone showed that many of the public feel they have no choice but to sign up to services despite their concerns.143 The CMA, which is conducting an investigation into the digital advertising market, has expressed its concern that Google and Facebook are both now so large, and have such extensive access to data, that potential rivals can no longer compete on equal terms.144 This dominance is an important reason why greater accountability is needed to preserve our democracy. It would be wholly insufficient to rely on competition alone to keep platforms in check.

80.The public believe that there is a greater role for Government in ensuring accountability for these large companies. The CDEI found that 61 per cent of the public supported Government regulation of online targeting (another term used to describe the algorithmic recommendation of content) compared with only 17 per cent who favoured self-regulation for these companies.145 Christoph Schott, Campaign Director at Avaaz, told us that 81 per cent of the public think that platforms should be held accountable if they recommend fake news to millions of people.146 Doteveryone told us that two thirds of the public think the Government should be helping ensure technology companies treat their customers, staff and society fairly.147 Its polling found that 55 per cent of the public would like more places to seek help online and 52 per cent want a more straightforward procedure for reporting technology companies.148

81.There are a number of smaller platforms which, although they do not currently have the same market dominance, could raise issues for democracy in the future. For example, TikTok is used by approximately one in seven older children and may well become a dominant platform.149 There should be regulation to ensure that, as smaller platforms grow, they come to embody the values the public requires of them.

The Online Harms agenda

82.The Government set out its proposals for making technology platforms more accountable in the Online Harms White Paper.150 The White Paper suggested a statutory duty of care to make companies more responsible for the safety of their users, and to tackle harm caused as a result of content or activity on their services. The proposed regulator for this area would set out how to fulfil this legal duty through mandatory codes of practice. A platform that neither complied with the codes of practice nor provided evidence that it was going beyond their requirements would face sanctions for failing in its legal duty of care. The Government has stated that it is minded to appoint Ofcom to regulate this area, and in our Report we assume this will be the case.151 The duty of care is proposed to apply only to services that include user generated content, which is also the focus of this Report.152

83.Regulation has failed to keep up with technological innovation. MySociety, a civic technology social enterprise, told us that democratic institutions will struggle to move fast enough to hold social media platforms to account, and that constant innovation would be required just to maintain the status quo.153 A practical example of this can be seen in the Government’s proposals to regulate user generated content. The Government published an Internet Safety Strategy Green Paper in October 2017.154 This in itself would not be regarded as a rapid response to online threats, coming thirteen years after the founding of Facebook and five years after Facebook filed its Initial Public Offering, valued at $104 billion with 845 million active users.155 It also came seven years after Dr Tarleton Gillespie first raised concerns about the politics of online platforms which shape the contours of online political discussion.156 There was an eighteen-month wait between the Green Paper and the publication of the Online Harms White Paper in April 2019. In February 2020 the Government published its initial response to the consultation. Caroline Dinenage MP, Minister for Digital and Culture, told us that the Government would not publish its full response over the summer, as previously planned, but would publish it before the end of the year. She was unable to confirm that the Government would bring a draft bill to Parliament for scrutiny before the end of 2021.157 This is unacceptable and could mean that the bill does not come into effect until late 2023 or 2024. The Government and the wider policy-making process have evidently failed to get to grips with the pace of change, the urgent challenges and the opportunities of the digital age.

84.The Government should introduce Online Harms legislation within a year of this Report’s publication.

Figure 2: Timeline of progress on the Online Harms White Paper

Source: DCMS and Home Office, Online Harms White Paper, CP 57, (April 2019): https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/793360/Online_Harms_White_Paper.pdf [accessed 13 May 2020]

85.The Online Harms framework is designed partly with this issue in mind. Sarah Connolly, a civil servant at the Department for Digital, Culture, Media and Sport (DCMS), told us that Ministers supported the concept of a duty of care because it allows for flexibility.158 The codes of practice produced by the regulator can be more easily changed than primary legislation.

86.We heard from Sarah Connolly that the Government was taking a three-pronged, risk-based approach. The regulation would look at the size of the company, expecting more from larger companies; the kind of harm, anticipating more serious action to tackle the more serious harms; and the type of company, with companies that promise a restricted, safe environment expected to do more than platforms that advertise themselves as hosting robust conversation.159 This is a sensible approach and our recommendations should be seen as fitting within this framework and the Online Harms work more broadly.

87.The Government has suggested that the Online Harms framework should be focused on harms to individuals. Sarah Connolly told us that Ministers were very keen that it should be narrow in scope and focused on individual harms. However, she stated that tackling individual harms will have a net benefit for the wider public.160 Tony Close, then Director of Content Standards, Licensing and Enforcement at Ofcom, told us that there is a difference between individual harms and societal harms but that it is a spectrum rather than a bright line.161 He cited the harms that come from misinformation about the coronavirus as an example of this. This misinformation might harm the individual but could also have a detrimental impact on society as a whole.

88.Societal harms reflect real damage to the rights of individuals, and these are addressed within the White Paper. The White Paper includes cyberbullying and trolling, intimidation and disinformation as harms within its scope. It is also proposed that the regulator should have a specific commitment to the protection of freedom of expression.162 A broader interpretation of individual harms, one which includes the real harm that individuals experience when they are deprived of democratic rights such as free expression, fits easily within this framework and would seem to cover the area we are concerned about.

89.The Online Harms work should make clear that platforms’ duty of care extends to actions which undermine democracy. This means that the duty of care extends to preventing generic harm to our democracy as well as specific harm to individuals.

90.This Report works within the framework suggested by the Online Harms White Paper. Caroline Dinenage MP, Minister for Digital and Culture, told us that the Online Harms work is an urgent piece of work that should make radical changes to the online world.163 We agree. However, if for some reason the Online Harms work were to cease, or if Ofcom were not appointed to oversee it, the recommendations in this Report should be seen as free-standing and should be given to an appropriate regulator as soon as is practicable.

Freedom of expression in the online world

91.However, there are concerns over whether the Online Harms framework can bolster rather than undermine democratic activity. Index on Censorship has warned that the framework proposes restrictions on speech between individuals using criteria that are far broader than current law, and so risks capturing speech that is fundamental to effective democratic functions.164 The Government has suggested that the Online Harms bill will include an obligation on Ofcom to protect freedom of expression, but it is important to consider what this actually entails.165

92.Baroness O’Neill of Bengarve told us that protecting freedom of expression is not the only relevant standard that should be used, and that we should also look at ethical and epistemic standards, including whether communication promotes accuracy and honesty.166 She noted that the generic call to protect freedom of expression has been inflated beyond the original argument advanced by John Stuart Mill and others, to prioritise the rights of the speaker while ignoring the rights of the listener. Baroness O’Neill of Bengarve explained that Mill’s argument was that regulation of individuals should only take place when an action causes harm to others.167 Mill famously gave the example that the opinion that “corn dealers are starvers of the poor” should be allowed to circulate through the press but could be justly punished when delivered to an excited mob outside the house of a corn dealer.168 It is not clear that content on social media platforms should be seen as more analogous to circulation through the press than to a speech in front of a mob. It is increasingly apparent that whilst social media can be a site for open discussion, it can also be used to incite groups of people to harass and cause harm to others.

93.Professor Cristian Vaccari, Professor of Political Communication at Loughborough University, told us that it was important to distinguish between free speech and free reach. He argued that whilst people should be allowed to circulate things that are distasteful and violate certain norms on social media, it is much more questionable whether that content should be allowed to spread as virally as content that does not violate these norms.169

94.Misinformation, abuse and bullying are the legal but harmful elements that are within the scope of the White Paper and were also identified in our evidence as posing a threat to representative democracy. Misinformation undermines the ability of citizens to have a meaningful conversation about the future shape of society and for this reason we recommend at the outset of this Report that misinformation and disinformation should be within scope.

95.Abuse and hate speech can deter people from taking part in public life. This affects both figures on the national stage and the everyday lives of ordinary members of the public. We heard from Dr Rosalynd Southern and her colleagues at the University of Liverpool that a majority of MPs received abuse online.170 This abuse was often misogynistic or racist in nature and attempted to silence or dismiss the target. As the Joint Committee on Human Rights found in 2019, abuse of MPs is a serious problem and more action is needed to tackle abuse found on social media.171

96.Similar abuse can also deter young people from taking part in democratic discussion online. Professor Peter Hopkins and his colleagues told us about their research with young Muslims in the UK, which showed that young people who engaged with democracy through social media also received racist and Islamophobic abuse.172 This had the effect of making them feel more marginalised and less inclined to participate. As discussed above, the societal harm to democracy is closely linked to the harm these young people experience as individuals.

97.The difficulty of acting on harmful but legal content without unduly affecting freedom of expression has been recognised by Ministers working on Online Harms legislation. They have stated that this work will not prevent adults from accessing or posting legal content, nor will it require companies to remove specific pieces of legal content.173

98.Dr Jennifer Cobbe from the University of Cambridge told us that the content by itself is not the problem.174 She argued that a conspiracy theory video that is only seen by 10 people is not a public policy issue; it only becomes a problem when it is disseminated to a large audience and presented alongside a lot of similar content. Dr Cobbe explained that the problem occurs because of the platforms’ recommendation systems. These are the algorithmically determined processes that platforms use to decide what content to show to users and in what order. She argued that although regulating this algorithmic recommendation would have some freedom of expression effects, in deciding that certain communications should not be disseminated as widely as others, the content would remain on the website and could still be found by those who searched for it or if it were shared directly by other people.175 This would be less detrimental to free expression than removing the content. This view was summarised by Alaphia Zoyab, Senior Campaigner at Avaaz, who argued that everyone should be free to share whatever they want on social media, but that there is a need for intervention where content spreads virally through algorithmic recommendation.176
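By way of illustration, the sketch below shows in simplified code how a recommendation feed might demote rather than remove flagged content. The field names, demotion factor and ranking logic are hypothetical assumptions made for the purpose of illustration; they are not a description of any platform’s actual system.

```python
# Illustrative sketch only: a hypothetical ranking step showing how content
# flagged as harmful but legal could be demoted in a recommendation feed
# rather than removed. All names and values are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    engagement_score: float   # predicted engagement (clicks, shares, watch time)
    flagged_borderline: bool  # flagged as harmful but legal by policy review

DEMOTION_FACTOR = 0.1  # hypothetical reduction in algorithmic reach

def ranking_score(post: Post) -> float:
    """Score used to order a recommendation feed.

    Flagged content is not deleted: it receives a much lower score, so it is
    rarely recommended but can still be reached by search or a shared link.
    """
    score = post.engagement_score
    if post.flagged_borderline:
        score *= DEMOTION_FACTOR
    return score

def build_feed(candidates: list[Post], limit: int = 20) -> list[Post]:
    # Order candidate posts by ranking score; demoted items fall to the bottom.
    return sorted(candidates, key=ranking_score, reverse=True)[:limit]
```

Under such an approach the content remains available to those who search for it or receive it directly; the intervention targets its algorithmic dissemination rather than its existence.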

Platforms’ ultimate responsibility under a duty of care

99.Technology platforms have expressed a preference that regulation should focus on process rather than looking at the impact of their platforms. Facebook’s white paper on regulating online content suggests that regulation which targets specific metrics, rather than focusing on getting platforms to improve their processes, risks creating perverse incentives.177 Katy Minshall, Head of UK Government, Public Policy and Philanthropy at Twitter, told us that she thought Facebook made a compelling point, and that regulators should consider a more holistic picture by looking at systems and processes.

100.However, there is a strong case for platforms being required to take greater responsibility to protect their users from harm. It may not be sufficient simply to set out good practice on dealing with certain harms in order to reduce the harms platforms cause. We heard from several witnesses that it is the fundamental business models of technology platforms that directly lead to the harm. Alaphia Zoyab from Avaaz argued that, as platforms want to retain users’ attention for as long as possible, there is an incentive to promote triggering and more outrageous forms of content.178 Dr Jennifer Cobbe told us that platforms prioritise engagement and that in practice this means they promote content that is shocking, controversial or extreme.179

101.These concerns about platforms incentivising extreme content are evidenced by the fact that some of the most popular creators on social media have promoted hate speech and misinformation. Felix Kjellberg (known as PewDiePie) operates one of the largest channels on YouTube and has 105 million subscribers, many of whom are children.180 His channel has previously featured him paying people to perform Nazi salutes and hold a sign saying “Death to all Jews” in an attempt at humour.181 Logan Paul, another successful creator on YouTube, invited onto his video podcast Alex Jones, a far-right conspiracy theorist who suggested that a school shooting in the US was orchestrated by the US government, and who has also been banned from YouTube.182 Most recently, the professional boxer Amir Khan and the actor Woody Harrelson both posted to their Instagram profiles, to 1.3 million and two million followers respectively, material about the fundamentally false conspiracy theory that COVID-19 was caused or amplified by the implementation of 5G.183 A similar effect can be seen with the ‘Plandemic’ COVID-19 conspiracy video; its reach was greatly expanded by being shared by individuals with large numbers of followers, including a doctor who had appeared on The Oprah Winfrey Show and a mixed martial arts fighter.184

102.Mark Zuckerberg, the CEO of Facebook, has stated that one of the biggest issues social networks face is that, when left unchecked, people will engage disproportionately with more sensationalist and provocative content.185 However, he suggests that this is a feature of human nature seen just as much in cable news and tabloids as it is on social media. Mr Zuckerberg argues that Facebook is working to minimise sensationalist content, and we have heard from Google that it is trying to do the same with YouTube.186 However, as we explain in Chapter 4 on transparency, platforms have not been clear about what exactly they are doing to minimise such content or about the effect of these efforts. There are credible accounts of internal decisions at these platforms which suggest that, despite being aware of the problems, platforms have been reluctant to make all the changes necessary to reduce the spread of this type of content.187 Platforms have further undermined this work by exempting elected representatives from many of these policies, as discussed in more detail in Chapter 2 on informed citizens.

103.As discussed in the previous chapter, when we asked Vint Cerf why Google was promoting misinformation on its platforms, he told us that algorithmic systems can be brittle.188 Mr Cerf told us that machine learning can make mistakes when small changes that would be imperceptible to humans cause the system to think it is looking at something entirely different. Whilst this is an understandable technical limitation that all platforms face, it is unclear why the responsibility should lie with a regulator to suggest improvements in the processes behind these systems, rather than with the platforms themselves.

104.Karim Palant, UK Public Policy Manager at Facebook, told us that he hoped there would be iterative conversations between platforms and regulators about their expectations and what is proportionate to prevent the spread of harms before fines were given out.189 This may well be the right way forward, but ultimately the responsibility should lie with the platforms to uphold their duty of care to prevent harms even in the absence of a relevant code of practice that advises them on how this must be done.

105.As discussed above, the issue is not that platforms must be responsible for all content on their platforms but rather that they are responsible for content that they spread at scale. This means that platforms should have a greater responsibility to ensure that content that is being widely shared is not harmful before it can do further damage. This is one of the lessons being learned from the COVID-19 pandemic. As well as shouldering greater responsibility for content that they spread peer to peer across a network, platforms must take particular responsibility for users with large audiences on their platforms.

106.Platforms that have content creators with audiences in the tens of millions clearly have a greater responsibility for this content than for content which is produced by creators with audiences in the single digits. These large creators are effectively in business relationships with the platforms on which they post. In some cases, creators are being paid millions of pounds per year in advertising revenue, or tens of millions to create content exclusively for a specific platform.190 However, it is not clear that platforms are taking additional action to ensure that those with larger audiences are not posting content that can be shown to be harmful.

107.What ‘virality’ and ‘large audience’ mean will differ according to the three prongs described by Sarah Connolly. Large platforms should be expected to devote more resources to reviewing content and to face larger fines for failing to do so. Platforms should have a greater responsibility for more impactful harms and should be required to act on them at lower levels of virality, with content from creators with large audiences warranting earlier action than content from creators with smaller audiences. Platforms like YouTube that have partnership programmes to pay creators directly should accept a far greater responsibility for the output of those creators.

108.For harmful but legal content, Ofcom’s codes of practice should focus on the principle that platforms should be liable for the content they rank, recommend or target to users.

109.The Government should include as a provision in the Online Harms Bill that Ofcom will hold platforms accountable for content that they recommend to large audiences. Platforms should be held responsible for content that they recommend once it has reached a specific level of virality or is produced by users with large audiences.

110.The Government should empower Ofcom to sanction platforms that fail to comply with their duty of care in the Online Harms Bill. These sanctions should include fines of up to four per cent of global turnover and powers to enforce ISP blocking of serially non-compliant platforms.

Content moderation oversight

111.Dr Tarleton Gillespie, a Senior Principal Researcher at Microsoft Research, explains in his book on the subject that content moderation is the process by which platforms decide what content is or is not allowed on their platform. He argues that although platforms like to suggest that this is a peripheral part of their activities, it defines what the platform is and the conversation it allows to take place on it.191

112.Katie O’Donovan, Head of UK Government Affairs and Public Policy at Google, told us that most of Google’s content review is done by machine learning. She stated that humans set the community guidelines and then machine learning is used to identify the content that breaches those guidelines.192 Google told us that 90.7 per cent of the videos it removed from YouTube were first flagged by machines and, of those, 64.7 per cent were removed before the videos had any views.193 Karim Palant from Facebook told us that at Facebook human moderators look at user reports that have been flagged to them, although much of this flagging is done by automated processes. It is unclear to what extent this represents a difference in the way moderation is practised by the two platforms or merely a difference in emphasis, with Facebook emphasising the human element and Google emphasising the quality of its machine learning.
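By way of illustration, a simplified sketch of this kind of machine-assisted triage is set out below. The thresholds, names and routing rules are hypothetical assumptions for illustration only; they are not a description of Google’s or Facebook’s actual systems.

```python
# Illustrative sketch only: a hypothetical triage step in a moderation pipeline
# where machine learning flags likely breaches of human-written guidelines and
# uncertain or user-reported cases are routed to human moderators.

AUTO_REMOVE_THRESHOLD = 0.95   # hypothetical: near-certain breaches removed automatically
HUMAN_REVIEW_THRESHOLD = 0.60  # hypothetical: uncertain cases queued for a moderator

def triage(violation_probability: float, user_reported: bool) -> str:
    """Decide what happens to a piece of content.

    `violation_probability` stands in for the output of a trained classifier
    scoring the content against community guidelines written by humans.
    """
    if violation_probability >= AUTO_REMOVE_THRESHOLD:
        return "remove"        # removed before most users see it
    if violation_probability >= HUMAN_REVIEW_THRESHOLD or user_reported:
        return "human_review"  # sent to a moderator, ideally with context
    return "allow"

# Example: a user-reported post with a borderline score goes to human review.
print(triage(0.7, user_reported=True))    # -> "human_review"
print(triage(0.99, user_reported=False))  # -> "remove"
```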

113.These processes determine the de facto boundaries of free expression that users have on these platforms. Yet there is very little transparency or accountability to the public over what they are or are not allowed to say on these platforms. This falls directly within the scope of the proposed Online Harms framework, which suggests that platforms should prevent harm whilst protecting free expression online.

114.The evidence we have received suggests that ensuring free expression online is not the priority for content moderation processes. Professor Sarah Roberts, Co-Director of the Centre for Critical Internet Inquiry at UCLA, told us that the main concern behind platforms’ content moderation activities was brand management.194 A member of Facebook’s content policy team told Dr Tarleton Gillespie that Facebook’s moderation process was built around repeated operations at high scale. The employee explained the need for decisions to work at scale rather than focus on being thorough by stating that “it’s not a courtroom. It’s UPS195.”196

115.We heard that the focus of platforms was on moderating inexpensively rather than accurately and that this is reflected in the working conditions of some of the human moderators. Professor Sarah Roberts told us that platforms were constantly seeking sites that will provide labour at the lowest cost. Her research with moderators in the Philippines found that moderators were given approximately half a minute to decide whether certain types of content should be removed.197 Furthermore, Professor Roberts told us that moderators frequently deal with disturbing content including sexual exploitation or abuse of children. As a result, moderators either cease to be good at their job because they are traumatised by what they have seen, or through repetition they become desensitised to these types of extreme content.198

116.It is not clear that automated content moderation makes better decisions than human moderators. The Electronic Frontier Foundation, an American non-profit organisation that seeks to defend civil liberties in a digital world, argues that replacing human moderators with automation creates a more secretive process in which people’s content is removed inaccurately.199 Sarah Connolly from DCMS told us that for subjects like hate speech, where there is not a clear line between a slur and the re-appropriation of language by an affected community, human eyes are needed to understand the wider context.200 This is particularly concerning in the context of COVID-19. During this public health crisis, in which moderators could not attend a physical workplace, platforms decided that, rather than allow moderators to access private information from home, they would rely primarily on automation to remove misinformation about the virus.201

117.Even before the pandemic, platforms like Facebook did not empower their moderators to make the best decision possible. Karim Palant from Facebook told us that Facebook reduces the contextual information that is available to moderators in order to protect users’ privacy and to guard against bias. This approach was criticised by the Reuters Institute, which argued that moderators should have greater access to contextual information.202 It gave the example of an image from the Holocaust, which has a very different significance when posted by a Holocaust survivor or by a neo-Nazi. A report from Yale Law School, commissioned by Facebook to audit its transparency process, suggests that Facebook does sometimes use this context for moderation decisions.203 It notes that Facebook routinely reviews moderation decisions using a reviewing panel and that this panel is given more information about the context of the post, including some additional details about the history of the user who posted the content and the user who reported the content.

Appealing platforms’ decisions

118.The Online Harms White Paper states that users should have the right to appeal moderation decisions.204 However, it is unclear what mechanism would be used to ensure the quality of this appeal process. The White Paper makes clear that the regulator’s role would not be to monitor individual decisions but only to regulate companies’ policies and processes.

119.Many individuals’ experience when something goes wrong online is that their report is mishandled. Polling from Doteveryone found that approximately a quarter of the public say they have reported something and nothing happened as a result.205

120.Facebook is in the process of creating an independent Oversight Board, which would act as a final appeals body that could overrule moderation decisions made by Facebook. This could be an effective part of corporate governance. However, it should not be seen as anything beyond that. The Oversight Board provides Facebook with advice on content moderation decisions. This helps create an externally visible avenue of appeal. However, the platform is under no legal obligation to take its advice and has only made a commitment to accept its decisions on individual pieces of content. The board is not, as the name might suggest, providing broader oversight of Facebook and does not provide an independent accountability mechanism. Facebook recently announced the international membership of the Oversight Board, but the process is global and seeks to make decisions across cultures,206 which may also prevent it from being effective.

121.A global body will not necessarily be well placed to determine what content should be removed in the UK. We heard from Professor Safiya Noble, Co-Director of the Centre for Critical Internet Inquiry at UCLA, that moderators can fail to understand the social context in which remarks are made. She highlighted the fact that something that might be blatantly racist in one culture could be imperceptible to a person from another culture.207 Dr Ysabel Gerrard from the University of Sheffield told us that platforms should be aiming to moderate at a local, not global, level.208

122.Will Moy, the Chief Executive of Full Fact, told us that the idea that technology platforms were best placed to say where the balance lies between harm and free expression in all the different countries in which they operate was laughable. He argued that there should be an open, democratic, transparent process defining where that balance lies in the UK.209 Doteveryone told us that the UK should develop an ombudsman-style body to adjudicate on content takedown decisions if cases are of sufficient importance, for example if they relate to the abuse of a political figure or have been broadcast to a minimum threshold of users. It argued that content standards must ultimately be defined according to the public’s values, founded on rigorous democratic debate.210 In its submission to the Online Harms White Paper consultation, the ICO suggested that, in addition to a regulator to oversee policy, there should be an ombudsman to deal with complaints that have not been satisfactorily resolved between the user and the platform.211

123.One of the concerns with improving moderation is the volume of material that would have to be considered. Sarah Connolly from DCMS told us that the sheer number of uploads and amount of material that needs to be looked at makes moderation difficult.212 Karim Palant from Facebook stated that Facebook’s rules are designed and written to be enforced at scale based on millions of reports.213 Caroline Dinenage MP, Minister for Digital and Culture, told us that the size and scale of the online world would mean that the volume of traffic sent to an ombudsman would be massive and could overwhelm an organisation.214 The size of a problem does not remove the need to tackle it. Whilst there would undoubtedly be a large number of potential cases for the ombudsman, this does not mean that there should not be one. Facebook’s Oversight Board will face an even greater challenge in dealing with complaints from Facebook’s users across the globe. Like the Oversight Board, this ombudsman should prioritise cases on the basis of where its decisions can have the most impact. In addition, other parts of the Online Harms work should improve moderation processes and reduce the number of cases that would need an ombudsman. However, an ombudsman whose rulings applied only to the individual case in question would struggle to make a meaningful impact on users’ experience of a platform.

124.Mackenzie Common, a researcher at LSE, has argued that content moderation procedures could be improved by establishing a system of precedent.215 This could be problematic if each case were unique and difficult to judge. However, that is not how it is described by platforms. A content policy manager at Facebook described this to Dr Tarleton Gillespie:

“The huge scale of the problem has robbed anyone who is at all acquainted with the torrent of reports coming in of the illusion that there was any such thing as a unique case. … On any sufficiently large social network everything you could possibly imagine happens every week, right? So there are no hypothetical situations, and there are no cases that are different or really edge. There’s no such thing as a true edge case. There’s just more and less frequent cases, all of which happen all the time.”216

125.There is a case for greater transparency in content moderation and for holding platforms to account for inconsistency in their practice, which we discuss in Chapter 2. Establishing clear standards would allow the decisions of an independent ombudsman to help improve the quality of content moderation. Caroline Dinenage MP told us that Ofcom could use a super-complaints system, where there are many complaints about a single issue, to feed into its horizon scanning approach in determining what ‘good’ looks like.217 The ombudsman’s decisions could feed directly into this process. There are important decisions to be made about how widely the ombudsman’s rulings bind future content moderation.218 Platforms will need to work with Ofcom and the ombudsman to determine this and the best way to implement it.

126.This ombudsman should not have an absolute say over what must be included on a platform. It is not desirable that an ombudsman should be able to overrule the rights of a platform owner who does not wish to host content that they find objectionable, as set out in their terms and conditions or community standards.

127.The Government should establish an independent ombudsman for content moderation decisions to whom the public can appeal should they feel they have been let down by a platform’s decisions. This ombudsman’s decisions should be binding on the platform and in turn create clear standards to be expected for future decisions for UK users. These standards should be adjudicated by Ofcom, with platforms able to make representations on how they are applied within their moderation processes. The ombudsman should not prevent platforms removing content which they have due cause to remove.

Parliamentary oversight

128.Appointing a regulator and an ombudsman to counteract online harms only achieves part of the goal of bringing democratic accountability to technology platforms. There must also be democratic oversight of these bodies. The decisions that Ofcom and the ombudsman will make may at times be deeply political, and as such should flow from a parliamentary mandate. Will Moy of Full Fact suggested that social media companies were currently setting the rules because Parliament had failed to act.219

129.The Online Harms White Paper suggested there should be a role for Parliament in overseeing the regulator, through the process of laying an annual report and accounts before Parliament and responding to requests for information.220 The White Paper’s consultation asked what Parliament’s role should be.221 Respondents strongly supported Parliament having a defined oversight role over the regulator, and several suggested establishing a dedicated body to review codes of practice to ensure consistency with existing civil liberties.

130.We agree that Parliament should have a strong role in overseeing the regulator. Yet merely laying an annual report and accounts before Parliament would not be sufficient. The Electoral Commission and its oversight by the Speaker’s Committee provide a better model, one that would ensure greater parliamentary oversight and independence, as well as engagement between Government and the regulator.222 The Speaker’s Committee sets the budget of the Electoral Commission and thereby better protects it from political interference. Just as the Electoral Commission’s role in elections requires neutrality and independence, so this work, which makes judgements about free expression online, requires the same level of independence.

131.This need for independence from Government can be seen from other countries’ attempts to tackle disinformation. In Singapore, the Government has the ability to remove any online information that it deems to be incorrect and not in the public interest.223 This power vested in the Government inevitably led to the first uses of the law being to take down posts critical of the Government.224 The way the UK acts in creating independent oversight for a regulator with powers so closely related to free expression will be closely watched by other countries. Caroline Dinenage MP, Minister for Digital and Culture, told us that many countries are watching how the UK delivers its Online Harms regulation.225 We should be careful not to provide an example that helps legitimise the actions of authoritarian regimes. To ensure this, any committee established to oversee this area of activity should be constituted in a way that ensures there is no inbuilt government majority. The committee’s independence could be further underpinned by taking its membership from both Houses of Parliament.

132.The proposed committee’s role should be to ensure transparency and accountability in the work of the regulator and the ombudsman. This committee would represent the public and ensure that platforms are being held accountable for their decisions by the regulator and the ombudsman. This includes representing citizens who are not in a position to take action on their own behalf, including children who make up a third of internet users.226 However, both the regulator and the ombudsman should be independent and empowered to make decisions on the policy and practice of technology platforms. The proposed committee should not be a court of appeal for cases that are overseen by the regulator and the ombudsman.

133.The proposed committee should play a key role in selecting the ombudsman’s chief executive in order to ensure independence from the Government. This would follow the example of the House of Commons Treasury Select Committee, which has the power to veto the Government’s choice of the Chair of the Office for Budget Responsibility. There should also be parliamentary oversight of Ofcom’s leadership. However, Ofcom’s work is broader than its Online Harms responsibilities. The DCMS Select Committee has requested a role in the appointment process; we would support that, with this new joint committee able to feed into that process.227

134.Parliament should set up a joint committee of both Houses to oversee Ofcom’s Online Harms work and that of the proposed ombudsman. This committee should be constituted so that there can be no Government majority amongst its Members. The committee should ensure an adequate budget for this portion of Ofcom’s work. Ofcom should be obliged to submit all codes of practice to the Committee for scrutiny.

135.The joint committee should set the budget for the content moderation ombudsman. The committee should hold an appointment hearing with the ombudsman’s proposed chief executive and hold the power of veto over their appointment.

Regulatory capacity

136.In order for these measures to provide effective accountability for the technology platforms, the relevant regulators must be able to keep pace with technological change. Alongside Ofcom, which has been the focus of much of this Chapter, there are several other regulators that have a role to play in ensuring that democracy can flourish in a digital society. These include the ICO, the ASA, the CMA, and the Electoral Commission.

Box 3: The Regulators

Advertising Standards Authority (ASA)

In the UK, general advertising and direct marketing across all media is regulated by the ASA, under the principle that adverts must be “legal, decent, honest and truthful.”

Advertisements made by companies and third sector bodies (such as voluntary and community organisations) must adhere to ASA rules. However, non-broadcast political advertising which principally aims to influence voters in local, regional, national or international elections or referendums is exempt under Rule 7 of the CAP Code and is not regulated by the ASA.

Competition and Markets Authority (CMA)

The CMA is an independent non-ministerial department which leads the Government’s Digital Markets Taskforce. It seeks to regulate the digital sphere by: enabling disruptors to challenge incumbents; empowering consumers through choice and control; supporting quality services and content online; and providing industry, especially SMEs, with fair access to digital markets to be able to grow their businesses.

Electoral Commission

The Electoral Commission is an independent body which regulates the funding of political parties, individual party members, and candidates, as well as organisations campaigning in referendums. It is distinct from other regulators in being answerable to Parliament. It also enforces the inclusion of imprints on printed election material under the Political Parties, Elections and Referendums Act 2000.

Information Commissioner’s Office (ICO)

The ICO is the independent regulatory office in charge of upholding information rights in the interest of the public. The organisation covers the Data Protection Act 2018 (DPA), the Freedom of Information Act and the Privacy and Electronic Communications Regulations.

Under the DPA, all organisations that process personal information must register with the ICO, which publishes the names and addresses of data controllers, along with a description of the type of processing each organisation performs.

Ofcom

Ofcom oversees telecommunications, post, broadcast TV and radio (including the BBC’s output), has duties in relation to broadcast advertising and regulates certain online video services. It has a statutory duty to promote media literacy, under which it carries out research into people’s use of online services such as social media and video sharing platforms. In February 2020 the Government announced it was “minded” to grant new powers to Ofcom as the regulator for online harms.

137.In January, we hosted an informal workshop with representatives from relevant regulators, technology companies and external experts to discuss regulatory capacity and innovation in this area.228 This focused on the challenges that regulators face in adapting to the digital environment. Issues raised included the need for greater collaboration between regulators and industry, the importance of regulators having a wide skill base and being able to attract talent from industry, and the growing resource constraints that regulators face.

138.In taking evidence, we heard particular concern voiced about the digital capacity of the Electoral Commission. Democracy Club, a civic technology company, told us that the Electoral Commission does not have an in-house digital team and has not been given the tools necessary to pursue its aims in a digital age.229 Full Fact argued that the Electoral Commission needs better funding and a strong technology team to develop the tools necessary to monitor spending as it happens, so that fraud or misuse is caught before it can affect the outcome of elections or referendums.230 Doteveryone similarly stated that the Electoral Commission was hamstrung by limited resources and that it was vital that it build the digital capabilities needed to anticipate and respond to future developments in digital campaigning. It recommended that the Electoral Commission should work with the CDEI and the Better Regulation Executive to develop the horizon scanning capacities needed to identify emerging challenges. Louise Edwards, Director of Regulation at the Electoral Commission, told us that the Commission felt it had the resources to enforce the current framework and that the issue was that the existing legal framework prevented the Commission from regulating elections properly in a digital age.231 We will return to the regulation of elections in Chapter 6; however, it is not clear that the Commission currently has the digital expertise and resources to oversee its desired level of digital innovation.

139.We also heard concerns about the under-resourcing of other regulators. Alex Krasodomski-Jones, from Demos, suggested that the ICO was woefully under-resourced. Elizabeth Denham, the Information Commissioner, told us that although a regulator could always make a case for additional resources, she believed that the lack of resources at the Electoral Commission was of greater concern.232

140.Beyond the lack of resources, additional concerns were raised about the difficulty caused by the number of regulators with varying remits. Baroness O’Neill of Bengarve felt that, even with the variety of existing regulators, it was difficult to be confident that every concern about democracy online was covered by a regulator, or to know which regulator that might be.233 Caroline Elsom, Senior Researcher at the Centre for Policy Studies, argued that there were at least five regulators working in this space, which in turn brought a real risk of overregulating the technology industry in the UK.234

141.A number of different models for encouraging better co-operation between regulators have been suggested. Many of these arrangements focus on the existence or creation of formal structures through which regulators meet. The House of Lords Communications Committee, in its report on regulating in a digital world, suggested the creation of a digital authority to oversee regulators operating in this space.235 This suggestion was made before the Online Harms White Paper was published and before the Government indicated that it was minded to appoint Ofcom as its regulator. The report suggested that this digital authority should have a pool of staff resources that could support the activity of any of the relevant regulators. The Information Commissioner told us that her office had proposed a joint board of regulators to co-ordinate regulatory activity and to ensure that the relevant regulators were not all pursuing the same technology company at the same time from slightly different angles.236 She suggested that this should be done in a way that did not create a large and cumbersome layer of bureaucracy but which allowed for greater co-ordination and the sharing of expert resources. The ICO’s proposal is modelled on the Regulatory Reform (Collaboration etc. between Ombudsmen) Order 2007, which allows the Local Government and Social Care Ombudsman and the Parliamentary and Health Service Ombudsman to work together on a joint investigation if a complaint covers both jurisdictions.237 In addition to these formal structures, there are also examples of regulators developing more informal networks and means of collaboration. The Artificial Intelligence (AI) Regulator Group, for example, was created by the ICO in early 2019 and brings together over 20 regulators, including the ASA, the National Institute for Health and Care Excellence, the Civil Aviation Authority, the Bank of England, the Environment Agency, Ofsted and others, to discuss best practice in the use of AI by regulators and the possible focus of future AI regulation. This network adopts an agile approach to creating and dissolving working groups on specific issues of common interest, such as data sharing, and facilitates collaboration between regulators. Ofcom and the ICO are also both members of the UK Regulators Network, a partnership between regulators in sectors including finance and transport to share knowledge and explore cross-cutting issues.238

142.Kevin Bakhurst, Group Director of Content and Media Policy at Ofcom, stated that there is an important role in looking for areas of overlap between regulators, to ensure that people do not feel as if they are being regulated twice. His then colleague Tony Close, Director of Content Standards, Licensing and Enforcement, told us that whilst Ofcom already had a very structured relationship with the ASA, a more formal relationship with all regulators in this area was likely to be beneficial.239 Louise Edwards of the Electoral Commission told us that she could see some advantage in more cross-cutting investigations, but stressed the need to share sufficient information with other regulators for this to be effective.240

143.The ICO’s proposal is founded on the idea of regulators in this space still maintaining clear independence from each other. Their respective remits would define the limits of the proposed co-operation. The Commissioner told us that, as creatures of statute, regulators need to remain independent and to carry out their own intended statutory roles.241 However, in light of the economic damage caused by the COVID-19 pandemic, there will be increased pressure on public sector spending, including the funding of regulators. The newly created digital levy could provide some of the funding to support online regulation, along with additional funding from the fines proposed in this Chapter and in Chapter 6 on reforming electoral law, but there will be an overwhelming need for resources to be used efficiently.

144.Ofcom has relied on a model of being funded by the industries it regulates. This may struggle to translate to the digital world. The ASA, for example, has seen its funding fall as advertising moves online, because Google and Facebook are unwilling to levy advertisers in the same way traditional media owners do. Traditional media owners levy a fee on advertising to fund the ASA, allowing advertisers to opt out where they wish, whilst Google allows advertisers to opt in. In practice, few chose to opt out from traditional media and few now choose to opt in online.242 Online platforms have preferred to fund and participate in industry bodies such as the Internet Advertising Bureau UK, which sets voluntary minimum standards for its members. A lack of funding is a particular concern because the ASA’s workload has increased as its remit has expanded to cover online content. Since 2011 the ASA has regulated claims made on brand owners’ websites and social media, and as of 2019 this made up just over a third of the ASA’s workload.243

145.One particular issue we heard about was the need to attract people with digital expertise to work for regulators. Rachel Coldicutt, then CEO of Doteveryone, told us that many of the people with this expertise were currently working in the industry and that more needed to be done to make it attractive for them to move into regulatory roles.244 The Institute for Government has previously found similar issues across Government in trying to attract people with the requisite digital skills.245 Kevin Bakhurst of Ofcom told us that previous experience suggested that when Ofcom’s work had expanded to cover new areas it was able to attract new talent, as individuals were interested in adding a regulatory role to their CVs. Individuals came to work for Ofcom for a couple of years and Ofcom benefited as a result.246 Tony Close told us that there was merit in people from the private sector going into the regulator and people from the regulator moving into the private sector. Since we heard from Mr Close, he has moved to work for Facebook.247 There may be a case for creating a central pool of digital expertise from which this talent can be drawn, and which could support the work of the different digital regulators: for example, supporting the Electoral Commission to develop tools to be used in a General Election.

146.The CDEI acts as an advisory body to Government, situated within DCMS, although it also has its own independent board.248 Its role in bringing together policymakers, industry and civil society to help develop the right governance regime for data-driven technologies makes it well placed to look into the question of how regulators should oversee this area.

147.The Government should introduce legislation to enact the ICO’s proposal for a committee of regulators that would allow for joint investigations between regulators in the model of the Regulatory Reform (Collaboration etc between Ombudsmen) Order 2007. This committee should also act as a forum to encourage the sharing of best practice between regulators and support horizon scanning activity.

148.The CDEI should conduct a review of regulatory digital capacity across the CMA, ICO, Electoral Commission, ASA and Ofcom to determine their levels of digital expertise. This review should be completed with urgency, to inform the Online Harms Bill before it becomes law. The CDEI should work with Ofcom to help determine its role in online regulation. The review should consider:

(a)What relative levels of digital expertise exist within regulators, and where skills gaps are becoming evident;

(b)How these regulators currently draw on external expertise, and what shared system might be devised for seeking advice and support;

(c)What changes in legislation governing regulators would be needed to allow for a shared pool of digital expertise and staffing resource that could work between and across regulators;

(d)How this joint pool of staffing resource could be prioritised and funded between regulators.


132 Overseas Development Institute, ‘Accountability: the core concept and its subtypes’ (April 2009) p 1: https://assets.publishing.service.gov.uk/media/57a08b4740f0b652dd000bd6/APPP-WP1.pdf [accessed 13 May 2020]

133 Committee on Standards in Public Life, ‘The 7 principles of public life’, (May 1995) https://www.gov.uk/government/publications/the-7-principles-of-public-life [accessed 13 May 2020]

134 Written Evidence from Dr Ana Langer and Dr Luke Temple (DAD0048)

135 eMarketer, ‘Facebook and Google Maintain Grip in UK Digital Ad Market’ (October 2019) https://www.emarketer.com/content/facebook-and-google-maintain-grip-in-uk-digital-ad-market

136 Q 108 (James Mitchinson)

137 Dr Martin Moore, ‘Tech Giants and Civic Power’ (April 2016) p 3: https://www.kcl.ac.uk/policy-institute/assets/cmcp/tech-giants-and-civic-power.pdf#page=7 [accessed 13 May 2020]

138 Written Evidence from Microsoft (DAD0050)

139 Q 252 (Vint Cerf)

140 YouTube, ‘YouTube for Press’, https://www.youtube.com/intl/en-GB/about/press/ [accessed 13 May 2020]

141 Ofcom, ‘Online Nation’ (29 May 2019) https://www.ofcom.org.uk/research-and-data/internet-and-on-demand-research/online-nation [accessed 13 May 2020]

142 Centre for Data Ethics and Innovation, ‘Public Attitudes Towards Online Targeting’ – A report by Ipsos MORI (February 2020) p 38: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/863025/1901705901_Attitudes_to_Online_Targeting_Report_FINAL_PUBLIC_030220.pdf#page=39 [accessed 13 May 2020]

143 Doteveryone, People, Power and Technology: The 2020 Digital Attitudes Report (May 2020) https://www.doteveryone.org.uk/wp-content/uploads/2020/05/PPT-2020_Soft-Copy.pdf [accessed 13 May 2020]

144 Competition & Markets Authority, Online platforms and digital advertising (December 2019) https://assets.publishing.service.gov.uk/media/5dfa0580ed915d0933009761/Interim_report.pdf [accessed 13 May 2020]

145 Centre for Data Ethics and Innovation, ‘Review of online targeting: Final report and recommendations’ (February 2020) p 56: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/864167/CDEJ7836-Review-of-Online-Targeting-05022020.pdf#page=56 [accessed 13 May 2020]

146 Q 163 (Christoph Schott)

147 Written Evidence from Doteveryone (DAD0037)

148 Doteveryone, People, Power and Technology: The 2020 Digital Attitudes Report (May 2020): https://www.doteveryone.org.uk/wp-content/uploads/2020/05/PPT-2020_Soft-Copy.pdf [accessed 13 May 2020]

149 Ofcom, ‘Parents more concerned about their children online’ (4 February 2020): https://www.ofcom.org.uk/about-ofcom/latest/features-and-news/parents-more-concerned-about-their-children-online [accessed 4 June 2020]

150 DCMS and Home Office, Online Harms White Paper, CP 57, (April 2019): https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/793360/Online_Harms_White_Paper.pdf [accessed 13 May 2020]

151 DCMS and Home Office, ‘Online Harms White Paper – Initial consultation response’, (February 2020): https://www.gov.uk/government/consultations/online-harms-white-paper/public-feedback/online-harms-white-paper-initial-consultation-response [accessed 13 May 2020]

152 Ibid.

153 Written evidence from MySociety (DAD0024)

154 HM Government, Internet Safety Strategy – Green paper (October 2017): https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/650949/Internet_Safety_Strategy_green_paper.pdf [accessed 27 May 2020]

155 ‘Stakes are high on Facebook’s first day of trading’ The Los Angeles Times (17 May 2012): https://web.archive.org/web/20120518023059/http://www.latimes.com/business/la-fi-facebook-pricing-20120518%2C0%2C3426310.story [accessed 28 May 2020]

156 Dr Tarleton Gillespie, The politics of ‘platforms’, New Media & Society, vol. 12 (2010), pp 347–364: https://doi.org/10.1177/1461444809342738

157 Q 337 (Caroline Dinenage MP)

158 Q 15 (Sarah Connolly)

159 Q 17 (Sarah Connolly)

160 Q 14 (Sarah Connolly)

161 Q 285 (Tony Close)

163 Q 338 (Caroline Dinenage MP)

164 Index on Censorship, Index on Censorship submission to Online Harms White Paper consultation, (June 2019): https://www.indexoncensorship.org/wp-content/uploads/2019/07/Online-Harms-Consultation-Response-Index-on-Censorship.pdf [accessed 13 May 2020]

166 Q 3 (Baroness O’Neill of Bengarve)

167 Q 2 (Baroness O’Neill of Bengarve)

168 JS Mill, On Liberty (London: Penguin, 1974), p 119

169 Q 52 (Professor Cristian Vaccari)

170 Written evidence from the University of Liverpool (DAD0022)

171 Joint Committee on Human Rights, Democracy, freedom of expression and freedom of association: Threats to MPs (First Report of Session 2019, HC 37, HL Paper 5)

172 Written evidence from Professor Peter Hopkins (DAD0053) and (DAD0056)

174 Q 163 (Dr Jennifer Cobbe)

175 Q 164 (Dr Jennifer Cobbe)

176 Q 164 (Alaphia Zoyab)

177 Facebook, ‘Charting a Way Forward – Online Content Regulation’ (February 2020): https://about.fb.com/news/2020/02/online-content-regulation/ [accessed 13 May 2020]

178 Q 164 (Alaphia Zoyab)

179 Written evidence from Dr Jennifer Cobbe (DAD0074)

180 YouTube, PewDiePie, https://www.youtube.com/channel/UC-lHJZR3Gqxm24_Vd_AJ5Yw [accessed 22 June 2020]

181 Vox, ‘YouTube’s most popular user amplified anti-Semitic rhetoric. Again.’ (13 December 2018): https://www.vox.com/2018/12/13/18136253/pewdiepie-vs-tseries-links-to-white-supremacist-alt-right-redpill [accessed 13 May 2020]

182 BBC, ‘Logan Paul ‘unwise’ to do Alex Jones YouTube interview’ (11 April 2019): https://www.bbc.co.uk/news/technology-47852628 [accessed 13 May 2020] and ‘Facebook, Apple, YouTube and Spotify ban Infowars’ Alex Jones’, The Guardian (5 August 2018): https://www.theguardian.com/technology/2018/aug/06/apple-removes-podcasts-infowars-alex-jones [accessed 13 May 2020]

183 New Statesman, ‘How celebrities became the biggest peddlers of 5G coronavirus conspiracy theories’ (6 April 2020): https://www.newstatesman.com/science-tech/social-media/2020/04/how-celebrities-became-biggest-peddlers-5g-conspiracy-theory-coronavirus-covid-19 [accessed 13 May 2020]

184 ‘How the ‘Plandemic’ Movie and Its Falsehoods Spread Widely Online’, New York Times (20 May 2020): https://www.nytimes.com/2020/05/20/technology/plandemic-movie-youtube-facebook-coronavirus.html?referringSource=articleShare [accessed 27 May 2020]

185 Mark Zuckerberg, ‘A Blueprint for Content Governance and Enforcement’, (November 2018): https://www.facebook.com/notes/mark-zuckerberg/a-blueprint-for-content-governance-and-enforcement/10156443129621634/ [accessed 13 May 2020]

186 Supplementary written evidence from Google (DAD0101)

187 ‘Facebook Executives Shut Down Efforts to Make the Site Less Divisive’, Wall Street Journal (May 2020): https://www.wsj.com/articles/facebook-knows-it-encourages-division-top-executives-nixed-solutions-11590507499?mod=searchresults&page=1&pos=1 [accessed 3 June 2020]

188 Q 243 (Vint Cerf)

189 Q 300 (Karim Palant)

190 Socialblade, PewDiePie: https://socialblade.com/youtube/user/pewdiepie [accessed 13 May 2020] and Business Insider, ‘Ninja reportedly got paid between $20 million and $30 million by Microsoft to leave Amazon’s Twitch streaming service’ (27 January 2020): https://www.businessinsider.com/how-much-did-ninja-make-for-leaving-twitch-2020-1?r=US&IR=T [accessed 13 May 2020]

191 Tarleton Gillespie, Custodians of the Internet: platforms, content moderation, and the hidden decisions that shape social media (New Haven: Yale University Press, 2018)

192 Q 253 (Katie O’Donovan)

193 Supplementary written evidence from Google (DAD0101)

194 Q 170 (Professor Sarah Roberts)

195 United Parcel Service, an international logistics company.

196 Dr Tarleton Gillespie, Custodians of the Internet: platforms, content moderation, and the hidden decisions that shape social media (New Haven: Yale University Press, 2018) p 111

197 Q 172 (Professor Sarah Roberts)

198 Q 174 (Professor Sarah Roberts)

199 Electronic Frontier Foundation, ‘Content Moderation is Broken. Let us Count the Ways’ (April 2019): https://www.eff.org/deeplinks/2019/04/content-moderation-broken-let-us-count-ways [accessed 13 May 2020]

200 Q 16 (Sarah Connolly)

201 The Verge, ‘The coronavirus is forcing tech giants to make a risky bet on AI’ (18 March 2020): https://www.theverge.com/interface/2020/3/18/21183549/coronavirus-content-moderators-facebook-google-twitter [accessed 13 May 2020]

202 Reuters Institute, ‘GLASNOST! Nine ways Facebook can make itself a better forum for free speech and democracy’ (December 2018): https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2019-01/Garton_Ash_et_al_Facebook_report_FINAL_0.pdf [accessed 13 May 2020]

203 Facebook Data Transparency Advisory Group, Report of the Facebook Data Transparency Advisory Group, (April 2019) p 13: https://law.yale.edu/sites/default/files/area/center/justice/document/dtag_report_5.22.2019.pdf#page=13 [accessed 13 May 2020]

204 DCMS and Home Office, Online Harms White Paper, CP 57, (April 2019): https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/793360/Online_Harms_White_Paper.pdf [accessed 13 May 2020]

205 Doteveryone, People, Power and Technology: The 2020 Digital Attitudes Report, (May 2020): https://www.doteveryone.org.uk/wp-content/uploads/2020/05/PPT-2020_Soft-Copy.pdf [accessed 13 May 2020]

206 Oversight Board, ‘Meet the Board’: https://www.oversightboard.com/meet-the-board/ [accessed 13 May 2020]

207 Q 172 (Professor Safiya Noble)

208 Written evidence from Dr Ysabel Gerrard (DAD0093)

209 Q 93 (Will Moy)

210 Written evidence from Doteveryone (DAD0037)

211 Information Commissioner’s Office, ‘The Information Commissioner’s response to the Department for Digital, Culture, Media & Sport consultation on the Online Harms White Paper’ (July 2019) https://ico.org.uk/media/about-the-ico/consultation-responses/2019/2615232/ico-response-online-harms-20190701.pdf [accessed 13 May 2020]

212 Q 16 (Sarah Connolly)

213 Q 307 (Karim Palant)

214 Q 340 (Caroline Dinenage MP)

215 Mackenzie Common, ‘Fear the Reaper: How Content Moderation Rules are Enforced on Social Media’ (January 2019): https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3405337 [accessed 13 May 2020]

216 Dr Tarleton Gillespie, Custodians of the Internet: platforms, content moderation, and the hidden decisions that shape social media, (New Haven: Yale University Press 2018) p 77

217 Q 340 (Caroline Dinenage MP)

218 ‘In Lieu of Fun, Episode 43: Nicole Wong and Alex MacGillivray’ (6 May 2020) YouTube video, added by In Lieu of Fun: https://youtu.be/oYRMd-X77w0?t=2135 [accessed 13 May 2020]

219 Q 93 (Will Moy)

220 DCMS and Home Office, Online Harms White Paper, CP 57, April 2019: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/793360/Online_Harms_White_Paper.pdf [accessed 13 May 2020]

222 UK Parliament, ‘Speaker’s Committee on the Electoral Commission’: https://committees.parliament.uk/committee/144/speakers-committee-on-the-electoral-commission [accessed 13 May 2020]

223 Parliament of Singapore, Protection from Online Falsehoods and Manipulation Bill (April 2019): https://www.parliament.gov.sg/docs/default-source/default-document-library/protection-from-online-falsehoods-and-manipulation-bill10-2019.pdf [accessed 13 May 2020]

224 The Verge, ‘Singapore’s fake news law should be a warning to American lawmakers’ (3 December 2019): https://www.theverge.com/interface/2019/12/3/20991422/singapore-fake-news-law-censorship-politics-usa?mc_cid=e1af9c8950&mc_eid=81d7bde7ce [accessed 13 May 2020]

225 Q 337 (Caroline Dinenage MP)

226 Written evidence from 5Rights Foundation (DAD0082)

227 DCMS Committee, The Online Harms White Paper (Twelfth Report, Session 2017–19, HC 2431) p 7

228 For more information on the regulatory innovation workshop, please see Appendix 5.

229 Written evidence from Democracy Club CIC (DAD0045)

230 Written evidence from Full Fact (DAD0042)

231 Q 212 (Louise Edwards)

232 Q 290 (Elizabeth Denham)

233 Q 6 (Baroness O’Neill)

234 Q 34 (Caroline Elsom)

235 Communications Committee, Regulating in a digital world (2nd Report, Session 2017–19, HL Paper 299)

236 Q 292 (Elizabeth Denham)

237 Information Commissioner’s Office, ‘The Information Commissioner’s response to the Department for Digital, Culture, Media & Sport consultation on the Online Harms White Paper’ (July 2019) p 11: https://ico.org.uk/media/about-the-ico/consultation-responses/2019/2615232/ico-response-online-harms-20190701.pdf [accessed 13 May 2020]

238 The UK Regulators Network, ‘About’: https://www.ukrn.org.uk/about/ [accessed 13 May 2020]

239 Q 283 (Tony Close)

240 Q 211 (Louise Edwards)

241 Q 292 (Elizabeth Denham)

242 Mediatel, ‘Apple TV’s luvvie hell; and the ASA funding crisis’ (1 April 2019): https://mediatel.co.uk/news/2019/04/01/apple-tvs-luvvie-hell-and-the-asa-funding-crisis/ [accessed 26 May 2020]

243 Advertising Standards Authority, Annual report 2019 (3 June 2020): https://www.asa.org.uk/resource/asa-and-cap-2019-annual-report.html [accessed 15 June 2020]

244 Q 35 (Rachel Coldicutt)

245 Institute for Government, ‘Making a success of digital government’ (October 2016) p 23: https://www.instituteforgovernment.org.uk/sites/default/files/publications/IFGJ4942_Digital_Government_Report_10_16%20WEB%20%28a%29.pdf [accessed 13 May 2020]

246 Q 282 (Kevin Bakhurst)

247 ‘Facebook poaches social media regulator Tony Close from Ofcom’, The Times (29 April 2020): https://www.thetimes.co.uk/article/facebook-poaches-tony-close-from-ofcom-mdrkv7t2w [accessed 13 May 2020]

248 Department for Digital, Culture, Media & Sport and Department for Business, Energy & Industrial Strategy, ‘Stellar new board appointed to lead world-first Centre for Data Ethics and Innovation’ (November 2018): https://www.gov.uk/government/news/stellar-new-board-appointed-to-lead-world-first-centre-for-data-ethics-and-innovation [accessed 2 June 2020]
