6.Social media companies, including Twitter, Facebook and YouTube, have created platforms used by billions of people to come together, communicate and collaborate. They are used often by campaigns and individuals for positive messages and movements challenging hatred, racism or misogyny—for example @everydaysexism or #aintnomuslimbruv. However, there is a great deal of evidence that these platforms are being used to spread hate, abuse and extremism. That trend continues to grow at an alarming rate but it remains unchecked and, even where it is illegal, largely unpoliced.
7.We took evidence from Google (the parent company of YouTube), Twitter and Facebook on hate speech and extremism published on their platforms. We chose those companies because of their market size and because each operates a regional headquarters in the UK. We are grateful for their co-operation and willingness to be held accountable to Parliament. The recommendations in this report are largely based on the evidence that we heard from those companies, but we recognise that both our concerns and our recommendations will apply to social media generally and are not limited to those companies, and that some smaller companies and platforms have far lower standards and also less scrutiny.
8.Significant strides have been made in recent years to develop a better understanding of hate and extremism on the internet, but the evidence suggests that the problem is getting worse. Carl Miller, a Research Director at the think tank Demos, said that hate and extremism are growing in parallel with the exponential growth of all social media.3 The rising number of convictions for online communications offences supports that assertion: 1,209 people were convicted under the Communications Act in 2014, compared with 143 in 2004.4 Google told us that YouTube, one of its subsidiaries, had experienced a 25% increase in ‘flagged’ content year-on-year.5 Assistant Chief Constable Mark Hamilton, the National Police Chiefs’ Council’s hate crime lead, said that there had been a significant increase in online hate crime over the last 24 to 36 months.6
9.We took evidence on different types of abusive and illegal online content, including content designed to stir up hatred against minorities, content designed to abuse or harass individuals, and content designed to promote or glorify terrorism or extremism and to recruit to extremist organisations. We also heard many examples of the failure by social media companies to act against other forms of abusive or illegal content, such as sexual images of children and online child abuse, which was further evidence that their efforts to remove illegal content were frequently ineffective.7
10.It was shockingly easy to find examples of material that was intended to stir up hatred against ethnic minorities on all three of the social media platforms that we examined—YouTube, Twitter and Facebook.
11.YouTube was awash with videos that promoted far-right racist tropes, such as antisemitic conspiracy theories. We found titles that included “White Genocide Europe—Britain is waking up”, “Diversity is a code word for white genocide” and “Jews admit organizing White Genocide”. Antisemitic holocaust denial videos included “The Greatest Lie Ever Told”, “The Great Jewish Lie” and “The Sick Lies of a Holocaust ‘Survivor’”. We brought a number of examples to YouTube’s attention, some of which, on the basis of their shocking content, were subsequently blocked for users residing in the UK. However, YouTube refused to block the video entitled ‘Jews admit organizing White Genocide’ by David Duke, the American far-right activist and former ‘Imperial Wizard of the Ku Klux Klan’. That was despite Mr Duke espousing virulently antisemitic tropes in the video. For example, he said:
You might learn that Jews are indoctrinated from birth to act instinctively as a team to take over any company, any organisation, any political party, in pursuit of Jewish interests. Every Jew has seen the evils of the holocaust a thousand times in the media. Every one learns how they must stick together against the Gentiles who are portrayed as evil, who might fire up the ovens at any time.
[ … ] That’s why the Jews run the western world, through their media domination and their teamwork they have taken over the football field of politics and finance and culture.
Mr Duke went on to describe the concept of diversity as a “Jewish supremacist weapon to divide and conquer other nations”.8 YouTube refused to remove the video on the grounds that “the video did not cross the line into hate speech.”9
12.On Twitter, there were numerous examples of incendiary content found using Twitter hashtags that are used by the far-right, as identified in research by Dr Imran Awan and Dr Irene Zempi.10 A search for those hashtags identified significant numbers of racist and dehumanising tweets that were plainly intended to stir up hatred, including a cartoon of a white woman being gang raped by Muslims over the ‘altar of multiculturalism’; a cartoon stating that ‘Muslims rape’; and an example of the racist ‘Pakemon’ campaign that featured numerous smears against Muslims. In response to our complaints, Twitter removed some of the tweets and suspended most of the accounts we identified; however, many of the same vile and provocative images could still be found on the platform six weeks later, when this report was drafted. Twitter refused to remove a cartoon that we reported depicting a group of male, ethnic minority migrants tying up and abusing a semi-naked white woman, while stabbing her baby to death. It refused to take action on the grounds “that it was not in breach of [Twitter’s] hateful conduct policy.”11
13.On Facebook we found community pages devoted to stirring up hatred, particularly against Jews and Muslims, although much of the content posted on Facebook sits within ‘closed groups’ and is not as openly available as similar content on Twitter. Nevertheless we found openly antisemitic and islamophobic community pages such as ‘The truth about the Talmud’ and ‘Ban Islam’. After we reported the pages to Facebook, it removed some specific posts but said that those community pages “do not violate, because we make it clear that you can criticise religions, but you cannot express hate against people because of their religion.”12
Tweets intended to stir up hatred against ethnic minorities which we reported to Twitter
14.Targeted online hate, abuse and harassment have a pernicious impact on victims, many of whom feel acute distress at being targeted in their own homes. Dr Imran Awan and Dr Irene Zempi found that victims of anti-Muslim hate described themselves as living in fear because of the possibility that online threats might materialise in the ‘real world’.13 Shane Gorman, an advisor on hate crime to Leonard Cheshire Disability, said:
It is bad enough for someone to become socially isolated, to be cut off in their own community [ … ] but when they are at home and people can target them through their computer, it can have a great effect.14
15.Women have become particular targets for abuse and misogynistic harassment on social media, especially on Twitter. In one study, Demos found that 10,000 tweets aggressively attacking individuals as a “slut” or a “whore” were sent from UK accounts in a three-week period.15 The Fawcett Society conducted an informal survey to examine the type and prevalence of abuse that women receive. Sexist messages were the most common form of harassment: 70% of respondents who had received abuse on Twitter said that they had experienced them. Around a third of women reported experiencing “politically extremist hate messages, unwanted sexual messages or images, stalking, and threats of violence.” Similar proportions reported other users organising abuse against them.16
16.Melanie Jeffs, a manager at Nottingham Women’s Centre, was the victim of a wave of vicious, targeted abuse on Twitter. She received the abuse in response to publicity following her work to have misogyny recognised as a hate crime by Nottinghamshire Police. She was subjected to misogynistic taunts regarding her appearance and also received death threats. She said:
It reached a crescendo when someone tweeted out a comment about wanting to find me and tie me up and then a gif image of a woman having a dagger plunge through the back of her head until it came out of her mouth.17
17.Members of Parliament have also experienced high levels of racism, misogynistic abuse and other forms of harassment on Twitter. Rt Hon Lindsay Hoyle MP, the principal Deputy Speaker, told us that all MPs were vulnerable to abuse, but that it particularly affected women MPs, and that it was possible to “break that down even further to ethnic minority MPs and, in particular, ethnic minority women MPs”.18 Diane Abbott MP has spoken out about her experiences of receiving racist and sexist abuse online on a daily basis. She said:
I have had rape threats, death threats, and am referred to routinely as a bitch and/or nigger, and am sent horrible images on Twitter. The death threats included an EDL-affiliated account with the tag “burn Diane Abbott”.19
Our October 2016 report on Antisemitism in the UK included a number of examples of deeply antisemitic tweets that were directed at Luciana Berger MP.20 Other women MPs have also spoken out bravely about the abuse they have received just for being women in the public eye, including Caroline Ansell MP and Anna Soubry MP; and Tulip Siddiq MP and Jess Phillips MP, who have had huge numbers of death and rape threats targeted at them in recent months.
18.In addition to their legal obligations, social media companies also publish community guidelines that prohibit hate speech, harassment, physical threats and other types of abuse. They have policies for removing content which violates those standards and in some circumstances users can be banned for contravening community guidelines. For example, Twitter prohibits posts that promote “violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or disease.” Facebook prohibits material that “directly attacks” people on the basis of the same characteristics. YouTube also prohibits material that “promotes violence or hatred against individuals or groups” based on most of those characteristics (its list does not explicitly include national origin), and it additionally prohibits hate speech based on ‘veteran status’. All three companies prohibit violent threats and material that promotes violence, including material that promotes or threatens terrorism. They have also developed procedures for users to report, flag or avoid seeing content they are concerned about.
19.Further work is also under way. For example, Nick Pickles from Twitter told us that Twitter was taking action and developing new tools to mitigate the impact of ‘dogpiling’ on individuals—where large numbers of users abuse or harass an individual simultaneously. He said:
[ … ] we can make sure that the victim doesn’t feel like they are exposed on their own. We can also make it quicker for our teams to review the reports and review the accounts that are involved in that dogpile. A good example of how our terms of service and our rules have evolved is that we have actually added cover to our Twitter rules to prohibit people from encouraging that kind of behaviour.21
Peter Barron, Vice President of Communications and Public Affairs at Google UK (the parent company of YouTube), said that YouTube’s community standards had been adapted to tackle videos featuring UK gangs brandishing weapons.22 Facebook is reviewing its processes on how it handles violent videos and other objectionable material, saying it needed to improve after a video of a murder in the United States remained on its service for more than two hours.23
20.In our report on radicalisation published last August we said that YouTube was a ‘vehicle of choice’ for spreading terrorist propaganda and for attracting new recruits.24 Little has changed since that report was published.
21.YouTube hosts propaganda videos that celebrate proscribed jihadist groups such as ISIS, Jabhat al-Nusra and Jund al-Aqsa. Shortly after the 22 March terrorist attack on Westminster, YouTube was reportedly inundated with violent ISIS recruitment videos which the platform failed to block, despite them being posted under usernames such as “Islamic Caliphate” or “IS Agent”. According to The Times, many of the videos were produced by the ‘media wing’ of ISIS and showed “beheadings and other extreme violence, including by children.”25 The propaganda surge was foreseeable; there is well-established evidence that ‘trigger events’ such as terrorist incidents lead to spikes in such posts.26
22.YouTube also hosts videos promoting violent far-right and neo-Nazi groups such as Combat 18, the North West Infidels and National Action (which is a proscribed organisation). We reported to Google a number of videos that promoted extreme far-right groups, including one featuring a speech by a member of National Action, which it agreed to block from viewers residing in the UK. Peter Barron from Google, the parent company of YouTube, told us that the video “was removed because it was representing a proscribed organisation and was therefore illegal content.”27 However, despite our making Google aware that videos promoting National Action were available on its platform, it failed to remove numerous other videos that celebrated the far-right group. One such video featured masked men who shouted “they fear us because they think we will gas them, and we will.” Following the oral evidence session, we wrote to Google again to highlight the continuing presence of extremist, illegal content. It responded by blocking a number of the videos featuring National Action that we brought to its attention. However, identical videos under a different name remained on YouTube, and videos that invited support for National Action were still on YouTube six weeks later when this report was drafted.28
23.On 9 February, it was reported that advertisements for hundreds of large companies, universities and charities were appearing on YouTube videos created by supporters of terrorist groups such as ISIS. An advert appearing alongside a YouTube video typically earns whoever posts the video $7.60 for every 1,000 views, which means that mainstream reputable companies, charity donors and taxpayers were inadvertently funding terrorists and their sympathisers.29 Google was also earning money from those videos. We challenged Peter Barron from Google on the morality of generating revenue from extremist content; he replied that the company had “no interest” in making money in that way.30 Nevertheless, in the days following our evidence session, hundreds of organisations, including the UK Government, removed their advertising from YouTube due to concerns that their brands and messages were being associated with extremist videos.31 Government Ministers told us that advertising on YouTube “will not be reactivated until such time as Google can give definitive assurance that government messages will be delivered in a safe and appropriate way.” In 2016, the Government spent £3,878,600 advertising on YouTube.32 Ministers were unable to tell us whether the Government had requested any refund from YouTube for placing its advertisements alongside extremist content.33
24.It is shocking that Google failed to perform basic due diligence regarding advertising on YouTube paid for by reputable companies and organisations which appeared alongside videos containing inappropriate and unacceptable content, some of which were created by terrorist organisations. We believe it to be a reflection of the laissez-faire approach that many social media companies have taken to moderating extremist content on their platforms. We note that Google can act quickly to remove videos from YouTube when they are found to infringe copyright rules, but that the same prompt action is not taken when the material involves hateful or illegal content. There may be some lasting financial implications for Google’s advertising division from this episode; however the most salient fact is that one of the world’s largest companies has profited from hatred and has allowed itself to be a platform from which extremists have generated revenue.
25.We recognise that many social media and technology companies—including Google, Facebook and Twitter, which gave evidence to our inquiry—have considered the impact that online hate, abuse and extremism can have on individuals. We welcome the efforts that have been made to reduce such behaviour on social media, such as publishing clear community guidelines, building new technologies and promoting online safety, for example for schools and young people. However, it is very clear to us from the evidence we have received that nowhere near enough is being done. The biggest and richest social media companies are shamefully far from taking sufficient action to tackle illegal and dangerous content, to implement proper community standards or to keep their users safe. Given their immense size, resources and global reach, it is completely irresponsible of them to fail to abide by the law and to fail to keep their users and others safe.
26.The Government has been clear that what is illegal offline is also illegal online in relation to hate speech and abuse, and we believe that there should be no ambiguity about this. It has also said that in the vast majority of cases, communications sent via social media will not cross the threshold into criminal behaviour. The Crown Prosecution Service has published guidelines to clarify the circumstances in which a prosecution should be brought.34
27.It is illegal to invite support for proscribed groups such as National Action. It is also illegal to disseminate terrorist material. In oral evidence, Robert Buckland MP, the Solicitor-General, outlined the provisions of the relevant legislation:
In particular, section 2 of the Terrorism Act 2006 created an offence of the dissemination of terrorist material, either intentionally—we would not say that the social media platforms are doing it intentionally—or recklessly.35
He indicated that social media companies had not been prosecuted for such offences so far because “there has been a perception that, somehow, these things are too difficult to deal with.”36 Sarah Newton MP, the Home Office Minister responsible for countering extremism, stressed that the police were operationally independent and it would not be appropriate for the Government to seek a police investigation into the actions of social media companies.37 We consider the legal framework in more detail later in this report.
28.The Metropolitan Police’s Counter Terrorism Internet Referral Unit (CTIRU) was set up in 2010 to remove unlawful terrorist material from the internet, with a focus on UK-based material. Twitter, which has been a magnet for jihadist propaganda, said that it had a close relationship with the CTIRU.38 Google described the CTIRU as a ‘trusted flagger’ with “an accuracy rate of around 80%” which proactively monitors its platforms, and said that such units had become “valuable enforcement partners.”39 Google also said that it has plans “to significantly extend our Trusted Flagger programme” and “invest” in improving its processes.40
29.Facebook, Google and Twitter said that their content moderation strategy is based on a ‘report and take down’ model under which their staff do not proactively search for illegal content but instead rely on their user base to report offensive material for later review and possible removal.41 However, the companies are working on technological solutions to share information on illegal content which will speed up its identification.42
30.Social media companies must be held accountable for removing extremist and terrorist propaganda hosted on their networks. The weakness and delays in Google’s response to our reports of illegal neo-Nazi propaganda on YouTube were dreadful. Despite our repeated reports of videos promoting National Action, a proscribed far-right group, examples of this material can still be found simply by searching for the name of that organisation, as can similar videos posted under different names. As well as being probably illegal, this is in our view completely irresponsible and indefensible. If social media companies are capable of using technology immediately to remove material that breaches copyright, they should be capable of using similar technology to stop extremists re-posting or sharing illegal material under a different name. We believe that the Government should now assess whether the continued publication of illegal material and the failure to take reasonable steps to identify or remove it is in breach of the law, and how the law and enforcement mechanisms should be strengthened in this area.
31.Social media companies rely on their users to report extremist and hateful content for review by moderators. They are, in effect, outsourcing the vast bulk of their safeguarding responsibilities at zero expense. We believe that it is unacceptable that social media companies are not taking greater responsibility for identifying illegal content themselves. In the UK, the Metropolitan Police’s Counter Terrorism Internet Referral Unit (CTIRU) monitors social media companies for terrorist material. That means that multi-billion pound companies like Google, Facebook and Twitter are expecting the taxpayer to bear the costs of keeping their platforms and brand reputations clean of extremism.
32.We recommend that all social media companies introduce clear and well-funded arrangements for proactively identifying and removing illegal content—particularly dangerous terrorist content or material related to online child abuse. We note the significant work that has been done on online child abuse and we welcome that, but we believe similar cooperation and investment is needed for other kinds of illegal and dangerous content.
33.We note that football clubs are obliged to pay for policing in their stadiums and the immediate surrounding areas under Section 25 of the Police Act 1996. We believe that the Government should now consult on adopting similar principles online—for example, requiring social media companies to contribute to the costs of the Metropolitan Police’s CTIRU, which carries out enforcement activities that should rightfully be carried out by the companies themselves.
34.Facebook, Microsoft, Twitter and YouTube have signed up to an EU ‘Code of Conduct’. This commits those technology companies to take the lead in countering the spread of illegal hate speech online, to let the Code guide their own activities, and to share best practice with other internet companies, platforms and social media companies.43 While the Code of Conduct stipulates that the companies should review the “majority of valid notifications for removal of illegal hate speech in less than 24 hours and remove or disable access to such content, if necessary”, it makes no mention of penalties for failing to comply.44 A hate speech taskforce including representatives from Google, Facebook and Twitter, set up by the German justice minister Heiko Maas in 2015, pledged to aim to delete illegal posts within 24 hours.45 On 14 March, it was reported that YouTube was deleting 90% of reported illegal content, 82% of it within 24 hours; Facebook was taking down only 39%, and 33% within 24 hours; and Twitter was removing only 1% of reported posts. Those figures compare with figures from last September, when Facebook was found to be deleting 46%, YouTube 10% and Twitter 1% of illegal content flagged by users.46
35.In response to the low rates of removal of illegal content by social media companies, the German Justice Ministry proposed new rules which would require social media platforms to provide a round-the-clock service for users to flag illegal content, which would have to be removed by the site within seven days. All copies of the content would also have to be deleted, and social media companies would need to publish a quarterly report detailing how they had dealt with such material. Platforms could face fines of up to €50 million (£44 million) and would also have to nominate a person responsible for handling complaints, who could personally face fines of up to €5 million if the company failed to abide by the mandatory standards.
36.Here in the UK we have easily found repeated examples of social media companies failing to remove illegal content when asked to do so—including dangerous terrorist recruitment material, promotion of sexual abuse of children and incitement to racial hatred. The biggest companies have been repeatedly urged by Governments, police forces, community leaders and the public, to clean up their act, and to respond quickly and proactively to identify and remove illegal content. They have repeatedly failed to do so. That should not be accepted any longer. Social media is too important to everyone—to communities, individuals, the economy and public life—to continue with such a lax approach to dangerous content that can wreck lives. And the major social media companies are big enough, rich enough and clever enough to sort this problem out—as they have proved they can do in relation to advertising or copyright. It is shameful that they have failed to use the same ingenuity to protect public safety and abide by the law as they have to protect their own income.
37.Social media companies currently face almost no penalties for failing to remove illegal content. There are too many examples of social media companies being made aware of illegal material yet failing to remove it, or to do so in a timely way. We recommend that the Government consult on a system of escalating sanctions to include meaningful fines for social media companies which fail to remove illegal content within a strict timeframe.
38.Social media companies explained that the law is their primary source for identifying what material should be prohibited, but that their own community guidelines also set the rules on what is and is not acceptable to post.
39.We welcome the fact that YouTube, Facebook and Twitter all have clear community standards that go beyond the requirements of the law. We strongly welcome the commitment that all three social media companies have to removing hate speech or graphically violent content, and their acceptance of their social responsibility towards their users and towards wider communities. We recognise that each of the companies has done some valuable and important work to develop these community standards and to promote public safety and awareness, particularly among children and young people. We welcome too the statements each company has made about wanting to do more. However, we believe that the interpretation and implementation of the community standards in practice is too often slow and haphazard. We have seen examples where moderators have refused to remove material which violates any normal reading of the community standards, or where clearly unacceptable material is only removed once a complaint is escalated to a very senior level.
40.We recommend that social media companies review with the utmost urgency their community standards and the way in which they are being interpreted and implemented, including the training and seniority of those who are making decisions on content moderation, and the way in which the context of the material is examined.
41.Many platforms have been criticised for applying rules on prohibited hate and abuse inconsistently. For example, Twitter has been criticised for providing high-profile people with fast and meaningful responses to abuse reports whereas those not in the public eye have typically received a cursory response or none at all. Tell MAMA, an organisation that monitors Islamophobia, said that it had reported accounts that were obviously racist or islamophobic to Twitter, but often no action was taken, or was taken only after significant delays. For example, Twitter failed immediately to suspend an account called “@gasmuslims” when Tell MAMA first reported it to moderators (the account was suspended later).47 Tell MAMA also told us about an account called “@Fahrenheit211” which it had reported, but to no avail. Tell MAMA said:
It remains our concern that an individual who has no problem using the term ‘Muzzie’, ‘Paedo prophet’, or ‘Muslim scum’ has continued to promote messages on Twitter which are antithetical to our shared values. To date, Twitter has taken no action to remove this account and Twitter corporate responses on this account are slow, when in fact, what is being demonstrated is far right extremism and the development of a far right extremist network.48
We reported a tweet posted from the @Fahrenheit211 account to Twitter shortly before the company appeared before us to give oral evidence; only then was the account suspended.
42.Facebook has been criticised for being considerably more responsive to reports which appear in the media, including those highlighted in newspaper exposés, compared to those received from its own user base. For example, Facebook was accused of failing to remove videos of beheadings carried out by ISIS fighters, a video of a sexual assault on a child, and images of allegedly illegal paedophilic cartoons. The content was reported to the website’s moderators by an apparently ordinary user.49 Facebook reportedly responded with a statement that said the posts did not breach the website’s community standards. However, after The Times raised the images and videos with Facebook, moderators took action and removed the content.50 The Mail on Sunday reported a similar case in which Facebook initially refused to remove a video of a man “viciously stabbed with a 15in knife again and again and left lying in a pool of his blood as he begs for his life” and another video that showed a teenage mother “repeatedly beating her baby over the head with her hand and a pillow.” Comments under the videos reportedly indicated that a number of users had reported the videos to Facebook but no action was taken. Facebook later said in a statement “Following such reports from The Mail on Sunday we have removed the content while we investigate.”51
43.We have heard time and time again that, for people without the platforms available to Members of Parliament or journalists, responses from social media companies to reports of unacceptable content are opaque, inconsistent or ignored altogether. It should not require high-level interventions for social media companies to take action, and there must be no hierarchy of service provision. We call on social media companies urgently to improve the quality and speed of their responses to reports of dangerous and illegal content, wherever those reports come from.
44.Social media companies are highly secretive about the number of staff and the level of resources that they devote to monitoring and removing inappropriate content. Google, Facebook and Twitter all refused to tell us the number of staff that they employed for such purposes.52 Nick Pickles from Twitter said, “We do not give out numbers for the simple reason that someone, somewhere would say that it is not enough.”53 Simon Milner from Facebook told us that the number of people working on such issues numbered in the thousands but refused to be more specific. He said, “I would suggest that there is not necessarily a linear relationship between the number of people you employ and the effectiveness of the work you do.”54 Peter Barron from Google also said it employed “thousands” of staff for such work.55 The companies were reluctant to disclose precisely how much money they spent on related public safety issues; although Google said that it spent “hundreds of millions” on such work, it did not go into detail on how that money was spent.56 Simon Milner said that the answers to such questions were “commercially sensitive”.57
45.It is unacceptable that Twitter, Facebook and YouTube refused to reveal the number of people that they employ to safeguard users or the amount that they spend on public safety initiatives because of “commercial sensitivity”. These companies are making substantial profits at the same time as hosting illegal and often dangerous material; and then relying on taxpayers to pay for the consequences. These companies wield enormous power and influence and that means that such matters are in the public interest.
46.We call on social media companies to publish quarterly reports on their safeguarding efforts, including analysis of the number of reports received on prohibited content, how the companies responded to those reports, and what action is being taken to eliminate such content in the future. It is in everyone’s interest, including that of the social media companies themselves, to find ways to reduce pernicious and illegal material. Transparent performance reports, published regularly, would be an effective way to drive up standards radically, and we hope they would also encourage competition between platforms to find innovative solutions to these persistent problems. If the companies refuse, we recommend that the Government consult on requiring them to publish such reports.
47.The social media companies told us that they were seeking algorithmic solutions to reducing harmful content. Google, for example, said that it was committed to identifying new ways in which technology “and particularly machine learning” could be used to identify extreme content. However, it has been reported that such tools would only be used to identify content that might be inappropriate for advertisers rather than for all content that might contravene the law or Google/YouTube’s own community guidelines. Google said that the technology would not be used routinely to review videos for possible removal and that videos would still be checked by the company’s review team only after they were flagged by other users.58
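The companies did not describe these systems in any technical detail, so the sketch below is purely illustrative of the general approach described in evidence: a machine-learning model scores content and anything above a threshold is queued for human review rather than removed automatically. The library (scikit-learn), the training examples, the function name and the threshold are all assumptions made for the purpose of illustration, not a description of Google's actual system.

```python
# Illustrative sketch only: machine-learning triage that flags content for
# human review. The training data, model choice and threshold are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical labelled examples: 1 = content previously removed by moderators,
# 0 = content previously judged acceptable.
train_texts = [
    "example of a post previously removed for inciting hatred",
    "ordinary post about last night's football results",
]
train_labels = [1, 0]

vectoriser = TfidfVectorizer()
model = LogisticRegression().fit(vectoriser.fit_transform(train_texts), train_labels)

def queue_for_human_review(post: str, threshold: float = 0.8) -> bool:
    """Return True if the post should be sent to a human moderator for a decision."""
    score = model.predict_proba(vectoriser.transform([post]))[0, 1]
    return score >= threshold
```

A system of this kind only prioritises material for review; as the evidence above makes clear, decisions on removal would still rest with the companies' human review teams.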
48.YouTube, Facebook, Microsoft and Twitter have announced a partnership to share ‘hashes’, enabling each company to scan for terrorist content and to terminate associated accounts. Google also said that it had used ‘matching technology’ to help prevent the re-uploading of content that violates its policies. To help address child sexual abuse imagery (CSAI), Google said that it had developed video fingerprinting technology and created “a service to be shared across the industry to combat CSAI.”59
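Hash sharing works by comparing a compact fingerprint of each new upload against a database of fingerprints of material already identified as violating policy. The sketch below illustrates the general principle only; the shared database, the function names and the use of a plain SHA-256 digest (which catches exact copies only, whereas industry systems rely on perceptual fingerprints that survive re-encoding and cropping) are assumptions for illustration, not the companies' actual implementation.

```python
# Illustrative sketch only: blocking re-uploads of already-removed material by
# comparing file hashes against a shared database. A cryptographic digest is
# used here for simplicity; it matches exact copies only.
import hashlib

shared_hash_database = set()  # fingerprints contributed by participating companies

def register_removed_content(data: bytes) -> str:
    """Record the fingerprint of content that has been removed as terrorist material."""
    digest = hashlib.sha256(data).hexdigest()
    shared_hash_database.add(digest)
    return digest

def is_known_removed_content(upload: bytes) -> bool:
    """Check a new upload against the shared database before it is published."""
    return hashlib.sha256(upload).hexdigest() in shared_hash_database
```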
49.We welcome the development of technological solutions to tackle the problem of inappropriate content on social media—including Twitter’s new mechanisms to prevent dogpiling, and new matching technology. We recognise that technology cannot solve all the issues and that human judgement will often continue to be needed in complex cases to decide whether material breaches the law or community standards. But we are disappointed at the pace of development of technological solutions—and in particular that Google is currently using its technology to identify illegal or extreme content only in order to help advertisers, rather than to help it remove illegal content proactively. We recommend that the companies use their existing technology to help them abide by the law and meet their community standards.
50.The Government has said that it would consider the proposals from the German government on tackling illegal online content when they were made available. Sarah Newton MP told us that the Government would “do absolutely everything we can to keep people safe.”60 On 27 February, the Government announced that a Green Paper on online safety would be published this summer, with the work expected to centre on four main priorities.
51.The announcement stated that the new ‘Internet Safety Strategy’ was aimed at making the UK the safest country in the world for children and young people to be online. A report has been commissioned to provide up-to-date evidence of how young people are using the internet, the dangers they face, and the gaps that exist in keeping them safe.61 The Government has indicated that Ministers will also hold a series of round-table meetings with social media companies, technology firms, young people, charities and mental health experts to examine online risks and how to tackle them, including concerns around issues such as trolling and other aggressive behaviour such as rape threats against women. Rt Hon Karen Bradley MP, the Secretary of State for Culture, Media and Sport, indicated that social media companies could face new laws if they fail to help stop cyber bullying.62
52.The Government has also recently met with social media companies to press them to do more to tackle extremist material on the internet. Google, Twitter, Facebook and Microsoft agreed to lead an industry board comprised of communication service providers to develop better tools to remove terrorist propaganda, share best practice and to work with civil society groups to promote counter-narratives.63
53.The Government has been clear that what is illegal offline is also illegal online in relation to hate speech and abuse. It has described the legal framework for prosecuting online hate crime as “robust”.64 However, relevant legislation is spread across a number of different Acts of Parliament, each of which was passed before social media were mainstream tools, and some of which were passed even before the internet itself was widely used.65 The most relevant aspects of the law are Section 1 of the Malicious Communications Act 1988 and Section 127 of the Communications Act 2003, which refer to communications deemed to be “grossly offensive”.
In written evidence, the Government also cited a further four Acts of Parliament that might be relevant to hate crime cases.66
54.Witnesses described the laws against online hate speech as being out of date and vague on the sort of language or behaviour that is illegal. Carl Miller, a Research Director at Demos, described the law as “incredibly unclear” on where the line on criminality lay:
We have not had a proper law passed on this since social media became in widespread use. If you talk to lawyers about this, most of them will say they don’t even know which Act really applies here. Some of it is the Communications Act, as I said; some of it is the Protection from Harassment Act. Some people say that it is public order legislation; others say that counter-terrorism or incitement of racial hatred legislation applies here.67
55.The Government said that in the vast majority of cases, communications sent via social media will not cross the threshold into criminal behaviour. The Crown Prosecution Service has published guidelines to clarify the circumstances in which a prosecution should be brought; however, the Law Commission said that guidelines are “no substitute for clearer statutory provisions”.68 It cited evidence that the current law lacks “legal certainty”, a principle that holds that the law must provide those subject to it with the ability to regulate their conduct. The Commission said that there “is a clear public interest in tackling online abuse and ‘trolling’, but this must be done through clear and predictable legal provisions.”69 The Commission is currently consulting on its next programme of work and has said that it sees merit in examining the law on offensive communications as an area for potential reform.70
56.Most legal provisions in this field predate the era of mass social media use and some predate the internet itself. The Government should review the entire legislative framework governing online hate speech, harassment and extremism and ensure that the law is up to date. It is essential that the principles of free speech and open public debate in democracy are maintained—but protecting democracy also means ensuring that some voices are not drowned out by harassment and persecution, by the promotion of violence against particular groups, or by terrorism and extremism.
4 Law Commission, HCR0021, para 2.4. Part 1 of the Malicious Communications Act saw a ten-fold increase in the number of convictions over the same period. For an outline of the laws against abusive content online, see paragraph 53
5 Q628 [Peter Barron]. “Flagged content” refers to inappropriate content that has been reported to website administrators for review
8 YouTube [David Duke], Jews admit organizing White Genocide, 13 October 2015
10 Hope not Hate, Jo Cox ‘deserved to die’: Cyber Hate Speech Unleashed on Twitter, November 2016. Hashtags identified by Dr Imran Awan and Dr Irene Zempi included #whitepower, #MakeBritainwhiteagain, #Stopimmigration, #refugeesnotwelcome, #defendEurope, #DeportallMuslims, #Rapefugee and #BanIslam
13 Birmingham City University, Nottingham Trent University & Tell MAMA, We Fear for our Lives: Offline and Online Experiences of Anti-Muslim Hostility, October 2015, page 4
15 Demos, New Demos study reveals scale of social media misogyny, 26 May 2016
16 Fawcett Society, HCR0102, para 10. Respondents to the survey were reached through social media channels. The survey should not therefore be considered as a representative sample of women’s experiences across the whole population. Between 23 February and 9 March 2017, 182 people responded, 97% of whom were women
19 Guardian, Diane Abbott, I fought racism and misogyny to become an MP. The fight is getting harder, 17 January 2017
20 Tenth Report, Session 2016–17, Antisemitism in the UK, HC 136
23 Reuters, Facebook says it will review handling of violent videos, 17 April 2017
24 Eighth Report, Session 2016–17, HC 135, Radicalisation: the counter narrative and identifying the tipping point, para 38
25 The Times, Isis uses terror attack to sign up YouTube recruits, 27 March 2017
28 For example, YouTube: National Action Speak in Darlington & National Action Speak in Rochdale
29 Q453. Peter Barron agreed that was a reasonable estimate for revenue generated from viewing figures. It is understood that in some cases advertising revenues had gone to the rights holders of songs used on the videos rather than to the video owner
31 The Times, Top brands pull Google adverts in protest at hate video links, 23 March 2017
34 Crown Prosecution Service, Guidelines on prosecuting cases involving communications sent via social media
39 Q410 [Peter Barron] & Google, HCR0109, pages 1 & 2. ‘Trusted Flaggers’ are given access to a tool that allows for reporting multiple videos at the same time
41 Qq503–509. Witnesses cited the e-commerce directive as the regulatory basis for their model of content moderation
44 European Commission, Code of Conduct on Countering Illegal Hate Speech Online
45 Guardian, Facebook, YouTube, Twitter and Microsoft sign EU hate speech code, 31 May 2016
46 German Federal Ministry of Justice and Consumer Protection, Deletion of illegal hate posts on Facebook, YouTube and Twitter, 2015
49 The user that reported the content was an undercover journalist
50 The Times, Facebook publishing child pornography, 13 April 2017
51 Mail on Sunday, Shame on you, Facebook: How your child was just 3 clicks from this vile video of a man being beaten and stabbed, 18 March 2017
58 The Times, Google tracks down extremist videos on YouTube but refuses to delete them, 8 April 2017. Machine learning is a type of artificial intelligence that provides computers with the ability to ‘learn’ (or adapt to new data) without being explicitly programmed.
61 Department for Culture, Media and Sport, Government launches major new drive on internet safety, 27 February 2017
62 Daily Mail, Facebook and Twitter could face threat of new laws if they fail to crack down on cyber bullying, minister warns, 27 February 2017
63 Financial Times, Tech firms pledge to improve response to terror propaganda, 31 March 2017; and Home Office, Home Secretary statement: meeting with Communication Service Providers, 30 March 2017
64 Home Office, Action against Hate, July 2016, para 70
65 Most relevant legislation was passed no later than 2003 and usually before. By comparison, the largest social media companies were launched later. Facebook was launched in 2004, YouTube in 2005, Twitter in 2006, Snapchat in 2011, etc.
66 Home Office, HCR0052, p4. In addition to the Malicious Communications Act 1988 and the Communications Act 2003, the Government said the following Acts of Parliament are relevant to online hate crime cases: Computer Misuse Act 1990; Protection from Harassment Act 1997; The Criminal Justice and Public Order Act 1994; Section 15 Sexual Offences Act 2003 (for grooming). The Government also listed Breach of the Peace as a relevant legal provision
68 Crown Prosecution Service, Guidelines on prosecuting cases involving communications sent via social media, 10 October 2016 & Law Commission, HCR0021, para 2.5