Hate crime: abuse, hate and extremism online

Conclusions and recommendations

Advertising revenue derived from extremist videos

1.It is shocking that Google failed to perform basic due diligence regarding advertising on YouTube, paid for by reputable companies and organisations, which appeared alongside videos containing inappropriate and unacceptable content, some of which were created by terrorist organisations. We believe it to be a reflection of the laissez-faire approach that many social media companies have taken to moderating extremist content on their platforms. We note that Google can act quickly to remove videos from YouTube when they are found to infringe copyright rules, but that the same prompt action is not taken when the material involves hateful or illegal content. There may be some lasting financial implications for Google’s advertising division from this episode; however, the most salient fact is that one of the world’s largest companies has profited from hatred and has allowed itself to be a platform from which extremists have generated revenue. (Paragraph 24)

The responsibility to take action

2.We recognise that many social media and technology companies—including Google, Facebook and YouTube, who gave evidence to our inquiry—have considered the impact that online hate, abuse and extremism can have on individuals. We welcome the efforts that have been made to reduce such behaviour on social media, such as publishing clear community guidelines, building new technologies and promoting online safety, for example for schools and young people. However, it is very clear to us from the evidence we have received that nowhere near enough is being done. The biggest and richest social media companies are shamefully far from taking sufficient action to tackle illegal and dangerous content, to implement proper community standards or to keep their users safe. Given their immense size, resources and global reach, it is completely irresponsible of them to fail to abide by the law and to fail to keep their users and others safe. (Paragraph 25)

Removal of illegal content

3.Social media companies must be held accountable for removing extremist and terrorist propaganda hosted on their networks. The weakness and delays in Google’s response to our reports of illegal neo-Nazi propaganda on YouTube were dreadful. Despite our consistently reporting the presence of videos promoting National Action, a proscribed far-right group, examples of this material can still be found simply by searching for the name of that organisation. So too can similar videos posted under different names. We regard this as completely irresponsible and indefensible, as well as probably illegal. If social media companies are capable of using technology immediately to remove material that breaches copyright, they should be capable of using similar technology to stop extremists re-posting or sharing illegal material under a different name. We believe that the Government should now assess whether the continued publication of illegal material and the failure to take reasonable steps to identify or remove it is in breach of the law, and how the law and enforcement mechanisms should be strengthened in this area. (Paragraph 30)

4.Social media companies rely on their users to report extremist and hateful content for review by moderators. They are, in effect, outsourcing the vast bulk of their safeguarding responsibilities at zero expense. We believe that it is unacceptable that social media companies are not taking greater responsibility for identifying illegal content themselves. In the UK, the Metropolitan Police’s Counter Terrorism Internet Referral Unit (CTIRU) monitors social media companies for terrorist material. That means that multi-billion pound companies like Google, Facebook and Twitter are expecting the taxpayer to bear the costs of keeping their platforms and brand reputations clean of extremism. (Paragraph 31)

5.We recommend that all social media companies introduce clear and well-funded arrangements for proactively identifying and removing illegal content—particularly dangerous terrorist content or material related to online child abuse. We note the significant work that has been done on online child abuse and we welcome that, but we believe similar cooperation and investment is needed for other kinds of illegal and dangerous content. (Paragraph 32)

6.We note that football clubs are obliged to pay for policing in their stadiums and the immediate surrounding areas under Section 25 of the Police Act 1996. We believe that the Government should now consult on adopting similar principles online—for example, requiring social media companies to contribute to the costs of the Metropolitan Police’s CTIRU for enforcement activities which should rightfully be carried out by the companies themselves. (Paragraph 33)

7.Here in the UK we have easily found repeated examples of social media companies failing to remove illegal content when asked to do so—including dangerous terrorist recruitment material, promotion of sexual abuse of children and incitement to racial hatred. The biggest companies have been repeatedly urged by Governments, police forces, community leaders and the public to clean up their act, and to respond quickly and proactively to identify and remove illegal content. They have repeatedly failed to do so. That should not be accepted any longer. Social media is too important to everyone—to communities, individuals, the economy and public life—to continue with such a lax approach to dangerous content that can wreck lives. And the major social media companies are big enough, rich enough and clever enough to sort this problem out—as they have proved they can do in relation to advertising or copyright. It is shameful that they have failed to use the same ingenuity to protect public safety and abide by the law as they have to protect their own income. (Paragraph 36)

8.Social media companies currently face almost no penalties for failing to remove illegal content. There are too many examples of social media companies being made aware of illegal material yet failing to remove it, or to do so in a timely way. We recommend that the Government consult on a system of escalating sanctions to include meaningful fines for social media companies which fail to remove illegal content within a strict timeframe. (Paragraph 37)

Community standards

9.We welcome the fact that YouTube, Facebook and Twitter all have clear community standards that go beyond the requirements of the law. We strongly welcome the commitment of all three social media companies to removing hate speech and graphically violent content, and their acceptance of their social responsibility towards their users and towards wider communities. We recognise that each of the companies has done some valuable and important work to develop these community standards and to promote public safety and awareness, particularly among children and young people. We welcome too the statements each company has made about wanting to do more. However, we believe that the interpretation and implementation of the community standards are in practice too often slow and haphazard. We have seen examples where moderators have refused to remove material which violates any normal reading of the community standards, or where clearly unacceptable material is only removed once a complaint is escalated to a very senior level. (Paragraph 39)

10.We recommend that social media companies review with the utmost urgency their community standards and the way in which they are being interpreted and implemented, including the training and seniority of those who are making decisions on content moderation, and the way in which the context of the material is examined. (Paragraph 40)

Social media companies’ response to complaints

11.We have heard time and time again that, for people without the platforms available to Members of Parliament or journalists, responses from social media companies to reports of unacceptable content are opaque or inconsistent, or the reports are ignored altogether. It should not take high-level interventions for social media companies to act, and there must be no hierarchy of service provision. We call on social media companies urgently to improve the quality and speed of their responses to reports of dangerous and illegal content, wherever those reports come from. (Paragraph 43)

12.It is unacceptable that Twitter, Facebook and YouTube refused to reveal the number of people that they employ to safeguard users or the amount that they spend on public safety initiatives, citing “commercial sensitivity”. These companies are making substantial profits while hosting illegal and often dangerous material, and then relying on taxpayers to pay for the consequences. These companies wield enormous power and influence, which means that such matters are in the public interest. (Paragraph 45)

13.We call on social media companies to publish quarterly reports on their safeguarding efforts, including analysis of the number of reports received about prohibited content, how the companies responded to those reports, and what action is being taken to eliminate such content in the future. It is in everyone’s interest, including that of the social media companies themselves, to find ways to reduce pernicious and illegal material. Transparent performance reports, published regularly, would be an effective way to drive up standards radically, and we hope they would also encourage competition between platforms to find innovative solutions to these persistent problems. If the companies refuse to publish such reports, we recommend that the Government consult on requiring them to do so. (Paragraph 46)

Technological responses

14.We welcome the development of technological solutions to tackle the problem of inappropriate content on social media—including Twitter’s new mechanisms to prevent dogpiling and new matching technology. We recognise that technology cannot solve all the issues and that human judgement will often continue to be needed in complex cases to decide whether material breaches the law or community standards. But we are disappointed at the pace of development of technological solutions—and in particular that Google is currently using its technology to identify illegal or extreme content only in order to help advertisers, rather than to remove illegal content proactively. We recommend that these companies use their existing technology to help them abide by the law and meet their own community standards. (Paragraph 49)

Legislative framework

15.Most legal provisions in this field predate the era of mass social media use and some predate the internet itself. The Government should review the entire legislative framework governing online hate speech, harassment and extremism and ensure that the law is up to date. It is essential that the principles of free speech and open public debate in democracy are maintained—but protecting democracy also means ensuring that some voices are not drowned out by harassment and persecution, by the promotion of violence against particular groups, or by terrorism and extremism. (Paragraph 56)

Conclusion

16.The announcement of the General Election has curtailed our consideration of the full range of issues in our hate crime inquiry. We have limited our recommendations to dealing with online hate, which we regard as arguably the most pressing issue that needs to be addressed. However, we hope that our successor committee in the next Parliament will return to this highly significant topic and will draw on the wide-ranging and valuable evidence that we have gathered in this inquiry to inform broader recommendations across the spectrum of challenges that tackling hate crime presents. (Paragraph 57)





27 April 2017