94.There is a wide range of potential offences relating to expression. The following are prohibited speech acts:
a)Threat to kill;
b)Fear or provocation of violence;
c)Acts intended or likely to stir up hatred on grounds of race, religion or sexual orientation;
d)Encouraging or assisting the commission of an offence;
e)Terrorism-related offences (including the encouragement of terrorism);
f)Intentional harassment, alarm or distress;
g)Harassment, alarm or distress (without intent);
h)Defamation;
i)Malicious communications;
j)Improper use of public electronic communications network.
95.Many of these offences can be committed by communications within a small group, perhaps only two people. While this sort of offence can be serious, and is rightly taken seriously, there is a particular danger in communications which are sent to many people. Such communications can be made in many ways: they can be published as part of the hardcopy media; they can be posted on social media; or they can be self-published by their authors in hardcopy form.
96.There are physical limits on the reach of hardcopy media. Those producing hardcopy media are also inhibited by publisher liability: they will be legally responsible for material which is, for example, libellous. This means that hardcopy media are likely both to be more restrained than online media and to have a smaller reach.
97.Even so, it is important not to underplay the role that hardcopy media play in fostering an environment in which representative democracy is not seen as legitimate, or in which abuse is mistaken for vigorous political debate. Moreover, in a world in which newspapers are published online, and have comments below the line, there is no longer a clear distinction between “hard” and “soft” copy or between “old” and “new” media. We were told “below the line” comments were a fertile ground for abuse, and that newspaper stories could prompt a deluge of online comments.70 Publication of photographs of MPs’ houses, or even, in one case we know of, of a floor plan, is not unusual. It is not the role of the law to ban speech which is foolish or unpleasant. The fact that publishers are legally liable for what they publish provides an incentive to remain within the law. It is important that the press is free to challenge and criticise. But the press may wish to reflect on whether their challenge is framed in terms which will improve representative democracy or undermine it, and whether some of the language they use is contributing towards a culture of hate speech and violence.
98.Under the system of regulation adopted by both the EU and the US in the early days of the internet, social media platforms such as Facebook or Twitter are not liable as publishers for material posted by their users, in the way that traditional media companies are liable. This decision was understandable when it was taken, but it removed any incentive to ensure that material published online was lawful in the countries in which it appeared. It also means that decisions over freedom of speech are now made by corporations which work across many different jurisdictions and attempt to apply the same standards in each.
99.Social media now provides key communication channels, on a scale which would previously have been unimaginable.
Box 14: Scale of social media use
“It is hard to quantify the speed and scale of communication on social media because of the variety of different sites and providers. Latest studies suggest that around 500 million instant messages are sent per day on Twitter. 1.47 billion people are daily Facebook users, including around 44% of the United Kingdom population. More than 40 billion messages a day are sent on WhatsApp, a free user to user and group messaging service. On YouTube, a video hosting site, over 400 hours of video footage is uploaded every minute.”
Source: Law Commission, para. 143
Box 15: Effects of social media on political engagement
“Social media has increased the ease with which the general public can communicate with Members of Parliament. This undoubtedly has a positive outcome—bringing parliamentary accountability closer to individual constituents, allowing for MPs to communicate directly with the public, without dilution by the mainstream media, and vice versa. Lively debates occur on social media with and between MPs and, overall, this has increased the volume of democratic participation. However, the flip side of the increase in the ease with which individuals can directly communicate with MPs is that it is more common for MPs to receive abuse. In some cases, this has resulted in MPs receiving a litany of racist or misogynistic abuse, and direct threats of violence.”
Source: Index on Censorship (DFF0015)
100.The paradox is that MPs and political campaigners are increasingly driven off social media by threats and abuse, even though such media should be a key communication tool and an arena for political debate. While social media offers an unparalleled opportunity for increasing democratic participation and debate, it also offers the opportunity for abuse, threat and virtual mobbing. As the studies by Amnesty International and the IPU show, online abuse is commonplace. Amnesty International’s research found that black women politicians are almost twice as likely as their white peers to be abused on Twitter, and that Diane Abbott received almost half of all abuse against women MPs active on Twitter during the period between January and June 2017. While both the Amnesty and IPU data suggested such abuse is most likely to be directed against women and BAME individuals, more recent data from the University of Sheffield suggested that male MPs may “attract significantly more abuse than female ones.”71 All are agreed that the scale of the problem is increasing. Many of the MPs we spoke to now avoid social media or, if they maintain a social media presence, take steps to reduce the impact of abuse on their staff and themselves.
101.As the Law Commission describes, there has been a tendency to treat online abuse less seriously than offline equivalents. This is wrong. The Law Commission sets out the harms arising from such abuse. Although its work was not primarily concerned with public life, many of the harms identified go to the heart of representative democracy:
“(1) failing to combat abusive online communications allows behaviours to escalate into even more serious offline offending, such as stalking and physical abuse;
(2) failures in legislation mean that the police response is often confused and minimal, making it more likely that women will not report offending;
(3) persistent abusive online communications against women, children and other minority groups “normalises” behaviour of this kind, creating a society in which abuse against women and [ … ] minority groups could also go unchallenged offline;
(4) failure to legislate effectively against abusive online communications has a disproportionate economic impact on women, who feel unsafe on the internet and may disengage with the many opportunities it offers; and
(5) large-scale online abuse suffered by high-profile women may further erode the willingness of women to stand for elected public office, or to take up senior positions, reducing diversity in the workforce and public life for the next generation.”72
102.As the Law Commission concluded, although:
“[ … ] there are no existing measures which allow for a categorical comparison of relative harm, it is clear that there are characteristics of online abuse which may mean that a victim may be subject to more, and aggravated, forms of harm from online offending.”73
103.The Law Commission’s point about online abuse escalating into offline action was based on both academic research and discussions with organisations with practical experience, such as Women’s Aid.74 It was borne out by evidence given to us. The MPs who gave evidence on our first panel agreed that social media gave the impression that unacceptable behaviour was mainstream, and allowed individuals who might pose a threat, but who would formerly have remained isolated, to join with others in the constituency, or across the country.75
104.It is clear that social media can currently be used in ways which undermine the rights of others. This can be done directly, by using the medium to make threats, breach privacy, or arrange offline action. But other rights can also be undermined indirectly: by creating a culture which accepts and normalises abusive and threatening behaviour, particularly towards women and minority groups, and by denying free speech rights to others. Online abuse has a wider impact: MPs who are driven off social media by abuse are denied an important means of communicating with the wider public.
105.In principle, what is illegal offline should also be illegal online. Many offences, such as threats to kill, do not depend on the communication method used. In addition, there are a range of communications offences, chiefly in the Malicious Communications Act 1988 and in section 127 of the Communications Act 2003. The Law Commission scoping report on Abusive and Offensive Online Communications gives a survey of the law in this area, as well as a review of what is known about the effects of online abuse, and we are indebted to it.76
106.Many of the MPs we heard from considered that anonymity fostered online abuse. In contrast, Jodie Ginsberg and Richard Wingfield considered there could be very good reasons for online anonymity (which could in any event be removed if requested for law enforcement purposes).77 In Ms Ginsberg’s view, the problem was impunity—crimes committed online were simply not pursued as they would have been offline: “Some people are engaging in actively criminal behaviour and there is no consequence.”78
107.Some of our witnesses considered that online speech should be dealt with in the same way as offline speech: those who, maliciously or mistakenly, believe that free speech rights extend to abuse and intimidation online should be dealt with by the police, just as they would be offline.79
108.The Law Commission believes that this approach will not work because of the scale of online communication, which overwhelms normal policing approaches.80
109.Assistant Commissioner Basu of the Metropolitan Police agreed that the scale of internet-enabled communication was so great that normal policing approaches were overwhelmed. He rejected the idea of social media companies providing funding to support the work of the police in dealing with the increase in criminal activity online, much of which is carried out through social media platforms. Commissioner Cressida Dick also warned against a levy on internet companies to support policing, saying:
“[ … ] we are a public service and we only provide special police services, as the ones we charge for are called, when it is over and above our normal duty. It would be a fundamental change in the law that governs how the police are funded and our constitutional position.”81
We note that implicit in that model of policing is the idea that funding comes from general taxation levied on all individuals and organisations within the jurisdiction. It is also significant that Assistant Commissioner Basu considered “There is no way this country could afford a police service that could police the internet. It just will not happen. It is not even worth debating it.”82
110.This is not to say that no action is taken against those who use social media to break the law. The CPS has issued guidance on prosecuting cases involving communications on social media. As set out in paragraphs 11 and 30, there have been several recent cases in which those using social media to threaten MPs have been successfully investigated and prosecuted.83
111.It is important that threats, intimidation and harassment online should be treated as seriously as threats, intimidation and harassment offline. We welcome the efforts made towards this. The danger is that the internet will become a place where unlawful conduct is commonplace but only the most serious crimes can be dealt with. The scale of traffic on social media networks means that the criminal law alone is not enough to deal with this issue, but it is important that prosecutorial authorities ensure it is clear that the internet is not a place where there is effective impunity for all but the most serious breaches of the law.
112.Social media offers private communication platforms, run by private companies, where offensive speech can proliferate, where virtual mobs can form and be orchestrated, and where bad behaviour can be normalised. Free speech rights can be denied if the level of abuse is so great that many people have to disengage from social media. It is clear this is already happening. Nevertheless, the right to free speech is vital, and this includes the right to express opinions which are shocking or offensive.
113.There needs to be a high level of protection for free speech within the law. But there are already restrictions on certain types of speech such as hate speech. None of our witnesses argued in favour of extending the substance of these restrictions. Nonetheless, as the Law Commission notes, there is a case for revisiting the law relating to abuse and offensive content online, to make it clearer. We look forward to the Law Commission’s proposals, which are expected to be out for consultation in 2020. As part of that work the Law Commission will be considering whether coordinated harassment by groups of people online could be more effectively addressed by the criminal law. We note the current assumption that this consideration should be limited to coordinated activity. If a person who spontaneously joins a mob offline will be criminally liable for any breaches of the law which ensue, we do not see why it should not be the same if someone joins in mobbing online.
114.While we understand the need for careful consideration before law reform, we consider that this work is urgent, and should be given high priority both within the Law Commission, and, once the Commission has reported, by Government.
115.Social media is currently largely self-regulated. There is no legal obligation for the owners of platforms to allow any type of speech on those platforms. Social media companies have their own standards, which vary from company to company but are, so far as possible, applied worldwide. Moreover, as American companies, they come from a very particular tradition: the US gives stronger protection to free speech than the countries where the ECHR applies. The standards applied by social media companies are therefore inherently likely to give more weight to the right to speak freely, even if that speech is offensive, threatening, abusive, or likely to incite hatred, than to the rights which might be affected by such speech.
116.We took oral evidence from two of the largest social media companies: Ms Rebecca Stimson, UK Head of Public Policy at Facebook, and Ms Katy Minshall, Head of Public Policy UK at Twitter. Both companies have policies designed to allow free speech while limiting harmful speech. Both witnesses recognised there is a problem, and each described their company’s policies to try to stop it; for example, Ms Stimson told us that Facebook’s policy on threats had recently changed, so that “anything that is a threat to a public figure, even if by any standards most people would not think that it was realistic, we will now remove.”84 Twitter attempts to prevent online mobbing.85
117.Whatever the companies’ intentions, neither is effective in removing abuse. We were particularly concerned that shocking misogynistic images were found not to be a violation of Twitter policies until public and political outcry forced change.86 We were also concerned to learn that, in its hateful conduct policy, Twitter has omitted “sex” from the list of protected characteristics. This could account for its failure to give adequate protection to women from misogynistic abuse, and we trust it will remedy this omission.
118.Both companies pointed to the scale of their attempts to remove offensive material through their content moderation teams. Ms Minshall told us that 38% of enforcement decisions on abuse were flagged up by Twitter’s internal tools.87 However, it was clear that the extent to which automated tools could identify problematic posts was limited.
119.Facebook emphasised that it had 30,000 employees working on safety and security.88 Ms Stimson told us “We have 15,000 human reviewers around the world looking at content 24 hours a day, seven days a week, in 50 different languages.”89 Set against the scale of Facebook use, this is insignificant: 1.52 billion people are daily users and 44% of the UK population uses the service. We note that Facebook’s net income in the year ending 31 December 2018 was over $22 billion.90
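A rough back-of-envelope calculation, using only the figures quoted in the paragraph above, illustrates the imbalance. The short sketch below is ours, purely for illustration, and assumes nothing beyond those quoted numbers:

```python
# Rough calculation using only the figures quoted above (illustrative only).
daily_users = 1.52e9   # daily Facebook users, as quoted
reviewers = 15_000     # human content reviewers, as quoted

users_per_reviewer = daily_users / reviewers
print(f"{users_per_reviewer:,.0f} daily users per human reviewer")
# -> roughly 101,000 daily users for every human reviewer
```

On these figures, each reviewer corresponds to roughly a hundred thousand daily users, before any allowance for time zones, languages or the volume of posts per user.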
120.Given the scale of publication on the internet compared with the resources allocated to reviewing content, the evidence we have received about the toleration of abuse, and the acceptance that it is impossible for illegal content online to be tackled by traditional law enforcement methods, we do not consider that pure self-regulation of online content can continue. There needs to be greater regulation of social media companies.
121.Attempts to regulate what is published on the internet have largely focused on ensuring that companies take down material quickly once it is notified to them, as in the German Network Enforcement Act. The UK Government’s Online Harms White Paper includes a similar take-down model, although its proposals go far wider.
122.Companies already operate this “notify and take down” model. They provide various tools for their users to report material which they consider breaches a site’s policy. They also have systems in which trusted operators, including the Parliamentary Digital Service, can get material they identify as illegal taken down quickly.91 Given the scale of internet use, and the speed with which material can be spread, we do not consider it satisfactory to have a system for moderating social media which relies to a great extent on users to identify and flag objectionable posts for takedown. Nor do we think it reasonable to expect those who face high volumes of abuse to invest time and resources in managing their accounts in ways others do not have to.
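Purely by way of illustration, the structure of such a scheme can be sketched as a report queue in which trusted flaggers’ reports are reviewed ahead of ordinary user reports. The names, priority values and data structure below are our own assumptions for illustration, not a description of any platform’s actual system:

```python
# Illustrative sketch of a "notify and take down" queue: reports from
# trusted flaggers are reviewed before ordinary user reports.
import heapq
from dataclasses import dataclass, field

TRUSTED_PRIORITY, USER_PRIORITY = 0, 1  # lower value = reviewed sooner

@dataclass(order=True)
class Report:
    priority: int
    post_id: str = field(compare=False)
    reason: str = field(compare=False)

queue: list[Report] = []
heapq.heappush(queue, Report(USER_PRIORITY, "post-123", "abusive reply"))
heapq.heappush(queue, Report(TRUSTED_PRIORITY, "post-456", "illegal threat"))

while queue:
    report = heapq.heappop(queue)
    print(f"review {report.post_id}: {report.reason}")
# The trusted-flagger report ("post-456") is reviewed first.
```

Even with such prioritisation, the model still depends on someone noticing and reporting the material in the first place, which is the weakness identified above.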
123.Assistant Commissioner Basu was sceptical about regulation which would require companies to take down offensive speech within a stipulated time, pointing out that “in an hour it can be replicated and changed and its DNA footprint changed so many times, and it can go to so many different channels, some of which will never deal with law enforcement or Governments, that it will be impossible to detect”.92 Instead, he considered that the problem should be tackled by the companies themselves.
124.There was some hope that machine learning could be used to identify offensive material. While it is clear there are capabilities in this area, they are less advanced than many suppose. As Ms Stimson said, while machine learning had been able to identify terrorist content, bullying and harassment are harder to identify:
“We found about 2 million pieces of that kind of content, but only about 15% of that was found by our machines. For the rest, we rely on individuals reporting to us and human reviewers. It is often more about context and intent, which can involve more nuanced decisions.”93
125.Much depends on context. “I could murder a curry” is very different from “let us murder an MP.” There will always have to be a significant human input into these systems. Companies should devote increased resources to ensuring their platforms are safe; the onus of removing offensive content should not fall on the victims to report it or on the police to investigate it. The resources social media companies currently devote to these problems are insubstantial compared with the scale of the problem. Bearing in mind the scale of their profits, an increase in resources should not be impractical.
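The point about context can be made concrete. The following sketch is ours and entirely illustrative (the keyword list is an assumption, not any company’s actual filter); it shows why naive keyword matching treats the two phrases above identically:

```python
# Illustrative only: a naive keyword filter ignores context, so it cannot
# tell a harmless idiom from a genuine threat.
THREAT_KEYWORDS = {"murder", "kill"}  # toy word list, assumed for illustration

def naive_flag(message: str) -> bool:
    """Flag a message if it contains any threat keyword."""
    words = {word.strip(".,!?").lower() for word in message.split()}
    return not THREAT_KEYWORDS.isdisjoint(words)

print(naive_flag("I could murder a curry"))  # True - a harmless idiom, flagged
print(naive_flag("let us murder an MP"))     # True - a genuine threat, flagged
# Both are flagged identically; distinguishing them requires context and
# intent, which is why significant human input remains necessary.
```

Machine learning systems are more sophisticated than this, but the underlying difficulty, that meaning depends on context and intent, is the one Ms Stimson described.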
126.As the CSPL said, “Facebook, Twitter and Google are not simply platforms for the content that others post; they play a role in shaping what users see”.94 The CSPL recommended rebalancing the legal framework to make social media companies more liable for illegal conduct on their sites. The Government has gone further: in its Online Harms White Paper it proposes a new statutory duty of care on companies, overseen by an independent regulator.
127.Context is everything. As Jodie Ginsberg said:
“The terms “illegal”, “harmful” and “abusive” are all conflated to the point where people begin to think that things that are harmful are illegal, or potentially should be illegal.”96
Richard Wingfield was concerned that the regulatory model proposed would be too restrictive:
“[ … ] from a freedom of expression perspective, if we looked at what that would mean in the offline world we would be pretty horrified. It would basically mean us needing permission every time we wanted to say something or having it checked before we were allowed to say it, or having recording devices installed in every room, corridor, pub and place of employment and constantly being listened to in order to ensure that no one was ever saying anything illegal or harmful.”97
128.We agree that the law is a blunt instrument, and not everything harmful should be illegal. Equally, not everything which is legal should be condoned or encouraged. It is not unusual for the law to regulate activities which are not themselves criminal in order to reduce the level of harm to society. The ban on smoking in enclosed spaces is a prime example of this. Similarly, the European Court of Human Rights has held that the exercise of free speech which is legal in some contexts might be lawfully restricted in others, such as during a religious service.98 There is a qualitative difference between speech between individuals, or material which is published offline, and online publication, which is not subject to publisher liability, which can be expressed to all the world, and which can be quickly and easily replicated.
129.The regulation of companies which operate on the internet, such as social media platforms, is now being actively taken forward, both in the UK and elsewhere. Free speech is not the only value to be considered. National and international authorities are increasingly taking action against companies which are considered to have allowed privacy breaches, for example, or abused their market position. And there is increasing parliamentary interest in the subject: our colleagues on the Digital, Culture, Media and Sport Committee have already conducted an inquiry into fake news, have established a sub-committee on disinformation, and are looking at the Online Harms White Paper; the House of Lords Communications Committee has recently published a comprehensive report on regulating in a digital world;99 and we ourselves are inquiring into privacy and the digital revolution.
130.There is an increasing appetite for regulation of the internet, both at national and international level. The balance between different rights should be an important component of that regulation. The following principles should be borne in mind as the detail of that regulation is developed:
a)The rights to freedom of expression and freedom of association are qualified. Free speech is accompanied by duties and responsibilities, and it does not encompass speech which undermines the rights or freedoms of others, including others’ right to freedom of expression.
b)Forcing others off communications platforms, whether by online mobbing or abuse, is not a valid exercise of the freedom of expression.
c)ECHR case law recognises that different countries will take different decisions about what speech needs to be restricted in a democratic society, to take account, for example, of their different cultural traditions, local conditions and local risks. Social media companies will need to respect the laws of the countries in which they operate, and those laws should themselves respect human rights. Regulators should not be unduly inhibited in what they propose.
72 See Law Commission, Abusive and Offensive Online Communications, HC 1682, November 2018, para 1.49; the Law Commission does not discuss other ways in which abuse online may affect women’s rights, such as their right to freedom of expression.
73 See Law Commission, Abusive and Offensive Online Communications, HC 1682, November 2018, para 3.86
74 See Law Commission, Abusive and Offensive Online Communications, HC 1682, November 2018, para 3.12ff; see also Karsten Müller and Carlo Schwarz, Fanning the Flames of Hate: Social Media and Hate Crime, 19 February 2018
76 See Law Commission, Abusive and Offensive Online Communications, HC 1682, November 2018
80 See Law Commission, Abusive and Offensive Online Communications, HC 1682, November 2018
83 Crown Prosecution Service, Social Media - Guidelines on prosecuting cases involving communications sent via social media, 21 August 2018
90 Facebook Reports Fourth Quarter and Full Year 2018 Results, PR Newswire. The figure for active users above differs from that in the Law Commission quotation, as it draws on this later information. The report from the House of Lords Select Committee on Communications gives more examples of issues in the various platforms’ approach to moderation. See House of Lords, Report of the Select Committee on Communications, Session 2017–19, HL Paper 299, paras 207 ff
94 Committee on Standards in Public Life, Intimidation in Public Life: A Review by the Committee on Standards in Public Life, Cm 9543, December 2017, p 13
98 Mariya Alekhina and Others v. Russia (application no. 38004/12)
99 House of Lords, Report of the Select Committee on Communications, Session 2017–19, HL Paper 299