Draft Online Safety Bill

Appendix 1: Case Studies

In this Appendix we set out briefly how the draft Bill and our recommendations will help address some of the online harms we heard about during our inquiry.

Racist abuse

Racist abuse is as unacceptable online as it is offline. Our recommendations will ensure that tech companies put systems in place to take down racist abuse and to stop its spread.

The prevalence of racist abuse online was brought sharply to public attention this year when Marcus Rashford, Jadon Sancho, and Bukayo Saka faced a wave of abuse on social media after they missed penalties in the Euro 2020 final. In turn, Rio Ferdinand spoke movingly to this Committee about the impact that racist abuse on social media has had on him and his family. But racism online is by no means isolated to high-profile individuals. It is a fact of life for many people of colour online, and it is always unacceptable.

The Football (Offences) Act 1991 made racist chanting that is ‘threatening, abusive or insulting to a person’ an offence within football grounds. The same behaviour should also be subject to enforcement online.775

Our recommendations ensure that, in addition to encompassing abuse, harassment, and threats against individuals on the grounds of race, online services will also have to address hate crimes, such as stirring up racial hatred, that may not currently be covered.

Platforms will have a duty to design their systems to identify, limit the spread of, and remove racist abuse quickly following a user report. Ofcom will produce a Code of Practice on system and platform design, against which platforms will be held responsible for the way in which such material is recommended and amplified. Online services will be required to take steps to prevent abuse by anonymous accounts and to have governance processes in place so that proper requests from law enforcement are responded to quickly. Where possible, service providers should also share information about known offenders with the football authorities so that they can consider whether offences have been committed that would warrant further penalties, such as the imposition of stadium banning orders. Finally, they will be required to address the risks that algorithmic recommendation tools and hashtags may amplify racist abuse.

Online fraud

Too many people are falling foul of online scams, and we want this Bill to help protect them.

85 per cent of financial scams rely on the internet in some way. Fraud is now the most reported crime in the UK. Fraudsters can approach individuals directly or pay for advertising to promote their scams. We heard, for example, how fraudsters pretend to be Martin Lewis, founder of Moneysavingexpert.com, to entice victims.

The draft Bill includes fraud within “illegal content”, with online services required to mitigate risks of harm and to take content down swiftly once it has been reported. Paid-for advertisements, however, are exempt.

Under our recommendations, providers will also be required to have systems and processes in place to proactively identify fraudulent content and minimise its impact on their platforms. This will include paid-for adverts.

Advertising more widely is already a regulated industry, and the Bill will not step on the toes of existing regulators such as the ASA, though Ofcom will co-designate regulators to enforce parts of the Bill where their areas of responsibility sit side by side. The role of Ofcom will be to regulate how companies like Google or Facebook allow and promote adverts. The regulation of the advertisers themselves will remain a matter for those regulators, and for the police when criminal offences are committed.

Extreme pornography

The fact that pornography depicting rape and serious violence is freely available on the internet is unacceptable and must be addressed.

We heard shocking evidence about the types of pornography which are easily available on the internet. Depictions of rape and extreme violence are freely available on many widely-used sites, often autoplaying with no age checks. The sale of such abhorrent material offline would often be a criminal offence, and it has no place online.

Under the draft Bill, user-to-user services and search engines will be required to prevent children from accessing pornography. The position in relation to adults under the draft Bill is less certain. The Government has said it will designate so-called “revenge pornography” as priority illegal content, yet other kinds of extreme pornography may not be covered. While the dissemination of such material is illegal, it is not an offence against specific individuals. Unless the Government were to make it “priority content”, it would only be regulated as part of a platform’s terms of service.

Our recommendations would require all sites to prevent children from accessing pornography, whether or not they host user-to-user content or are a search engine. The requirement on services to be safe by design will prevent people being sent unwanted recommendations of pornographic content. Online services will also be required to take down illegal, extreme pornographic material quickly once it is reported and to take other mitigating measures. These could include warnings for users against uploading such content, effective governance to deal with reports, and reports to law enforcement.

Religious hatred and antisemitism

No-one should be abused for their religious faith or identity and tech companies must take steps to prevent the spread of such material and remove it from their platforms.

We heard many moving testimonies about the impact that religious hatred and antisemitism online has on individuals, families and communities. There was a record number of antisemitic incidents in the UK in May–June 2021, many of which were online. 45 per cent of religious hate crime offences in 2020–21 were against Muslims, many of which took place online. Online material can have real-world consequences—the attackers in both the Finsbury Park Mosque attack in 2017 and the 2019 Christchurch Mosque attack are believed to have been radicalised in part online.

Our recommendations ensure that the Bill encompasses abuse, harassment, and threats against individuals on the grounds of religion, as in the draft Bill, and that online services will also have to address hate crimes, such as stirring up religious hatred, that may not currently be covered. Under our report, service providers will be required to design their systems to identify, limit the spread of, and remove such material quickly following a report. Online services will be required to take steps to prevent abuse by disposable anonymous accounts and to have governance processes in place so that proper requests from law enforcement are responded to quickly. Finally, they will be required to address the risks that algorithmic recommendation tools and hashtags may amplify antisemitic abuse or religious hatred.

Self-harm

Promoting self-harm online should be illegal.

Self-harm, particularly amongst teenagers, is an epidemic. We were horrified to hear how videos instructing people in self-harm could come up in recommendation feeds and be promoted again and again by predatory algorithms to teenagers.

Under the draft Bill, online services would be expected to protect children from viewing content that promotes self-harm or instructs viewers in it. For adults, they would be required to set terms and conditions and apply them consistently.

Under our recommendations, encouraging or assisting someone to cause themselves serious physical harm would be criminalised in line with the Law Commission’s recommendation. Service providers would have to protect adults and children from content or activity promoting self-harm by taking that content down quickly.

Our recommendations would also require online services to address the risk of algorithms creating “rabbit holes”, in which content promoting self-harm is consistently recommended to vulnerable individuals and becomes normalised.

We recognise that great care is required in applying these recommendations to avoid barring some vulnerable individuals from social media. Ofcom will produce a Code of Practice helping platforms to identify this and other illegal and harmful content.

Zach’s law

Targeting epilepsy sufferers with flashing images online is a despicable practice which should be made illegal.

Zach Eagling became the figurehead for a campaign for online safety when internet trolls targeted his fundraising tweet with flashing images. The Epilepsy Society has highlighted how people with photosensitive epilepsy are regularly targeted with flashing images which can cause a seizure. As well as the serious medical, physical, and psychological implications of such attacks, some people with epilepsy feel driven off social media as a result. This can have a huge impact on people for whom the internet offers an opportunity to meet other people with epilepsy and build new communities.

Our recommendation would make sending flashing images to a person with epilepsy with the intention of causing a seizure a criminal offence, as recommended by the Law Commission. It would also require platforms to consider safety by design features to mitigate these risks, and Ofcom will be responsible for producing and implementing a Code of Practice on the design of systems and platforms. One example of this might be to create a user setting preventing flashing images from autoplaying or blocking them from showing at all.

Cyberflashing

Targeting people with unsolicited sexual images should be made illegal.

Cyberflashing, or sending unsolicited images of genitalia, is a problem particularly faced by young women and girls. We heard from Professor Clare McGlynn that the Ofsted review of sexual abuse in schools found that a high percentage of girls regularly have to deal with being sent unsolicited penis images.

Our recommendation is that cyberflashing should become illegal once the Government has acted on the Law Commission’s recommendations. This means that platforms will have a duty to mitigate and effectively manage the risk of harm to individuals from cyberflashing and to remove unsolicited nude images from their platforms quickly.

However, even when this bar is not met and the incident does not meet a criminal threshold, the platform will need to include as part of its risk assessment ways of identifying and mitigating the risk of harm arising from the dissemination of such material—for example, by not automatically displaying images when received.

Violence against women and girls (VAWG)

Online abuse targeted at women and girls should be prevented, illegal acts stopped, and systems should not facilitate violence.

We heard about the epidemic of online abuse against women and girls. 62 per cent of women aged 18–34 report having been a victim of online abuse and harassment. This can include stalking, abusive messages, sending unsolicited explicit images or sharing intimate pictures without consent, coercive ‘sexting’ and the creation and sharing of ‘deepfake’ pornography. Inevitably, repeated attacks increase the distress felt by victims and the harm caused to them, though we heard how the trauma from some of those harms is often not recognised or is minimised or trivialised.

Many of these abusive acts are illegal and the list of criminal offences in this area continues to grow. Upskirting, for example, is now an offence, and if our recommendations are accepted, cyberflashing will become one. A new harm-based offence for communications could also cover sharing intimate pictures.

Our recommendation is that platforms should have systems in place to identify where there is a risk of harm from all such illegal acts, to mitigate and effectively manage risks of harm to individuals, and to remove such material quickly when they become aware of it.

Unlike stirring up racial hatred, stirring up hatred against women is not currently a criminal offence (though we support the Law Commission’s recommendation to create one), nor is misogyny a hate crime. Under the draft Bill, such acts would be regulated as part of the service provider’s terms of service and their commitment to act on misogynistic abuse online. We do not believe this should be left to platforms. Where the abuse and harassment of women and girls leads to serious psychological harm, it should be criminalised. We recommend that services should be required to identify and effectively mitigate risks caused by misogynistic abuse resulting from the way their systems and processes operate. We also recommend that they be required to address functionality that could be used in domestic abuse or VAWG, such as geolocation, and act to reduce those risks.

Incitements to violence

Any online attempt to encourage the violent overthrow of the result of a UK parliamentary election will be treated as terrorist content, and tech companies must proactively identify and remove such content.

We heard how the spread of disinformation online has been associated with extensive real-world harm, including riots, mass killings, and harms to democracy and national security. Frances Haugen described events such as the mass killings in Myanmar and Ethiopia and the riots at the US Capitol on 6 January as the “opening chapters” of what will follow if engagement-based ranking is left unchecked and continues to amplify and concentrate extreme content that is divisive and polarising.

Under the draft Bill, any attempt to violently overthrow the UK’s Parliament or elected Government would be treated as terrorism content if it threatened serious violence, damage to property or risked the health and safety of the public in trying to advance an ideological cause. Online services would be required to proactively minimise the presence of such content.

Our recommendations also tackle the design features that can lead to the spread of content advocating violence. Platforms will be required to consider safety by design measures that allow them to react quickly to emerging threats and situations, create friction around the sharing of illegal content, and ensure the effective moderation of groups.

Deepfake pornography

Knowingly false and threatening communications, such as deepfake pornography, should be made illegal, and tech companies should be held responsible for reducing their spread.

The malicious use of deepfake pornography is an issue which is growing in prevalence and which can have a devastating impact on victims. In an adjournment debate on 2 December, Maria Miller MP described the decision to create and share a deepfake or a nudified image as “a highly sinister, predatory and sexualised act undertaken without the consent of the person involved.”

Under our recommendations, the Law Commission’s new draft offence of sending knowingly false communications likely to cause harm will be implemented concurrently with the Online Safety Act. In this way, platforms would be required to exercise their duty to mitigate the risks arising from the creation of deepfake pornography.

Platforms which host pornography could reasonably be expected to identify deepfake pornography as a risk that could arise on their services and would therefore need systems and processes in place to mitigate that risk.

The offence of sending knowingly false content on a user-to-user service with malicious intent could also apply to any known deepfake film.

Foreign interference in elections

Using anonymous accounts to influence elections from the UK or abroad should be treated as a risk by the tech companies.

We heard of instances in the UK and other jurisdictions of malicious actors at home and overseas using platforms to manipulate election processes, aggravate divides and generally sow distrust. Sophie Zhang touched on her investigations into a number of co-ordinated campaigns where inauthentic Facebook accounts had been used in this way in Honduras, Brazil and Azerbaijan.

Our recommendation is that platforms which allow anonymous and pseudonymous accounts should be required to include the resulting risks as a specific category in their risk assessment on safety by design. In particular, they might be expected to cover the risk of illegal activity taking place on their platform without law enforcement being able to tie it to a perpetrator, the risk of “disposable” accounts being created for the purpose of undertaking illegal or harmful activity, and the risk of increased online abuse due to the disinhibition effect. In this way, platforms will be required to take steps to prevent abuse by disposable anonymous accounts and to have governance processes in place so that proper requests from law enforcement are responded to quickly.

Campaign activity, including advertising, which is clearly being coordinated from overseas and in breach of election law, should also be treated as illegal content.

The Elections Bill will require online campaign material to display its promoter. Material failing to do so should be treated as illegal content and services should have systems and processes in place to mitigate the resulting risks of harm.


775 Football (Offences) Act 1991, section 3




© Parliamentary copyright 2021