4 Social media
Nature and scale
87. From chat rooms to Facebook, from Snapchat to
Twitter, social media platforms play to the human desire to keep
in touch. Online social media provide new ways of interacting and opportunities for modified ways of behaving. Everyone with a connected computer is now a potential publisher, and some people publish with too little regard for the consequences, for others as well as for themselves.
88. The most recent research[119]
from the NSPCC shows that 28% of young people who have a social
networking profile have experienced something that has upset them
in the last year. These experiences include cyber-stalking, being
subjected to aggressive or offensive language, being sent sexually
explicit pictures and being asked to provide personal or private
information. However, the greatest proportion of the group (37%)
had experienced "trolling".[120]
Alongside this evidence that online bullying is clearly a problem
for young people, the latest Childline statistics show an 87%
increase in 2012/13 in the number of young people contacting the
NSPCC for support and advice about being bullied via social networking
sites, chat rooms, online gaming sites, or via their mobile phones.
The NSPCC attributes this trend in part to the increasing ownership
by young people of smartphones and tablets.
89. The results of a July 2013 survey by the bullying
prevention charity, BeatBullying, provide further evidence of
both the nature and scale of online bullying and the dangerous
sides of the internet:
· One in five 12-16 year-olds have interacted
with strangers online
· More than a third of 12-16 year-olds go
online most often in their own bedroom
· One in five 12-16 year-olds think being
bullied online is part of life
· More than a quarter of 12-16 year-olds
admitted to witnessing bullying online, but only half of these
did something about it
· The primary reasons young people gave for not doing anything about the online bullying were being worried about being bullied themselves or not knowing who to speak to about it
· Almost a quarter (23%) of 12-16 year-olds
spend more than five hours a day online during school holidays;
more than double the number during term time (10%)
· The majority (80%) of 12-16 year-olds
said they feel safe online, compared to only 60% of the younger
age group (8-11 year-olds). But worryingly, one in five (22%)
of 12-16 year-olds said they think being bullied online is part
of life
· For those 12-16 year-olds who did do something
about the cyber bullying, most went to their parents for advice;
however, only 38% of parents think their children are at risk
of being bullied online.[121]
90. Anthony Smythe of BeatBullying told us: "Our
research would suggest that one in three children have experienced
cyber-bullying. More worrying is that you will find that one in 13 are subject to persistent cyber-bullying and that is what leads to the cases of suicide and self-harm that we have seen over the recent summer months."[122]
Our own conversations with young people left us in little doubt as to the corrosive effect of bullying, often perpetrated by "friends".
91. Two of the best known social media platforms
(there are many) provided both written and oral evidence in the
course of our inquiry: Facebook and Twitter. Written evidence
from Facebook begins by describing its mission "to
make the world more open and connected and to give people the
power to share."[123]
Facebook is a global community of more than 1.15 billion people
and hundreds of thousands of organisations. Facebook works "to
foster a safe and open environment where everyone can freely discuss
issues and express their views, while respecting the rights of
others."[124]
92. Twitter told us of their 200 million active users
across the world and 15 million in the UK alone; the platform
now serves 500 million tweets a day. "Like most technology
companies we are clear that there is no single silver bullet for
online safety, rather it must be a combined approach from technology
companies, educators, governments and parents to ensure that we
equip people with the digital skills they will need to navigate
the web and wider world going forward."[125]
The law
93. Evidence from the DCMS makes the general point
that behaviour that is illegal off-line is also illegal online.[126]
Communications sent via social media are capable of amounting to criminal offences under a range of legislation, including:
· Communications which may constitute credible
threats of violence to the person or damage to property.
· Communications which specifically target
an individual or individuals and which may constitute harassment
or stalking within the meaning of the Protection from Harassment
Act 1997.
· Communications which may be considered
grossly offensive, indecent, obscene or false.[127]
94. The Director of Public Prosecutions has published guidelines for prosecutors considering cases involving communications sent via social media. Relevant legislation includes: the Malicious Communications
Act 1988; section 127, Communications Act 2003; Offences Against
the Person Act 1861; Computer Misuse Act 1990; Protection from
Harassment Act 1997; Criminal Justice and Public Order Act 1994;
section 15, Sexual Offences Act 2003 (for grooming).
95. The DCMS cites data from the Crime Survey for England and Wales which shows that, in 2011/12, 3.5% of adults (aged 16 and over) had experienced upsetting or illegal images and 1.4% had experienced abusive or threatening behaviour. Some of these experiences are likely not to have met the criminal threshold.[128]
96. BeatBullying have argued for greater clarity
in the law; they told us:
More than 1,700 cases involving abusive messages
sent online or via text message reached English and Welsh courts
in 2012. However, cyberbullying is not a specific criminal offence
in the UK. Some types of harassing or threatening behaviour, or communications, could be a criminal offence. These laws were
introduced many years before Twitter, Facebook and Ask.FM, and
they have failed to keep pace with the demands of modern technology.
Unfortunately, serious cases of cyberbullying, which have often
resulted in suicide, have dominated our headlines in recent months.
That is why BeatBullying have been calling on the Government
to review current legislation and make bullying and cyberbullying
a criminal offence so that children and young people have the
protection they need and deserve, at the earliest opportunity,
to avoid this escalation.[129]
97. BeatBullying's evidence went on to cite the recent
Anti-Social Behaviour, Crime and Policing Bill as a possible vehicle
for introducing bullying and cyberbullying as a form of anti-social
behaviour. Jim Gamble told us: "The Prevention of Harassment
Act is bullying. The Prevention of Harassment Act is trolling
... We need to ensure that the laws as they exist, when they can
be applied, are applied."[130]
Any changes to legislation, including consolidation of current
laws, which clarify the status of bullying, whether off-line or
online, would be welcome. At the same time, much could be achieved
by the timely introduction of improved guidance on the interpretation
of existing laws.
Enforcement
98. On Twitter, users "agree" to obey local
laws. Twitter's rules and terms of service "clearly state
that the Twitter service may not be used for any unlawful purposes
or in furtherance of illegal activities. International users agree
to comply with all local laws regarding online conduct and acceptable
content."[131]
The question that arises is how Twitter and other social media providers could do more to assist compliance with the law.
99. Facebook's detailed Statement of Rights and Responsibilities
("SRR") describes the content and behaviour that is
and is not permitted on its service. With respect to safety, the
SRR specifically prohibits the following types of behaviours:
· Bullying, intimidating, or harassing any
user.
· Posting content that: is hate speech,
threatening, or pornographic; incites violence; or contains nudity
or graphic or gratuitous violence.
· Using Facebook to do anything unlawful,
misleading, malicious, or discriminatory.[132]
100. Both Facebook and Twitter have sensible terms
and conditions attaching to the use of their services. However, these should be made much clearer, more explicit and more visible. People
who might be tempted to misuse social media need to be left in
no doubt that abuses online are just as unacceptable as similar
misbehaviour face-to-face.
101. Facebook encourages people to report content
that they believe violates their terms. "Report" buttons
are "on every piece of content on our site." "When
we receive a report, we have a dedicated team of professionals
that investigate the piece of content in question. If the content
in question is found to violate our terms, we remove it. If it
does not violate our terms, then we do not remove it. We also
take action, such as disabling entire accounts (eg of trolls)
or unpublishing Pages, if deemed necessary."[133]
Reports are handled by the User Operations team, comprising hundreds of employees located in India, Ireland and the USA, which is separated into four specific teams covering safety, hate and harassment, access, and abusive content.
102. Facebook is aware that many under-13s are falsifying
their ages to open accounts, in violation of the Statement of
Rights and Responsibilities. Often parents assist them in doing
so, something that Facebook's Simon Milner has reportedly[134]
likened to allowing younger children to view Harry Potter video
works (some of which have a '12' certificate). Sinéad McSweeney
of Twitter told us: "We do not collect age information on
sign-up. I think Twitter has established a reputation in the area
of privacy. We minimise the amount of information that we require
from users to sign up so we do not collect age or gender or other
details about our users. Where it comes to our attention that
somebody under the age of 13 is using the platform, their accounts
are removed."[135]
She went on to imply that a child under 13 would either have to
be aware of this age rule, or read about it in Twitter's privacy
policy.[136]
103. Claire Lilley of the NSPCC suggested: "Some
of these sites need more human moderators to look for the fake
accounts. That is one part of it, but they also have very sophisticated
algorithms where they look at what language people are using,
what sites they are visiting and what they are talking about online.
They can tell with quite a degree of sophistication a lot about
the individuals. Twitter, for example, has a minimum age of 13,
but when you sign up to Twitter, you do not have to put in your
date of birth. There is nothing to stop you. Twitter would say
that what it does is to look at its algorithms to spot the children
who are under 13 and therefore potentially on a site that is aimed
at an older audience and containing information, posts and so
on that are not suitable for their age. Its argument is that it
uses very sophisticated algorithms, so I think there is a lot
more that the sites could do."[137]
104. Twitter's age-verification process could at
best be described as algorithmic and reactive; non-existent might
be a more accurate description. Given that Facebook and Twitter
are aware of the extent to which their services are accessed by
younger children, we expect them to pay greater attention to factoring
this into the services provided, the content allowed and the access
to both. The same applies to other social media companies in a
similar position.
105. BeatBullying told us that BeatBullying.org is
the only e-mentoring and social networking site to be endorsed
by CEOP. "We strongly believe that our approach to online
safety must be adopted by all internet providers if children and
young people are to be safe online."[138]
The website checks all content before it is uploaded. As a general policy, Twitter told us that they do not mediate content. However, there are some limitations on the type of content that can be published on Twitter. These include prohibitions
on the posting of other people's private or confidential information,
impersonation of others in a manner that does or is intended to
mislead, confuse, or deceive others, the posting of direct, specific
threats of violence against others, and trademark and copyright
infringement.[139]
Twitter told us that users can mark their own tweets as sensitive, in which case a warning message is displayed by default to anyone wishing to view them. This is a good reminder that self-restraint and
self-regulation are crucial aspects of any enforcement regime
in the online world.
106. In spite of reassuring words from Facebook
and Twitter, it is clear that these platforms, in common with
other social media providers, could do far more to signal the
unacceptability of abuse and to stamp it out when it arises.
107. Offensive communications via social media that
do not cross the threshold into criminal behaviour should, the
Government expects, be dealt with expeditiously by the social
media companies.[140]
We agree. Social media providers should follow the examples
of Facebook and Twitter in having appropriate terms and conditions.
We believe there is significant scope for such providers, including Facebook and Twitter, to enforce such conditions with greater
robustness.
Reporting
108. A service for reporting all hate crimes online
was launched by the police in April 2011. The website, called
True Vision, is supported by all forces in England, Wales and
Northern Ireland and can be accessed at www.report-it.org.uk.
Reports of content inciting racial hatred hosted in the UK, which were previously made to the Internet Watch Foundation (IWF), should now be made directly to True Vision. True Vision takes
reports about: racist or religious hate crime; homophobic and
transphobic hate crime; disability hate crime; bullying and harassment;
domestic abuse. The National Centre for Cyberstalking Research
commented: "The True Vision initiative for hate crime reporting
is an excellent example of a simple and transparent reporting
mechanism, but it needs to be more widely publicised."[141]
109. Social media providers generally offer reporting
mechanisms, with varying degrees of user-friendliness and follow-up. In serious cases, the access providers will also
get involved. TalkTalk told us that they investigate any abusive
or threatening comments posted on sites by their customers when
provided with the log information that supports the complaint.
In severe cases, relevant data will be disclosed to a third-party solicitor on receipt of a fully sealed court order from
a UK court.[142]
110. As noted above, Facebook told us about their
'report' buttons. Twitter also told us that they have now introduced
a similar reporting facility. Twitter said that reports that
are flagged for threats, harassment or self-harm are reviewed
manually. Twitter advises users to report illegal content, such
as threats, to local law enforcement and refers to working closely
with the police in the UK.[143]
Stella Creasy MP, who has herself been subject to bullying and threats via social media, argued for the introduction
of an "online panic button system" to alert sites like
Twitter to an emerging problem.[144]
She told us how she had been subjected to graphic threats and
harassment on Twitter over the course of two weeks.[145]
Even this was "just a fraction" of what had been endured
by Caroline Criado-Perez who had been receiving 50 rape threats
an hour. These threats evidently started for no reason other
than Ms Criado-Perez's successful campaign to keep female representation
on English bank notes. In January, two people were jailed for
their roles in the abuse to which Caroline Criado-Perez was subjected.[146]
Another individual has recently been charged with sending malicious
communications to Stella Creasy MP.[147]
All were charged under section 127 of the Communications Act 2003.
111. The NSPCC have suggested that providers of
social media services should provide a range of options for users
to report material, with extra support for younger users. They
add that default privacy settings should be the highest possible
and there should be adequate use of human moderators.[148]
Claire Lilley of the NSPCC told us that, even if children report
bullying to social networking sites, "they often feel like
nothing is being done as a result of that and that no action is
being taken, so children, when it is happening to them, [are]
feeling extreme vulnerability and humiliation, and a sense of
helplessness."[149]
These comments were borne out by one of the teenage girls we talked
to in January who told us that, with Facebook, it was hard to
get bullying material taken down or blocked; when it was eventually
removed, the damage had already been done. Twitter continues
to be criticised for not doing enough to combat abusive and threatening
behaviour online,[150]
even in the wake of the limited and tardy corrective action it
took following last year's case involving Caroline Criado-Perez
and Stella Creasy MP.[151]
112. Anthony Smythe of BeatBullying said:
I would like more transparency of the websites: to hear from the big websites about how many cases of cyber-bullying
are reported to them each year. What did they do about them? How
quickly did they respond to those cases? What support and help
did they offer? It is about having a bit more accountability.
They will say that they probably do that, and if you spend five hours on the internet, you might find that information annexed
somewhere. I would like that information signposted on their main
websites so that parents and young people can have access and
understand the website that they are using.[152]
113. Stella Creasy MP told us: "One of the other
things I have asked the companies to do is publish their data
about the numbers of reports of abuse they get and the numbers
of the concerns so we can get a question of scale."[153]
114. Social media providers should offer a range
of prominently displayed options for, and routes to, reporting
harmful content and communications. They need to act on these
reports much more quickly and effectively, keeping the complainant
and, where appropriate, the subject of the complaint informed of outcomes and actions.
115. Ofcom should monitor and report on complaints
it receives, perhaps via an improved ParentPort, regarding the
speed and effectiveness of response to complaints by different
social media providers.
Advice and support
116. Anthony Smythe of BeatBullying told us: "What
is the greatest concern for children and young people, and now for adults, is the feeling that they are helpless and
are hopeless in terms of getting advice and support. They do not
know where to go to."[154]
BeatBullying develops these points in their written evidence:
Everyone involved with children's and young people's
use of the internet (parents, schools, service providers, organisations and children themselves) has a shared responsibility
for online safety. That is why in April 2013 BeatBullying launched
a campaign for better anti-bullying protections called Ayden's
Law. The campaign calls for a national strategy to tackle cyberbullying
and would set out how the voluntary and community sector, parents
and schools would be equipped to (a) protect the children in their
care from harm online and (b) educate and equip children about
internet safety and responsible digital citizenship so that they
understand the issues for themselves.
Any approach to online safety must ultimately
be about shaping attitudes and changing behaviours as much as it
is about teaching techniques for staying safe or for anything
else.[155]
117. Claire Lilley said: "I would say that what
bullying on social media comes down to is about behaviour. We
can wave the long arm of the law at children, but what we need
to do is to educate them about the impact of the behaviour in
which they are engaging on people who are at the receiving end
of it. We need to do much more to educate them to build their
resilience, both when they are on the receiving end, but also
to build their empathy and their sense of respect for other children."[156]
118. The Home Office told us that they undertake
to "work with DCMS to ensure we are linked into initiatives
such as Safer Internet Centre and Get Safe Online, which provide
internet safety information and advice alongside a wealth of internet
safety resources for schools and information for parents and children."[157]
119. Social media companies could, and in some cases
do, provide resources and funding for educational initiatives.
For example, Simon Milner of Facebook referred to support given to the South West Grid for Learning, which is "particularly
helpful"[158]
for schools and teachers. He also indicated that a request for
funds would be listened to "with a very open mind."[159]
We also heard evidence from the Government, Facebook and Twitter
of the value of the helpline operated by the UK Safer Internet
Centre, which operates on minimal and time-limited funding from
the European Union. We believe it is in the interests of social
media platforms, if they wish to avoid a more regulatory approach,
to put their money where their mouths are and provide more funding
for the valuable work being done on internet safety by voluntary
organisations and charities.
120. A good deal of advice on the safe use of
social media is available already. This should be signposted more
clearly for teachers, who are likely to be in the front line when
it comes to dealing with bullying both in the playground and in
the online world.
Anonymity
121. The cloak of anonymity is a useful one for a
dissident or free-thinker to wear; but it can also mask the bully
and the criminal. Evidence from Dr Claire Hardaker, a Lecturer
in Corpus Linguistics, identifies anonymity as one of the factors
that can lead to harmful behaviour online (others include detachment
and entertainment). She notes: "the internet offers a perceived
anonymity that has no real parallel offline, and this appearance
of invisibility encourages the user to feel that they can do unpleasant
things with a highly reduced risk of suffering any consequences."[160]
She goes on to question the feasibility of removing anonymity:
In a nutshell, this is borderline impossible,
if only because it is unenforceable, and unworkable. Even if all
countries agree to legally mandating online identity disclosure
(unlikely) the costs of setting up, administrating, and then enforcing
it would be staggering. Further, we need only consider the risks
inherent in having a child's name, age, location, etc. available
online to realise that online identity disclosure would actually
create more dangers than anonymity currently averts.[161]
122. These views appear at odds with those of John
Carr of the Children's Charities' Coalition on Internet Safety.
Referring to the "abuse of anonymity" he emphasised
the importance of being able to trace individuals; this would
require social media providers to take greater steps to verify
the identity of their account holders.[162]
He said: "So the requirement on internet service providers
would be to verify that the individual who has just signed up
with them is not in fact a dog, but an actual human being with
an actual and verified address where they can be reached. That
alone would act as a major brake on a lot of the bad behaviour."[163]
123. The Open Rights Group told us: "It is too
easy to assume that tackling anonymity online is a simple solution
to abusiveness." The Group added:
In fact, people are usually not truly 'anonymous'
when they are online. People leave all sorts of information that
can identify them. It is sometimes possible to use this information
to identify somebody with varying levels of confidence, even
if the person posts messages as an 'anonymous' or 'pseudonymous'
user. For example an ISP may try to 'match' an IP address with
one of their subscribers. There are various legal powers that, in some circumstances, require Internet companies to disclose this
data, and which permit the use of it in various contexts for the
purposes of trying to identify a user.[164]
124. Nicholas Lansman of the ISPA said: "People
can attempt to hide themselves online, but there are technical
ways in which they can be discovered."[165]
Claire Perry MP referred to a particularly tragic case when she
told us: "I was encouraged with Ask.fmhaving spent
a lot of time with Hanna Smith's father, who was one of the young
women who did indeed commit suicidethat company did set
up a facility where users could choose to be anonymous, but you
would know if the user was anonymous when you were exchanging
information with them."[166]
125. Anonymity is not just a cloak for cowards who
bully; it is used by others to disguise their criminal activities.
In January of this year, the National Crime Agency announced that
17 Britons had already been arrested as a result of Operation Endeavour, an investigation spanning 14 countries. This particular case involved
the live streaming of child abuse in the Philippines for viewing
across the world. The prevalence of child abuse images on the
internet and the associated activities of paedophiles provide one of the starkest reminders that keeping children safe off-line includes keeping them safe online too.
119 Ev 111
120 Trolling: the practice of posting deliberately inflammatory material.
121 Ev 75
122 Q 5
123 Ev 89
124 Ev 89
125 Ev 92
126 Ev 109
127 Ev 109-110
128 Ev 110
129 Ev 76
130 Q 116
131 Ev 92
132 Ev 89
133 Ev 89
134 "Facebook admits it is powerless to stop young users setting up profiles", Guardian, 23 January 2013
135 Q 166
136 Q 169
137 Q 22
138 Ev 73
139 Ev 92
140 Ev 111
141 Ev w144
142 Ev 85
143 Ev 93
144 Q 70
145 Q 61
146 "Two jailed for Twitter abuse of feminist campaigner", Guardian, 24 January 2014
147 "Man charged over MP Stella Creasy tweets", BBC News, 23 January 2014
148 Ev 73
149 Q 10
150 "Ex-footballer Collymore accuses Twitter over abusive messages", BBC News, 22 January 2014
151 Q 61
152 Q 14
153 Q 66
154 Q 5
155 Ev 73
156 Q 12
157 Ev 107
158 Q 138
159 Q 150
160 Ev w2
161 Ev w4
162 Qq 22-26
163 Q 24
164 Ev w126
165 Q 110
166