Culture, Media and Sport Committee

Written evidence submitted by Prof Andy Phippen

Context

I have been involved in research around online safety for over 10 years and work very closely with organisations such as the UK Safer Internet Centre and South West Grid for Learning, as well as many schools across the country. This work has, over the years, encompassed issues of identity and trust, school policy and practice around online safety, the online safety problems faced by children and young people and how our understanding of these has changed over the years, and the impact of new technologies on behaviours. I work with young people from primary age up to higher education.

In all of my research a fundamental aspect is being able to discuss these issues at a grassroots level with young people themselves. While we might have additional data collection mechanisms such as survey work, the core of any research project is sitting with young people, usually in groups, talking about the research questions and their thoughts. This is all conducted within a clear ethical framework, working alongside schools to place the young people in a safe, non-judgemental environment where they are free to share opinions and reflections without being challenged or confronted. Parental consent is obtained for any young people spoken to; they are fully briefed on the research context and have the right to withdraw from the discussions at any time. While these discussions can sometimes centre on difficult issues, such as sexting, pornography and gender, I usually find them very open and productive (we do not ask young people to disclose their own behaviour, but to reflect on the behaviour of peers). I also work with stakeholders in the online safety area, such as parents, teachers and social care professionals, to understand their attitudes and concerns related to online safety and young people. A recent example of this work was a piece of research conducted by the NSPCC/UK Safer Internet Centre around sexting.1

In addition to research practice, I am also involved with many schools in an educational context, delivering assemblies and classes related to online issues and associated behaviours. While this is not formal research work, it further allows the exploration of online safety with young people in a qualitative setting and can elicit interesting and rewarding discussions.

As well as the work above, all of which is available in the public domain, I am currently involved in three qualitative studies that, while not yet published, may have relevance to this inquiry. These studies are:

An exploration into gaming culture and its impact on behaviour and attitudes.

Research into gender imbalance and issues around sexist abuse and the role online technology plays in facilitating these.

Exploring the impact of superfast broadband on young people’s use of online technologies.

In presenting a response to the CMS Committee Inquiry into Online Safety, I will draw from all of this work to provide context for my responses.

Clearly the online world is something with which the vast majority of young people are engaged and which they use on a daily basis (indeed, young people often do not acknowledge any differentiation between the online and the offline; it is simply “life”). In agreement with the specified inquiry brief, one thing I can observe through many years working in this area is that public understanding of the issues involved in “being” online has improved, and we have moved on from traditional threats such as “stranger danger” to a realisation of a complex environment requiring multi-stakeholder input and perspectives. In addition, within the field we have moved away from the belief that young people are simply passive participants needing protection toward an awareness of fully engaged digital citizens who need to be mindful of the impact of their own behaviours on others, as well as of the risks involved in living in the online world.

Digital risk can take many forms, from well-established threats such as grooming to sexting, cyberbullying/online abuse, accessing inappropriate content and trolling. However, in ensuring protection from harm we must also strike a balance with rights to freedom of expression and access to legitimate content and interaction.

As someone who has worked in the field for a long time, it is encouraging to see Parliament engaging with these issues in recent times. However, much recent policy discussion around this topic has focused on a very specific aspect of online life—access to adult content and the measures needed to ensure young people cannot see it.

More concerning is that this dialogue now seems to have grouped access to clearly illegal content (child abuse images) with access to legal content for adults which is inappropriate for minors. These are two very separate issues and it is not helpful to present them in the same context—countermeasures to tackle child abuse images are clearly set in law and addressed by excellent organisations such as the Internet Watch Foundation. Child abuse images are illegal and no one has the “right” to access them. However, when we are talking about legal content inappropriate for young people, it is a far more complex debate, as we do not want to restrict the rights of the public or ostracise them in some way (for example by suggesting that there is a link between access to legal and illegal content) in order to protect young people.

This is a very narrow aspect of online safety at large—what about sexting, cyberbullying, trolling, upset caused by “innocuous” content, education, the right to discuss these issues in school, and so on? It is encouraging to see that this inquiry is, at least in part, looking to broaden the Parliamentary debate and to start asking questions about the wider context. This is something that will be explored in more detail when addressing the specific points of the inquiry below.

How Best to Protect Minors from Accessing Adult Content

There has been much discussion over this point over the last couple of years. The OCC recently published an excellent, balanced review of the literature in this area.2 However, I would like to present a slightly different perspective on this matter that has arisen from my discussions with young people. Adult content is certainly nothing new; it has been available far longer than it has been accessible online. However, the challenge presented in the online world lies in both accessibility and the type of content one can now download. While 25 years ago magazines and perhaps VHS videos provided a level of access to adult content, it was not as strong or diverse as the availability of pornography online. This is something that has been acknowledged by many young people I have spoken to.

I was asked over the summer this year by a 14-year-old boy what I thought about people his age looking at this sort of content. This is an interesting question to pose because it both acknowledges that young people of this age are frequent users of pornography (this has been borne out in my discussions with young people of this age, particularly boys) and challenges the social belief that they should not be doing it. My answer was hopefully a pragmatic one—I would rather they didn’t, but I am not naïve enough to think they could be prevented from accessing it, so I would rather we focused on providing an environment in schools where they could talk about the possible issues that arise from access to this sort of content.

Protection from access is an interesting concept. How can we protect them from content they wish to access (which is certainly something I observe far more among boys than girls)? This, again, was reflected in discussions recently with a very mature group of 14–16-year-old boys in a local school—one boy, who was discussing the recent policy debates around “opt-in” and filtering in the home, made a very clear statement: “You will not prevent teenage boys from accessing pornography”. He did not say this to be rebellious or controversial; he was stating it from his observations of his peers. They access and share pornography and have many ways of doing so.

Much of the recent debate has been around filtering, and it is worth exploring this as a “solution”, which it clearly is not. Software can only ever provide some level of support for what are, essentially, social issues facilitated by technology. We have worked with filtering in schools for a long time now, and while it is certainly a useful tool to prevent access to inappropriate content in the school setting, it is equally clear that filtering controls have to be set extremely high in order to prevent access to those things the school does not wish its pupils to see.

Therefore, lots of innocuous content will also be blocked, which can lead to frustration for both staff and pupils. This is because filtering is still a fairly blunt tool—in general it looks for keywords and blacklisted sites and blocks access as a result. Filtering still struggles with context—the famous Scunthorpe problem being a clear example of this (and some filters will still block this). Filtering also only blocks the channel on which the technology is installed—in the case of a school, this channel is the institution’s network. It will not prevent an ambitious young person with a mobile device handed down from their parents from accessing this content. And if an individual’s mobile device has been set up to prevent access to adult content, they will have a friend whose device hasn’t, and they will share content via Bluetooth, MMS, etc.
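To illustrate why keyword-based filtering overblocks, the short Python sketch below shows a naive substring filter of the kind described above catching innocuous pages. It is entirely illustrative: the blacklist terms and web addresses are hypothetical and do not reflect any real product’s blacklist.

    # A naive substring filter of the kind described above. The blacklist
    # terms and URLs are illustrative only, not any real product's list.
    BLOCKED_SUBSTRINGS = ["sex", "penis"]

    def is_blocked(url: str) -> bool:
        """Block any URL containing a blacklisted substring, ignoring context."""
        lowered = url.lower()
        return any(term in lowered for term in BLOCKED_SUBSTRINGS)

    # Innocuous pages are caught because the match ignores word boundaries:
    print(is_blocked("http://visit-essex.example"))        # True: "sex" inside "Essex"
    print(is_blocked("http://penistone-council.example"))  # True: an innocuous place name
    print(is_blocked("http://homework-help.example"))      # False: allowed

Real filters are more sophisticated than this sketch, but the underlying difficulty of distinguishing a string from its context remains, which is why overblocking of legitimate content persists.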

One boy I spoke to recently was more candid when I discussed the recent policy directions with him—he said he doesn’t care whether people prevent him from accessing indecent images online; he’ll just ask a local girl to send him pictures instead. While this presents a number of issues, not least a very concerning attitude toward women in general, it clearly highlights one issue that doesn’t seem to be discussed in the recent debates—online consumption of “professional” pornography is only one source of indecent content for young people.

Filtering will prevent access for those not wishing to find inappropriate content, and as such does have some use in this area. If we turn our focus to filters in the home, they may prevent younger children from stumbling across this sort of content, but will they “protect” a determined teen? Also, if filtering in the home is going to present similar overblocking challenges to the solutions used in schools, how many homes will switch off the filters because they become too frustrating to use and prevent access to legitimate content? Certainly the Open Rights Group (I am on the advisory council for this organisation) has many examples of adults contacting them because filters, whether on home computers or mobiles, have prevented access to legitimate sites with no means to unblock them.

On countless occasions young people have asked me why they have no opportunity to discuss these issues in school. Many are also critical of what learning they do have around online issues in either PSHE or sex education. Rather than just trying to prevent young people from accessing these sorts of things, which will always be a game of catch-up as they find new means of access, should we not be providing an educational environment where these issues can be discussed? Shouldn’t we be providing a means for reflection on, and challenge of, why society feels this content is inappropriate for young people and what the impact of access might be? Sex education which addresses online issues, inappropriate content, etc, seems to be extremely disjointed and generally delivered by the few specialists we have in this area in the UK. I very rarely hear of this sort of lesson being delivered in a school by its own teachers. Yet whenever I have held sessions in schools with both boys and girls around this topic (which can result in very different conversations), I have found the young people engaged and enthusiastic in their discussions, and they ask to do more of this sort of thing.

So the barrier is not the unwillingness of the pupils, but the lack of national coordination to permit teachers to address these things in schools. Education in this sort of area requires multi-stakeholder buy-in from staff, parents, pupils and policy makers. A lone teacher who believes this is important may subsequently suffer at the hands of senior management, parents or governors who have received no “permission” to address these topics within the curriculum.

Filtering Out Extremist Material, including Images of Child Abuse and Material Intended to Promote Terrorism or other Acts of Violence

Much of the discussion above is equally pertinent to this point: filtering cannot be seen as the complete solution. It is good to see acknowledgement that harmful content online is not just pornography (indeed, in my discussions, young people are far more likely to say they have been upset by images of violence or animal abuse than by sexual content), but again we need to be clear about the difference between what is legal and what is illegal. If content is illegal, the Internet Watch Foundation (IWF) has a very clear remit for reporting, investigating and either blocking it or moving for prosecution (depending on whether the content is hosted abroad or in the UK). The IWF maintains a blacklist of sites serving illegal content and all ISPs within the UK subscribe to it. The key challenge for the IWF is public awareness of its role, and this is something the UK Government could help with.

However, when we consider legal but upsetting content, we are in more difficult territory if we are looking to filtering as a solution. Who decides what should be filtered? How can we be sure about the meaning and intent behind a search term (for example, someone searching for extremist materials might use the same search terms as someone conducting research in the area)? And is there a sliding scale of upset? I have had many young people tell me how RSPCA adverts have upset them—should we move to block those too? Again, I would rather see a society that, rather than trying, and failing, to prevent access to anything that might upset a young person, provides an environment where young people are aware that upsetting content can come in many different forms and affect individuals in different ways, and where, if they are upset, they can talk to someone about it. During my work around sexting I was amazed at how few young people would turn to a teacher if they became a victim. They feared a lack of understanding or a judgemental response. That is a real concern, because we cannot possibly consider, and filter, all content any individual young person might find upsetting. To consider young people as a single collective is patronising to the diversity and individuality they exhibit, and what upsets one may be viewed as humorous or innocuous by another.

Preventing Abusive or Threatening Comments on Social Media

Finally, it is good to see a Parliamentary committee looking at this issue: when I talk to young people about what upsets them online, cyberbullying (or simply “people being mean”) is by far the most common response. However, I would again take issue with the wording used. Put simply, you will never prevent abusive or threatening comments on social media, just as you will never prevent nasty comments in the playground, on TV shows such as the X Factor, on the football pitch or at PMQs. However, far more can be done to provide an environment where young people know they do not have to put up with this sort of thing and that people can help. I should stress, first of all, that it is rare for a young person to talk about “cyberbullying”, and the term is unhelpful when trying to address the problems caused by people being abusive online. Online abuse can be described in many forms, such as nastiness, people being mean, name-calling, ganging up or simply banter, and again equitability is difficult to manage in this area—one young person’s banter will be considered abuse by another.

However, it is important that service providers offer the means to report abuse, something already in place on some social media platforms but not others. Young people also need to be aware that they can use these reporting mechanisms to have comments taken down, and that they will not get into trouble themselves for doing so. The wording of some reporting functions is intimidating and can be overwhelming for young people.

But, once again, education is pivotal in this area—education around what is and isn’t acceptable language, the impact of abuse on others, the disconnect and perceived anonymity the Internet provides, and the potential illegality of some forms of abuse.

It is encouraging to see some discussion from the Director of Public Prosecutions and the Crown Prosecution Service around this issue; it is helpful to get some clarification on what constitutes abusive language. It is also encouraging to see it stated that it is rarely in the public interest for minors to be prosecuted for this sort of thing.

I have seen a number of schools adopting a restorative justice approach to the resolution of online abuse issues, and this makes sense—in many cases the perpetrator will not be aware of the impact of their words on the victim, and placing them in an environment where they can hear about it is a powerful intervention. However, it is important that restorative justice is implemented effectively with trained counsellors—it is not something that can be done well without experienced professionals.

In summary, it is encouraging to see this inquiry exploring some of the broader issues around online safety and not just focusing on the headline-grabbing ones. However, it is still a concern that much of the language focuses on prevention and filtering rather than education and awareness. The Internet is not something that can be “policed” with technology. If we wish to protect young people and help them protect themselves, we cannot lay responsibility solely on service providers and software, just as we cannot blame the Royal Mail for delivering an unwanted letter. We need parents to be aware of the issues, we need education professionals to be able and allowed to provide an effective environment in which to learn about and discuss these issues, and we need policy makers who are aware of the broad context of online safety and the complexities of growing up in the digital age.

September 2013

1 http://www.nspcc.org.uk/Inform/resourcesforprofessionals/sexualabuse/sexting_wda93252.html

2 http://www.mdx.ac.uk/Assets/BasicallyporniseverywhereReport.pdf
