Culture, Media and Sport Committee
Written evidence submitted by Big Brother Watch
The Culture, Media and Sport Committee has decided to investigate a number of aspects of online safety that are currently raising concerns, in particular:
1. How Best to Protect Minors from Accessing Adult Content
Firstly, it should be asked: is it the role of Government, or of parents, to prevent minors from accessing adult content?
Critically, any measure must consider the privacy and freedom of speech issues involved. Given the importance of helping young people appreciate the value of protecting their personal data and of considering what they share online, intrusive “big sister” approaches should be resisted. If we raise young children with an expectation that they have no privacy, we cannot then expect them to discover the importance of privacy later in life.
With nine in ten parents aware that internet filters are freely available, it is clear that internet controls need to be made easier to use, rather than taking control away from parents and handing it to the government.
The first problem associated with online safety is that there are not only difficulties in defining adult content, but also differing views on what is unacceptable. For instance, some households would regard topless photographs as adult content whereas others would not. This is why parental discretion is important. Opt-in filters remove parental discretion, while risking that some parents will assume all adult content is impossible to access and therefore be less likely to discuss the issues it raises. These are moral, not legal, judgements and parents are the only people who can make them.
Providing choice for parents is key, and the Government has rightly pressed industry to make those choices available and easy to access. However, the Government should not seek to tell parents what the right choice is, and should have proper regard for what is technically feasible, particularly when trying to minimise circumvention.
We are concerned that the policy to date has been framed around the idea of creating a “safe” internet, and that once a filter is activated parents can relax. There is a real danger of lulling parents into a false sense of security, whilst simultaneously doing nothing to address the wider importance of education and discussion among families. Equally, the system does nothing to deal with situations where parents quite reasonably do wish to access legal content, but now feel trapped between a binary choice of “on or off”, neither of which works for them.
As a result, deciding how to categorise the 2bn+ webpages now live, whether pornography websites or eating disorder discussion boards, is a gargantuan task; given that nearly all of this content is legal, creating a framework that satisfies everyone is impossible.
Ultimately, any filtering risks driving content off the web and underground. Filtering can push vulnerable people towards more obscure sites where no context is provided, and can create communities based upon the circulation of certain content; the essential part of child safety remains dialogue with parents.
We believe that device-level filtering would be preferable to network-level filtering, yet the Government's approach has pushed the solution to the network level. Device-level filtering has several advantages. It allows granular access for different members of the family and is far harder to circumvent than a network filter. It also allows parents to add exceptions for legitimate sites that are blocked locally, or to increase the level of filtering, putting them in control of what their children can see. We would also argue that children are likely to seek parental input if they are concerned: the EU Kids Online survey (September 2011) found that 70% of children feel parental input is helpful.
Parents are far more aware than has been claimed of the risks online and of the tools available to them. Some parents will choose to trust their children and discuss the associated issues with them, rather than installing filters. Others will use filters, and some may not allow their children to have a computer in their bedroom. These are parental choices and should remain so. Equally, some of the statistics presented to justify greater controls have been deeply flawed: for example, the Advertising Standards Authority rebuked the Carphone Warehouse for its marketing around the Bemilo mobile phone service, while the figure claiming one in three 10-year-olds have seen pornography online was based upon a single canvass of a secondary school in North London by Psychologies Magazine.
Finally, we would highlight the risks of over-blocking, which does a great deal to undermine confidence in filters while also jeopardising e-commerce. If people are unable to access health advice or support services, the knock-on effects could be significantly damaging. Equally, for businesses trading online, particularly SMEs, the impact of being blocked can be financially destructive. The instances of legitimate websites being blocked and taking many months to be unblocked are something that should be given proper attention.
We agree more can be done in this area. We believe the four points of action are:
2. Filtering Out Extremist Material, including Images of Child Abuse and Material Intended to Promote Terrorism or other Acts of Violence
This question comes dangerously close to conflating illegal content with legal content.
A critical part of this policy area must be a clear, unambiguous statement that the fundamental basis for any content being filtered without the input of consumers is that the content is illegal.
The Internet Watch Foundation (IWF) produces a list of URLs that meet a strict test of hosting child abuse images. Given that no such organisation exists for material intended to promote terrorism or other acts of violence, it is unclear what process would be followed for this material. Given the political nature of some of this content, it would be unacceptable for Government to make this decision without independent oversight.
Definition and dissemination are not simple in practice, and should require due process.
Fundamentally, however, filtering this content is not and should not be seen as the priority. If content is illegal then the primary focus should be to find who is hosting it and to prosecute them, while also removing the content at source. Filtering must not become the norm simply because it is easier than pursuing multi-jurisdictional law enforcement action. (We would also highlight how this relates to the wider issues surrounding foreign policy and filtering, particularly where a Government seeks to filter content based on decisions made without due process. Our processes will be copied by regimes less democratic than our own, whether on morality or public safety grounds.)
3. Preventing Abusive or Threatening Comments on Social Media
Abusive and threatening comments are not solely an online phenomenon and it is important to maintain a sense of proportion when dealing with these issues.
It is also important to note the intent of Parliament when passing legislation, particularly the Malicious Communications Act 1988 and s127 of the Communications Act 2003, neither of which was ever intended to be a catch-all for social media behaviour. Both were passed before Twitter or Facebook existed.
Online speech is still speech and should not be subject to harsher legal sanctions because it is made online. The criminal sanctions regime that exists offline is adequate to protect from incitement and assault.
As with offline behaviour, preventing such behaviour is only ever going to be a product of education and intervention by those with a direct relationship with a child, particularly parents.
A parent-led approach can also be supplemented by an acknowledgement from social media companies of the threats and abusive language that can be used online.
Companies should have processes in place to respond to comments that constitute criminal behaviour, and this should be done, where possible, with the online community playing a role in establishing acceptable standards of conduct, rather than through top-down blocking or the closing down of accounts.
We believe that when public speech has been deemed offensive, this is best addressed through free, open and honest debate, not through prosecutions. In December 2012 the Crown Prosecution Service felt it necessary to publish interim guidelines for prosecutors on the approach to be taken in cases involving communications or messages sent via social media. The guidelines outline the need for communications to be “grossly offensive” or “obscene” for a prosecution to be brought; however, there remains an urgent need to reform laws that pose a serious risk to freedom of speech, after several baffling prosecutions in the last few years. The legislation on which these guidelines are based was designed for a completely different purpose but now gives the CPS the power to police comments made on social media.
The danger is that while these guidelines may reduce the number of prosecutions, arrests will continue, and only those with the stomach to take a case to court will escape the long-term handicap of the criminal record that comes with accepting a caution. Too frequently we see people being offended on behalf of others, and the legal system should not allow itself to be dictated to by mob rule. Instead, the police should focus on bringing to justice those who seek to cause real harm, rather than those who merely seek to cause offence.
Equally, it is important that policy reflects the fact that neither service providers nor networks are the appropriate bodies to decide what is and is not acceptable speech, particularly given that many of them will have users in countries around the world.
People, not technology, are responsible for how technology is used, and policy must respect the neutrality of networks rather than trying to turn them into an arm of enforcement for a particular moral or legal framework.
September 2013