Select Committee on Culture, Media and Sport Tenth Report


4  Controlling risk

55.  Dr Byron, in her Review, noted that risks were a reality of life in the online world.[90] Certainly, we cannot see any immediate prospect that existing causes of risk will diminish or disappear of their own accord. Therefore, the aim of public and private company policy, as Bebo observed, should be to minimise risk and to provide consumers with the knowledge and tools they need to address potential harm if and when they encounter it.[91]

Traditional media

56.  Measures to control risks from access to potentially harmful content broadcast on traditional media are well established. The Broadcasting Code administered by Ofcom guides broadcasters on the concept of harm and offence and on how it should be respected in programming policy. Broadcasters observe the nine o'clock watershed, a point after which it may be assumed that younger children will not be watching television or, if they are watching, will be doing so with the permission of their parents. However, the concept of the "watershed" is now being eroded by the 24-hour availability of material on demand or from archives, as well as the increasing number of personal video recorders. The EU Audiovisual Media Services Directive, adopted in 2007, requires each Member State to draw up a new regulatory framework for on-demand video services. Ofcom told us that the UK was in the early stages of developing the model, which was likely to be one of co-regulation, defined as self-regulation backed by statutory powers.[92] Regulation of on-demand services is currently exercised by a self-regulatory body, the Association for Television on Demand (ATVOD), whose members are required to adhere to the Association's code of practice. The ATVOD Board updates the code with practice statements which are binding on ATVOD members.[93] Ofcom told us that "effective and consistently applied content information is likely to be a significant element" of a future regulatory framework for on-demand services, along with measures such as PINs to control children's access to potentially harmful content.[94]

57.  Films have likewise been classified and accorded ratings, some of which specify a minimum age at which it is deemed that any risks from content can be managed. Films for cinematic release or for DVD are classified by the British Board of Film Classification (BBFC); the admission of children to a film or the sale of a video recording or DVD must be restricted in accordance with that classification. Breach of the conditions of a classification is an offence under criminal law. The British Board of Film Classification also has responsibility for classifying certain video games; we consider the BBFC's future role in games classification in Section 6.

Existing structures for protection from harmful content on the Internet

The Home Office Task Force

58.  Certain bodies have been established by the Government to help manage the risks arising specifically from the Internet. The most all-embracing is the Home Office Task Force on Child Protection on the Internet, formed in 2001 to bring together the Government, online technology providers, statutory and non-statutory bodies, law enforcement and child protection specialists. Main meetings are chaired by the Home Secretary or by the Minister responsible for child protection. The Home Office provides the secretariat, and a Programme Board comprising representatives from industry, charities, the Child Exploitation and Online Protection Centre (CEOP) and Government departments sets the direction. So far, the Task Force has issued various sets of good practice guidance, recommended changes to the criminal law, and developed training for professionals involved in child protection.

59.  The Home Office Task Force was much praised in evidence and is widely regarded as a good example of collaboration between the Government and the Internet-based industry. Mr Galvin, representing BT,[95] said that the Task Force "has proved to be a vital piece of glue to bring together the industry";[96] and Dr O'Connell, Chief Safety Officer at Bebo and a member of the Task Force, said that it was viewed "round the world as a model of good practice".[97] The Children's Charities' Coalition for Internet Safety (CHIS) said that the Task Force had "become a major element within the UK's self-regulatory regime" and that it "undoubtedly has performed and continues to perform an extremely valuable function".[98] It spoke of "the enormous importance which we and the industry attach both to being able to work together in this way and to being able to engage directly with senior Ministers in the Government". CHIS said that "anything which removes or weakens that political link will also weaken the political impact of the policy".[99]

60.  Dr Byron's report recommended that the existing Task Force should be transformed into a new UK Council for Child Internet Safety, with a strengthened secretariat and responsibility for leading a strategy across Government. She envisaged that the Home Office and the Department for Children, Schools and Families would chair the Council, with the roles of other Government departments, especially the Department for Culture, Media and Sport, "properly reflected" in working arrangements. Dr Byron also recommended that the Council should appoint an advisory group with expertise in technology and child development, should listen to children, young people and parents, and should have a rolling research programme.

61.  We asked Dr Byron how the new Council would differ from the Task Force. She commended the Task Force, which she said was "a model of good practice" and demonstrated that the UK has taken a responsible lead in thinking about the protection of children from potential dangers of the Internet. However, she saw a need for a body which was "properly resourced" with "more of a cross-Government feel", noting that the Internet industry was "very fatigued" at the different Government departments having "several sometimes contradictory conversations" with the industry.[100] The Children's Charities' Coalition for Internet Safety was particularly critical of the level of resources and staff support which had been devoted by the Government to the Task Force, which had never had its own budget or any staff solely dedicated to its work. The Coalition told us that the Task Force had had to rely upon underspends from other programmes for funding, and key officials had occasionally been redeployed without consultation.[101] Bebo also remarked upon the need for a co-ordinating body such as the Task Force to have dedicated civil servants, a budget and proper standing.[102] Bebo's Chief Safety Officer, Dr O'Connell, suggested that the establishment of a UK Council would facilitate communication between the industry and the wider public, the media, Members of Parliament and others.[103] The Government argued that it was time for the Task Force "to move on to the next level" and to become a body "which will have additional traction in order to make progress". Mr Coaker, Parliamentary Under-Secretary of State at the Home Office, added that the new body would be more "all-embracing" and would look at access not just to illegal content but also to content which was legal but harmful and inappropriate.[104]

62.  The Home Office Task Force on Child Protection on the Internet has, by common consent, done good work and has served its purpose well; but its loose funding and support structures have given the impression that its work is of a comparatively low priority. We agree with Dr Byron that the structure and funding of the Task Force should be formalised. We also welcome the announcement by the Government that the date for establishment of the Council is to be brought forward from April 2009 to September 2008. However, we are concerned at reports from some key players that there has been no contact with Government to take this forward and from others that there has been little opportunity to influence decisions as to how the Council will operate in practice. We expect the Government to address these issues urgently.

63.  Ofcom is a member of the Home Secretary's Task Force and has been invited to join the Council.[105] We explored what sort of role Ofcom might have in the Council's work. Ofcom itself suggested to us that it might have a research role or that it might assist in co-ordinating codes of practice.[106] Mr Coaker saw a role for Ofcom in monitoring whether the work of the Council was actually being reflected in changed practice and better procedures in the industry.[107] Ed Richards, Chief Executive Officer at Ofcom, proposed that the Council and industry should concentrate on trying to achieve early effectiveness of self-regulation, perhaps with a role for Ofcom in assessing progress made.[108] Dr Byron did not envisage that the Council itself would have powers of sanction against the industries involved. Ministers suggested that the reputation of individual companies whose practices lagged behind would be damaged and that they would appear less attractive to advertisers as a result; therefore there was a financial incentive to demonstrate good practice.[109]

64.  We asked the Government to explain why the Council was to be co-chaired by the Home Office and the Department for Children, Schools and Families. It replied that the Council would evolve from the Home Secretary's Task Force and that it was therefore appropriate for the Home Office to co-chair the Council; and it argued that the importance of the Council's work in educating parents and young people on Internet safety justified a co-chairing role for the Department for Children, Schools and Families. Other Departments, including the Department for Culture, Media and Sport, would merely have "a crucial role".[110]

65.  We asked Dr Byron what budget should be allocated to the Council. She said that she had "not thought about money" and "would not know"; but she urged that enough money be allocated to target effectively and creatively.[111] The Government appeared confident that it had a good idea of the likely resources needed, although it did not give an explicit guarantee that it would fund the Council's expenditure in full.[112]

66.  We agree that the Council, at least in its early years, should be chaired by a Minister, to ensure that Council members have direct access to public policy-making. However, we question the proposed joint chairing arrangement, which excludes DCMS Ministers. We believe that it would be unfortunate if DCMS were to appear subsidiary in Council governance, given its role in media regulation, although we recognise the practical difficulties in sharing the chairing role between many Departments: indeed, we question whether co-chairing is desirable in principle. We invite the Government to consider carefully whether to appoint a single lead minister, either from one of the Departments represented or perhaps from the Cabinet Office. There may be a case in future for the Council to be chaired by someone who sits outside Government, particularly if the role of the Council is to expand. Given that the Government has accepted Dr Byron's recommendations in full, we believe it should now move quickly to provide a budget.

67.  The work of the UK Council for Child Internet Safety as proposed by Dr Byron has not yet been fully defined: terms of reference and a work plan will be agreed by the Executive Board in September 2008.[113] We expect that there will be a significant early effort in certain areas, such as drawing up guidance and setting minimum standards, for instance on the time taken to remove potentially harmful content. While there might be an expectation that most of the Council's effort would be directed towards child protection, we believe that there is a danger of overlooking possible harm to vulnerable adults, and we recommend that the Government should give this proper consideration when deciding the Council's terms of reference.

The Child Exploitation and Online Protection Centre

68.  More recently, in recognition of the distinct threat of sexual exploitation through the Internet, the Government established the Child Exploitation and Online Protection Centre (CEOP), essentially a national law enforcement and child protection agency affiliated to the Serious Organised Crime Agency but retaining full operational independence.[114] CEOP told us that its purpose was:

  • "to identify, locate and protect children from sexual exploitation and online abuse;
  • to engage and empower children, young people, parents and the community through information and education; and
  • to enhance existing responses by working with industry to make the online environment safer by design and by improving the management of high risk offenders".[115]

CEOP deals primarily with cases referred to it and receives on average between 450 and 550 reports per month.[116] Dr Byron commended CEOP strongly for its work;[117] so did Mr Coaker, Parliamentary Under-Secretary of State at the Home Office, who described the Centre's Chief Executive Officer, Mr Gamble, as "excellent" and "a trailblazer".[118]

69.  When we took evidence from Mr Gamble in March 2008, he pressed for more funding for CEOP, partly to meet the extra demand arising from abuse of social networking services and to deal with a backlog of cases. He estimated that there was a backlog of about 700 cases, albeit ones in which there was least evidence of a need for urgent action.[119] He also spoke of "huge pressure on our referral staff" and noted that "you cannot switch off the tap".[120] The Home Office told us that it had provided £5.65 million for CEOP in 2007-08: this had been supplemented by £3.19 million in cash and in kind from various sources, including Government departments, charities and industry. For 2008-09, the Home Office has provided a budget of £5.77 million; figures for contributions from other Government departments, charities and industry are not yet available.[121] The Home Office said that while it had not received a formal request from CEOP for more funding, it would of course consider any approach made; and it stressed that it was inconceivable that the Government would not fund CEOP in the future.[122]

70.  We are much impressed by the work of the Child Exploitation and Online Protection Centre and its close co-operation with charities such as the National Society for the Prevention of Cruelty to Children. However, we are concerned that levels of funding are not keeping pace with the increasing volume of work which is referred to the Centre, and we therefore encourage the Government to look favourably on any request by CEOP for increased resources. We also welcome the financial contribution made by charities and industry, and we believe that the latter should be increased: business models for Internet-based services rely upon public confidence that networking sites are safe to use, and CEOP plays a large part in delivering that safety.

The Internet Watch Foundation

71.   Businesses which provide Internet content, access and services, either from fixed terminals (such as PCs) or from mobile devices, have recognised the particular dangers posed by certain types of content on the Internet. In 1996, the Internet Watch Foundation was formed as a self-regulatory industry body, to minimise the availability of potentially illegal or otherwise harmful content on the Internet, and to take action to prevent exposure to illegal content, in particular by:

  • operating a hotline enabling the public to report such instances;
  • operating a notice and takedown service to alert hosting service providers of criminal content found on their servers; and
  • alerting relevant law enforcement agencies to the content.

All major Internet service providers and search engines are members of the Foundation, as are all mobile network operators.[123]

72.  The Foundation's Board aims to minimise the availability of potentially illegal Internet content, specifically:

  • child sexual abuse images hosted anywhere in the world;
  • criminally obscene content hosted in the UK;
  • content inciting racial hatred hosted in the UK.[124]

For illegal content hosted in the UK, the Foundation may issue a "take-down" notice to the content host. If the content host fails to comply, it becomes liable to prosecution.[125] Mr Galvin, representing BT, told us that he could not think of a single example when an Internet service provider had refused to take down a site once it had been requested to do so.[126] If the content is hosted outside the UK, the Foundation has no powers to require the site host to take down the material: it told us that "we do not have any relationships with any other government or any other hotline in the world to put them on notice about these types of websites".[127] It will, however, inform the relevant authorities and add the website to its database of addresses hosting illegal content.[128]

73.  The overwhelming majority of domestic consumers' Internet connections are operated by Internet service providers (ISPs) which block access to sites listed in this database. All major search engines block access to such sites.[129] The Foundation claimed that it had achieved "remarkable internationally recognised success" in that, since 2003, less than 1% of reports of online child sexual abuse content processed by the Foundation had been traced to content hosted in the UK, compared to 18% in 1997.[130] However the Chief Executive of the Internet Watch Foundation did tell us that, with a degree of technical knowledge, it is possible to circumvent the blocks imposed.[131]
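
By way of illustration only, the sketch below (in Python) shows, in greatly simplified form, how a blocklist of the kind maintained by the Internet Watch Foundation might be applied by an ISP or search provider: requests for notified addresses are refused rather than served. It is not a description of any provider's actual system; the blocklist contents, function names and matching rule are all hypothetical.

    # Illustrative sketch only: a simplified URL blocklist check of the kind an
    # ISP or search engine might apply to addresses notified by a hotline such
    # as the Internet Watch Foundation. List contents and names are hypothetical.
    from urllib.parse import urlparse

    # Hypothetical notified addresses, held as (hostname, path prefix) pairs so
    # that a single page can be blocked without blocking a whole site.
    BLOCKLIST = {
        ("blocked-host.example", "/images/"),
        ("another-host.example", "/"),
    }

    def is_blocked(url: str) -> bool:
        """Return True if the URL falls under a notified hostname and path prefix."""
        parsed = urlparse(url)
        return any(
            parsed.hostname == host and parsed.path.startswith(prefix)
            for host, prefix in BLOCKLIST
        )

    def handle_request(url: str) -> str:
        # A network-level filter would typically return an error page rather
        # than the requested content when a match is found.
        if is_blocked(url):
            return "403: access to this address has been blocked"
        return "fetch and return the requested content"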

Other self-regulatory structures and codes

74.  Some businesses providing services using the Internet or other new media have themselves taken steps to draw up minimum standards. In some cases, these standards have been agreed with other interested parties, through the Home Office Task Force on Child Protection on the Internet. In December 2005, for instance, the Task Force published good practice guidance for providers of search services, stressing the importance of early advice to users on the risks and recommending that search providers should offer filters and clear facilities for users to report inappropriate material. Most recently, the Task Force has published good practice guidance for providers of social networking and other user-interactive services: this guidance includes safety tips for parents and for children and young people.[132]

75.  Mobile network operators adhere to a Code of Practice on New Forms of Content, drawn up in 2004. The Code covers classification of commercially supplied content, access controls for content classified as being unsuitable for people under 18, provision of parental controls to restrict access to internet sites, action against bulk communications and malicious communications, and the provision of information and advice on protection. O2 said that the Code had been widely recognised as a model of best practice, had been adopted by mobile operators in a number of other countries, and formed the basis of the EU Framework for Safer Mobile Use. Under this framework, mobile phone operators in EU Member States have committed to control access to adult internet content, run awareness-raising campaigns for parents and children, and classify commercial content according to national standards of decency and appropriateness.[133] We note that the effectiveness of the Code is being reviewed by Ofcom and that network operators will be undertaking a separate consultation exercise on updating the Code.[134]

76.  Content provided by third parties (such as film clips or video games) and accessible using mobile devices is classified by the Independent Mobile Classification Body, using a framework based on existing standards in other media.[135] The only distinction made is between "18-rated" and "unclassified";[136] it is difficult to verify other age distinctions remotely.[137] T-Mobile told us that in three years there had been no official complaints upheld by the IMCB and that the classification framework was well understood by those providing content for viewing on mobile devices; and it added that obligations to meet the requirements of the Code were contained in all service contracts.[138] Mr Carrick-Davies, Chief Executive of Childnet International, said that the mobile industry had "done a great deal of work in this country in identifying a code of practice which gives parents greater choices", largely through filter settings for Internet access. The difficulty, he believed, lay in ensuring that parents were aware of the options.[139]

Controlling content-based risks

77.  The contrast between the care taken to regulate content broadcast using traditional means—through dedicated regulators, codes and sanctions—and the near absence of control over access to Internet content is striking. However, it is incorrect to assume that the Internet is beyond all regulation. Mr Purvis, Partner for Content and Standards at Ofcom, pointed out that "it is not as if there is a completely wild west on the Internet: the Internet content creators are subject to the law of the land like any other content creators".[140] Likewise, Mr Carrick-Davies, Chief Executive of Childnet International, stressed that the Internet was "not some moral vacuum" and that there were "some very good existing laws" applicable to providers of both online and offline services.[141]

78.  There are several main approaches to the control of access to illegal or potentially harmful content. These include:

  • Designating material as illegal to possess or distribute, thereby prompting Internet service providers which become aware of such material to block access to it or to take it down if they host it;
  • Removal by website owners or hosts of material which, although not illegal, is unacceptable under their Terms and Conditions or Acceptable Use policies;
  • Filtering applied at network level; and
  • Filtering applied locally at consumers' own discretion.

Designation of content as illegal

79.  It is already an offence to publish or distribute certain types of content on the Internet or anywhere else: these include indecent images of children, material which is deemed obscene and likely to "deprave or corrupt", words intended to stir up racial hatred, and statements likely to be understood as encouraging terrorism. The Internet Watch Foundation told us that a recent interpretation of the Obscene Publications Act 1959 had determined that the presence of explicit pornography on the front page of a website hosted in the UK, which a child under 18 might see, may fail the test under the Act.[142]

80.  In certain cases it is not merely publication which is illegal: it is an offence to possess indecent images of children,[143] extreme pornography[144] (but not other pornography, even though it may fail the "publication" test under the Obscene Publications Act 1959), and any "terrorist publication" if it can be demonstrated that it is possessed with a view to distribution or provision to others, including by electronic means.[145] Downloading such material from the Internet, including images, is likely to constitute "possession". The Internet Watch Foundation told us that 12% of the reports of alleged criminal obscenity which it received would fall within the scope of the new offence of possession of extreme pornography.[146] The Mobile Broadband Group confirmed that mobile network operators would respond to "notice and take down" requests from the Foundation which related to extreme pornography hosted in the UK.[147]

81.  Conceivably, a person may by chance come across material on the Internet which it is illegal to possess. The Government told us that the new offence of possession of extreme pornography was not intended to target those who accidentally come into contact with obscene pornography.[148] We note that the Internet Watch Foundation told us that the decision by Internet service providers, mobile operators and search providers to block access to or take down sites displaying harmful or illegal material had been motivated by a desire to protect consumers from "stumbling across" potentially harmful or illegal content by accident.[149]

82.  The Internet Watch Foundation will also prompt Internet service providers to bar access to sites which publish content which would be unlawful if hosted in the UK. In 2007, the Internet Watch Foundation processed 847 cases of content reported as potentially inciting racial hatred and assessed 203 of those as being likely to do so; but only one report identified content which could be traced to a UK host and which could therefore be forwarded to relevant authorities in the UK.[150] It told us that it received reports of sites featuring mutilation or extreme and graphic violence, but such sites were rarely if ever hosted in the UK even though the material was available to consumers in the UK.[151]

83.  The Government told us that it was dedicating resources to identify illegal material on the Internet, such as material which either incited or encouraged terrorism or would prove useful in the preparation and commission of terrorist acts. The Government said that "we now want to work with ISPs and others in the industry to take a more proactive role in identifying this material so that it can be removed if hosted in the UK". For material which is hosted outside the UK, the Government said that it would "work with the industry to identify ways to limit the availability of illegal terrorism-related material".[152]

Removal of content for non-compliance with terms of use policies

84.  Sites hosting content—particularly user-generated content—generally operate according to Terms of Use or their equivalent[153] which specify types of content (images or text) which, although not necessarily illegal, are deemed to be unacceptable.[154] Once such sites become aware that they are hosting material which crosses their threshold for content standards, they will normally take it down. For instance, Mr Galvin, representing BT, told us that a pro-suicide site would cross that threshold "by miles".[155] Microsoft told us that it would take down content which, for example, advocated terrorism, as being in breach of its terms and conditions.[156] The Internet Service Providers Association told us that Acceptable Use Policies helped to combat the illegal and inappropriate use of the Internet and encouraged consumers to behave responsibly online.[157] Mobile network operators similarly told us that they would exercise the right to refuse to host material or to remove a user's right to participate in a chatroom which they hosted if they were "unhappy" with behaviour or content.[158]

85.  YouTube, for example, has developed Community Guidelines and Terms of Use, indicating what types of content are acceptable. The language used is clear, concise and accessible, as the following excerpts show:

  • "There is zero tolerance for predatory behaviour, stalking, threats, harassment, invading privacy, or the revealing of other members' personal information. Anyone caught doing these things may be permanently banned from YouTube.
  • We encourage free speech and defend everyone's right to express unpopular points of view. But we do not permit hate speech (speech which attacks or demeans a group based on race or ethnic origin, religion, disability, gender, age, veteran status and sexual orientation/gender identity).
  • Graphic or gratuitous violence is not allowed. If your video shows someone getting hurt, attacked or humiliated, don't post it.
  • YouTube is not for pornography or sexually explicit content. If this describes your video, even if it's a video of yourself, don't post it on YouTube. Also, be advised that we work closely with law enforcement and we report child exploitation".

We note that, although someone intending to upload a video on YouTube would be prompted to read the Community Guidelines and Terms of Use, they can proceed to upload it without having done so and without even having ticked a box to say that they had done so.[159] We strongly recommend that terms and conditions which guide consumers on the types of content which are acceptable on a site should be prominent. It should be made more difficult for users to avoid seeing and reading the conditions of use: as a consequence, it would become more difficult for users to claim ignorance of terms and conditions if they upload inappropriate content. The UK Council for Child Internet Safety should examine this at an early stage and produce recommendations as to how it is best achieved.

86.  We are also concerned that user-generated video content on sites such as YouTube does not carry any age classification, nor is there a watershed before which it cannot be viewed. We welcome efforts by YouTube to identify material only suitable for adults, such as that containing foul language, and to develop potential controls to prevent children from accessing it.

"Flagging" content

87.  It is common for services which host or enable access to user-generated content to enable consumers to report or "flag" material which seems not to conform to the site's standards. Once content has been flagged, a member of staff will review it to assess whether it violates the terms of use. Google (which owns YouTube) told us that the large majority of material which users flagged was reviewed (and if necessary removed) within one hour; more time was required in cases when it was not immediately clear whether a video was a documentary or whether it was promoting or glorifying violence.[160] It added that only a small fraction of user-generated content was flagged as harmful or inappropriate and that an even smaller fraction required removal, in which case the user would be notified. Repeated violations of YouTube's Terms of Use by a user will lead to deletion of their account.[161] Google supplied further information on "flagging" in confidence.
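
The evidence does not set out the mechanics of these review systems. The Python sketch below illustrates, under stated assumptions, the general "flag and review" workflow described above: users flag items, staff review them against the terms of use, violating items are removed, and repeated violations lead to account deletion. The class, the strike threshold and the data structures are hypothetical rather than a description of any particular site's process.

    # Illustrative sketch only: a simplified "flag and review" workflow of the
    # general kind described above. The strike threshold and data structures
    # are hypothetical, not a description of any particular site's systems.
    from collections import defaultdict

    STRIKE_LIMIT = 3  # hypothetical number of upheld violations before account deletion

    class ModerationQueue:
        def __init__(self):
            self.pending = []                # flagged items awaiting staff review
            self.strikes = defaultdict(int)  # upheld violations recorded per uploader

        def flag(self, item_id: str, uploader: str, reason: str) -> None:
            """Called when a user reports content as apparently breaching the terms of use."""
            self.pending.append({"item": item_id, "uploader": uploader, "reason": reason})

        def review(self, violates_terms) -> list[str]:
            """Staff review each flagged item; violating items are removed and repeated
            violations lead to deletion of the uploader's account."""
            removed = []
            for entry in self.pending:
                if violates_terms(entry):    # human judgement, supplied as a callable
                    removed.append(entry["item"])
                    self.strikes[entry["uploader"]] += 1
                    if self.strikes[entry["uploader"]] >= STRIKE_LIMIT:
                        print(f"account {entry['uploader']} deleted for repeated violations")
            self.pending.clear()
            return removed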

88.  We raised with Google (which owns YouTube) a notorious case in which a video of alleged gang rape uploaded to YouTube was viewed 600 times before being removed from the site. Mr Walker, General Counsel for Google, suggested that those 600 views may in fact have included repeat viewings by a smaller number of people; but he accepted that a mistake had been made in that the video had been flagged for review but had not, through human error, been taken down from the site. Mr Walker told us that review procedures had been changed to reduce the likelihood of such an error occurring again.[162] He also said that the error represented "an infinitesimal percentage of the material which we are reviewing".[163] Google pointed out in a confidential supplementary memorandum that the exact details of the case were still under police investigation. Nonetheless, the system failed, and it is difficult to know whether or not this was an isolated incident.

89.  Ofcom noted that it is not possible to determine empirically how effective site review processes actually are. It described YouTube's review process as being "opaque to the public" and added that "because it is impossible to determine what proportion of content is potentially harmful, there is no means to assess the overall effectiveness of the system".[164] Ofcom proposes that the industry might draw up a code under which those who make user-generated content available would increase the transparency of their review processes, for example by reporting on times for dealing with "flags" or reports and on communications with complainants. Ofcom suggested that, ideally, a code would include independent verification of performance.[165] We agree, and we return to this issue in paragraphs 99 and 153 below.

Preview and review of user-generated content

90.  It is not standard practice for staff employed by social networking sites or video-sharing sites to preview content before it can be viewed by consumers. It was put to us that to pre-screen all material before it was published on the Internet would be impractical, because of the sheer volume of material being uploaded.[166] In the case of YouTube, this is approximately 10 hours of video every minute.[167] Instead, YouTube relies upon "millions of active users who are vocal when it comes to alerting us to content they find unacceptable or believe may breach our policies".[168] Google (which owns YouTube) also told us that "we don't, and can't, review content before it goes live, any more than a telephone company would screen the content of calls or an ISP would edit e-mails".[169] We are not convinced that this is a valid analogy: a person who makes a telephone call or who sends an e-mail does so in the expectation that the content will normally remain private. Content uploaded to many websites is generally intended for public notice and may be accessible by a person of any age in almost any part of the world.

91.  The BBC's guidelines for online services identify services which merit previewing of content, particularly sites which are designed to appeal to children or which invite users to send pictures by e-mail, or sites featuring live chat where users talk to a celebrity guest.[170]

92.  Some companies are working on filters to screen material before it is uploaded to their sites. Mr Walker, General Counsel for Google, told us that technology existed to block automatically the reposting of videos that had already been deemed to violate Terms of Use and had been taken down.[171] He added that work was under way to develop software which could identify pornographic images and prevent them from being uploaded to YouTube.[172] The difficulty will be to refine the technology to the point where, for example, it can distinguish pornography from other images where flesh is exposed. Google observed in its written memorandum that "technology can sometimes help but it is rarely a complete answer".[173] Dr Byron indicated that research was under way to explore ways of scanning both an image and accompanying text to assess material.[174] Kevin Brennan, a Minister at the Department for Children, Schools and Families, clearly saw this as a possible way forward.[175]
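
The evidence does not describe how such re-upload blocking works in practice. The sketch below shows one simple approach, under the assumption that a digest of each removed file is retained and compared against new uploads; the names are hypothetical. A plain cryptographic hash, as used here, catches only byte-for-byte identical copies, whereas production systems would rely on perceptual fingerprinting that survives re-encoding or cropping.

    # Illustrative sketch only: blocking the automatic re-upload of material that
    # matches files already taken down, using stored content digests. The data
    # structure and function names are hypothetical.
    import hashlib

    taken_down_digests: set[str] = set()   # digests of material previously removed

    def record_takedown(data: bytes) -> None:
        """Remember the digest of a file that staff have removed."""
        taken_down_digests.add(hashlib.sha256(data).hexdigest())

    def accept_upload(data: bytes) -> bool:
        """Reject an upload automatically if it is identical to removed material."""
        return hashlib.sha256(data).hexdigest() not in taken_down_digests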

93.  Some companies actively review content once it has been uploaded. MySpace told us that it reviewed "each image and video that is uploaded to the MySpace server for compliance with the Terms of Use and Photo policy"; this was confirmed to us when we visited MySpace's offices in the US.[176] Several hundred people are employed by MySpace to review images and videos after they have been posted. Inappropriate material is normally taken down within two hours, although there is a target to reduce that to one hour.[177] Bebo also has a Member Media Review team which searches for inappropriate images and videos already uploaded to the site. Typically, such material is removed within 24 hours after it has been uploaded, although Bebo said that "it is impossible to find all inappropriate content".[178]

94.  Some providers claim that the provisions of the E-Commerce Directive restrict their ability to take down material before there have been complaints about it. Under regulation 17 of the Electronic Commerce (EC Directive) Regulations 2002 (which transpose the Directive into UK law),[179] companies that transmit Internet content on behalf of others (such as a user's profile page on a social networking site) cannot be held liable for anything illegal about the content if they did not initiate the transmission, select the receiver, or select or modify the information contained in the transmission. Nor is a service which hosts Internet content liable for damages or for any criminal sanction as a result of that storage if it does not have "actual knowledge" of unlawful activity or information and if, on becoming aware of such activity, it acts "expeditiously" to remove or to disable access to the information.[180]

95.  Some ISPs and operators of social networking sites have expressed a fear that any effort which they make to scan content automatically could be interpreted by a court as meaning that they have "actual knowledge" of all the content they host, causing them to become liable in law for content which wrongly survives the scan. Dr Byron challenged this reasoning, arguing that such an approach "is a bit like saying that it is unfair to ask companies to survey their premises for asbestos in case they find some but fail to remove it safely", and she believes that "on this issue, companies should not hide behind the law".[181] We do not believe that it is in the public interest for Internet service providers or networking sites to neglect screening content because of a fear that they will become liable under the terms of the EC E-Commerce Directive for material which is illegal but which is not identified. It would be perverse if the law were to make such sites more vulnerable for trying to offer protection to consumers. We recommend that Ofcom or the Government should set out their interpretation of the circumstances in which the E-Commerce Directive places liability upon Internet service providers for content which they host or to which they enable access. Ultimately, the Government should be prepared to seek amendment to the Directive if it is preventing ISPs and websites from exercising more rigorous controls over content.

96.  We found the arguments put forward by Google/YouTube against their staff undertaking any kind of proactive screening to be unconvincing. To plead that the volume of traffic prevents screening of content is clearly not correct: indeed, major providers such as MySpace have not been deterred from reviewing material posted on their sites. Even if review of every piece of content is not practical, that is not an argument for undertaking none at all. We recommend that proactive review of content should be standard practice for sites hosting user-generated content, and we look to the UK Council proposed by Dr Byron to give a high priority to reconciling the conflicting claims about the practicality and effectiveness of using staff and technological tools to screen and take down material.

97.  Mr Walker, General Counsel for Google, agreed to consider the possibility of using data on users' histories, numbers of people viewing an item, and video file labels to see whether there was an effective way to minimise the amount of controversial material posted onto YouTube.[182] File titles and screening tools can help to identify files which appear to present a particular risk of exposure to inappropriate material. We encourage sites which handle user-generated content to develop as a priority technological tools to screen file titles and to prevent the upload of, or quarantine, material which potentially violates terms and conditions of use until it has been reviewed by staff. We also encourage sites to share their knowledge and expertise at the UK Council for Child Internet Safety, with a view to developing codes of practice for prior screening of material.
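
As a purely illustrative sketch of the kind of title screening recommended above, the following Python fragment quarantines uploads whose titles contain terms on a watch-list until a member of staff has reviewed them. The watch-list, the data structures and the matching rule are hypothetical and far cruder than anything a site would deploy in practice.

    # Illustrative sketch only: a keyword screen on file titles that quarantines
    # potentially problematic uploads pending human review. The watch-list is
    # hypothetical.
    SUSPECT_TERMS = {"fight", "attack", "explicit"}

    quarantine: list[dict] = []   # uploads held back until reviewed by staff

    def screen_upload(item_id: str, title: str) -> str:
        """Quarantine an upload if its title matches the watch-list; otherwise publish."""
        words = {word.strip(".,!?").lower() for word in title.split()}
        if words & SUSPECT_TERMS:
            quarantine.append({"item": item_id, "title": title})
            return "quarantined pending staff review"
        return "published"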

Take-down times

98.  As already noted, almost all social-networking sites and those hosting user-generated content rely on their users to notify them of inappropriate material that has been posted. Once "flagged", it is then reviewed and if found to be in breach of the terms and conditions, it is taken down. We asked representatives of Internet service providers how long it took to take down content once notified. We were told that, for child abuse content, the industry standard was 24 hours, although that was "best practice" and the Internet Service Providers Association did not give us an assurance that it was consistently met.[183] For content other than child abuse content, there might be a need for "greater reflection".[184] We find it shocking that a take-down time of 24 hours for removal of child abuse content should be an industry standard.

99.  Dr Byron observed that companies may be unwilling to make specific public commitments to respond to breaches of acceptable use policies as quickly as possible (i.e. by taking down material) as they fear that such commitments might become part of a company's contract with users, exposing the company to litigation if it fails to meet the target (perhaps because of a surge in complaints).[185] She nonetheless recommended that sites should be encouraged to sign up to specific commitments on take-down times. The Chief Executive of Ofcom was particularly critical of the lack of clarity about take-down procedures.[186] We believe that there is a need for agreed minimum standards across industry on take-down times in order to increase consumer confidence. We recommend that the UK Council for Child Internet Safety should work with Internet-based industries to develop a consistent and transparent policy on take-down procedures with clear maximum times within which inappropriate material will be removed. This should be subject to independent verification and publication. We return to this issue in paragraph 153.

Filtering software

100.  A large number of software suppliers specialise in providing software to protect customers and their children while on the Internet. All leading Internet service providers in the UK offer filtering products, and the majority of UK broadband subscriptions are offered with a free filtering package.[187] Such software enables parents to block access to websites considered to be unsuitable. Microsoft described its Family Safety Settings service, which allows parents to choose settings which allow, block or warn for a range of content categories, including Web-based chat and e-mail communications.[188] Google has developed its own SafeSearch filter, which will exclude webpages featuring pornographic or explicit content from search results.[189] The filter has three settings: no filter, filter pages with explicit images, and filter pages with explicit images or text.
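
The following sketch illustrates, in simplified form, how a three-level search filter of the kind described above might be applied to a list of results. It assumes that each result already carries flags from some upstream classifier; the flag names and the FilterLevel type are hypothetical and do not describe Google's actual implementation.

    # Illustrative sketch only: applying a three-level search filter (no filtering,
    # filter explicit images, filter explicit images or text) to classified results.
    # The classification flags are assumed to come from an upstream classifier.
    from enum import Enum

    class FilterLevel(Enum):
        OFF = 0
        IMAGES = 1            # exclude pages flagged for explicit images
        IMAGES_AND_TEXT = 2   # also exclude pages flagged for explicit text

    def apply_filter(results: list[dict], level: FilterLevel) -> list[dict]:
        """Drop results whose flags exceed what the chosen setting allows."""
        if level is FilterLevel.OFF:
            return results
        filtered = [r for r in results if not r.get("explicit_images")]
        if level is FilterLevel.IMAGES_AND_TEXT:
            filtered = [r for r in filtered if not r.get("explicit_text")]
        return filtered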

101.  Search filters can be effective, if crude. One of the inherent dangers is overblocking, which can become a safety issue. The BBC's submission to the Byron Review cited a report from one operator of online parental controls saying that "overblocking was the number one reason why parents turned off their parental controls". One filtering product was found to block access to the BBC's own site for children, CBBC. The BBC noted that a BSI kitemark standard was to be developed for online parental controls, and it urged that software should only receive the kitemark if it could demonstrate low levels of overblocking.[190]

102.   Although most filtering tools under-block or over-block access to some extent, Ofcom cited evidence that such tools were indeed capable of filtering potentially harmful content without seriously degrading children's experience of the Internet.[191] The Internet Watch Foundation conducted a web search using unambiguous and explicitly adult terms with search filters switched off; search results returned over 10 million pages, with all of the links on the first three pages of search results linking directly to content which might be considered as potentially illegal under UK law if hosted in the UK and which would almost certainly be deemed inappropriate for children to view. The same search query with filters applied returned a list of 1.6 million web pages, most of which were online discussions or news items; no direct links to sexually explicit items were found. A similar experiment, this time including the word "teen" as a search term, generated a similar outcome.[192]

103.  BT stressed the flexibility of "device-based controls", which could be applied in the home and which could permit different access regimes for use by different family members.[193] They nonetheless require an effort by parents or carers to install and configure them, particularly if the benefits of that flexibility are to be realised.[194] An analysis of recent research by Ofcom suggested that 83% of parents were aware of the existence of software to filter access to Internet content, but only 54% said that they had installed any.[195]

104.  Filters to control access to Internet content using mobile phones operate at network level rather than being applied directly by consumers from handsets: an account holder simply specifies whether or not a handset may be used for access to adult content. Mobile network operators use age verification techniques to secure authority to lift blocks on access to adult content such as gambling, open chatrooms (those not hosted by network operators' portals) and games. Typically, those techniques will require users to submit credit card details or to present documents at an outlet of the relevant network operator.[196]
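
By way of illustration, the sketch below represents the arrangement described above as a per-account setting held by the network: adult-rated destinations are blocked unless the account holder has passed age verification and asked for the bar to be lifted. The account records, telephone numbers and function names are hypothetical.

    # Illustrative sketch only: a per-account adult-content bar applied at network
    # level, lifted only once the account holder has passed age verification.
    # Account data and names are hypothetical (numbers are in the reserved
    # fictional UK mobile range).
    accounts = {
        "07700900001": {"adult_bar_lifted": False},
        "07700900002": {"adult_bar_lifted": True},   # age verified, bar lifted
    }

    def may_access(msisdn: str, destination_is_adult_rated: bool) -> bool:
        """Allow general content for all accounts; allow adult-rated content only
        where the account holder has had the bar lifted after age verification."""
        if not destination_is_adult_rated:
            return True
        return accounts.get(msisdn, {}).get("adult_bar_lifted", False)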

105.  Mr Brennan, Parliamentary Under-Secretary of State at the Department for Children, Schools and Families, suggested that the mobile phone industry was in some senses leading the way in the protection of children from images which might be more easily accessible by other means, and he believed that they deserved credit for their approach.[197] Various factors combine to make network level filtering a viable proposition for access to the Internet through mobile devices: the market for filtering packages which can be installed locally on mobile devices is undeveloped; devices are usually used by one person rather than by several different family members, each of whom might need a different access regime; and the volume of Internet traffic using mobile phones is much lower than for fixed Internet connectivity (using a PC).[198]

Age verification

106.  Age verification is imperfect as a control. MySpace noted that there was no effective age verification mechanism "due to technical, legal and data challenges";[199] and Ofcom outlined three flaws:

  • The difficulty of confirming age without a physical link: in Germany, access to regulated providers of online pornography is only granted upon production of identification at a post office, or a demonstration of identity via live webcam, or the physical receipt of a personal ID USB-chip. Requiring use of a credit card as proof of age is one solution but would exclude those who prefer to use debit cards (which are available to people aged under 18);[200]
  • The cost to the provider of obtaining verifiable consent can be very high; and
  • Placing the onus on the content provider to verify age may prompt providers to move to another jurisdiction where requirements are less onerous: anecdotal evidence suggests that registered providers of pornography to people in Germany had taken this step.

Nevertheless, Ofcom concluded that age verification, although not entirely secure, could help to control the availability of harmful content.[201]

107.  The BBC requires children who sign up to use message boards on BBC websites to provide their date of birth; but it pointed out in its evidence to the Byron Review that "as there is no accessible national database we can use to check identity and age against, we are unable to confirm these ages, but we do all we can to keep the boards safe and age relevant". The BBC said that it would welcome "a pan-industry initiative to explore the feasibility of developing an age verification system that could "talk" to a secure and reliable database containing relevant children's personal information".[202] Dr Byron endorsed this proposal in her Report.[203]

Conclusions on access to potentially harmful content

108.  There are many types of content on the Internet which are in most people's opinions distasteful or potentially harmful, but which are not illegal. Regulating access to such material depends largely on reports from consumers to site hosts and prompt action to take down content if it breaches Terms of Use or their equivalent.

109.  Internet service providers maintain that they should not police the Internet outside that part which they "own" and which is governed by their Terms and Conditions of Use. Google told us that it should not be the arbiter of what does and does not appear on the Internet, maintaining that this was a role for the courts and for the Government.[204] This position was supported by Dr Byron[205] and by mobile network operators, who pointed out that whereas they were free to impose conditions on the content hosted on their portals, they did not see it as their role to block access to sites which were hosted outside those boundaries. They maintained that it was the responsibility of consumers to decide whether or not they wished to retain the ability to view "legal but unpalatable" material.[206]

110.  However, it should not be assumed that Internet service providers are opposed to tighter controls. Indeed, greater clarity about what is legal and what is not would be welcomed by some. Mr Galvin, representing BT, described the position in relation to possession of child abuse material as "unusually clear". He said that it "would be so much easier to produce a list" of sites featuring other types of potentially harmful content which should be blocked if the law were to be as clear in those areas. The Secretary-General of the Internet Service Providers Association stressed that the industry would welcome greater clarity, which would enable businesses to enforce their terms and conditions.[207] Dr Byron believed that the Government should define what was illegal and communicate that definition to ISPs.[208]

111.  The Internet Watch Foundation pointed out that widening the blocking approach to other categories of material would not be easy: there was no common agreement about what might be blocked. For Internet service providers, the mechanics of imposing network level blocking to prevent access to pornography sites, of which there were "tens of thousands if not millions", would be a technical challenge and could slow down Internet traffic. However, the Foundation's Chief Executive, Mr Robbins, told us that "if there is a will, it can be done".[209] Orange, which is both a mobile network operator and an Internet service provider, agreed that the list of child abuse websites could serve as a model for other types of material if the Government were to deem it appropriate and provided that a list could be made available.[210]

112.  Ofcom observed that the broad approach taken by the UK was replicated elsewhere but not necessarily as effectively, and it sketched the process by which access to illegal content is handled in Germany, where a blacklist of web addresses is compiled by Government agencies and by the Freiwilligen Selbstkontrolle Multimedia (FSM), a self-regulatory body. If illegal content is identified as being hosted in Germany, the host is issued with a take-down notice; if the content is hosted outside Germany, the address is notified to search providers, which will not provide links to the website on which the content is hosted. However, people in Germany may still be able to access the content if they know the website address: in the UK this would be impossible, as access would be blocked by Internet service providers rather than by search providers.[211]

113.  Not all witnesses favoured an approach which designates more types of content as illegal and which places an onus upon ISPs and others to prevent access once they become aware of such content. Professor Livingstone said that her understanding was that the industry was already making "all kinds of decisions about when to permit a certain kind of website to continue or to change the content" and that "all kinds of content regulation is going on under the banner of self-regulation". Her preference, rather than to designate more content as illegal, would be for "a more clear and transparent and coherent code" to manage content so that it is difficult for a casual surfer to come across.[212] However, the dangers posed by some types of content arise not so much from casual or accidental access but through purposeful dissemination and incitement to illegal acts.

114.  At a speech to the Convergence Think Tank on 11 June 2008, the Secretary of State for Culture, Media and Sport raised the question of whether standards applicable to traditional broadcast material might be applied in some way to online content. He appeared to take a view that they should be, saying "I worry that the online world will simply wash away all of the standards that have built up over time" and rejecting the view that standards have "no place online".[213]

115.  There has been public alarm that Internet sites which provide information about committing suicide could be encouraging vulnerable people to commit self-harm or suicide and could have played a part in increasing the incidence of suicide in certain geographical areas.[214] Papyrus, a charity dedicated to the prevention of suicide by young people, told us that it had recorded 30 cases of suicide in the UK since 2001 in which the Internet had played a significant role by "providing detail of method and also direct encouragement through chatrooms". It drew our attention to two cases of "direct online promotion of suicide", including one in which a person died while filming himself online and being "encouraged" by a number of people.[215]

116.  The Law Commission examined the treatment of assisting suicide in law in 2006 and concluded that the law offered an "adequate" solution to the problem. It found that, in the case of suicide websites, the act of publishing the website could be shown to constitute an offence contrary to the Criminal Attempts Act 1981, if the publisher had the requisite intention. Under section 1 of the Act, a person who intends to commit an offence and who does an act which is more than merely preparatory to the commission of that offence, is guilty of attempting to commit that offence.[216] The Commission also said that publication would be an offence under the Suicide Act 1961 if it could be shown that any single visitor to the website had been aided by the website to commit suicide.[217]

117.  Mr Coaker, Parliamentary Under-Secretary of State at the Home Office, told us that "the Law Commission was clear that aiding and abetting suicide, whether it be online or offline, is illegal". He had a definite view on websites which encouraged or assisted suicide: "something should be done about it and it should be taken down".[218] Mr Wicks, Minister of State at the Department for Business and Enterprise, indicated in June 2008 that the Government was "deeply concerned" about suicide websites and that the Ministry of Justice was looking urgently at whether the law could be strengthened. An announcement is to be made shortly. We await the announcement by the Ministry of Justice on whether the law might be strengthened to help prevent the use of the Internet to encourage suicide. Even if it concludes that the offence of assisting suicide is clear enough in law to enable successful prosecutions of those responsible for websites which assist or encourage suicide, we believe that the law should not be the only means of controlling access. The characteristics of the offence should be clear enough in law to enable access to such sites to be blocked on a voluntary basis, possibly through the procedures established by the Internet Watch Foundation. The UK Council for Child Internet Safety should accord a high priority in its work programme to discussions with the Ministry of Justice on whether the law on assisted suicide is worded clearly enough to include websites which encourage suicide and to enable action to be taken to block access to websites which assist or encourage suicide.

118.  While the Internet Watch Foundation has played a major part in protection from illegal and potentially harmful content, by notifying host sites in the UK that they should take down material, or by prompting Internet service providers to block access to sites hosted overseas, we received evidence warning strongly that this approach was seriously flawed. Dr Richard Clayton, a researcher in the Security Group of the Computer Laboratory at Cambridge University and author of several academic papers on methods for blocking access to Internet content, pointed out that there was no single blocking method which was both inexpensive and discerning enough to block access to only one part of a large website (such as Facebook). In his view, the fatal flaw of all network-level blocking schemes was the ease with which they could be overcome, either by encrypting content or by the use of proxy services hosted outside the UK.[219] THUS plc, a provider of Internet services under the "Demon" brand in the UK, pointed out that the cost of implementing network-level blocking was "undoubtedly" passed on to consumers via service charges.[220]

119.  Dr Clayton urged us to focus instead on blocking software installed and applied voluntarily by users. He observed that such software, while not perfect, could be customised and "should in principle be unaffected by the use of encryption or of proxy systems".[221] At a time of rapid technological change, it is difficult to judge whether blocking access to Internet content at network level by Internet service providers is likely to become ineffective in the near future. However, that is not a reason to refrain from blocking while it remains effective for the overwhelming majority of users.
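By way of illustration only, the sketch below (in Python, with hypothetical blocklist entries) shows the kind of user-installed filter Dr Clayton describes: because it inspects each requested address on the user's own machine, before any encryption or proxying takes place, it can also be precise enough to block a single page on a large site. It is not a description of any actual product.

    # Illustrative sketch only: a client-side filter checks each requested URL
    # against a locally held blocklist before the request leaves the machine,
    # so neither encryption in transit nor an overseas proxy hides the URL
    # from it. The blocklist entries below are hypothetical.
    from urllib.parse import urlparse

    BLOCKLIST = {
        "example-harmful-site.test",          # an entire host blocked
        "bigsite.test/path/to/one-page",      # a single page on a large site
    }

    def is_blocked(url):
        """Return True if the URL's host, or host plus path, is on the blocklist."""
        parsed = urlparse(url)
        host = parsed.hostname or ""
        host_and_path = (host + parsed.path).rstrip("/")
        return host in BLOCKLIST or host_and_path in BLOCKLIST

    if __name__ == "__main__":
        for url in ("https://bigsite.test/path/to/one-page",
                    "https://bigsite.test/another-page"):
            print(url, "->", "blocked" if is_blocked(url) else "allowed")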

120.  The Child Exploitation and Online Protection Centre drew our attention to reports of simulated sexual activity with children on sites such as Second Life. Its Chief Executive Officer, Mr Gamble, believed that this was "absolutely" a matter which needed to be investigated: he reasoned that a person who wanted to engage in sexual activity with a child and was fantasising about it, either in the real world or in a virtual one (such as Second Life), demonstrated a propensity for such activity and posed a potential risk to children in real life.[222] Mr Lansman, the Secretary-General of the Internet Service Providers Association, clearly believed that regulation of the field was needed.[223]

121.  We raised the matter with Ministers, who described the Government's consultation on the issue in 2007. Mr Coaker, Parliamentary Under-Secretary of State at the Home Office, pointed out that it would be a significant step for the law to intervene in an area where no one had been harmed and no children were involved. He questioned whether, just because he might be disgusted and appalled if a paedophile were to use a site such as Second Life to create an avatar and to enact sexual abuse of children, such action should be made illegal. He noted disagreement about whether a person who acted out such a scenario in a virtual world would necessarily become more likely to do the same in the real world.[224]

122.  Since Mr Coaker gave evidence, the Ministry of Justice has announced that it plans to bring forward proposals to create a new criminal offence of possession of drawings and computer-generated images of under-aged children in sexual activity. Maria Eagle MP, Parliamentary Under Secretary of State at the Ministry of Justice, described the proposals as "helping to close a loophole that we believe paedophiles are using to create images of child sexual abuse".[225]

123.  Given the universal nature of the Internet, which enables a person in one country to access a website hosted in any other, access controls applied in isolation by one country will, unless they are draconian, be limited in effect. There is, however, no common view internationally on what material should be illegal and what is harmful, and there are no global standards. Action has been taken at European Union level, through the Safer Internet action plan and the Safer Internet plus programme, both multi-year initiatives, to establish a European network of hotlines for reporting illegal content, to provide information for parents through independent testing of the effectiveness of filtering software, and to support self-regulatory initiatives by the industry. A proposal has been drawn up for a further programme, with similar aims, to run from 2009 to 2013; this would (amongst other things) aim to reduce illegal online content.[226]

124.  We have not been made aware of any other high-level international forum at which governments or regulators can share best practice or propose minimum standards for controls over access to content. It was pointed out to us that a large number of sites hosting material which would be illegal if hosted in the UK are based in the US, beyond the reach of any EU initiative. We believe that there would be advantage in establishing a forum at which governments or regulators from across the world could try to find common ground on how access to content on the Internet should be treated. This may, in time, lead to a more co-ordinated effort in frustrating access to material which is widely perceived as harmful. We recommend that the Government should take a lead in establishing such a forum.

Controlling contact-based risks

125.  The Home Office Taskforce on Child Protection on the Internet published good practice guidance for providers of social networking and other user interactive services in April 2008. As the Home Secretary notes in her foreword to the guidance, it was drawn up with considerable input from many of the main industry providers, some of which are based outside the UK. The guidelines set out extensive and detailed recommendations to service providers on how safety information should be made available to users, what steps should be taken when users register for a service, what tools users should be offered in maintaining their profiles, and how mechanisms for reporting abuse could best be structured. A further section offers safety tips to parents, carers, children and young people.

126.  We discussed safety controls with operators of social networking services during oral evidence with Bebo and MySpace and during our visit to the United States, where we met representatives of Yahoo!, MySpace, Piczo and Facebook. Both Bebo and MySpace set minimum ages for use: 13 for Bebo and 14 for MySpace;[227] but it became clear during the inquiry that minimum ages are difficult to enforce. In order to track users who have lied about their age, MySpace and Bebo both use technology to search the site for terms commonly used by underage users.[228] Ofcom told us that social networking sites based in the United States had come under pressure from state Attorneys-General to take action of this sort.[229]
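The keyword-based detection described above is set out only in general terms; the sites do not publish their term lists. A minimal sketch of the approach, with entirely hypothetical search terms and profile text, might look as follows.

    # Illustrative sketch only: scan profile text for terms that might suggest
    # an under-age user, so that flagged profiles can be queued for human
    # review. The terms and sample profiles are hypothetical; real services
    # use their own undisclosed term lists and review processes.
    import re

    SUSPECT_TERMS = [r"\byear 7\b", r"\bi'?m 12\b", r"\bmy primary school\b"]
    PATTERN = re.compile("|".join(SUSPECT_TERMS), re.IGNORECASE)

    def flag_profiles(profiles):
        """Return the IDs of profiles whose free text matches any suspect term."""
        return [pid for pid, text in profiles.items() if PATTERN.search(text)]

    if __name__ == "__main__":
        sample = {
            "user_001": "Just started year 7, add me!",
            "user_002": "Music, football and films.",
        }
        print(flag_profiles(sample))   # ['user_001']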

127.  It is recognised good practice for social networking services to alert users to the risks of making public information about themselves, particularly information which can allow direct contact. Various forms of protection might be provided, such as:

  • Reminding users that they are not anonymous online;
  • Setting users' profiles to "private" by default for some or all users, thereby preventing access by people other than those specifically permitted: users can choose to remain private or opt to make their profiles public;
  • Enabling users under 18 to block all contact from users over 18 (and vice versa); and
  • Enabling users to control who can see their individual photo albums and copy from them.[230]

Controls can also block the use of webcams, file transfer and access to chatrooms.[231] Professor Livingstone suggested that people were very often confused about how privacy controls on social networking sites worked: she noted evidence that "people do not understand or do not take very seriously what those decisions [on privacy settings] are, partly because they are not aware of possible abuses of that information".[232]

128.  It is clear that many users of social networking sites, particularly children, do not realise that by posting information about themselves, they may be making it publicly available for all to see. We recommend that social networking sites should have a default setting restricting access and that users should be required to take a deliberate decision to make their personal information more widely available. We also recommend that consideration be given to alerting users through pop-up displays about the risks involved in submitting personal details without restricting access.
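A default-private setting of the kind we recommend can be expressed very simply. The sketch below is a hypothetical illustration (the field names are our own, not any site's actual code) of a profile that starts restricted and can only be made public after the user has acknowledged a warning about the risks.

    # Illustrative sketch only of the default-private behaviour recommended
    # above; the field names are hypothetical. A new profile starts restricted,
    # and widening its visibility requires a deliberate, warned decision.
    from dataclasses import dataclass, field

    @dataclass
    class Profile:
        username: str
        visibility: str = "private"              # restricted by default
        allowed_viewers: set = field(default_factory=set)

        def make_public(self, user_confirmed_warning):
            """Widen visibility only after the user acknowledges the risks."""
            if not user_confirmed_warning:
                raise ValueError("The privacy warning must be acknowledged first.")
            self.visibility = "public"

    if __name__ == "__main__":
        p = Profile("new_user")
        print(p.visibility)                      # private
        p.make_public(user_confirmed_warning=True)
        print(p.visibility)                      # public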

"Report Abuse" buttons

129.  As with sites which provide access to the open Internet, many providers of social networking services provide a Report Abuse button on each content page, enabling users to report to them concerns about inappropriate activity. Bebo provides a link to a Report Abuse form from each profile page; users can then report another user for inappropriate behaviour, in which case that user's profile will be reviewed by a moderator. Users can also submit a police report if they believe that a user is making inappropriate sexual comments: such reports are presently assessed first by Bebo's abuse management team before being forwarded (if the report is filed in the UK) to the Child Exploitation and Online Protection Centre (CEOP). Bebo told us in oral evidence that the turnround time was normally between one and four hours.[233] However, in response to a Parliamentary Question, CEOP undertook a sampling exercise to find out how much time elapsed between receipt of a report by Bebo and onward submission to CEOP. Using figures from the 2007-08 financial year, when Bebo submitted 290 reports of possible child exploitation to CEOP, the time period varied from seven hours to three days and 14 hours, with the average period being one day and 23 hours.[234]

130.  MySpace also offers a facility for reporting abuse, although reports are forwarded automatically to the National Center for Missing and Exploited Children in the US before being referred to the appropriate law enforcement agency.[235]

131.  Some interactive sites offer a direct "Report Abuse" link to the Child Exploitation and Online Protection Centre.[236] Mr Gamble, Chief Executive Officer of the Child Exploitation and Online Protection Centre, described Microsoft Instant Messenger as being "one of the safest environments in the UK because you can report directly".[237] In the week after the direct report button in Microsoft Instant Messenger had gone live, the number of reports to CEOP increased by 113%; in the month that followed, the proportion of reports from people aged under 18 rose from 22% to 54%.[238]

132.  We suggested to witnesses that there was, in certain cases, a need for users to be able to report potential abuse directly to law enforcement agencies rather than to site owners, which might take time to refer a report to an enforcement agency. Mr Walker, General Counsel for Google, speaking with reference to YouTube, said that he would be interested in talking further with CEOP about instituting a direct reporting system, although he said that "we have in many cases a global platform, so we need to find something that will work for law enforcement around the world". He said that he was "not in a position to commit to specific implementation at this point".[239] Bebo told us that it hoped to arrange for reports of abuse to be filed simultaneously to Bebo and to CEOP "in the near future".[240] We note that the Government would "definitely like to see progress" in this area.[241] We commend Microsoft for providing a facility for direct reporting to the Child Exploitation and Online Protection Centre within Windows Live Messenger. We believe that high profile one-click facilities for reporting directly to law enforcement and support organisations are an essential feature of a safe networking site. We recommend that the UK Council for Child Internet Safety should impress upon providers of networking services the value of direct one-click reporting from their websites to law enforcement agencies and voluntary sector organisations with expertise in offering support to vulnerable people. We also believe that facilities for reporting abuse should be obvious to users and should be directly accessible from all relevant pages of a website, close to the entry point. We would expect providers of all Internet services based upon user participation to move towards these standards without delay.

Moderation of interactive sites

133.  Differing regimes exist for moderation of content in interactive sites. Dr Byron noted the importance of human moderation (as opposed to filtering tools or other technology) in sites designed for younger children, such as Club Penguin, or the BBC's CBeebies site, for which she believed the standard of moderation was "brilliant".[242] AOL, for instance, moderates all chatrooms which it operates for children.[243]

134.  Users of social networking sites may be able to moderate some of the content on their sites. For instance, users can screen comments made by other users before allowing them to appear on their own profile page; and they can delete comments posted by others on their page.[244]

135.  Mobile network operators may exercise a fairly high degree of control over their customers' access to social networking sites and interactive sites which they host. Typically, chatrooms for under-18s and blogs are fully moderated.[245] T-Mobile told us that it moderated all interactive services hosted on its portal, that all images uploaded to social networking sites through that portal were moderated within two hours, and that key words in uploaded text content were detected and blocked. It also told us that it required providers of social networking services, as a minimum, to meet the standards set out in the Home Office Good Practice Guidance for providers of social networking and other user-interactive services published in April 2008; but it added that "although providers might be signatories to best practice guidance, in practice it is often difficult to negotiate safeguards in a contract or to fully understand what safeguards are operational, or even to include a specific notice and takedown provision".[246] We also note the statement by T-Mobile that takedown timescales varied greatly between different providers.[247]

Controlling conduct-based risks and cyberbullying

136.  Attempts to prevent cyberbullying are largely co-ordinated by the Department for Children, Schools and Families (DCSF). Its predecessor, the Department for Education and Skills, published Tackling Cyberbullying guidelines in 2006; and the DCSF commissioned Childnet International to develop further guidance for inclusion in the department's main anti-bullying guidance for schools, issued in 2007.[248] DCSF has mounted a digital campaign, using interactive websites popular with children and young people, to increase awareness of the action which young people can take to minimise the risk of being cyberbullied. DCSF has also convened a Cyberbullying Taskforce, with representation from providers of Internet-based services, mobile phone companies, school staff unions, Becta,[249] children's charities and law enforcement agencies.[250]

137.  T-Mobile told us that where cyber-bullying occurred, it provided practical advice, such as encouraging users not to reply to unwanted messages and not to delete them (as they could help to track an offender), and suggesting that parents involve the school in dealing with the problem. Most network operators have also set up help bureaux for customers who receive nuisance or malicious calls;[251] one solution can be to provide a change of phone number free of charge.[252] In some cases, culprits can be traced and warned; O2 told us that another option was to cut off customers found to be misusing their phones.[253]

138.  We note that mobile phone call records would make it possible to establish that a particular phone had been used to upload content onto a video-sharing website at a particular time, but would not necessarily identify the images uploaded or the person who had used the phone to upload them.[254] Given that images or videos taken by mobile devices may be uploaded to social networking sites or video sharing sites on impulse, it would seem important to have a record of the nature of the content handled, should it prove to be offensive, harmful or even illegal. It may be that the mobile phone industry could develop technology which would allow images uploaded by mobile devices to be viewed, thereby helping to assemble evidence where inappropriate conduct has taken place. We recommend that network operators and manufacturers of mobile devices should assess whether it is technically possible to enable images sent from mobile devices to be traced and viewed by law enforcement officers with the appropriate authority.

Limiting Internet use and game play

139.  Another form of control of conduct-based risk is to use software to limit Internet use and game play, to counter any threat of "addiction" and to reduce scope for physical inactivity. Microsoft has developed a timer which will programme an Xbox games console to allow only specific amounts of playing time; after that time has expired, the child will not be able to use the console again within a 24-hour period without intervention by the parent.[255] Some governments have sought to introduce incentives to reduce the amount of time spent by players on video games, for instance by requiring or asking the industry to introduce high-scoring elements early in a game, with diminishing returns from prolonged play.[256]
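The timer Microsoft describes is proprietary, but the underlying idea of a daily allowance that runs out until the parent intervenes can be sketched simply. The example below is a hypothetical illustration under that assumption (using a calendar-day reset), not Microsoft's implementation.

    # Illustrative sketch only of a daily play-time allowance of the kind
    # described above; it is not Microsoft's implementation. Play is refused
    # once the allowance for the current day is spent, unless a parent grants
    # extra time.
    from datetime import date

    class PlayTimeLimiter:
        def __init__(self, daily_allowance_minutes):
            self.allowance = daily_allowance_minutes
            self.used = 0
            self.day = date.today()

        def _roll_over(self):
            # Reset the meter when a new calendar day starts.
            if date.today() != self.day:
                self.day = date.today()
                self.used = 0

        def record_session(self, minutes):
            """Record play time; return False once the daily allowance is exhausted."""
            self._roll_over()
            if self.used + minutes > self.allowance:
                return False            # the console would refuse further play today
            self.used += minutes
            return True

        def parental_override(self, extra_minutes):
            """A parent may grant additional time for the current day."""
            self.allowance += extra_minutes

    if __name__ == "__main__":
        limiter = PlayTimeLimiter(daily_allowance_minutes=60)
        print(limiter.record_session(45))   # True: within the allowance
        print(limiter.record_session(30))   # False: would exceed 60 minutes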

140.  We commend Microsoft for their efforts to ensure that there are varied and effective parental controls built into their hardware. We believe that other console manufacturers should be encouraged at least to match these. We hope that this matter will also be considered at an early date by the UK Council for Child Internet Safety.

Merits of self-regulatory approach

141.  Protection from potentially harmful content, contact and conduct on the Internet or using Internet-based services has rested until now largely with the industry, which has responded to varying degrees to the different types of threat. One of the questions central to this inquiry has been: is self-regulation by the industry succeeding?

142.  Providers of Internet connectivity and of search services and social networking services spoke up strongly for what had been achieved so far through self-regulation. Mr Lansman, the Secretary-General of the Internet Service Providers Association, said that the industry was "leading the way", and Mr Galvin, speaking on behalf of BT, said that self-regulation had given "a shining example of best practice in the best Internet service providers". Both Mr Lansman and Mr Galvin accepted that risks remained and would continue to emerge from new services.[257] Mr Walker, General Counsel for Google, said that the self-regulatory model for applying controls on access to Internet content had been "relatively successful", and he argued that "the risk of governmental or prescriptive rules being imposed on an industry which is effectively less than three years old runs a significant risk of unintended consequences".[258] MySpace believed that self-regulation was working well and was "the appropriate means to approach safety".[259] Bebo, while it expressed confidence in self-regulatory approaches, also accepted that the rapidly evolving nature of the market meant that businesses needed to review and improve their response continually.[260]

143.  Mr Galvin also contrasted the self-regulatory approach with that of statutory regulation, saying that "I do not think you would have seen the pace of our co-operation with legislation, because inevitably it goes more slowly".[261] He did not, however, discount the need for statutory regulation, to determine where the line between illegal and harmful or unpleasant content should be drawn.[262]

144.  Some of the industry's efforts in self-regulation have been much praised and admired. The Internet Watch Foundation said that it was "consistently referenced as a national and international model of effective self- and co-regulation" and was recognised as an influential and relevant authority by many sectors.[263] Others also spoke highly of the Foundation's work.[264] The Child Exploitation and Online Protection Centre applauded the work of the Foundation and also described the Clean Feed[265] system operated by BT as "an outstanding, world-leading initiative".[266]

145.  We note that 95% of domestic broadband consumers buy Internet access from about seven companies, all of which apply the IWF list. The remaining 5% use a "tail" of perhaps 100 companies, some of which do not block access, although some of those may in fact be reselling Internet connectivity from larger companies which have already imposed the blocks. The Internet Watch Foundation told us that it believed that there were "issues … in relation to cost and effectiveness" underlying the reluctance of smaller ISPs to block access.[267]

146.  The Internet Service Providers Association said that it encouraged as many Internet service providers as possible to apply the Foundation's list.[268] The Home Office set a target for all ISPs to agree by the end of 2007 to block access to sites on the list: when we raised the issue in oral evidence in May 2008, it was plain that little progress had been made in persuading the remaining 5% to act. Ministers appeared to be losing patience and were considering their options, which included legislation.[269] We expect the Government to apply continuing, and if necessary escalating, pressure on Internet service providers who are showing reluctance to block access to illegal content hosted abroad. In a lucrative market, child safety should not come second to the cost to Internet service providers of installing software to block access to child pornography sites.

147.  Although most witnesses were complimentary about the efforts of the industry, we were struck by the considerable anxiety expressed by the Chief Executive of Ofcom about the effectiveness of self-regulation as presently operated. He believed that current arrangements "cannot persist" and was particularly critical of the lack of clarity about take-down procedures and the lack of transparency about complaints. In his view, "we do not have anything even approaching an effective self-regulatory model for this". He set a timescale of 12 months for Internet-based industries to "come closer to the broadcast world": for him, the test was: would the industry "step up to the plate"?[270] He did not, however, believe that there was a clear case for Ofcom to acquire more regulatory powers in relation to the Internet, or at least not yet. He maintained that Ofcom's instinct "has always been and remains to regulate in the least intrusive, least bureaucratic way possible", and he was satisfied that there was "a real opportunity for self-regulation to be effective".[271]

148.  The Child Exploitation and Online Protection Centre, in its submission to the Byron Review, proposed that there should be "a critical examination of whether the voluntary, self-regulation approach to protecting children and young people works effectively as it stands and actually delivers real and tangible change when it comes to the protection of children, whether it is sustainable in the long-term as new providers come on stream and actually delivers change, including the need to monitor the implementation of good practice guidance by service providers".[272] The Centre's Chief Executive Officer, Mr Gamble, claimed that some elements in the Internet services industry were "difficult", avoided engagement with CEOP, and were unwilling to co-operate with CEOP in its efforts to protect young people from harm.[273]

149.  Childnet International, a charity which seeks to improve the safety of children using the Internet, told us that it accepted the importance of self-regulation to the Internet industry and said that it would be "hesitant" to propose stringent regulation.[274] It described the self-regulatory regime set out under the Communications Act 2003 as widely considered to be successful; but it regretted that compliance with that regime was not monitored. It spoke of the "valuable guidance" in the best practice guidelines drawn up by the Home Office Taskforce and by industry bodies such as the Internet Service Providers Association; but adherence was not mandatory and it believed that there was no easy way to verify how far the industry was conforming to its own guidelines.[275] Professor Livingstone suggested that an independent body might be established to oversee and report on the effectiveness and the accountability of self-regulatory codes.[276] Dr Byron took much the same view, suggesting that the industry should set out "very clear safety codes and general good practice principles" which could then be independently monitored and evaluated.[277]

150.  There are signs that the Government expects the industry to take a stronger line. For instance, in an Adjournment debate on cyberbullying on 20 February 2008, the Rt Hon Beverley Hughes MP, Minister of State at the Department for Children, Schools and Families, said that "we need business to raise its game" in preventing cyber-bullying.[278] We asked Ministers to offer their views on whether the time had come to establish a body to oversee Internet standards, in the manner of the Advertising Standards Authority or the Press Complaints Commission. The Rt Hon Margaret Hodge MP, Minister of State at the Department for Culture, Media and Sport, said that "we always keep an open mind" but that, at present, the Government did not want to over-regulate: rather, it would prefer to work through voluntary mechanisms and existing structures. She said that the Convergence Think Tank was considering the issue.[279]

151.  There appear to be conflicting views within the industry on the practicality of applying certain standards. Divergences in practice are especially noticeable in the allocation of resources by Internet-based services to pro-active screening on sites hosting user-generated content and in the provision of facilities to report directly to law enforcement agencies. Many companies publish their own codes and policies for child protection, but there are significant variations between these. We also note that, if a company's practice is poor, there is no independent body to which a user might submit a complaint and no body to enforce minimum standards. We note with interest research conducted by Ofcom suggesting considerable uncertainty among parents about who to complain to if they discovered potentially harmful or inappropriate content online. Approximately one-third of the sample said that they would complain to the police; 14% would complain to their Internet service provider; 11% would complain to the website itself; and almost four out of ten did not know who they would complain to.[280] Mr MacLeod, Chair of the Mobile Broadband Group, listed various channels by which complaints about content accessed through mobile devices were received: through Ofcom, the Independent Mobile Classification Body, the Mobile Broadband Group and the network operators themselves.[281]

152.  This Committee has broadly been a supporter of self-regulation in various fields, such as regulation of on-demand broadcast services, the secondary ticketing market and press standards. We recognise that self-regulation has a range of strengths: a self-regulating industry is better placed to respond quickly to new services; it is more likely to secure "buy in" to principles;[282] and it will bear the immediate cost. We accept that a great deal has been achieved through self-regulation by the various industries offering Internet-based services, but there appears to be too great a disparity in practice between different firms and not enough evidence that standards of good practice are being consistently followed.

153.  We believe that leaving individual companies in the Internet services sector to regulate themselves in the protection of users from potential harm has resulted in a piecemeal approach which we find unsatisfactory. Different practices are being followed and there is a lack of consistency and transparency, leading to confusion among users. Nor is there any external mechanism for complaints about services provided by Internet-based industries to be considered by an independent body. However, we do not believe that statutory regulation should be the first resort. Instead, we propose a tighter form of self-regulation, applied across the industry and led by the industry. We therefore call on the industry to establish a self-regulatory body which would agree minimum standards based upon the recommendations of the UK Council for Child Internet Safety, monitor their effectiveness, publish performance statistics and adjudicate on complaints.

154.  We recognise that a number of companies may choose to set higher standards for their own commercial reasons, but the public need the assurance that certain basic standards will be met. This is particularly important in the area of child protection and Internet safety. However, the new body might also take on the task of setting rules governing practice in other areas, such as online piracy, peer-to-peer file-sharing and behavioural advertising, which, although outside the scope of this inquiry, are also of public concern. Given the global nature of the industry, it is impossible to make membership compulsory for all service providers, but a widespread publicity campaign should ensure that consumers are aware that they can have confidence in the standards of protection and reputable practice which membership of the body carries with it, and that this cannot be guaranteed by those companies that choose not to join.

155.  We considered whether the UK Council for Child Internet Safety should, in time, take on a wider role in leading self-regulation of Internet service industries. Our conclusion is that this would be inappropriate, as the Council has been established as a forum in which Government departments, the industry and relevant voluntary sector bodies can work together in developing a strategy for child Internet safety. Its role should be to agree on recommendations of minimum standards for the industry to meet. The implementation and enforcement of these would be the responsibility of the self-regulatory body. In this respect, the UK Council would have a similar role to the Committee of Advertising Practice which draws up the Code governing the non-broadcast advertising industry. The Code is then enforced by the Advertising Standards Authority Council, which has a mixed membership from the industry and from the voluntary sector, with no presence from either the Government or from Ofcom. Our preferred model for any new body to maintain standards among providers of Internet-based services is that of the Advertising Standards Authority, which is generally successful at securing compliance with codes for advertising standards but which, if necessary, may refer companies which persistently breach those standards to statutory regulators that can apply penalties.

The role of Government

156.  Although we have identified a role for the Government in bringing forward legislation to define certain types of content on the Internet as illegal, we do not believe that, in general, this is a field in which the Government should be very closely involved. Its chief role should be to ensure that bodies with a regulatory, advisory or law enforcement role in protection from harmful content on the Internet and in video games have priorities which are in line with Government policy and are resourced to carry out their duties effectively.

157.  The Government has performed a service by bringing the issue of protection from harm on the Internet to the fore. The creation of the Home Secretary's Taskforce on Child Protection on the Internet and, more recently, the Child Exploitation and Online Protection Centre, signalled how seriously the Government took the protection of children from risks from the Internet. Commissioning the Byron Review was another big step forward. We commend the Government for the action it has taken to motivate the Internet industry, the voluntary sector and others to work together to improve the level of protection from risks from the Internet, particularly for children. However, we regret that much of this work remains unknown and has therefore done little to increase public confidence. We look to the UK Council to build on the existing agreements and to ensure a much greater public awareness of what has already been achieved.

158.  We note criticism that the Government has not been "joined up" in its approach.[283] This is an easy criticism to make but a difficult one to refute, especially in an area where so many Departments have an interest. For instance, the Department for Children, Schools and Families has policy responsibility for schools, which have a role in educating children in safe use of information technology and the Internet; the Department for Culture, Media and Sport has an interest in the Internet as a medium through which creative content is disseminated; the Home Office has a role in the protection of the public and in law enforcement; and the Department for Business and Enterprise has responsibilities for the regulatory structure within which Internet-based services operate, including regulation based upon the E-Commerce Directive. The Ministry of Justice has taken the lead in policy on possession of computer-generated images of under-aged children in sexual activity.

159.  Ministers sought to persuade us that the Government was "very joined up" in its approach to the matters considered by this inquiry.[284] Certainly, there have been good examples of cross-Departmental collaboration, for instance in the establishment of the Byron Review. However, we recall Dr Byron's statement that the Internet industry was "very fatigued" at dealing with different Government departments which, individually, were having "several sometimes contradictory conversations" with the industry.[285] We also note that the Government originally suggested that four different Ministers should give evidence to our inquiry and it does seem that there is scope for improved co-ordination of activity between different Government departments. We recommend that a single Minister should have responsibility for co-ordinating the Government's effort in improving levels of protection from harm from the Internet, overseeing complementary initiatives led by different Government departments, and monitoring the resourcing of relevant Government-funded bodies.


90   See for instance Byron Review Executive Summary, paragraph 12 Back

91   Ev 144 Back

92   Q 525 Back

93   Annex 3 to Ofcom memorandum [not printed] Back

94   Ev 253 Back

95   Managing Director, Global Customer Experience Programme Back

96   Q 252 Back

97   Q 432 Back

98   Ev 1 Back

99   Ev 1 Back

100   Q 347 Back

101   Ev 1 Back

102   Q 440 Back

103   Q 441 Back

104   Q 580 Back

105   Q 588 Back

106   Q 511 Back

107   Q 589 Back

108   Q 534 Back

109   Mr Brennan Q 591 Back

110   Mrs Hodge Q 579 Back

111   Q 379 and 380 Back

112   Q 597-598 Back

113   Byron Review Action Plan page 6 Back

114   Ev 87 Back

115   Ev 87 Back

116   Q 168 Back

117   Dr Byron Q 337 and Q 354 Back

118   Q 595 Back

119   Q 177 and 178 Back

120   Q 179 and Q 180 Back

121   Ev 402 Back

122   Q 615 and 620 Back

123   Ev 76, Ev 42 Back

124   Ev 42 Back

125   DCMS memorandum Ev 347 Back

126   Q 208 Back

127   Q 59 Back

128   Ofcom, Ev 262 Back

129   See Ev 44. Microsoft told us that the URLs on the Foundation's list had been applied to its Live Search filters so that none of them would ever appear within search results using Live Search: Ev 35 Back

130   Ev 42 Back

131   Q 53-4 Back

132   Home Office, April 2008 Back

133   Ev 66; see also www.europa.eu.int Back

134   O2 memorandum Ev 68 Back

135   Ev 61 Back

136   Ev 61 Back

137   Q 109 Back

138   Ev 62 Back

139   Q 32 Back

140   Q 505 Back

141   Q 19 Back

142   Q 59  Back

143   Section 160 of the Criminal Justice Act 1988 Back

144   Section 63 of the Criminal Justice and Immigration Act 2008, expected to be in force from January 2009: HL Deb 25 June 2008 col. 250 (WA) Back

145   Section 2 of the Terrorism Act 2006 Back

146   Ev 43 Back

147   Ev 76 Back

148   Ev 346 Back

149   Q 53 Back

150   Ev 43 Back

151   Ev 44 Back

152   Ev 346 Back

153   Such as Community Guidelines, or Acceptable Use Policies, or taste and decency criteria Back

154   See Q 223 Back

155   Q 228 Back

156   Q 65 Back

157   Ev 100 Back

158   Q 107 Back

159   Q 276 to 278 Back

160   Q 296 Back

161   Ev 117 Back

162   Q 316 to 321 Back

163   Q 317 Back

164   Ev 254 Back

165   Ev 255 Back

166   Mobile Broadband Group Q 123 Back

167   Q 313 Back

168   Ev 119 Back

169   Ev 119 Back

170   Ofcom, Ev 273 Back

171   Q 279 Back

172   Q 298. See also Q 305 and 306 Back

173   Ev 119 Back

174   Q 375 Back

175   Q 608 Back

176   Ev 150 Back

177   Information supplied to the Committee by MySpace in the US. Back

178   Ev 147 Back

179   SI 2002 no. 2013 Back

180   See Ofcom, Ev 249 Back

181   Byron Review paragraphs 4.16 to 4.18 Back

182   Q 315 Back

183   Q 224 and 225 Back

184   Q 224 Back

185   Byron Review paragraph 4.20 Back

186   Q 512 Back

187   Ofcom Ev 259 Back

188   Ev 35 and 36 Back

189   Ev 118 Back

190   Ev 380 Back

191   Ev 259 Back

192   Ev 43-44 Back

193   Ev 104 Back

194   Ofcom Ev 259 Back

195   Ev 231 Back

196   See for instance T-Mobile Ev 61, O2 Ev 67, Orange Ev 71 Back

197   Q 602 Back

198   Ev 261 Back

199   Ev 150; also CHIS Ev 4 Back

200   Ev 255 Back

201   Ev 235 Back

202   Ev 377 Back

203   Byron Review paragraph 4.89 Back

204   Ev 115 Back

205   Q 376 Back

206   Q 101 to 107 Back

207   Q 203 Back

208   Q 376 Back

209   Q 79 to 81 Back

210   Ev 69 Back

211   Ev 262-3 Back

212   Q 33 Back

213   Speech available at www.culture.gov.uk  Back

214   HC Deb 7 February 2008 col. 1220 Back

215   Ev 399 Back

216   Section 1 of the Act defines the offences to which these provisions apply Back

217   Inchoate Liability for Assisting and Encouraging Crime, Law Commission No. 300, Cm 6878 Back

218   Q 600 Back

219   Ev 368 to 370 Back

220   Ev 381 Back

221   Ev 370 Back

222   Q 174 Back

223   Q 201 Back

224   Q 600 Back

225   Ministry of Justice news release 28 May 2008 Back

226   See European Commission proposal, 29 February 2008, COM(2008) 106 final Back

227   Q 392 and Ev 150 Back

228   Ev 150 and Q 394 Back

229   Ev 257 Back

230   Bebo Ev 147, MySpace Ev 150 Back

231   See for instance BT memorandum Ev 103, Microsoft Ev 36 Back

232   Professor Livingstone Q 8 Back

233   Q 422 to 425 Back

234   HC Deb 20 May 2008, col. 225W Back

235   Q 409 Back

236   See for instance Microsoft Windows Live Messenger Ev 36 Back

237   Q 169 Back

238   Q 167 Back

239   Q 263 Back

240   Q 422 Back

241   Q 592 Back

242   Q 368 Back

243   Q 233 Back

244   Bebo Ev 147 Back

245   O2 Ev 69; T-Mobile Ev 62, Orange Ev 70. Back

246   Ev 62 Back

247   Ev 64 Back

248   Ev 344 Back

249   Becta was formed in 1998 and describes itself as the Government's lead agency for Information and Communications Technology (ICT) in education in the United Kingdom. Back

250   Ev 344 Back

251   Mr Bartholomew Q 119 Back

252   T-Mobile Ev 63, O2 Ev 66, Orange Ev 71 Back

253   Ev 66 and Q 119 Back

254   Q 126 to 132 Back

255   Ev 34. See also BT Ev 103 Back

256   Mr Carr Q 3, Ev 2 Back

257   Q 252 Back

258   Q 269 Back

259   Q 431 Back

260   Ev 147 Back

261   Q 201 Back

262   Q 253 Back

263   Ev 46 Back

264   ISPA Ev 101; Dr Byron Q 337 Back

265   Filtering software used by British Telecom to block access to child pornography websites Back

266   Q 168 Back

267   Q 55 to 57 Back

268   Q 202 Back

269   Q 612-3 Back

270   Q 512 Back

271   Q 517 Back

272   Ev 87 Back

273   Q 187 and 189 Back

274   Ev 12 Back

275   Ev 12 Back

276   Q 34-35 Back

277   Q 349 Back

278   HC Deb 20 February 2008 col.493 Back

279   Q 607. The Convergence Think Tank was set up jointly by the Department for Culture, Media and Sport and the Department for Business, Enterprise and Regulatory Reform to examine the implications of technological development for the media and communications industries, and the consequences for both markets and consumers. Back

280   Ev 211 Back

281   Q 145 Back

282   See T-Mobile Ev 61 Back

283   Byron Review paragraph 3.112 Back

284   Mrs Hodge Q 578 Back

285   Q 347 Back


 