Select Committee on Science and Technology Fifth Report


CHAPTER 3: The network

The prospects for fundamental redesign of the Internet

3.1.  The Internet as we know it today, the network of networks using the IP protocol, was designed almost 30 years ago, when the current uses to which it is put could not have been imagined. But just as the road network was not planned to accommodate the volumes of traffic that now use it, but grew incrementally over many years, so the networks supporting the Internet have continued to grow and develop. And just as a wholesale redesign of the road network might in principle be desirable, but is in practice simply not feasible, so there are formidable barriers to a wholesale redesign of the Internet.

3.2.  The problems that derive from the fundamental design of the Internet are profound. While the Internet supports astonishing innovation and commercial growth, it is almost impossible to control or monitor the traffic that uses it. This leads in turn to many of the security problems that we have explored in this inquiry. So we have had to ask the question, whether it is possible to redesign the Internet more securely? If not, are the incremental improvements that might make it more fit for purpose being taken forward by the industry, or is intervention, by Government or regulators, needed? Or do we just have to accept a certain level of insecurity as the inevitable corollary of the level of creativity and innovation, the "generativity" of the Internet and the innumerable services that rely on it?

3.3.  The response of most of our witnesses was that however desirable it might be in theory to redesign the Internet from scratch, in practice, as with the road network, it was very unlikely to happen. The Internet has over a billion users, and their equipment and applications, their knowledge of how the network functions, represent a huge capital investment. As a result, the Internet will have to change by means of gradual evolution, not radical overhaul.

3.4.  Professor Mark Handley summed up this point of view: "The idea of coming up with something different without getting there incrementally from where we are here is simply not going to happen." He did concede that there were two sets of circumstances in which a more radical approach might be required—either "if the current Internet fell in a large heap for some reason and we had to rebuild it from scratch … or if something came along which was radically better in terms of cheaper or could do things the current Internet cannot do" (Q 663). But both these scenarios are very unlikely.

3.5.  A similar point was made by James Blessing, of the Internet Service Providers Association (ISPA). Asked whether it would be possible to introduce an "identity layer" into the Internet, he replied, "The simple answer is that it would be incredibly difficult to rectify that problem because you are talking about rewriting, on a global scale, the entire Internet" (Q 724).

3.6.  We are also conscious that there are many layers to the Internet, and that fundamentally redesigning the core network may not be the most economically efficient way to improve security throughout the layers. Professor Ross Anderson illustrated this point by returning to the analogy with the road network: "You do not expect that the M1 itself will filter the traffic … There are one or two security properties—we do not want terrorists to blow up the bridges—but many of the bad things that happen as a result of the M1's existence are dealt with using other mechanisms. If a burglar from Leeds comes down and burgles a house in London, then there are police mechanisms for dealing with that" (Q 663). The same general principle—that you need to find the most efficient, lowest-cost solution to a given security problem—applies to the Internet.

3.7.  This is not to say that researchers are not looking at the design of the network. Professor Handley conceded that he and others were "doing research into network architectures which are radically different". However, the purpose of such research was to provide pointers to "where we might want to go in the future". Getting there would be an incremental process. In the meantime most of the security problems being experienced were "with systems connected to the Internet and not with the Internet itself"; in the short to medium term "what we are going to have is basically a variation on the current Internet" (Q 663).

RECOMMENDATION

3.8.  We see no prospect of a fundamental redesign of the Internet in the foreseeable future. At the same time, we believe that research into alternative network architectures is vital to inform the incremental improvements to the existing network that will be necessary in the coming years. We recommend that the Research Councils continue to give such fundamental research priority.

The "end-to-end principle" and content filtering

3.9.  Even if fundamental redesign of the Internet is not feasible, it may still be the case that specific security concerns are best addressed at the network level. However, this approach would seem to run up against the "end-to-end principle". This was described by LINX, along with the abstraction of network layers, as one of the key principles upon which past and future innovation on the Internet depends. The LINX policy paper defines the principle as requiring "that the network core should simply carry traffic, and that additional services should always be delivered at the edges of the network, by end-points, not within the network core."

3.10.  There can be no doubt that the "end-to-end principle" has served the Internet well, and goes a long way to explaining why the network is so flexible and powerful. However, it has become more than a practical or technological description of how the network is built. In the words of Professor Zittrain, in a paper published in 2006 which he copied to the Committee along with his written evidence, "Many cyberlaw scholars have taken up end-to-end as a battle cry for Internet freedom, invoking it to buttress arguments about the ideological impropriety of filtering Internet traffic."[11]

3.11.  The most obvious application of the end-to-end principle is to the filtering of content. Here it could be argued that the purity of the principle has already been tarnished by the interventions of policy-makers. For example, the Government have required that by the end of 2007 all ISPs offering broadband connectivity in the United Kingdom should have implemented systems to block access to child abuse images and websites. Most ISPs already provide such a blocking service; this is achieved by blocking all sites listed on the database maintained by the Internet Watch Foundation (IWF). In other words, ISPs are not required actively to screen images and filter out those which are judged to be child abuse images; they simply take a list of websites from a trusted source and bar direct access to them.
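
By way of illustration, the sketch below (in Python) shows the essence of this list-based approach: the ISP checks each requested web address against a list supplied by a trusted source and refuses direct access to anything on that list. The site names and function names are invented for the purposes of illustration; in practice the IWF supplies its list confidentially and ISPs implement the blocking within their network equipment.

    # Minimal sketch of list-based blocking, as described above. The entries
    # and function names are hypothetical; real deployments work from a list
    # supplied confidentially by the IWF.

    BLOCKED_SITES = {
        "blocked-example-one.invalid",
        "blocked-example-two.invalid",
    }

    def is_blocked(requested_host: str) -> bool:
        """Return True if the requested site appears on the trusted block list."""
        return requested_host.lower() in BLOCKED_SITES

    def handle_request(requested_host: str) -> str:
        # The ISP does not inspect or judge the content itself: it simply
        # refuses direct access to listed sites and passes everything else on.
        if is_blocked(requested_host):
            return "access denied"
        return "forward request to " + requested_host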

3.12.  This is a far from perfect solution to the Government's objective of preventing paedophiles from accessing child abuse images online. It relies on the IWF list being wholly accurate (an impossible task, since in reality new sites are posted online every day); the blocking schemes continue to be relatively simple to evade; and the approach also fails to address other types of communication, such as "Peer-to-Peer" file sharing between paedophiles. There is also a risk, in the words of Matthew Henton of the ISPA, that it will "drive paedophile activities underground into the so-called dark net where it is impossible to actually trace their activities. That could have consequences in terms of trying to secure prosecutions against such people" (Q 763).

3.13.  The threat to the end-to-end principle is clear, even though it may be justified by the need to protect the safety of children online. At present the blocking of websites listed in the IWF database has been accepted by the industry—largely because of what Matthew Henton called "the trust that ISPs have in the IWF and in the authenticity of that database and what it contains." However, the principle that ISPs should block certain types of site could potentially be extended more widely—as James Blessing commented, "In theory [you] can block anything as long as you know what you are blocking." This could include websites blocked for political reasons—which, as Mr Blessing argued, "completely destroys the end-to-end principle" (Q 764).

3.14.  Still more controversial would be a requirement for ISPs not merely to block websites contained on a given database, but actively to screen and approve the content of the traffic passing over their networks. This would be immeasurably more complex technically, though in time it may become more practical—it is worth comparing, for instance, the latest versions of some anti-virus software, which have moved from recognition of samples held on a central database to a more dynamic, "behavioural" analysis, intended to pick up code that looks like malware, even if it has never been encountered before.[12]
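
The contrast can be illustrated with a brief sketch (in Python). Both the "signature" database and the behaviour rules below are invented for the example; real products, such as the SONAR technology mentioned in the footnote, are considerably more sophisticated.

    # Simplified contrast between signature-based and "behavioural" detection.
    # The signature value and the behaviour rules are invented for illustration.

    KNOWN_SIGNATURES = {"hypothetical-hash-of-known-malware"}

    def signature_match(file_hash: str) -> bool:
        # Recognises only samples already held in the central database.
        return file_hash in KNOWN_SIGNATURES

    SUSPICIOUS_ACTIONS = {"modifies_system_files", "disables_antivirus",
                          "mass_emails_contacts"}

    def behavioural_match(observed_actions: set) -> bool:
        # Flags code that behaves like malware, even if it has never been
        # encountered before.
        return len(observed_actions & SUSPICIOUS_ACTIONS) >= 2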

3.15.  In addition, any requirement on ISPs to screen content would also create the difficulties that are encountered by any email filtering system today—namely, the need to avoid both false positives (blocking good traffic) and false negatives (failing to block the bad). Inevitably the ISP would come across a lot of material that it did not recognise as either good or bad, and it would be unable to make an informed decision either way. As Malcolm Hutty told us, "If the ISP is held legally responsible for blocking access to illegal material, of whatever nature, then the only practical recourse for it as a business would be to block that material that it does not recognise" (Q 764). In such circumstances the Internet could become unusable.
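
The commercial logic Mr Hutty describes can be reduced to a few lines (a sketch only, with invented category names): once an ISP bears liability for failing to block illegal material, the rational policy for anything it cannot classify is to block it, however much legitimate traffic is lost as a result.

    # Sketch of the filtering decision described by Mr Hutty. The category
    # names and the liability flag are invented for illustration.

    def filter_decision(classification: str, isp_liable_for_misses: bool) -> str:
        if classification == "known_good":
            return "allow"
        if classification == "known_bad":
            return "block"
        # Unrecognised material: if the ISP is legally liable for letting bad
        # traffic through, the safe commercial choice is to block it, at the
        # cost of a great many false positives.
        return "block" if isp_liable_for_misses else "allow"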

3.16.  It should be emphasised that such developments are not currently envisaged in the United Kingdom, or in most other countries. Indeed, the regulation of content provided across electronic networks is specifically excluded from the remit of the regulator, Ofcom, by virtue of section 32 of the Communications Act. This makes the Government's insistence that consumer ISPs block sites listed on the IWF database all the more striking, in that it marks an intervention in an area specifically excluded from the remit of the industry regulator by Parliament.

3.17.  The public and political pressure to protect children online continues to grow as Internet use grows, and Ofcom too has now demonstrated its interest in content, developing in partnership with the Home Office a British Standards Institution (BSI) kite mark for Internet content control software. The development of this standard was announced by the Home Secretary in December 2006, and the first kite marks will be issued in 2007.

3.18.  Clearly the development of a kite mark to help parents identify effective and easy-to-use content control software, which they can then install on their end-user machines, is very different from the regulation of content delivered across electronic networks. However, it does demonstrate that the Internet is not a static medium—the goal-posts move all the time, and Ofcom has as a result been obliged to intervene in an area not directly envisaged in its remit. Taken in conjunction with the requirement placed upon ISPs to block child abuse images, the development of the kite mark demonstrates a growing interest across the board in content screening which, if the emphasis were to move towards blocking within the network rather than on end-user machines, could ultimately lead to the erosion of the end-to-end principle.

3.19.  Internationally, blocking of content for political reasons was highly publicised with the controversial deal reached between Google and the government of the People's Republic of China in January 2006, in which Google agreed to censor certain information in exchange for access to the Chinese market. Less overt filtering is also applied by search engines in other countries, including the United Kingdom. Thus, although the end-to-end principle continues to carry weight, globally, adherence to it is increasingly challenged.

Who is responsible for Internet security?

3.20.  In the previous section we discussed content screening and blocking. However, this discussion masks the fact that "content" is not easily definable. Common sense suggests a simple distinction between "content"—that is, text, sounds or images, the presentation through a computer or other device of information that is easily understood, and which could indeed be presented in other formats, such as books, speech, newspapers or television programmes—and what, for lack of a better word, could be described as "code"—computer programs, malware, and so on. But in the context of Internet traffic, this distinction collapses. All information that passes via the Internet is disassembled into packets of data. In the words of Professor Ian Walden, "It is all zeros and ones which go across the network, whether it is a virus, a child abuse image or a political statement" (Q 391).

3.21.  This has profound implications for personal Internet security. It means that the end-to-end principle, if it is to be fully observed, requires that security measures, like content filtering, should always be executed at the edges of the network, at end-points. We have already quoted Malcolm Hutty's assessment of the risks inherent in requiring ISPs to screen content. Similar risks, but arguably still more fundamental, would apply to any requirement that ISPs screen for security risks. If ISPs, to protect themselves against possible legal liability, block unknown code, this would, in Mr Hutty's words, "prevent people from deploying new protocols and developing new and innovative applications" (Q 764).

3.22.  However, the presumption that the network should simply carry traffic, and that end-points should apply security, along with other additional services, carries, in the words of Professor Zittrain, a "hidden premise". It implies that "the people at the end points can control those end points and make intelligent choices about how they will work". Neither of these assumptions, he believed, was necessarily true any longer: not only were many devices that appeared to be "end points" in fact controlled by third parties (for instance so-called "tethered devices", like mobile phones, that could be remotely re-programmed), but it was unavoidable that "people will make poor choices". He therefore argued that it was time to adopt a "more holistic approach to understand the regulatory possibilities within the collective network" (Q 979).

3.23.  Moreover, we heard over and over again in the course of our inquiry that the criminals attacking the Internet are becoming increasingly organised and specialised. The image of the attention-seeking hacker using email to launch destructive worms is out of date. Today's "bad guys" are financially motivated, and have the resources and the skills to exploit any weaknesses in the network that offer them openings. For such people the principle of "abstraction of network layers" cuts no ice. As Doug Cavit, Chief Security Strategist of Microsoft, told us in Redmond, attacks are now moving both up and down through the layers—exploiting on the one hand vulnerabilities in the application layer, and on the other working down through the operating systems, to drivers, and into the chips and other hardware underpinning the whole system.[13]

3.24.  We therefore asked almost all our witnesses, in one form or another, the ostensibly simple question: "who is responsible for Internet security?" We were hoping for a holistic answer, though we by no means always got one.

3.25.  The Government, for example, appeared to place responsibility firmly on the individual. In the words of Geoff Smith of the DTI, "I think certainly it is to a large extent the responsibility of the individual to behave responsibly." He compared the safe behaviours that have grown up around crossing the road with the absence of an "instinct about using the Internet safely". He acknowledged that it was "partly the responsibility of Government and business … to create this culture of security," but reiterated that it was ultimately an individual responsibility: "if you give out information over the Internet to someone you do not know … and they take all the money out of your bank account, it is largely due to your behaviour and not the failure of the bank or a failure of the operating system, or whatever" (Q 62).

3.26.  ISPA, the trade association representing the network operators, expressed whole-hearted support for the Government's position. They expressed their willingness to support education initiatives, but there was no doubt that they saw ultimate responsibility residing with end-users. In the words of Camille de Stempel of AOL, "ISPA agrees very strongly with the Department of Trade and Industry approach to dealing with cyber security … ISPA members are committed to working with their consumers to help address this by highlighting the way in which users can minimise the threat and informing their customers how they can best protect themselves" (Q 717).

3.27.  In marked contrast, the written evidence from MessageLabs, a leading manufacturer of email filtering technology, argued that security was "fundamentally a technical problem and as such will always require a technical solution, first and foremost". The problem should be addressed "in the cloud" at Internet level, through "protocol independent defensive countermeasures woven into the fabric of the Internet itself" (p 158). In oral evidence, Mark Sunner, Chief Security Analyst, repeated the argument that relying on end-users to detect and defeat security threats was unrealistic—"it has to be done by spotting the malicious code … which you can only achieve with Internet-level filtering" (Q 464).

3.28.  The views of Symantec, which manufactures anti-virus and firewall software (supplied in large part to individual end-users), were subtly different again. Roy Isbell, Vice-President, agreed with Mr Sunner that there had to be "technical countermeasures to technical attacks", but argued in favour of "a multi-layered defence … to give you some defence in depth" (Q 464).

3.29.  Nevertheless, the prevailing view from within the IT industry (with the exception of those representing the ISPs), was one of scepticism over the capacity of end-users to take effective measures to protect their own security. Professor Anderson told us, "In safety critical systems it is well known on the basis of longer experience than we have here, that if you have a system that is difficult to use the last thing you should do is 'blame and train' as it is called. What you should do instead is to fix the problem" (Q 706).

3.30.  In the course of an informal discussion with industry experts hosted at Cisco Systems in California, the Internet was compared with the water supply: consumers were not required to purify or boil water when the source of contamination lay within the water supply infrastructure itself. Instead, suppliers were required to maintain a secure network and to treat water to exacting standards. The end-user simply had to switch on the tap to get pure, drinkable water.

3.31.  The analogy with the water network is not, of course, exact—it was immediately pointed out to us that there is no consensus on what, in the online world, is "poisonous". Nevertheless, the analogy illustrates the oddity of thrusting so much responsibility upon end-users, who may well be incapable of protecting themselves or others. Thus Bruce Schneier responded to our question on responsibility as follows: "There is a lot of responsibility to go around. The way I often look at it is who can take responsibility? It is all well and good to say, 'You, the user, have to take responsibility'. I think the people who say that have never really met the average user" (Q 529). He then proceeded to outline the many people and organisations who might reasonably take a share of responsibility for Internet security—the financial services industry, the ISPs, the software vendors (a term which we use in the sense universal within the IT industry, namely the manufacturers of software and other products, rather than the retailers), and so on.

3.32.  Jerry Fishenden, of Microsoft, also outlined a "collective responsibility" for end-user security, embracing end-users themselves, the technology supplied to them, and the ways in which the laws governing Internet use were enforced through the courts (QQ 261-262). This view was echoed by Doug Cavit, who argued that traditional defences, anti-virus software and firewalls, were no longer adequate—every layer of the system had to be defended. We support this broader interpretation of responsibility for Internet security.

3.33.  It is difficult to escape the conclusion that in the highly competitive market for Internet and IT services, in which the importance and economic value or cost of security are increasingly apparent, companies have strong incentives either to promote solutions from which they stand to profit, or, as the case may be, to argue against solutions which might impose additional costs upon them. We therefore have no choice but to treat the evidence from the industry with a degree of scepticism. But this makes it all the more disappointing that the Government appear to have accepted so unquestioningly the views of one part of the industry, the network operators and ISPs, and have in the process lost sight of the technical realities of online security.

CONCLUSION

3.34.  The current emphasis of Government and policy-makers upon end-user responsibility for security bears little relation either to the capabilities of many individuals or to the changing nature of the technology and the risk. It is time for Government to develop a more holistic understanding of the distributed responsibility for personal Internet security. This may well require reduced adherence to the "end-to-end principle", in such a way as to reflect the reality of the mass market in Internet services.

Network-level security

3.35.  The remainder of this chapter looks at areas in which practical improvements to personal security could be achieved through action at the level of the network or of the provision of Internet services.

3.36.  One such area is the security of routers and routing protocols. Routers are the main building block of the Internet—they determine where packets are to be forwarded. Criminals who gained control of major routers would be able to block traffic, or forward traffic via routes where unencrypted content could be compromised, or to spoofed websites where phishing attacks could be mounted. It is thus essential that routers are fully secure. Cisco, a major manufacturer of routers, told us that they had still not ensured that their routers shipped without fixed values for default passwords—problematic because many users failed ever to change this default. More positively, they told us that their bigger systems, such as those used at ISPs and on backbone networks, provided "two factor" authentication (see paragraph 5.17) as standard. However, although they recommended use of two factor authentication as "best practice" they were not able to compel ISPs to use it.
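
A fixed default password is dangerous precisely because it can be tested mechanically. The sketch below (in Python, with invented credentials) illustrates how trivially an attacker, or indeed an auditor, can check whether a device still accepts a factory-set login; a genuine check would attempt a login over the network rather than compare strings.

    # Illustrative only: the credential list is invented, and a real audit
    # would attempt an actual login rather than a string comparison.

    FACTORY_DEFAULTS = {("admin", "admin"), ("admin", "password"), ("root", "root")}

    def still_using_factory_default(username: str, password: str) -> bool:
        """True if the device's current credentials match a well-known default."""
        return (username, password) in FACTORY_DEFAULTS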

3.37.  Routers use the Border Gateway Protocol (BGP) to swap information about routes and about which ISP has been allocated particular blocks of IP addresses. However, BGP is somewhat insecure, and it is possible for a rogue ISP (or one that has been misled by a fraudulent customer) to "announce" someone else's addresses and thereby reroute traffic. There exist variants of the BGP protocol which permit the cryptographic signing of announcements, but they are not generally used. Cryptography can also be used to ensure that the friendly, human-readable names typed into web browsers are correctly translated into computer addresses, that email is not passed to a machine impersonating a real server, and that email travels over the network in encrypted tunnels. However, none of these systems is widely deployed, despite the potential for email to be intercepted, or websites to be spoofed.
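
The signed-announcement variants work, in essence, by allowing a router to check a route announcement against a register of which network legitimately holds which addresses. The sketch below (in Python) is a much-simplified illustration: the register contents and ISP names are invented, the address blocks are reserved documentation ranges, and real proposals rely on cryptographic certificates rather than a plain lookup.

    # Much-simplified sketch of validating a BGP-style route announcement
    # against a register of address allocations. Holders and announcements
    # are invented; real schemes sign announcements cryptographically.

    LEGITIMATE_HOLDER = {
        "192.0.2.0/24": "ISP-A",
        "198.51.100.0/24": "ISP-B",
    }

    def announcement_is_valid(announcing_isp: str, address_block: str) -> bool:
        """Accept a route only if the announcing ISP is the registered holder."""
        return LEGITIMATE_HOLDER.get(address_block) == announcing_isp

    # A rogue ISP "announcing" someone else's addresses would be rejected:
    assert announcement_is_valid("ISP-A", "192.0.2.0/24")
    assert not announcement_is_valid("ISP-C", "192.0.2.0/24")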

3.38.  Professor Handley argued that these network issues were a matter primarily for the technical community, not the end-user: "I think that these mechanisms or similar ones will eventually find their way out there because the requirement really is there, but they are probably not the largest part of the problem, at least from the point of view of the end user. From the point of view of those, there is a worry about keeping the network itself functioning" (Q 664). He believed that "the industry is moving in the right direction to address them."

3.39.  However, Malcolm Hutty, of LINX, described these systems as "immature" and "experimental", before adding, "I hope you did not understand my answer when I was saying it is 'experimental' to mean it is not something that is important or coming or going to happen; I was not being dismissive of it" (Q 759). James Blessing of ISPA suggested that they were not being used because of a lack of "stable vendor support", which we understand to mean that the manufacturers of routers and other network equipment are not yet providing systems suitable for use by ISPs. He also pointed out the need for co-ordination between networks: "If one side says 'I am going to use this' and the other side will not support it, those two networks will not talk to one another" (Q 757).

3.40.  Malcolm Hutty also argued that ISPs had every incentive to invest in more secure systems: "What more incentive could you offer an ISP to protect themselves against an attack on their core infrastructure than the fact that if it is attacked and it fails then they have lost what they are providing?" (Q 758). Nevertheless, we remain concerned that the systems that individuals rely upon to have their traffic correctly routed, to browse the correct websites, and to keep their email secure, are reliable only because no-one is currently attacking them. This seems to us to be an area where Ofcom should be looking to develop best practice, if not regulatory standards.

INTERNET SERVICE PROVISION

3.41.  There appears to be still greater scope for intervention at the level of the Internet Service Provider (ISP). ISPs do not typically operate the network; instead they sell access to the network to their customers, often bundled together with a range of other services, such as web-based email, telephone (conventional or VoIP), cable television and so on. They sit, in other words, near the edges of the network, providing a link between the end-user and the network.

3.42.  While the broadband infrastructure is largely in place, the market for Internet services continues to grow and is highly competitive. Internet services in the United Kingdom are marketed largely on price: indeed, since 2006 the advent of "free" broadband (although in reality, as David Hendon of DTI told us, all the ISPs have done is "re-partition the costs in a certain way") has given such competition a new intensity (Q 70).

3.43.  Regulation of Internet services is the responsibility of Ofcom. However, the evidence we received from Ofcom (evidence which was only provided late in the inquiry, as a result of a direct approach by the Committee), suggests that there is very little regulation in practice. This is not entirely the fault of Ofcom—we have already noted that content is specifically excluded from Ofcom's remit by virtue of the precise definitions of what they regulate in section 32 of the Communications Act 2003. However, questions remain over Ofcom's interpretation of its residual remit.

3.44.  Ofcom appears to have taken the broadest possible view of what constitutes "content" under the Act, to embrace security products as well as text or images. In the words of their written evidence: "Although security products are valuable tools for consumers they are not a part of the regulated Internet access service—any more than are the PCs which are typically used as the access device. Antivirus software, firewalls etc. largely run on customer equipment and are in practice outside the control of the Internet service provider" (p 320). Elsewhere the memorandum echoes the Government's position that "ultimately the choice of the level of security to apply to one's data is a choice for the end user which is why some consumers choose to apply their own security at the application layer rather than relying on the network to maintain security and integrity" (p 325).

3.45.  We find Ofcom's argument entirely unconvincing. It simply describes the status quo—security products are at present largely run on customer equipment, and are thus outside the control of the ISPs. But this falls well short of a convincing rationale for Ofcom's conclusion that security products "are not a part of the regulated Internet access service." Why are they not a part of the regulated service? Would it not be in the interests of consumers that they should be made a part of the regulated service? Ofcom failed to provide answers to these questions.

3.46.  Ofcom went still further in resisting any suggestion that its responsibility for enforcing security standards should be extended. The Society for Computers and Law (SCL) expressed concern over the enforcement of Regulation 5 of the Privacy and Electronic Communications Regulations 2003. This requires that ISPs should take "appropriate technical and organisational measures to safeguard the security" of their services. But the SCL pointed out not only that the Regulations and the parent Directive offered "no guidance or standards" on what technical measures might be appropriate, but that enforcement was the responsibility not of Ofcom but of the Information Commissioner's Office (ICO), which lacked both resources and powers to act effectively. The SCL recommended that enforcement "should be a matter for Ofcom" (p 128).

3.47.  This proposal was firmly rejected in a letter from Ofcom, which stated that "Ofcom does not have a remit in the wider area of personal Internet security or indeed the necessary expertise." Ofcom insisted that the ICO was best placed to enforce the Regulations, and drew our attention to a forthcoming "letter of understanding" which would set out how the two regulators would collaborate in future (p 312).

3.48.  Ofcom's interpretation of what constitutes a "regulated Internet access service" was, perhaps unsurprisingly, echoed by the ISPs themselves. Asked whether ISPs should not be obliged to offer virus scanning as part of their service, John Souter, Chief Executive Officer of LINX, replied with a question of his own: "What would be the authoritative source that you would mandate as the thing to check against?" (Q 733) This is a legitimate question, and would be very pertinent if ISPs were given a statutory duty to provide a virus scanning service. In reality, however, companies developing and selling security software have to answer it every day, so it is not immediately apparent why ISPs should not draw on that well-established expertise and provide users with a scanning service appropriate to their circumstances. Indeed, ISPs in the United States are obliged to offer a basic level of security as part of their service to customers.

3.49.  In this country, on the other hand, it is left entirely to end-users, confronted as they are by bewildering and often conflicting sources of information, to take these crucial decisions. As we have noted, Ofcom treats security as an add-on, not an integral part of Internet services. As for long-term improvements in the level of security, it is assumed that the market will provide. In the words of James Blessing: "If it is a problem I would suggest that maybe it is time to change your ISP. That is simple advice but from our members' point of view they are out there to provide you with a service as a customer that you would want. If you say I want anti-virus, I want anti-spam on my account and they do not provide it, then they are not the ISP that you require" (Q 738).

3.50.  Mr Blessing's argument is plausible as far as it goes. However, it overlooks the fact that the individual choices that customers make regarding Internet services affect not just themselves but society as a whole. The Society for Computers and Law, after acknowledging the force of the free-market argument, provided a convincing rebuttal: "users with unprotected PCs who choose to obtain access via an ISP that has no controls or security measures are more likely to be attacked by botnet herders, who can then expand their botnet to the detriment of all other (protected/secure) users of the Internet and of the public, if such botnets are used for criminal purposes" (p 126).

3.51.  At the opposite end of the spectrum from the ISPs, Bruce Schneier argued forcefully that ISPs should take more responsibility for security. We have already quoted his belief that the major players in the online world should take more responsibility for assisting the "average user". As far as the ISPs were concerned, his arguments were based not on abstract principle, but on practicalities:

"I think that the ISPs for home users very much should be responsible. Not that it is their fault, but that they are in an excellent position to mitigate some of the risk. There is no reason why they should not offer my mother anti-spam, anti-virus, clean-pipe, automatic update. All the things I get from my helpdesk and my IT department … they should offer to my mother. I do not think they will unless the US Government says, 'You have to'" (Q 529).

3.52.  This prompts a key question: is it more efficient for basic security services such as spam or virus filtering to be offered at the ISP level or at the level of the individual end-user? It is worth noting that although, according to a 2006 survey conducted by Symantec, some 90 percent of end-user machines in the United Kingdom have anti-virus software installed, this figure includes a significant number of users who never update their software, which is therefore rendered useless. John W Thompson, CEO of Symantec, told us in the course of a private discussion that he thought some 20-25 percent of computers worldwide were at risk because their users were indifferent to security. Whatever the attractions of placing responsibility upon end users, the fact is that a huge number of them are not currently exercising this responsibility. That responsibility could possibly be more efficiently exercised, and with economies of scale, by ISPs.
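
The point about updating can be made concrete: anti-virus software that is never updated recognises only the threats known when it was installed, so a simple measure of its usefulness is the age of its signature database. The sketch below (in Python) illustrates such a check; the 30-day threshold is an invented figure, not an industry standard.

    # Sketch of why "installed but never updated" anti-virus is of little value.
    # The 30-day threshold is invented for illustration.

    from datetime import date, timedelta

    def protection_is_effective(signatures_last_updated: date, today: date,
                                max_age_days: int = 30) -> bool:
        """Signatures older than the threshold cannot recognise recent malware."""
        return (today - signatures_last_updated) <= timedelta(days=max_age_days)

    # A machine whose signatures are a year old is effectively unprotected:
    print(protection_is_effective(date(2006, 1, 1), date(2007, 1, 1)))  # False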

3.53.  A second question is whether imposing upon ISPs a responsibility to provide a basic level of security to customers would lead to the dire consequences predicted by the ISPs, in particular the stifling of innovation across the sector as a whole. We see no reason why it should, as long as a "light touch" is maintained, rather than a blanket imposition of legal liability for every security breach, however caused.

3.54.  We have already drawn attention to developments in the field of content regulation—not only the insistence that ISPs block websites containing child abuse images, listed on the IWF database, but also the development of a BSI kite mark for content control software. Given that, as we have also noted, the distinction between "content" and other forms of Internet traffic is blurred, we see a strong case for introducing similar initiatives to cover personal security. Existing anti-virus and firewall technology is capable of blocking all traffic containing samples of known malicious code (using databases which companies like Symantec update daily). Such technology is not fool-proof, but it has proved its value over many years, without stifling innovation, and we can see no reason why it should not be routinely applied at ISP level.

3.55.  Indeed, deployment of security software at ISP level could have one crucial benefit. Firewalls and spam filters generally work in one direction only: they are designed to prevent bad traffic reaching the end-user, but they do not always filter outgoing traffic. In particular, once the end-user machine has been infected, and is either propagating malware, or is being used as part of a botnet to send out spam, the firewall and anti-virus software will be turned off by the malware, and updating will be disabled. Moreover, the end-user himself will in all probability not be aware that his machine has a problem, and even if he is made aware of the problem (for instance, that his machine is part of a botnet), he has no incentive to fix it—he himself suffers no significant harm if his machine is sending out spam. The recipients of the spam, and the network as a whole, if the botnet is used to launch DDoS attacks, are the ones to suffer harm.

3.56.  ISPs, on the other hand, are well placed to monitor and, if necessary, filter outgoing traffic from customers. If unusual amounts of email traffic are observed, this could indicate that a customer's machine is being controlled as part of a botnet and is sending out spam. At the moment, although ISPs could easily disconnect infected machines from their networks, there is no incentive for them to do so. Indeed, there is a disincentive, since customers, once disconnected, are likely to call help-lines and take up the time of call-centre staff, imposing additional costs on the ISP.
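
The kind of monitoring envisaged here need not be elaborate: a sudden, sustained rise in outgoing email connections from a residential customer is a strong indication of a spam-sending infection. The sketch below (in Python) illustrates the principle; the threshold figures are invented, and a real ISP would tune them against normal customer behaviour and combine them with other signals.

    # Sketch of the outgoing-traffic check described above. The multiplier and
    # example figures are invented for illustration.

    def likely_spam_source(outgoing_mail_connections_per_hour: int,
                           typical_for_customer: int,
                           multiplier: int = 50) -> bool:
        """Flag a customer whose outgoing mail traffic vastly exceeds their norm."""
        return outgoing_mail_connections_per_hour > typical_for_customer * multiplier

    # A home customer who normally makes a handful of mail connections an hour
    # but is suddenly making thousands is very probably part of a botnet:
    print(likely_spam_source(5000, typical_for_customer=10))  # True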

3.57.  This is not to say that some ISPs do not already act in this way. Matthew Henton, of the ISP Brightview, confirmed that his company will "disconnect [an infected user's] machine from the network, we will contact that user and normally they would be entirely unaware … and we will work with them to disinfect their machine and ensure that they are adequately protected against future infection" (Q 744). We applaud this approach—but are conscious that it is not universal. Doug Cavit, at Microsoft, told us that while most (though not all) ISPs isolated infected machines, they generally found it too expensive actually to contact customers to fix the problem. Nor is this service well advertised—indeed, any ISP which advertised a policy of disconnecting infected machines would risk losing rather than gaining customers.

3.58.  There is thus at present a failure in incentives, both for end-users and ISPs, to tackle these problems. We do not therefore see any prospect of the market delivering improved security across the board. At the same time, we see no reason why the sort of good practice described by Mr Henton should not, by means of regulation if necessary, be made the industry norm.

3.59.  We do not advocate immediate legislation or heavy-handed intervention by the regulator. Nor do we believe that the time has yet come to abandon the end-to-end principle once and for all. But the market will need to be pushed a little if it is to deliver better security. The example of the Ofcom-sponsored kite mark for content control software indicates one possible way forward; a similar scheme for ISPs offering security services would give consumers greater clarity on the standards on offer from suppliers, and would help achieve greater uniformity across the market-place, particularly if backed up by the promise of tougher regulatory requirements in the longer term.

3.60.  The Government did in fact indicate that they were discussing options for improving security with the Internet services industry. As Geoff Smith, of the DTI, told us: "We are also in discussion with the ISP community about a new initiative. I am not sure one would describe it as self-regulation, but certainly to develop a better understanding of what ISPs can offer as, if you like, a minimum service or what we would see as a code of practice around the security they are offering to their consumers" (Q 70).

3.61.  We welcome the fact that the Government have at least started to think about these issues. However, the discussions described by Mr Smith appear wholly open-ended; the fact that he was not even prepared to describe what was envisaged as "self-regulation", let alone "regulation", inspires little confidence. In short, the Government's actions so far have been toothless.

THE "MERE CONDUIT" DEFENCE

3.62.  A specific legal consequence of the approach we are recommending would be the erosion of the "mere conduit" principle, embodied in the E-Commerce Regulations of 2002[14]. This principle provides a defence for network operators against legal liability for the consequences of traffic delivered via their networks. The principle can be caricatured, in Professor Zittrain's words, as the ability of the ISP to say, "I'm just the conduit. I'm just delivering the ticking package. You can't blame me." We would not wish to see the mere conduit defence, any more than the end-to-end principle, abandoned. However, we agree with Professor Zittrain that it is now appropriate to "take a nibble out of the blanket immunity". In particular, once an ISP has detected or been notified that an end-user machine on its network is sending out spam or infected code, we believe that the ISP should be legally liable for any damage to third parties resulting from a failure immediately to isolate the affected machine (QQ 961-963).

3.63.  This carries a risk. It could create a disincentive for ISPs proactively to monitor the traffic emanating from their customers—they might conclude that it was in their interests to remain ignorant of compromised machines on their network until notified by others. This would be counter-productive, and could compound existing legal constraints to do with data protection and interception of communications, which already affect security research. To guard against such an outcome, not only should ISPs be encouraged proactively to monitor outgoing traffic, but in so doing they should enjoy temporary immunity from legal liability for damage to third parties.

VOICE OVER INTERNET PROTOCOL

3.64.  We raise here one further issue that emerged in our inquiry, which relates to the robustness of the network—although it is largely distinct from the other issues discussed in this chapter. This is the regulatory framework for Voice over Internet Protocol (VoIP) suppliers, and in particular their ability to offer an emergency "999" service. When we spoke to Kim Thesiger, of the Internet Telephony Service Providers' Association (ITSPA), he said: "I do not know of a single ITSPA member who does not want to offer 999 services and would like to do so as soon as possible, but there are some significant regulatory and bureaucratic problems" (Q 782). In particular, VoIP companies have to satisfy the requirements imposed upon Publicly Available Telephone Service (PATS) providers.

3.65.  Kim Thesiger expressed particular concern over the "network integrity clause" of the PATS requirements. In a "copper-based" world it was clear what "network integrity" meant. In the world of the Internet—in which, as we have noted, packets of data travel across a network of copper, fibre-optic cable, wireless signals, and so on—it is far less clear either what constitutes "network integrity" or what control the VoIP provider can have over it. He said that the message from Ofcom was that "you must decide yourselves whether you have network integrity or not"—which, if the wrong decision was made, could expose providers to unacceptable risks in the event of network failure.

3.66.  VoIP is a relatively new technology, and Ofcom's position on emergency services is still evolving. In written evidence, Ofcom drew attention to a new Code of Practice for VoIP providers, which would require them to make clear to potential customers "whether or not the service includes access to emergency services", and the level of dependence on externalities such as power supply. However, this does not address the issue of network integrity, or Kim Thesiger's point that Ofcom believed that "in order to offer 999 calls you must be PATS-compliant". In fact Ben Willis, Head of Technology Intelligence at Ofcom, told us that the regulator had recently, in effect, toughened the rules, bringing to an end a policy of forbearance on emergency services, which had been based on the principle that "it was better to have some 999 access than none at all" (Q 1030). Instead Ofcom was initiating a new round of consultation, due to be completed in summer 2007—but with no apparent commitment to clarify the position.

Recommendations

3.67.  The current assumption that end-users should be responsible for security is inefficient and unrealistic. We therefore urge the Government and Ofcom to engage with the network operators and Internet Service Providers to develop higher and more uniform standards of security within the industry. In particular we recommend the development of a BSI-approved kite mark for secure Internet services. We further recommend that this voluntary approach should be reinforced by an undertaking that in the longer term an obligation will be placed upon ISPs to provide a good standard of security as part of their regulated service.

3.68.  We recommend that ISPs should be encouraged as part of the kite mark scheme to monitor and detect "bad" outgoing traffic from their customers.

3.69.  We recommend that the "mere conduit" immunity should be removed once ISPs have detected or been notified of the fact that machines on their network are sending out spam or infected code. This would give third parties harmed by infected machines the opportunity to recover damages from the ISP responsible. However, in order not to discourage ISPs from monitoring outgoing traffic proactively, they should enjoy a time-limited immunity when they have themselves detected the problem.

3.70.  The uncertainty over the regulatory framework for VoIP providers, particularly with regard to emergency services, is impeding this emerging industry. We see no benefit in obliging VoIP providers to comply with a regulatory framework shaped with copper-based telephony in mind. We recommend instead that VoIP providers be encouraged to provide a 999 service on a "best efforts" basis reflecting the reality of Internet traffic, provided that they also make clear to customers the limitations of their service and the possibility that it may not always work when it is needed.


11   Jonathan L Zittrain, "The Generative Internet", Harvard Law Review, 119 (2006), p 2029.

12   For instance SONAR (Symantec Online Network for Advanced Response).

13   See Appendix 5.

14   See Regulation 17 of the Electronic Commerce (EC Directive) Regulations 2002.


 