Select Committee on Science and Technology Fifth Report

CHAPTER 4: Appliances and applications

4.1.  Having a well designed and maintained road network is one thing; but if the vehicles driving on the roads are badly designed, there will be no benefit to safety. So in this chapter we turn from the Internet itself, the network and the companies who provide Internet services to end-users, to the appliances and applications, the PCs and programs, that run on that network.

Usability vs security

4.2.  Despite the innovation and creativity that have characterised the development of the Internet, there is a remarkable uniformity in the products that most individuals buy and use. The introduction of IBM's "personal computer" (or PC) in 1981 led to a standardisation of processors, components and overall system design that has continued through numerous generations to the present day. In recent years, "laptops" or "notebook computers" have become popular as alternatives to "desktop" machines, but their fundamentals are essentially identical. While there is intense competition for market share between companies such as HP, Dell and a host of others, the technology they are selling is highly uniform. PCs are the white goods of the IT world. The only really successful rival to the PC has been the Apple Macintosh, introduced in 1984, and its successors, but they have never been dominant, and their current market share is between 10 and 15 percent.

4.3.  The operating systems running on these computers are equally uniform. Microsoft's Windows operating system is almost invariably pre-loaded on PCs and laptops; Microsoft controls up to 90 percent of the operating system market. Other vendors have smaller shares. The Apple operating system has since 2000 been based on Unix; Apple's own applications run on this platform. Linux, an open-source Unix derivative, has a much smaller share of the operating system market, made up largely of more expert users.

4.4.  The greatest diversity is in the applications that run on the operating systems. Here Microsoft can claim some credit: the company has generally sought to maximise the interoperability of its operating systems, and this has without doubt contributed to diversity and innovation in the development of applications. However, Microsoft has not always adopted this approach—indeed, the company's decision in 1996 to bundle Internet Explorer free of charge along with its Windows operating system destroyed the market dominance of its major rival at that time, Netscape. Moreover, many users of Microsoft operating systems do not look beyond Microsoft applications such as the Microsoft Office suite.

4.5.  How has this uniformity come about, and what bearing does it have on personal security? The first point to be made, which was argued forcefully by Ross Anderson in his presentation to our introductory seminar, is that the economics of the fast-moving IT market in the 1980s and 90s, which enabled Microsoft to establish its extraordinary dominance, placed a high premium on speed and flexibility. New products had to be rushed out quickly, and in an era when the problems now associated with the Internet were almost unknown, ease of use and adaptability generally trumped security. A similar point was made to us by Laura K Ipsen and John Stewart at Cisco, who argued that Microsoft had begun by focusing on usability, later on reliability, and only now on security.

4.6.  In today's market what Professor Anderson termed "network externalities" continue to play a key part. For instance, the functionality of, say, Internet Explorer, cannot be decided by Microsoft alone. Professor Anderson noted that web browsers can be set to permit JavaScript to run. JavaScript increases functionality, making it simpler to construct intricate e-commerce websites where users can purchase complex products such as airline tickets; but it also creates vulnerabilities, for example allowing users to be redirected from legitimate bank websites to phishing sites. He concluded that the Internet was riddled with—

"Sub-optimal ways of working … because of hundreds of thousands of little design decisions taken by third parties. It is these externalities which cause most of the stickiness which stops us improving things directly. If Bill Gates were to ship Windows from next week with JavaScript turned off by default there would be a huge outcry from people who could not book flights … It is this kind of inertia that we are up against" (Q 686).

4.7.  There is thus, as Adam Laurie told us, "always a trade-off between usability and security" (Q 311). Or as Alan Cox put it, "the really secure systems have always been produced for things like military use where usability is not a factor" (Q 323). In marked contrast, as Jerry Fishenden of Microsoft told us, Windows "is part of a complex eco-system … the end user … can add on many thousands of different third party hardware devices and many thousands of different applications that people make available" (Q 269). The JavaScript example demonstrates how the existence of such third party applications can harm security.

4.8.  The temptation therefore, particularly for Microsoft, given its dominant position in the market, is to improve the security of its product by locking out third party applications. This would reduce the likelihood that these applications, whose security Microsoft cannot vouch for, could have a damaging impact upon customer security. Customers would then purchase Microsoft products, which would be permitted to run, instead. In essence, as the evidence from Professor Anderson's Foundation for Information Policy and Research (FIPR) said, companies that have established a dominant position "may then add excessive security in an attempt to lock in their customers more tightly" (p 211).

4.9.  There have already been some signs that the major companies are seeking to "lock in" customers through security features. The recent high-profile dispute between Microsoft and the European Commission centred on security features proposed for the Vista operating system, which the Commission contended would be anti-competitive. Microsoft's appeal against some of the changes imposed by the Commission is still to be decided by the Court of First Instance, and we are not in a position to comment on the merits of the dispute. Matt Lambert, of Microsoft, insisted that the company had "always worked with other companies, including competitors, to try to make our systems as inter-operable as possible." However, as the example of Netscape (itself subject to anti-trust litigation, though not until it was too late to salvage Netscape's position in the market) demonstrates, the Windows operating system can be a powerful tool to extend Microsoft's dominance into new sectors of the market.

4.10.  In contrast, Bud Tribble told us that Apple went out of its way not to ask users security questions to which they would not know the answers. Whereas Microsoft might seek to maximise flexibility at the expense of possible insecurity, Apple would sometimes make decisions on behalf of users even if that made it more difficult to download and run third party applications. At the time we talked to Mr Tribble, Apple had decided that it would go even further with the new iPhone and make it a "closed platform", so that it would not be possible to execute any non-Apple applications. However, at the time of writing this decision was being revisited. It is argued that Apple's approach makes its machines more secure, though the precise cause and effect behind the relatively low rate of security breaches on Apple machines is unclear. There may be many other factors at play as well, not least the fact that the company's limited market share makes it a less attractive target.

4.11.  We believe that it would be enormously damaging if the major software vendors were to seek to "lock in" customers and prevent the use of third party applications. The interoperability of operating systems is a key driver for innovation. Without interoperability the constant stream of new applications, many developed by the open source community, would dry up, and the Internet would ossify.

4.12.  Moreover, as we have already noted, and as Alan Cox reminded us, software developers "genuinely do not know how to build a perfectly secure, useable operating system" (Q 311). Mr Cox regarded it as a research problem which would one day be solved, but until that day comes a balance will have to be struck, and end-users will inevitably have to manage a degree of insecurity. At the same time, they have the right to expect that software vendors will make every effort possible to keep this insecurity to a minimum.

Maintaining security—patching and security software

4.13.  Security, as Bud Tribble told us at Apple, begins with good design. At the early stages of design, decisions will have to be made on new features, and usability, reliability and security will have to be balanced and reconciled. In all major software companies, security is now a top priority, and the latest versions of the major operating systems, Windows Vista and Apple's Mac OS X Leopard, are generally accepted as being by far the most secure yet.

4.14.  But software is not, like a car, a complete product that is finished the moment it leaves the production line. New features are rolled out all the time, flaws identified and fixes (or "patches") produced and distributed. Moreover, the criminals operating online—the "bad guys"—are well funded, typically by organised crime groups in eastern Europe, and can call on the services of expert programmers (often as a result of blackmail or coercion). They are as skilled in disassembling and analysing code as Apple or Microsoft are in developing it. The phrase we heard over and over again in our inquiry was that it was an "arms race"—never static or stable, but involving a constant testing out of the opposition, a constant raising of the stakes.

4.15.  The result is that new security threats emerge at a startling rate. Symantec, for example, documented 2,526 new vulnerabilities in the second half of 2006, higher than for any previous six-month period (for comparison, the figure for the first half of 2005 was just 1,237). Furthermore, the vulnerabilities are being used by the "bad guys" far more quickly. The company's evidence notes that in late 2005 "the average vulnerability-to-exploit window was just 5.8 days" (p 149). So keeping security software up to date is crucial to maintaining good online security. If it is out of date it is not just useless, but arguably dangerous, because it gives the user an unjustified sense of security.

4.16.  In addition, operating systems and appliances must be fully patched—in other words, the security updates issued by vendors, with a view to fixing vulnerabilities, need to be regularly installed. The responsibility for installing such updates is shared between vendors and end-users. The key question for this inquiry is whether the vendors are doing enough to help end-users. In the case of Microsoft, for example, security updates are typically issued on "patch Tuesday", the second Tuesday of each month. It used to be the responsibility of users to download patches from the Microsoft website; if they failed to do so, the "bad guys" could quickly disassemble and analyse the patches, and design malware to exploit the vulnerabilities thus identified. This gave rise to the corresponding phrase "exploit Wednesday".
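The "patch Tuesday" schedule described above is entirely mechanical, which is precisely what allows the "bad guys" to anticipate it. A minimal sketch (the function name is illustrative) showing how the second Tuesday of any month can be computed:

```python
import datetime

def second_tuesday(year: int, month: int) -> datetime.date:
    """Return the date of the second Tuesday of the given month."""
    # weekday(): Monday = 0 ... Sunday = 6, so Tuesday = 1.
    first_weekday = datetime.date(year, month, 1).weekday()
    # Days from the 1st to the first Tuesday, then one further week.
    offset = (1 - first_weekday) % 7
    return datetime.date(year, month, 1 + offset + 7)

# During the period of this inquiry, the June 2007 "patch Tuesday"
# fell on 12 June.
print(second_tuesday(2007, 6))  # -> 2007-06-12
```

Because the release date is predictable, so too is "exploit Wednesday": attackers know exactly when fresh patches will be available to disassemble.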

4.17.  However, all the major vendors, including Microsoft, now give end-users the option to configure their system to download security updates automatically. This was described by Microsoft as their "recommended option"—though the company also provides other options, ranging from notification that patches are available to switching off automatic updates entirely (see Q 289).

4.18.  This prompts a number of questions. The first is whether a "recommended option" is sufficiently robust to protect consumers. The Society for Computers and Law were clear that computers should be "supplied with the default security settings … 'turned on', with suitable guidance and warning to end-users on the risks associated with reducing the security settings" (p 126). Microsoft has itself slowly moved towards a default "on" setting for security, and, as Adam Laurie noted, "are now shipping secure by default settings" (Q 311). The open source community has moved in the same direction.

4.19.  The provision of secure settings by default raises a further question, which is whether end-users adequately understand either the limitations that a high level of security places on functionality, or the implications of lowering that level from, say, "high" to "medium". As Adam Laurie continued, vendors "have to provide the tools, advice, timely updates and advisories when there is a problem in order for the user to make their own choice" (Q 311).

4.20.  More generally, security prompts are notoriously obscure, and seem to be widely ignored by users—arguably justifying Apple's approach of eliminating prompts wherever possible. Doug Cavit assured us that Microsoft was making every effort to ensure that prompts and messages were transparent, but it was clear that Microsoft's belief was that some users would sometimes find it necessary to choose potentially risky behaviour, and therefore Windows would continue to use prompts and allow end-users to make the final decision on security. The use of simple, jargon-free language is absolutely critical if Microsoft's approach is not to undermine security.

4.21.  A further concern is over the state in which PCs and operating systems are actually supplied to customers. It is one thing to expect users to update operating systems and security software, but it is another matter if these systems are not up-to-date at the time of purchase. We have not received clear evidence that out-of-date software is a major problem, but can readily see that, as proposed by the FIPR, a statement accompanying the PC, stating the date up to which the software was fully patched—in effect, a "best before" date—would be of use to purchasers (p 210). At the very least, we see no reason why operating systems should not be programmed to provide such information when run for the very first time, and why they should not automatically update themselves so as to fix any security problems when they are first connected to the Internet.

Emerging threats and solutions

4.22.  We have already given a short overview of the kinds of threats facing Internet users. Attacks continue to increase in sophistication. MessageLabs, for instance, reported the emergence of "targeted Trojans", unique examples of malware targeted at particular organisations or individuals. Trojans typically masquerade as innocent programs or files, and rely on social engineering to persuade the recipient to run the file, so installing the malware "payload". This payload might be, for instance, a keylogger, which allows the author of the Trojan to capture passwords and other data. The targeted Trojan, by definition a new and unique piece of software, is particularly difficult for security software, relying as it does largely on databases of known malware, to detect.

4.23.  The number of such bespoke Trojans intercepted by MessageLabs has risen from about two per week in January 2006 to one a day by January 2007. This is still a very small number, but Mark Sunner of MessageLabs noted that towards the end of 2006 "toolkits" to make such Trojans appeared online, so that criminals could "buy this capability from certain nefarious Russian websites" (Q 461). Such developments demonstrate that in the ongoing Internet arms race the "bad guys" will continue to search for and find ways to outwit the security professionals.

4.24.  However, the "arms race" works both ways. New security technologies are likely to emerge in the coming years. Bud Tribble, for example, told us that Apple was conducting research into the possibility of including within the operating system a "sand-box"—a secure area in which untested programs can be executed. The Java programming language has used a sand-box to restrict individual programs for many years, but it is likely to be two or three years before a more general form of sand-box appears in mass-market operating systems designed for personal use.
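The sand-box idea Mr Tribble described can be illustrated in miniature. The sketch below is not Apple's design, merely an assumption-laden illustration of the general principle: untrusted code is run in a separate, isolated process whose CPU time and memory are tightly capped, so that a misbehaving program cannot harm the rest of the system (POSIX-only; the `run_sandboxed` helper is hypothetical).

```python
import resource
import subprocess
import sys

def run_sandboxed(code: str, timeout: float = 5.0) -> str:
    """Run untrusted Python code in a separate process under
    tight resource limits -- a crude sketch of a sand-box."""
    def limit_resources():
        # Cap CPU time at 2 seconds and address space at 512 MB,
        # applied in the child process before the code runs.
        resource.setrlimit(resource.RLIMIT_CPU, (2, 2))
        resource.setrlimit(resource.RLIMIT_AS, (512 * 2**20, 512 * 2**20))

    result = subprocess.run(
        [sys.executable, "-I", "-c", code],  # -I: isolated mode
        capture_output=True, text=True,
        timeout=timeout, preexec_fn=limit_resources,
    )
    return result.stdout

print(run_sandboxed("print(2 + 2)"))  # prints "4"
```

A production sand-box would also restrict file-system and network access; resource limits alone are only one layer of the confinement.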

Vendor liability

4.25.  The preceding discussion leads onto one of the key issues raised in this inquiry—liability. At present, even if software is shipped with major flaws which give rise to security vulnerabilities, end-users who suffer loss as a result have no legal recourse against the vendors—end-user license agreements generally exclude any legal liability. As Professor Anderson put it, the Internet way of doing business is that "liability gets dumped as much as possible on the end user" (Q 646). This absence of liability means that there is little incentive, particularly given the high degree of uniformity across the marketplace, for vendors[15] to raise security standards. A key question therefore is whether a liability regime would create an incentive for vendors to raise standards.

4.26.  Liability is a hugely controversial issue within the IT industry. The witness to speak most forcefully in favour of a vendor liability regime was Bruce Schneier. He argued that "We are paying, as individuals, as corporations, for bad security of products"—by which payment he meant not only the cost of losing data, but the costs of additional security products such as firewalls, anti-virus software and so on, which have to be purchased because of the likely insecurity of the original product. For the vendors, he said, software insecurity was an "externality … the cost is borne by us users." Only if liability were to be placed upon vendors would they have "a bigger impetus to fix their products" (Q 537). Thus Mr Schneier had no doubt that liability was the key to creating incentives for vendors to make more secure software.

4.27.  Most other witnesses, however, were opposed to the introduction of any form of liability regime. Jerry Fishenden, of Microsoft, insisted that his colleagues were "making our platform as secure as we possibly can within the complex nature of software". He drew an analogy with the physical world: "People do not tend to immediately look for liability towards lock or window companies because houses are still being burgled. The tendency is to want to blame the perpetrator" (Q 273).

4.28.  Alan Cox, a developer of open source software, focused on the possibility that a liability regime would stifle interoperability and innovation: "you buy a PC, you add a word processor, you add a media player, and you add a couple of games. All these can interact in strange and wondrous ways and as you add more software the combination increases. The rational thing for a software vendor to do faced with liability would be to forbid the installation of any third party software on the system" (Q 313). Bruce Schneier, on the other hand, argued "that the companies protest a little bit too much … in fact innovation is so profitable and so valuable that you will see it" (Q 530).

4.29.  Legal barriers were also raised. Nicholas Bohm argued that those who suffered harm as a result of flaws in software often had no contractual relationship with the vendor that would entitle them to claim damages: "the risks and losses are diffused by the Internet and it is not an environment in which beefing up direct liability is an easy thing to do". At the same time, he agreed that there was currently an "incentives problem", in that "the suppliers and the creators by and large do not suffer the adverse consequences to the same extent as their customers" (Q 394).

4.30.  Mr Bohm's objection to a liability regime is certainly legitimate, though Bruce Schneier, while acknowledging the problem, argued that the courts would have to manage it, as they had done in other areas, where there were already "complicated case-histories of partial liability" (Q 540). Professor Anderson also concluded that "you are going to end up eventually with some hard cases for courts to decide where ascribing liability to this vendor or that vendor or to the user who misconfigured the machine will be a complicated question of fact" (Q 658). Analysing such questions of fact and reaching a judgment is what the courts do every day.

4.31.  At the same time, we accept that the pace of innovation and change in the industry means that a comprehensive liability regime may not yet be feasible. New ways to use the Internet—for instance, new applications of "Peer-to-Peer" and other types of file sharing—emerge at bewildering speed. Online fashions and behaviours change just as fast. Professor Zittrain's comment on liability was a qualified "not yet"—"I would at least like to buy us another five or ten years of the generative status quo and then see if it turns out that things have slowed down and we pretty well know the uses to which the network will be put" (Q 971). Alan Cox, while arguing against liability, did concede that there might be "an argument in the longer term that as technology improves and as we get better at writing secure software that the law does need to hold software companies to higher standards, at least in terms of negligence" (Q 313).

4.32.  In principle, technological constraints could slow the rate of innovation, creating a more stable and mature market for software, at any time. "Moore's Law", originally an empirical observation that computing power per unit cost of silicon chips doubled approximately every 24 months, has continued to hold good for over 40 years, and has supported an astonishingly innovative industry—but there is no guarantee that this rate of progress will be sustained in future. As this Committee noted in 2002, fundamental physical constraints will at some point limit the miniaturisation potential of conventional computer chips.[16]
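The compounding implied by the figures quoted above is easily checked: doubling every 24 months over 40 years is 20 doublings, a roughly million-fold improvement.

```python
# Moore's Law as stated above: computing power per unit cost doubles
# roughly every 24 months. Over 40 years that is 20 doublings.
doublings = 40 * 12 / 24     # 20 doubling periods
growth = 2 ** doublings
print(f"{growth:,.0f}")      # -> 1,048,576 (about a million-fold)
```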

4.33.  We are not however in a position to predict if and when the pace of change in the online world will slow. Nor can we answer a related question, namely when the industry will, in Alan Cox's words, "get better at writing secure software". But we have no doubt that at some point in the future the IT industry, like other industries, will mature: more consistent standards for software design will emerge; the rate of innovation will slow. At that point, if not before, clearer definitions of the responsibility of the industry to customers—including a comprehensive liability regime—will be needed.

4.34.  In the meantime, there are many areas in which vendor liability is already appropriate. One such is where vendors are demonstrably negligent in selling products which they know to be insecure, but which they advertise as secure. In Adam Laurie's words, "potentially there should be some issue of liability for companies shipping products that are known not to be secure and selling them as secure products" (Q 315). As an example, he mentioned WiFi systems, where security protocols were claimed to be secure long after they had in fact been broken.

4.35.  Professor Handley also argued very succinctly for imposing liability where negligence could be shown: "If your PC, for example, gets compromised at the moment there is no real liability for the software vendors or the person who sold them the PC or anything else. The question then is: did the person who sold you that software or the person who wrote that software or whatever actually do the best job industry knows how to do in writing that software? If they did then I really do not think they should be liable, but if they did not then I think some liability ought to be there" (Q 654). We agree.

4.36.  Any imposition of liability upon vendors would also have to take account of the diversity of the market for software, in particular of the importance of the open source community. As open source software is both supplied free to customers, and can be analysed and tested for flaws by the entire IT community, it is both difficult and, arguably, inappropriate, to establish contractual obligations or to identify a single "vendor". Bruce Schneier drew an analogy with "Good Samaritan" laws, which, in the United States and Canada, protect those attempting to help people who are sick or injured from possible litigation. On the other hand, he saw no reason why companies which took open source software, aggregated it and sold it along with support packages—he gave the example of Red Hat, which markets a version of the open source Linux operating system—should not be liable like other vendors (Q 541).

4.37.  Finally, we note that moves towards establishing vendor liability would be much more effective if they were made internationally rather than by the United Kingdom alone. There is a significant cross-border market in software products, so imposing liability onto United Kingdom companies, without making foreign companies accept similar responsibilities, would risk undermining competitiveness. In addition, regulatory intervention at United Kingdom level might risk creating distortions in the internal market, so falling foul of European Union law. We were therefore encouraged by the cautious welcome given to the prospects of vendor liability by Viviane Reding, Commissioner for Information Society and Media at the European Commission:

"We will follow the development of the industry-led initiatives in this area … If industry, if the market can sort out the problem we leave the market to do that, but we also say to the market or to the industry, 'We do not want this to happen for a very long period of time, so if you can sort it out, do it, and if after one or two years you have not managed to sort it out then we will have to come in with regulation,' because here we believe that self-regulation is the best way out, if it is possible. If not, then we have to go to a binding regulation which is potentially costly to the industry" (Q 947).

Conclusions and recommendations

4.38.  The IT industry has not historically made security a priority. This is gradually changing—but more radical and rapid change is needed if the industry is to keep pace with the ingenuity of criminals and avoid a disastrous loss of confidence in the Internet. The major companies, particularly the software vendors, must now make the development of more secure technologies their top design priority. We urge the industry, through self-regulation and codes of best practice, to demonstrate its commitment to this principle.

4.39.  In particular, we urge the industry to endorse the following as best practice:

4.40.  However, efforts to promote best practice are hampered by the current lack of commercial incentives for the industry to make products secure: companies are all too easily able to dump risks onto consumers through licensing agreements, so avoiding paying the costs of insecurity. This must change.

4.41.  We therefore recommend that the Government explore, at European level, the introduction of the principle of vendor liability within the IT industry. In the short term we recommend that such liability should be imposed on vendors (that is, software and hardware manufacturers), notwithstanding end user licensing agreements, in circumstances where negligence can be demonstrated. In the longer term, as the industry matures, a comprehensive framework of vendor liability and consumer protection should be introduced.

15   Readers are reminded that the word "vendor" is used in the sense universal within the IT industry, namely the manufacturers of software and other products, rather than the general English sense of retailer.

16   See Chips for Everything: Britain's Opportunities in a Key Global Market (2nd Report, Session 2002-03), paragraphs 4.18 ff.



© Parliamentary copyright 2007