Select Committee on Science and Technology Minutes of Evidence


Examination of Witnesses (Questions 309 - 319)

WEDNESDAY 10 JANUARY 2007

MR ALAN COX AND MR ADAM LAURIE

  Q309  Chairman: Welcome Mr Cox and Mr Laurie. Thank you very much for coming today and addressing our questions. Would you like to introduce yourselves first, please?

  Mr Cox: My name is Alan Cox and I am here to represent the open source community in general. That is perhaps a bit different to representing a company, but it is a broad church and I am trying to summarise its views rather than dictate them.

  Mr Laurie: My name is Adam Laurie and I am a director of a secure hosting data centre called The Bunker and I am also an independent security researcher and a participant in security conferences around the world.

  Q310  Chairman: Thank you. Would either of you like to make an opening statement or should we go into the questions?

  Mr Laurie: I would just like to say that when I look at Internet security I tend to look at it not just from the point of view of securing the Internet but also at how the Internet can be used as an attack vector against other areas of life. I take a very broad view. We had some examples of credit card details being stolen by skimming and then used on the Internet, and the reverse can happen as well, so I may drift off into other areas of communication and personal security.

  Q311  Chairman: Right, good, thank you for that. Let me open with the first question and that is: who should be responsible for keeping end user machines secure?

  Mr Cox: I would take a similar line to the Microsoft people. Certainly there is at least a moral duty on people providing software to do their best to produce software which is secure and, as far as possible, fit for purpose. The problem we have throughout all computer science is that we genuinely do not know how to build a perfectly secure, usable operating system. It is a research problem which one day will get solved, and somebody will get very rich out of it, but the current state of affairs is that we cannot do that. Much as you have to maintain a car, we rely, to an extent, on end users being able to apply updates, in the same way that your tyres wear out and you need new tyres. That is not a manufacturer responsibility; it is an operator responsibility, so you have to spread it across the two.

  Mr Laurie: I broadly agree with that. I think it is the duty of manufacturers to make it easy for you to make things secure. I am very pleased to see that Microsoft are now shipping secure-by-default settings so the firewall is switched on, which is something that the open source world has also worked its way towards. We did not start doing things that way round, but we found it is better to make the machine secure by default. However, you do have to provide the tools, advice, timely updates and advisories when there is a problem in order for the user to make their own choice as to whether or not they want to apply a particular patch or how secure they want to be. As has already been said, there is always a trade-off between usability and security. What I think is very important is that we do not try to make manufacturers or vendors responsible, because there are just too many factors for them to be aware of in an installed user base. You ship an operating system and users will then stick third party products on top of that which may affect the security of the system. There is no way a vendor can be aware of that, and you will tend to end up in a situation where there is finger-pointing going on: the operating system vendor will say it is the third party software that is at fault, the third party software vendor will say it is the operating system that is at fault, and there will be grey areas where unexpected interactions cause insecurity even though both products on their own are perfectly secure. It would not be fair to put that burden on the vendor. In the open source world there is also a particular issue with liability. How can you make an open source vendor liable for a product that he has given away for free? There is no contract and there has been no consideration for delivering the software, so there is no way to enforce liability on an open source product.
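
  [Editorial note: a minimal sketch of the secure-by-default principle Mr Laurie describes, in illustrative Python. The function and its parameters are hypothetical, not drawn from the testimony; the point is that doing nothing gets the safe behaviour, while the risky behaviour requires a deliberate opt-out.]

    # Hypothetical example of "secure by default": the user who does
    # nothing gets the safe path; the insecure path must be chosen
    # explicitly rather than happening silently.
    import socket
    import ssl

    def open_connection(host: str, port: int = 443,
                        verify_tls: bool = True) -> ssl.SSLSocket:
        """Open a TLS connection; certificate checking is ON by default."""
        context = ssl.create_default_context()  # verifies chain and hostname
        if not verify_tls:
            # Insecure path: only by explicit request, never by default.
            context.check_hostname = False
            context.verify_mode = ssl.CERT_NONE
        raw = socket.create_connection((host, port))
        return context.wrap_socket(raw, server_hostname=host)

    # A caller who does nothing special gets the secure behaviour:
    # conn = open_connection("example.org")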

  Q312  Chairman: At the same time a larger community works on it so do you think we should encourage people to use open source software?

  Mr Laurie: Absolutely. I believe we are going to talk about the relative security of open source and closed source, but I do not think the issue of liability should prevent you from using the software. With open source software you can see for yourself whether it is secure or not and do something about it if it is not. With closed source software there is more of a promise by the vendor that the software does what it claims to do. With open source it tends to be, "Well, this is the product and you can see for yourself what it does, but we make no promises. If you want to improve it then join the project and help to improve the software."

  Q313  Chairman: So you would say ultimately that manufacturers supplying the software should accept legal liability for losses caused as a result of security holes?

  Mr Laurie: No, I am saying they should not be held liable.

  Mr Cox: I do not think they can be. The problem is that adding software in combination creates a combinatorial explosion. For example, you buy a PC, you add a word processor, you add a media player and you add a couple of games. All of these can interact in strange and wondrous ways, and as you add more software the number of combinations increases. The only rational thing for a software vendor to do, faced with liability, would be to forbid the installation of any third party software on the system; that would be the only sensible behaviour. I do not know, however, whether there is an argument in the longer term that, as technology improves and we get better at writing secure software, the law does need to hold software companies to higher standards, at least in terms of negligence and areas where there is room for interaction. As for the open source offering, although it is generally given away by the people who develop it, there are companies around that software, and most end users or business users probably buy some kind of package of CD, software and support rather than just getting the software for free. The question relates to how liability moves with the services and with the other parts of the product. At the moment there is a strange situation where, if the CD is faulty, you clearly have recourse; if the software on the CD is faulty, the situation is much less clear, and that is the same throughout all sorts of services and non-physical goods in the European Union. It is not just a software problem; it is a very big problem. There are certain security people within the open source community—although it is not by any means a universally held view—who hold that one of the reasons we have problems with computer security, as with many other things, is that those things have no recourse, no liability attached to them. There is nothing forcing standards to be reached or quality to happen, and some mechanism to persuade vendors to do the job right might help.
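
  [Editorial note: a short worked example, not part of the testimony, of the combinatorial explosion Mr Cox describes.]

    # With n independently installed packages, every subset of them is a
    # distinct configuration a liable vendor would have to certify (2**n),
    # and even the pairwise interactions alone grow as n*(n-1)/2.
    from math import comb

    for n in (5, 10, 20, 40):
        pairs = comb(n, 2)    # pairwise interactions to test
        subsets = 2 ** n      # full configurations to certify
        print(f"{n:>3} packages: {pairs:>4} pairs, {subsets:,} configurations")

    # At 40 packages there are already 780 pairs and over a trillion
    # configurations, hence the rational vendor forbids third party software.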

  Q314  Chairman: Do you have feelings about Wi-Fi and the fact that many people install it without any real security on it, in terms of letting others log into a local network, if not get into the computers themselves? Do you think Wi-Fi should be compulsorily pre-installed with security?

  Mr Cox: I think Wi-Fi is a perfect example of why you should have security by default. Wi-Fi today is in the state most vendors of Windows products were in seven or eight years ago, in that in most cases the default is that security is not turned on. Some vendors now turn security on by default, and we should be glad of that, because the default for any product which has the ability to cause harm should be that the harm-causing features are disabled, so that you have to learn and understand what you are doing before you put yourself at risk.

  Mr Laurie: It should definitely not be compulsory, because in some cases you want your Wi-Fi access point to be wide open. There are many people who are part of a community of Wi-Fi sharing: they are not running hot-spots in a commercial environment, but they are running the equivalent of a Wi-Fi hot-spot at home, where they invite neighbours or anyone in the area to use their Wi-Fi, and in return they expect other Wi-Fi points to be available wherever they travel.

  Q315  Chairman: You don't think Starbucks would like to give an access code to everybody they sell a cup of coffee to?

  Mr Laurie: Starbucks might want to worry about people sitting in their cafes sending spam and so on, so there will always be commercial considerations as to why you might want to protect your network. The question of liability arises again here: potentially there should be some liability for companies shipping products that are known not to be secure and selling them as secure products. There are some specific issues with Wi-Fi, where a protocol like WEP was broken a long time ago and yet vendors carried on shipping it and telling the public that it was a secure protocol when it demonstrably was not.
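
  [Editorial note: a back-of-the-envelope illustration, not part of the testimony, of one published WEP weakness. WEP's initialisation vector (IV) is only 24 bits, so IV reuse, which leaks RC4 keystream, is expected after just a few thousand frames; this is standard birthday-problem arithmetic, not an attack implementation.]

    # Frames needed for a ~50% chance of a repeated 24-bit WEP IV.
    from math import log, sqrt

    IV_SPACE = 2 ** 24  # 16,777,216 possible IV values
    frames = sqrt(2 * IV_SPACE * log(2))  # birthday bound
    print(f"IV space: {IV_SPACE:,}")
    print(f"~50% chance of a repeated IV after ~{frames:,.0f} frames")
    # A busy access point can send that many frames in seconds.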

  Q316  Lord Howie of Troon: I think Mr Laurie has come fairly close to answering my question already, which is: is open source software more secure than closed source software, and if it is, can you tell me why?

  Mr Laurie: That is the $64 million question; that debate will rage forever and you will get arguments on both sides. From the open source perspective, we believe it is more secure because it is subject to more scrutiny, peer review and so on. You can look at the code yourself and see whether it is secure or not. You can cross-check against known problems, so, for example, when an issue arises in one product you can look at how that issue came to be and whether other products are subject to the same problem. If a low-level library used in the compilation of that product is at fault, you can look at other open source products which use the same libraries and see if they are also affected. That is a process that routinely goes on: if a security issue is published in product A, anyone who knows the source code will immediately go and check their own products to see if they are also affected, so you get a very rapid dissemination of security fixes in the open source world. As an example of the security of an open source product, there is a web server many people will not have heard of called Apache. Quite often when I am speaking at a high-level conference I ask the room, "Who here has heard of Apache?" and maybe 10% of the people will know. I will then ask, "Who has heard of Microsoft?"—big laugh, of course everyone knows Microsoft, and then it surprises them to learn that Apache-SSL, which is the secure web server version of it, has 70% of the world market in secure servers. In its ten-year history there have been only three security alerts, and two of those were because of external libraries that were being used, so there has only ever been one issue specific to Apache-SSL itself. On the other side of the coin, the closed source world relies on what we call "security by obscurity": you are secure because the problem is not visible. That does not mean the problems are not there; they are just harder to find. As for the bad guys, if there is money to be made out of a security issue the bad guys are going to find it. They will poke around and they will dig around. Securing software by simply hiding the source code is like putting paper over the wall and saying you cannot find the windows: if you poke enough, eventually you will pop through a window. The other issue with closed source is that there are often commercial factors involved in whether or not they release security information or fix a problem. If they believe they are the only people who know about a particular security problem, they may choose to do some damage limitation and not admit to the problem, because it would damage their image too much. They do a risk-versus-reward calculation and decide that the three people who are likely to find or report this problem are not going to come forward, so they are not going to bother fixing it unless it turns into a live issue. The open source world has no such limitations because we do not care; we have no liability, so as soon as an issue comes to light we will publish, usually within hours of the problem coming to light. The closed source world, by contrast, tends to take longer to react. They have a lot of infrastructure that needs to catch up: notifying their paying customers, possibly printing manuals, distribution and so on, so there are many factors affecting the distribution of a fix that are harder for the closed source world to deal with than the open source world.
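
  [Editorial note: a minimal sketch of the cross-checking Mr Laurie describes. When a flaw is published in a shared library, anyone can scan their own binaries to see which ones link against it. The sketch assumes a Linux system with the real ldd utility on the PATH; the library name and scanned directory are illustrative only.]

    # Scan /usr/bin for binaries linking a library named in an advisory.
    import subprocess
    from pathlib import Path

    VULNERABLE_LIB = "libexample.so"   # hypothetical library from an advisory

    def links_against(binary: Path, lib_name: str) -> bool:
        """Return True if ldd reports the binary depends on lib_name."""
        try:
            out = subprocess.run(["ldd", str(binary)],
                                 capture_output=True, text=True, check=False)
        except OSError:
            return False  # ldd missing or binary not inspectable
        return lib_name in out.stdout

    for binary in Path("/usr/bin").iterdir():
        if binary.is_file() and links_against(binary, VULNERABLE_LIB):
            print(f"{binary} uses {VULNERABLE_LIB} - check the advisory")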

  Q317  Lord Howie of Troon: When you mentioned covering over windows earlier on, you were referring to, I presume, the security holes?

  Mr Laurie: Yes: a brick wall with windows in it and a sheet of paper over it.

  Q318  Lord Howie of Troon: If the windows which are the security holes are not covered over are they not therefore by definition easier to find?

  Mr Laurie: Absolutely: easier to find and easier to fix. The open source world relies on scrutiny, so obviously you believe that your open source product is secure, you have released it into the world and people are using it. If an unexpected issue comes along, it is much easier to see what is going on. You have many eyes looking at the problem, and very often an open source problem will be fixed literally within minutes or hours of coming to light, and that fix will then be available to the public because of the distribution mechanisms that open source employs. It will not have to wait a month, or until the next Patch Tuesday, to be released to the public.
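
  [Editorial note: an illustrative sketch, not part of the testimony, of the distribution mechanism referred to. On a Debian- or Ubuntu-style system the real apt-get utility refreshes the package index and previews pending fixes; other distributions have equivalents. Both commands typically require root privileges.]

    # Refresh the package index, then dry-run the available upgrades.
    import subprocess

    subprocess.run(["apt-get", "update"], check=True)
    subprocess.run(["apt-get", "--simulate", "upgrade"], check=True)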

  Q319  Lord Howie of Troon: That makes you wonder. The security holes are found by what you call the bad guys but are they necessarily identified by the good guys at the same time?

  Mr Laurie: There is a lot of research being done, not just by the bad guys but also by the good guys. The ones the bad guys find they will obviously try to keep to themselves, but they also need to exploit them, and as soon as they start exploiting them they leave a trail showing that the problem exists, and then the good guys can come along and try to find how that problem came to be. There is also a huge community commonly referred to as "hackers" who are not hacking for bad; they are hacking for good. We refer to the bad hackers as "crackers" and everyone else as "hackers". They do it as an intellectual challenge. What the definition of a hacker is will be another debate which rages and rages, but there are hackers who do it for the good of the community, trying to find security problems before the bad guys do.


 
© Parliamentary copyright 2007