Examination of Witnesses (Questions 980
WEDNESDAY 18 APRIL 2007
Q980 Earl of Erroll:
Can I just ask, did they actually do that, because surely there
are in a sense some legal jurisdiction problems and trade implications
Professor Zittrain: There are wonderful problems
waiting to be taken up -
Q981 Earl of Erroll:
Did they fry the machines?
Professor Zittrain: The order is stayed pending
an appeal. The order was issued last August but the case has been
on appeal since, so no, nothing has been fried yet. I will say
that the basis of the appeal is not on the draconian nature of
the remedy, instead it is that the jury got it wrong and the patents
are not really infringing, that sort of thing.
Q982 Earl of Erroll:
There must be a major legal problem of jurisdiction if you start
frying people's computers which have been sold legally in other
jurisdictions? If that happened in Britain, we could sue all sorts
of people up the line.
Professor Zittrain: I think that may be so.
It can certainly create potential causes of action between the
users and the vendor, EchoStar. On the other hand, from a pure
jurisdictional point of view, it is possible but not certain that
the plaintiffs will say, "You, court, have jurisdiction over
this defendant EchoStar and you can order EchoStar on pain of
contempt or additional damages to perform any action around the
world in order to bring itself in compliance." Now, you might
say an EchoStar box installed overseas is not infringing a US
patent, but should they find that the patent extends to that,
that that caused a sale of a TiVo to be lost, I do not think the
issuance of the order alone need occasion a jurisdictional conflict.
But it is a good question, I agree.
Q983 Earl of Erroll:
It is, yes. I am going to explore one of the other things a bit
further, which is that UKERNA's written evidence argued that imposing
safety can make users "psychologically dependent on others for
their safety and thus highly risk-averse and intolerant of any
failure". We see this, for instance, in other people's attitudes
to the relative risks of air and train travel versus car travel,
but even car travel is actually highly regulated in many ways
and in terms of car design, driving standards, road standards,
policing, all those sorts of things. A balance needs to be struck,
but is the balance that prevails in Internet services the right one?
Mr Cormack: I think it is heading in the right
direction. Since writing the submission, I think I have probably
modified my view. I think it is now a bicycle rather than a car,
but I think the question actually makes the point very nicely
that safety in a car depends on multiple factors. It depends on
the individual, it depends on the individual being qualified to
drive, it depends on the car being checked annually for safety
once it gets to three years old and it depends on roads being
well built. Some of those the market will deliver, I believe.
As I have already said, I think the ISP market is now on a spiral
heading towards well-designed networks: at different speeds, but
I doubt that anybody's security systems are going to get worse.
Whether we want to introduce Internet driving tests or compulsory
annual testing at the owner's expense on PCs, I do not know. Having
done time on help desks, I would love to be able to say to a user,
"You are just too incompetent to use this system. Go away,"
which is what a driving licence would allow you to do. There is
something which I did not mention earlier. There are actually
some regulatory pressures against doing the right thing. One is the
mere conduit defence, which I think you have mentioned: the definition
makes it a binary switch - you are either a mere conduit or you are
not, and you cease to be a mere conduit when you select the information
which is delivered.
There is at least a concern that an ISP, certainly if it introduced
filtering, is selecting the information which is delivered. I
do not know about ISPs, but I know colleges have expressed their
concern that if they filter, it is actually worse to try to filter
and get it slightly wrong than not to try to filter at all. That
is actually the law working contrary to what we would like to
do, so I think there is a problem there.
Q984 Earl of Erroll:
Just to explore the ISP a bit further - we have actually discussed
it, but to make it slightly more specific, we know that the ISPs
can detect when attacks are coming from insecure machines and
maybe we should require them to do more. Specifically, do you
think it would be a good idea to force the ISPs to do more to
fix the machines proactively, as we were talking about then, and
if we did do that, should it really be through incentives or through regulation?
Mr Cormack: I think forcing the ISP to fix the
machine feels like a very, very bad idea, if nothing else because
there would be a huge liability. As Professor Zittrain has suggested,
you have no idea what is on that computer. The user could have
downloaded absolutely anything. Your attempt to fix it could well
cause the whole thing to stop working.
Q985 Earl of Erroll:
Because you could not select a target, the botnet?
Mr Cormack: You could attempt to remove the
botnet software. However, the software is likely to have added
things at a sufficiently low level in the operating system to
conceal its own existence from the user. Removing that could well
make the operating system extremely unstable and crash at some
later point, and I think the users would then have a reasonable
complaint against their ISP. I have some sympathy for the idea
that if an ISP sees a large amount of traffic coming from an individual
machine it should reduce that machine's ability to harm others,
whether by blocking its connection entirely or whether by reducing
the bandwidth, or whatever. I think those are possible. Whether
the solution would scale to the size of large ISPs, I do not know.
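The graduated response Mr Cormack describes - reduce a suspect machine's ability to harm others rather than cutting it off outright - might be sketched as below. This is purely illustrative: the function name and the traffic thresholds are invented assumptions, not anything an ISP actually runs.

```python
# Hypothetical per-machine traffic check: an ISP flags hosts whose
# outbound volume is far above normal and throttles rather than
# disconnects them. Thresholds are illustrative assumptions.

def classify_hosts(outbound_mb_per_hour, limit_mb=500):
    """Return an action per host: 'ok', 'throttle', or 'block'."""
    actions = {}
    for host, volume in outbound_mb_per_hour.items():
        if volume <= limit_mb:
            actions[host] = "ok"
        elif volume <= 10 * limit_mb:
            actions[host] = "throttle"   # reduce bandwidth, keep service
        else:
            actions[host] = "block"      # very likely compromised
    return actions

sample = {"10.0.0.1": 12, "10.0.0.2": 800, "10.0.0.3": 9000}
print(classify_hosts(sample))
```

The "throttle" middle ground matters here: it limits damage without inviting the liability problems of remotely "fixing" the machine.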
Professor Zittrain: To start to get at that
question, I want to stick with the traffic analogy for a moment.
I have been intrigued by a movement in traffic management called
"Unsafe is Safe" and in a number of cities in Europe,
including the Dutch city of Drachten, they have implemented something
called Verkeersbordvrij - I know I am not pronouncing that
properly - which is the absence of road signs, and I actually
think there might be a neighbourhood in Kensington where this
is being tried as well. It is completely counterintuitive, at
least to me, but by removing in the Dutch case nearly every sign
and having only two rules (one is to generally be careful and
the other is not to park your car in a way that other cars get
blocked, but otherwise they have eliminated even parking spaces,
you just park the car wherever it is not blocking somebody else)
there has been a remarkable decline in traffic accidents. Part
of what they attribute that decline to is that it compels people
to actually be much more aware of their environment and of other
drivers. They have to take responsibility for their own safety,
and it means that they do. Whether that would work in every city
in the world is highly dubious and trying to transplant that to
the Internet context, where it is much harder to make eye contact
with other users and when the harm that one can cause is not as
symmetric - if you get into a car accident you do not only
dent the other car, you dent your own - these are some of
the puzzles to cure, but I actually think there do exist cures
that we can try and code which represent what you describe as
the incentives route before trying the outright regulation route.
One example of that would be - I mentioned StopBadware before - we
are working with a number of companies, including Google, so that
when we see one of these websites which is spewing malware, and
we can detect it automatically, we add it to a list. Google shares
the list with us. We also make it available to other search engines.
When somebody performs a search and one of these sites which has
the badware on it comes up as a hit, it does come up but then
it has an extra line in the hit provided by Google which says,
"Warning, this site may harm your computer." If you
click on the link anyway, instead of going to the site it takes
you to an interstitial page provided by Google which says, "No,
we really mean it. We think there's badware on this site. We recommend
that you back up and try another result. If you really want to
continue, okay, you may highlight, copy and paste the URL and
go on." In our experience partners like Google have found
approximately 30,000 sites in the past month which meet these
criteria. The webmasters of the sites see an over 90 per cent
drop-off in traffic when the interstitial is added. At that point
fixing the site goes from the level of priority achieved by the
warning alone, without the interstitial - I would say tenth on the
list of things to do that day - to the number one thing: I do not
care if it is a weekend or a holiday, that webmaster is desperate
to get the site back up. It may be an over-incentive, but that is
a great example of
a collaborative effort run under a .ac or .org rubric in cooperation
with dotcom, hopefully spread out enough so we are not creating
some new gatekeeper which might then abuse the power to put up
the interstitial, which allows us to have people making the eye
contact and expressing, "Actually, your car is blocking an
entire river of traffic."
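The StopBadware flow described above - a shared badware list, a warning line on the search hit, and an interstitial before the click-through - might be sketched like this. The list entry, function names and wording are all hypothetical; this is not the actual Google/StopBadware implementation.

```python
# Illustrative sketch of the badware-warning flow: hits on a shared
# list get an extra warning line, and following the link lands on an
# interstitial instead of the site itself.

BADWARE_SITES = {"malware.example.com"}   # hypothetical list entry

def annotate_results(results):
    """Attach a warning line to any hit whose host is on the list."""
    annotated = []
    for url in results:
        host = url.split("/")[2]
        annotated.append({
            "url": url,
            "warning": ("This site may harm your computer."
                        if host in BADWARE_SITES else None),
        })
    return annotated

def follow(url):
    """Clicking through goes to an interstitial, not the site."""
    host = url.split("/")[2]
    if host in BADWARE_SITES:
        return "interstitial: we think there's badware on this site"
    return "fetch " + url

hits = annotate_results(["http://malware.example.com/download",
                         "http://example.org/page"])
print(hits[0]["warning"])   # → This site may harm your computer.
print(follow("http://malware.example.com/download"))
```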
How much of the useful, valuable generative function of a PC would
be lost if you only allowed it to communicate on the terms of
the user of that PC, if you did not leave the door open completely?
You would lose this sort of grid computing potential we hear about,
but people could sign up for grid computing if they wanted. If
they did not want to sign up their computer to be in a grid, I
cannot see why you cannot stop people coming into that computer
and using it as a botnet, as a zombie. I just do not understand
that and I am not convinced by anybody yet. Perhaps you can convince
me. Richard tries all the time, but can you convince me that that
is just not possible, so that the PC will communicate when you
ask it to communicate? I am told you have absolutely no idea,
your computer is sitting there, spewing out Viagra ads to millions
of people and you do not even know it. I find that ridiculous.
You should be able to provide that capability and tell the person,
"This computer is spewing out Viagra ads. Do you want it
to do that?" I just do not understand this.
Professor Zittrain: I think each of us is eager
to take a crack at it.
Mr Cormack: Two points. The initial software
gets there by invitation, almost universally. The initial vector
is not software, it is an email message saying, "Here is
something attractive, something you want. Please download it."
So the user is fooled into inviting the software in. On the question
of subsequent transmission, there is plenty of software available
already which you can put on your own computer which says, "The
following program is trying to communicate with the Internet.
Do you wish it to do so?"
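The outbound-connection prompt Mr Cormack refers to could, in miniature, look like the sketch below. The program names and prompt wording are invented, and the prompt is injected as a function so the example is self-contained; real personal firewalls also remember per-destination rules, which is omitted here.

```python
# Minimal sketch of an egress prompt: before a program may talk to
# the Internet the user is asked once, and the answer is remembered.

def make_egress_filter(ask_user, decisions=None):
    decisions = {} if decisions is None else decisions
    def may_connect(program):
        if program not in decisions:
            decisions[program] = ask_user(
                f'The following program is trying to communicate '
                f'with the Internet: {program}. Allow?')
        return decisions[program]
    return may_connect

# Simulated user who only trusts programs with "browser" in the name:
may_connect = make_egress_filter(lambda prompt: "browser" in prompt)
print(may_connect("browser.exe"))   # → True
print(may_connect("spambot.exe"))   # → False
```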
I had that on my machine, I know. Why do not all machines have
that? Why is it not compulsory?
Mr Cormack: I would ask high street computer
vendors. I would love to see that sort of software on machines.
It is a bit like turning the engine off. I do that. One of the
troubles is that the handshaking time to connect on a lot of things,
particularly wireless networks, is tedious. It is like the internal
combustion engine, it does not turn off every time you go to the
lights because you have got to crank up the electric starter and
get it going again, whereas an electric car can do that. I do
not see why we cannot have a system where you have that option,
that your communication ports are closed when you do not want
to use them.
Professor Zittrain: Yes. I have two and a half
answers to your question. Answer number one is, you are absolutely
right, this is exactly what a firewall is by definition. The computer
has different ports. These are virtual constructs but the computer
can come to understand them, and the firewall says, "I'm
not going to let any data out of these ports except only these
other ports and I'll only allow data in if I see that it has been
invited by a previous communication out by my own port."
So if I see my computer send a signal to a web page and say to
the web page, "Call me back on this port," I'll then
open it because of that invitation. That is why firewalls, to
the extent that they are effective, can be effective. It is also
why, I completely agree with you, in the short to medium term
Internet service providers are in a good position to detect traffic
patterns from machines which appear to have slipped their
leads, precisely because of the way, and the volume in which, they
are communicating. There is the off-chance that, when you have a million
machines, even a small percentage will represent a good absolute
number of people who have some reason to be communicating that
way and we could see them being able to say, "No, no, it's
fine," but in the short term I think it would be very helpful.
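Professor Zittrain's description of a firewall - let data in only if it was invited by a previous outbound communication - is essentially stateful connection tracking, which might be modelled minimally as follows. This is a toy model; real firewalls track protocols, sequence numbers and timeouts.

```python
# Toy stateful firewall: outbound connections are recorded as
# "invitations", and inbound traffic is accepted only on invited flows.

class StatefulFirewall:
    def __init__(self):
        self.invited = set()   # (remote_host, local_port) pairs we opened

    def outbound(self, remote_host, local_port):
        """Record an outgoing connection as an invitation to reply."""
        self.invited.add((remote_host, local_port))

    def allow_inbound(self, remote_host, local_port):
        """Accept inbound data only if it matches a prior invitation."""
        return (remote_host, local_port) in self.invited

fw = StatefulFirewall()
fw.outbound("web.example.com", 51000)
print(fw.allow_inbound("web.example.com", 51000))  # → True  (reply)
print(fw.allow_inbound("attacker.example", 445))   # → False (unsolicited)
```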
Now, on the other hand - and here is the half - firewalls
themselves are not cure-alls and a way to understand that is,
you may remember the era of cookies, when people were very worried
about cookies and their browsers, and browsers responded to the
market by giving you an option to individually approve every cookie
that is about to be set. It turned out that setting cookies is
so useful, especially to support the multi-billion pound advertising
economy, that cookies are getting set all the time, and if you
do set your browser that way you are asked, every ten seconds,
to accept one cookie or another with no way of making an informed
decision, and it becomes overwhelming. This
leads to the second and a half point, which is that a number of
applications now that you may find yourself using only occasionally
really do benefit you and others by having a fairly continuous
set of communications over the Internet, and I will give three
very fast examples. One is Skype. You would think that Skype,
to do computer to computer calling, when I make a call, is making
the connection and when I have hung up it might as well be disconnected.
But it turns out that thanks to firewalls and some other issues
like so-called Network Address Translation, tricks are needed
to make Skype work. In those cases, Skype uses lots of other people's
idle connections to help it route calls from one machine to another.
This is a very interesting use of the generative PC and network.
If you were to have every Skype machine disconnect whenever it
was not in use, from the point of view of the Skype implementation
it would be a selfish thing to do that would actually bring down
much of Skype as a network, and that feature is exactly what the
people who made Skype (who are also the people who made Kazaa)
are now putting into Joost, which is IP television routing around
the bottlenecks of traditional television networks - but they
are only able to make it work because they can harvest the so-called
grid computing. So just as so many of these applications start
off obscure and then enter the mainstream, I think peer-to-peer
computing starts off obscure - grid computing to help
chart hurricanes or to look for extraterrestrial life - and then
becomes very mundane, helping Skype or Joost do their thing,
or the very definition of "end point" turns out to be
flexible. I have one Ethernet drop in my house, but using a service
like a phone or something else I might find it socially valuable,
and others would too, to share that connection. From the point
of view of the ISP, they do not know which computer behind
that access point the use is coming from, and if I turn off
my computer the way I turn off a car when I am not driving it,
everybody dependent upon my connection now loses the ability to
communicate.
I would still argue that the individual should have the option
of not having their machine used as a slave, even for Skype.
Professor Zittrain: Absolutely. I think that
is right, and in fact I encourage people sometimes to open up
one of those command windows, if they are using Windows, or the
terminal window in Macintosh, and just type "netstat"
and you can see all of your extant network connections, and if
you are running Skype you are communicating with 100 different
machines all around the world and you have no idea what data is
going into or out of the machine, and frankly Skype has the keys
to the kingdom. It is only because we trust the guys who made
Kazaa that we choose to run it.
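The netstat exercise suggested here can be taken a step further by counting distinct remote peers. The sample text below is a fabricated, simplified imitation of `netstat -an` output; real output varies by platform, so the parsing is illustrative only.

```python
# Count the distinct remote machines a computer is talking to, from
# (simulated) netstat output. Field positions mimic common formats.

SAMPLE = """\
Proto  Local Address        Foreign Address       State
TCP    192.168.1.5:51234    93.184.216.34:443     ESTABLISHED
TCP    192.168.1.5:51235    203.0.113.7:443       ESTABLISHED
TCP    192.168.1.5:51236    93.184.216.34:80      ESTABLISHED
"""

def remote_hosts(netstat_text):
    """Return the set of remote hosts with established connections."""
    hosts = set()
    for line in netstat_text.splitlines():
        parts = line.split()
        if len(parts) >= 4 and parts[0] == "TCP" and parts[3] == "ESTABLISHED":
            hosts.add(parts[2].rsplit(":", 1)[0])   # strip the port
    return hosts

print(len(remote_hosts(SAMPLE)))   # → 2 distinct peers
```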
Q990 Earl of Erroll:
Is there not a big problem, though? If people are only paying
for limited bandwidth, then they are actually potentially paying
for that Skype connection as well?
Professor Zittrain: That is true. To the extent
that the economics of network connectivity turn out to be that
one pays in a metered fashion - and that tends to be more
typical, say, in the UK than elsewhere - you have got that
one gigabyte limit, or something, and the act of sharing one
connection can end up exceeding that bandwidth.
Q991 Earl of Erroll:
And therefore the users are inadvertently charged for the thing
they did not think they were using?
Professor Zittrain: I think that is true, and
that would certainly help to at least inform the user, "This
is the amount of usage you have," just like it is good to
know how many minutes have been used on your phone, especially
if you are lending it out to people all the time.
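The usage-visibility idea in this exchange could be as simple as the sketch below, assuming a hypothetical metered cap like the one-gigabyte limit mentioned; the wording and numbers are invented.

```python
# Toy usage meter: report consumption against a metered cap so the
# user can see what relayed or background traffic is costing them.

def usage_report(used_mb, cap_mb=1024):
    """Return a one-line report of usage against the cap."""
    if used_mb > cap_mb:
        return f"over cap: {used_mb} MB of {cap_mb} MB used"
    pct = 100 * used_mb / cap_mb
    return f"{pct:.0f}% of your {cap_mb} MB allowance used"

print(usage_report(512))    # → 50% of your 1024 MB allowance used
print(usage_report(1100))   # → over cap: 1100 MB of 1024 MB used
```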
Chairman: We have to move on. Lady Sharp?
Q992 Baroness Sharp of Guildford:
Really on the same sort of topic, you kind of argued in your evidence
that to achieve the full potential of the Internet it is necessary
to ensure that individuals know how to keep themselves safe online.
How effectively do you think we as a society are instilling this
knowledge, whether into the young or those of an older age, and
is it really feasible (as you do suggest at one point) that they
should train the kids to teach their grandparents?
Mr Cormack: I did not teach my grandparents.
I taught my parents how to use it safely and that was fairly painless.
That we are getting there is possibly overstating it. There are
a number of very good things happening. There was a QCA proposal
on curriculum for key skills in schools, for which I expected
to have to re-write huge sections of the draft. I did not. It
was all there. I think the only two things missing were the ability
to recognise deceptive communication and the ability to maintain
your own computer; to run antivirus, to keep it up to date, those
sorts of things. Other than that, that was all there, so the curriculum
can exist. Getting teachers, not just to teach Internet security
one hour a week but to themselves behave correctly, that is hard.
There is a nice series of websites and DVDs produced by Childnet,
which is the cross-generation thing. There are lots of good initiatives
happening. I did spot a Symantec figure which suggests something
has improved because in 2004 twenty-five per cent of all the botnet
infected computers in the world were in the UK. Last year we were
down to four per cent.
Professor Zittrain: Probably more computers,
just more other computers!
Mr Cormack: Yes. I could not find the absolute numbers.
Q993 Baroness Sharp of Guildford:
Can I ask a supplementary to this? We understand that many experts
ignore advice about encrypting wireless networks and are not in
favour of changing passwords regularly, yet these are regarded
as being the basics of good security practice. There must be some
advice that everyone agrees upon, but how can normal users tell
what is good advice and what is not good advice?
Mr Cormack: Run antivirus and keep it updated.
Run regular patching, Windows update, or whatever. Turn on a firewall,
both inbound and outbound, and be as suspicious and as cynical
as you are in the real world. Do not suddenly become innocent
and trusting when you go online.
Professor Zittrain: I think that advice is difficult
to quarrel with, but it also to me expresses the gap between where
we are with the state of the art in advice to the mainstream users
and where we need to be if they are not to ultimately migrate
away, and for that we really do need - it might be a little
bit too colourful to call it a Manhattan project, but we need
to recognise that the market did not provide for the Internet
to begin with. The market provided networks, but they were these
proprietary networks. Government subsidies to academia and interested
moonlighting commercial entities and research arms did the trick,
and then the commercial forces came in to smooth off the rough
edges; a very nice two-step. I think we are in the same situation
right now, that subsidising a set of tools which do not exist
right now but which could be brought online to actually use the
generativity of the Internet and PC to create new tools can help
us give, in the long term, much better advice to users. Just one
quick example of that is a tool that users can download and it
would measure certain vital signs off the machine. It is not hard
to say how happy the machine is. How often is it re-starting?
How many pop-up windows over a time interval is it getting? There
are certain metrics you can gather and then compare with other
machines in the herd. You are on a network, you can query nearby
machines, so that when you encounter new code, just like a new
cookie, seamlessly the computer can say, "Has this code been
seen before? Has this code been floating around the network for
two years or did it just pop up yesterday, and for those computers
on which it is already running, did their happiness levels as
machines drop, stay the same or go up?" Those are the kinds
of instruments which could go onto a dashboard which could help
users make informed (but not overwhelming to them) choices about
what code to run, and it would be respectful of different levels
of risk tolerance for different users. What you might choose to
run at home would be different from what a merchant might want,
or a cyber café owner, et cetera. It is not one-size-fits-all.
So I would love to see some money and some momentum put behind
the collective development and experimentation with those tools.
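The "vital signs" comparison Professor Zittrain outlines - measure a machine's metrics and compare them with the herd - might be sketched as a simple outlier test. The metrics, herd data and threshold here are all invented assumptions.

```python
# Flag a machine whose "happiness" metrics (restarts, pop-ups) are
# well above the herd average: a toy version of the dashboard idea.

from statistics import mean, stdev

def unhappy(machine, herd, threshold=2.0):
    """Return the metrics more than `threshold` standard deviations
    above the herd average."""
    flags = []
    for metric, value in machine.items():
        values = [m[metric] for m in herd]
        mu, sigma = mean(values), stdev(values)
        if sigma > 0 and (value - mu) / sigma > threshold:
            flags.append(metric)
    return flags

herd = [{"restarts_per_day": 1, "popups_per_hour": 0},
        {"restarts_per_day": 0, "popups_per_hour": 1},
        {"restarts_per_day": 2, "popups_per_hour": 0},
        {"restarts_per_day": 1, "popups_per_hour": 1}]
print(unhappy({"restarts_per_day": 9, "popups_per_hour": 12}, herd))
```

Different users could simply set a different `threshold`, which is one way the design could respect different levels of risk tolerance.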
Baroness Sharp of Guildford: Thank you.
Q994 Lord Harris of Haringey:
We are about to hear from Ofcom and I know that you have already
talked about the sort of "removing road signs" model,
but I want to ask you how effectively do you think the Internet
security is regulated in the UK, and in particular do you think
Ofcom takes regulation of the Internet services seriously enough
and are the provisions of the Communications Act (which exclude
the regulation of content from Ofcom's remit) sustainable in the
long-term, particularly in the context of convergence?
Professor Zittrain: One of the real blessings
for me of having taken up residence in the UK now much of the
year for two years has been to come to know the staff and the
people at Ofcom. The kind of ability I have seen with Ofcom staff
to take a sober view of what is going on, the intellectual
curiosity that I see they possess about what is going on, and
the appropriate level of caution with which
they treat interventions they might be in a position to make,
all of those to me bode very well. I think content regulation
is a briar patch that they would not want to be thrown into because,
especially once you go there and have to start making truly content-based
decisions, security is now a side issue, at least technical security,
and it will be much harder to devote attention to. So this is
not a paean to inaction, but I think so far Ofcom has been keeping
an eye on the situation, understanding that the interventions
are delicate enough right now that they require cooperation from
a lot of parties. There is not just one regulatory point of intervention
that can solve the problem.
Mr Cormack: I had interpreted the question differently.
I think I would agree with your possibly unstated point, that
regulating content is going to be incredibly complicated because
it is so hard. Regulation automatically involves drawing lines
and defining where those lines are. We have recently been looking
at IP television and when different licensing regimes come in,
and the question appears to be how much delay from the original
broadcast there is. That is a completely arbitrary decision which
is going to be made and wherever you draw the line people will
either be one second ahead or one second behind. The impact of
that, actually defining something, I think would be highly disruptive.
Q995 Lord Harris of Haringey:
Can I just finally ask about the sharing out of responsibility
between Ofcom and the Information Commissioner. Do you think that
works satisfactorily? Should the Information Commissioner be given greater powers?
Mr Cormack: I would certainly be very pleased
to see the Information Commissioner able to deal more effectively
with particularly spam, which I know is a concern of that office.
It is interesting that in the past few months it has become apparent
that actually the most effective way to deal with spam in the
UK is through the civil courts, not by the regulator at all, which
is really rather depressing.
Professor Zittrain: I cannot yet share a useful
answer to that question, given my own comparative ignorance of
the governance here.
Lord Harris of Haringey: Thank you.
We are nearly at the end. There is one very quick question and
I would like a very quick answer. Is there a problem with researchers
crossing the law when they are researching these topics?
Mr Cormack: As of today, no, because the amendments
to the Computer Misuse Act 1990 brought in by the Police and
Justice Act 2006 are not yet in force. As currently drafted
and lacking further explanation, I have had a lot of concerns
expressed, not just by researchers but by teachers, asking whether
they have to stop undergraduate teaching where they are teaching
people to code securely by getting them to write an Internet server
and then exposing that server to hacking tools. An excellent way
to teach programming, but they are saying -
It is a good way to teach hackers, too?
Mr Cormack: No, but to teach undergraduates
to actually think about security when they are coding, which is
all too rare in professional programmers, by clearly demonstrating
what happens if you do not, exposing it to the typical background
noise of the Internet - right through to people who are at
masters level, teaching penetration testing as professional development.
They have contacted me, saying, "Do we have to pull this
course next year?" My current answer is, "I am afraid
I don't know."
We are going to have to bring it to an end. Thank you very much
indeed. You can sense our interest in this and we do not have
enough time, but we never have enough time when we really get
interesting witnesses. So thank you very much indeed for your
contributions. If you think of anything else which you think might
be useful for us, please let us know.
Professor Zittrain: Thank you.
Mr Cormack: The thing I had forgotten halfway
through was that if an ISP takes action to degrade or modify a
user's connection, they must provide the user with information
so the user can fix his own problem.
Chairman: All right, we note that. Thank you.