INTERNET SERVICE PROVISION
3.41. There appears to be still greater scope
for intervention at the level of the Internet Service Provider
(ISP). ISPs do not typically operate the network; instead they
sell access to the network to their customers, often bundled together
with a range of other services, such as web-based email, telephone
(conventional or VoIP), cable television and so on. They sit,
in other words, near the edges of the network, providing a link
between the end-user and the network.
3.42. While the broadband infrastructure is largely
in place, the market for Internet services continues to grow and
is highly competitive. Internet services in the United Kingdom
are marketed largely on price: indeed, since 2006 the advent of
"free" broadband (although in reality, as David Hendon
of the DTI told us, all the ISPs have done is "re-partition the
costs in a certain way") has given such competition a new
intensity (Q 70).
3.43. Regulation of Internet services is the
responsibility of Ofcom. However, the evidence we received from
Ofcom (evidence which was only provided late in the inquiry, as
a result of a direct approach by the Committee) suggests that
there is very little regulation in practice. This is not entirely
the fault of Ofcom: we have already noted that content is
specifically excluded from Ofcom's remit by virtue of the precise
definitions of what they regulate in section 32 of the Communications
Act 2003. However, questions remain over Ofcom's interpretation
of its residual remit.
3.44. Ofcom appears to have taken the broadest
possible view of what constitutes "content" under the
Act, to embrace security products as well as text or images. In
the words of their written evidence: "Although security products
are valuable tools for consumers they are not a part of the regulated
Internet access service, any more than are the PCs which
are typically used as the access device. Antivirus software, firewalls
etc. largely run on customer equipment and are in practice outside
the control of the Internet service provider" (p 320).
Elsewhere the memorandum echoes the Government's position that
"ultimately the choice of the level of security to apply
to one's data is a choice for the end user which is why some consumers
choose to apply their own security at the application layer rather
than relying on the network to maintain security and integrity"
(p 325).
3.45. We find Ofcom's argument entirely unconvincing.
It simply describes the status quo: security products
are at present largely run on customer equipment, and are thus
outside the control of the ISPs. But this falls well short of
a convincing rationale for Ofcom's conclusion that security products
"are not a part of the regulated Internet access service."
Why are they not a part of the regulated service? Would it not
be in the interests of consumers that they should be made a part
of the regulated service? Ofcom failed to provide answers to these
questions.
3.46. Ofcom went still further in resisting any
suggestion that its responsibility for enforcing security standards
should be extended. The Society for Computers and Law (SCL) expressed
concern over the enforcement of Regulation 5 of the Privacy and
Electronic Communications Regulations 2003. This requires that
ISPs should take "appropriate technical and organisational
measures to safeguard the security" of their services. But
the SCL pointed out not only that the Regulations and the parent
Directive offered "no guidance or standards" on what
technical measures might be appropriate, but that enforcement
was the responsibility not of Ofcom but of the Information Commissioner's
Office (ICO), which lacked both resources and powers to act effectively.
The SCL recommended that enforcement "should be a matter
for Ofcom" (p 128).
3.47. This proposal was firmly rejected in a
letter from Ofcom, which stated that "Ofcom does not have
a remit in the wider area of personal Internet security or indeed
the necessary expertise." Ofcom insisted that the ICO was
best placed to enforce the Regulations, and drew our attention
to a forthcoming "letter of understanding" which would
set out how the two regulators would collaborate in future (p 312).
3.48. Ofcom's interpretation of what constitutes
a "regulated Internet access service" was, perhaps unsurprisingly,
echoed by the ISPs themselves. Asked whether ISPs should not be
obliged to offer virus scanning as part of their service, John
Souter, Chief Executive Officer of LINX, asked a question in reply,
"What would be the authoritative source that you would mandate
as the thing to check against?" (Q 733) This is a legitimate
question, and it would be very pertinent if ISPs were given a
statutory duty to provide a virus scanning service. In reality,
however, companies developing and selling security software have
to answer it every day, so it is not immediately apparent why ISPs
should not draw on their well-established expertise and offer users
a scanning service appropriate to their circumstances.
Indeed, ISPs in the United States are obliged to offer a basic
level of security as part of their service to customers.
3.49. In this country, on the other hand, it
is left entirely to end-users, confronted as they are by bewildering
and often conflicting sources of information, to take these crucial
decisions. As we have noted, Ofcom treats security as an add-on,
not an integral part of Internet services. As for long-term improvements
in the level of security, it is assumed that the market will provide.
In the words of James Blessing: "If it is a problem I would
suggest that maybe it is time to change your ISP. That is simple
advice but from our members' point of view they are out there
to provide you with a service as a customer that you would want.
If you say I want anti-virus, I want anti-spam on my account and
they do not provide it, then they are not the ISP that you require"
(Q 738).
3.50. Mr Blessing's argument is plausible
as far as it goes. However, it overlooks the fact that the individual
choices that customers make regarding Internet services affect
not just themselves but society as a whole. The Society for Computers
and Law, after acknowledging the force of the free-market argument,
provided a convincing rebuttal: "users with unprotected PCs
who choose to obtain access via an ISP that has no controls or
security measures are more likely to be attacked by botnet herders,
who can then expand their botnet to the detriment of all other
(protected/secure) users of the Internet and of the public, if
such botnets are used for criminal purposes" (p 126).
3.51. At the opposite end of the spectrum from
the ISPs, Bruce Schneier argued forcefully that ISPs should take
more responsibility for security. We have already quoted his belief
that the major players in the online world should take more responsibility
for assisting the "average user". As far as the ISPs
were concerned, his arguments were based not on abstract principle,
but on practicalities:
"I think that the ISPs for home users very much
should be responsible. Not that it is their fault, but that they
are in an excellent position to mitigate some of the risk. There
is no reason why they should not offer my mother anti-spam, anti-virus,
clean-pipe, automatic update. All the things I get from my helpdesk
and my IT department … they should offer to my mother. I
do not think they will unless the US Government says, 'You have
to'" (Q 529).
3.52. This prompts a key question: is it more
efficient for basic security services such as spam or virus filtering
to be offered at the ISP level or at the level of the individual
end-user? It is worth noting that although, according to a 2006
survey conducted by Symantec, some 90 percent of end-user machines
in the United Kingdom have anti-virus software installed, this
figure includes a significant number of users who never update
their software, which is therefore rendered useless. John W Thompson,
CEO of Symantec, told us in the course of a private discussion
that he thought some 20-25 percent of computers worldwide were
at risk because their users were indifferent to security. Whatever
the attractions of placing responsibility upon end users, the
fact is that a huge number of them are not currently exercising
this responsibility. That responsibility could possibly be more
efficiently exercised, and with economies of scale, by ISPs.
3.53. A second question is whether imposing upon ISPs a
responsibility to provide a basic level of security to customers
would lead to the dire consequences predicted by the ISPs, in
particular the stifling of innovation across the sector as a whole.
We see no reason why it should, as long as
a "light touch" is maintained, rather than a blanket
imposition of legal liability for every security breach, however
caused.
3.54. We have already drawn attention to developments
in the field of content regulation: not only the insistence
that ISPs block websites containing child abuse images, listed
on the IWF database, but also the development of a BSI kite mark
for content control software. Given that, as we have also noted,
the distinction between "content" and other forms of
Internet traffic is blurred, we see a strong case for introducing
similar initiatives to cover personal security. Existing anti-virus
and firewall technology is capable of blocking all traffic containing
samples of known malicious code (using databases which companies
like Symantec update daily). Such technology is not fool-proof,
but it has proved its value over many years, without stifling
innovation, and we can see no reason why it should not be routinely
applied at ISP level.
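By way of illustration only, the sketch below shows, in Python, the general principle of signature-based filtering described in the previous paragraph: traffic is compared against a database of byte patterns taken from known malicious code, and matching traffic is dropped. The signature list, packet contents and function names are invented for this example; real products use far larger, frequently updated databases and more sophisticated matching than a simple substring check.

```python
# Illustrative sketch only: signature-based filtering of the kind described
# above. The signature list and the sample traffic are invented for this example.

KNOWN_SIGNATURES = [
    b"X5O!P%@AP[4\\PZX54(P^)7CC)7}",   # opening bytes of the standard EICAR test string
    b"\x90\x90\xeb\x1f_EXAMPLE_WORM",  # hypothetical byte pattern for a known worm
]

def contains_known_malware(payload: bytes) -> bool:
    """Return True if the payload contains any known malicious signature."""
    return any(signature in payload for signature in KNOWN_SIGNATURES)

def filter_traffic(packets):
    """Yield only packets whose payload matches no known signature;
    matching packets are dropped (an ISP might instead quarantine them)."""
    for payload in packets:
        if not contains_known_malware(payload):
            yield payload

if __name__ == "__main__":
    traffic = [
        b"GET /index.html HTTP/1.1",                                # ordinary request
        b"MAIL FROM:<bot@example.invalid> " + KNOWN_SIGNATURES[1],  # carries a known sample
    ]
    clean = list(filter_traffic(traffic))
    print(f"{len(traffic) - len(clean)} packet(s) blocked, {len(clean)} passed")
```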
3.55. Indeed, deployment of security software
at ISP level could have one crucial benefit. Firewalls and spam
filters generally work in one direction only: they are designed
to prevent bad traffic reaching the end-user, but they do not
always filter outgoing traffic. In particular, once the end-user
machine has been infected, and is either propagating malware,
or is being used as part of a botnet to send out spam, the firewall
and anti-virus software will be turned off by the malware, and
updating will be disabled. Moreover, the end-user himself will
in all probability not be aware that his machine has a problem,
and even if he is made aware of the problem (for instance, that
his machine is part of a botnet), he has no incentive to fix it: he
himself suffers no significant harm if his machine is sending
out spam. The recipients of the spam, and the network as a whole,
if the botnet is used to launch DDoS attacks, are the ones to
suffer harm.
3.56. ISPs, on the other hand, are well placed
to monitor and, if necessary, filter outgoing traffic from customers.
If unusual amounts of email traffic are observed, this could indicate
that a customer's machine is being controlled by a botnet sending
out spam. At the moment, although ISPs could easily disconnect
infected machines from their networks, there is no incentive for
them to do so. Indeed, there is a disincentive, since customers,
once disconnected, are likely to call help-lines and take up the
time of call-centre staff, imposing additional costs on the ISP.
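Purely as an illustration of the kind of monitoring described in this paragraph (the threshold, log format and sample data are invented, and no particular ISP's practice is implied), the sketch below counts outbound email events per customer over an hour and flags any customer whose volume is far above a plausible baseline for a home connection.

```python
# Illustrative sketch only: flagging customers whose outbound e-mail volume is
# anomalously high, one possible indicator of a spam-sending botnet infection.
# The threshold and the sample log are invented for this example.
from collections import Counter

OUTBOUND_MESSAGES_PER_HOUR_THRESHOLD = 500  # hypothetical ceiling for a home customer

def flag_suspected_bots(smtp_log):
    """smtp_log: iterable of (customer_id, destination_host) pairs, one per
    outbound message seen in the last hour. Returns {customer_id: count}
    for customers exceeding the threshold."""
    per_customer = Counter(customer for customer, _destination in smtp_log)
    return {customer: count
            for customer, count in per_customer.items()
            if count > OUTBOUND_MESSAGES_PER_HOUR_THRESHOLD}

if __name__ == "__main__":
    # Hypothetical hour of logs: customer "b" is sending spam in bulk.
    log = [("a", "mx1.example.net")] * 12 + [("b", "mx2.example.org")] * 4800
    for customer, count in flag_suspected_bots(log).items():
        print(f"Customer {customer}: {count} outbound messages in the last hour; "
              "candidate for quarantine and customer contact.")
```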
3.57. Some ISPs do already act in this way. Matthew Henton,
of the ISP Brightview, confirmed that his company will "disconnect
[an infected user's] machine from the network, we will contact that
user and normally they would be entirely unaware … and we will work
with them to disinfect their machine and ensure that they are
adequately protected against future infection" (Q 744).
We applaud this approach, but are conscious that it is not
universal. Doug Cavit, at Microsoft, told us that while most (though
not all) ISPs isolated infected machines, they generally found
it too expensive actually to contact customers to fix the problem.
Nor is this service well advertised: indeed, any ISP which
advertised a policy of disconnecting infected machines would risk
losing rather than gaining customers.
3.58. There is thus at present a failure in incentives,
both for end-users and ISPs, to tackle these problems. We do not
therefore see any prospect of the market delivering improved security
across the board. At the same time, we see no reason why the sort
of good practice described by Mr Henton should not, by means
of regulation if necessary, be made the industry norm.
3.59. We do not advocate immediate legislation
or heavy-handed intervention by the regulator. Nor do we believe
that the time has yet come to abandon the end-to-end principle
once and for all. But the market will need to be pushed a little
if it is to deliver better security. The example of the Ofcom-sponsored
kite mark for content control software indicates one possible
way forward; a similar scheme for ISPs offering security services
would give consumers greater clarity on the standards on offer
from suppliers, and would help achieve greater uniformity across
the market-place, particularly if backed up by the promise of
tougher regulatory requirements in the longer-term.
3.60. The Government did in fact indicate that
they were discussing options for improving security with the Internet
services industry. As Geoff Smith, of the DTI, told us: "We
are also in discussion with the ISP community about a new initiative.
I am not sure one would describe it as self-regulation, but certainly
to develop a better understanding of what ISPs can offer as, if
you like, a minimum service or what we would see as a code of
practice around the security they are offering to their consumers"
(Q 70).
3.61. We welcome the fact that the Government
have at least started to think about these issues. However, the
discussions described by Mr Smith appear wholly open-ended;
the fact that he was not even prepared to describe what was envisaged
as "self-regulation", let alone "regulation",
inspires little confidence. In short, the Government's actions
so far have been toothless.
THE "MERE CONDUIT" DEFENCE
3.62. A specific legal consequence of the approach
we are recommending would be the erosion of the "mere conduit"
principle, embodied in the E-Commerce Regulations of 2002[14].
This principle provides a defence for network operators against
legal liability for the consequences of traffic delivered via
their networks. The principle can be caricatured, in Professor Zittrain's
words, as the ability of the ISP to say, "I'm just the conduit.
I'm just delivering the ticking package. You can't blame me."
We would not wish to see the mere conduit defence, any more than
the end-to-end principle, abandoned. However, we agree with Professor Zittrain
that it is now appropriate to "take a nibble out of the blanket
immunity". In particular, once an ISP has detected or been
notified that an end-user machine on its network is sending out
spam or infected code, we believe that the ISP should be legally
liable for any damage to third parties resulting from a failure
immediately to isolate the affected machine (QQ 961-963).
3.63. This carries a risk. It could create a
disincentive for ISPs proactively to monitor the traffic emanating
from their customers: they might conclude that it was in
their interests to remain ignorant of compromised machines on
their network until notified by others. This would be counter-productive,
and could compound existing legal constraints to do with data
protection and interception of communications, which already affect
security research. To guard against such an outcome, not only
should ISPs be encouraged proactively to monitor outgoing traffic,
but in so doing they should enjoy temporary immunity from legal
liability for damage to third parties.
VOICE OVER INTERNET PROTOCOL
3.64. We raise here one further issue that emerged
in our inquiry, which relates to the robustness of the network, although
it is largely distinct from the other issues discussed in this
chapter. This is the regulatory framework for Voice over Internet
Protocol (VoIP) suppliers, and in particular their ability to
offer an emergency "999" service. When we spoke to Kim
Thesiger, of the Internet Telephony Service Providers' Association
(ITSPA), he said: "I do not know of a single ITSPA member
who does not want to offer 999 services and would like to do so
as soon as possible, but there are some significant regulatory
and bureaucratic problems" (Q 782). In particular, VoIP
companies have to satisfy the requirements imposed upon Publicly
Available Telephone Service (PATS) providers.
3.65. Kim Thesiger expressed particular concern
over the "network integrity clause" of the PATS requirements.
In a "copper-based" world it was clear what "network
integrity" meant. In the world of the Internetin which,
as we have noted, packets of data travel across a network of copper,
fibre-optic cable, wireless signals, and so onit is far
less clear what either what constitutes "network integrity",
or what control the VoIP provider can have over it. He said that
the message from Ofcom was that "you must decide yourselves
whether you have network integrity or not", which, if
the wrong decision was made, could expose providers to unacceptable
risks in the event of network failure.
3.66. VoIP is a relatively new technology, and
Ofcom's position on emergency services is still evolving. In written
evidence, Ofcom drew attention to a new Code of Practice for VoIP
providers, which would require them to make clear to potential
customers "whether or not the service includes access to
emergency services", and the level of dependence on externalities
such as power supply. However, this does not address the issue
of network integrity, or Kim Thesiger's point that Ofcom believed
that "in order to offer 999 calls you must be PATS-compliant".
In fact Ben Willis, Head of Technology Intelligence at Ofcom,
told us that the regulator had recently, in effect, toughened
the rules, bringing to an end a policy of forbearance on emergency
services, which had been based on the principle that "it
was better to have some 999 access than none at all" (Q 1030).
Instead Ofcom was initiating a new round of consultation, due
to be completed in summer 2007, but with no apparent commitment
to clarify the position.
Recommendations
3.67. The current assumption that end-users
should be responsible for security is inefficient and unrealistic.
We therefore urge the Government and Ofcom to engage with the
network operators and Internet Service Providers to develop higher
and more uniform standards of security within the industry. In
particular we recommend the development of a BSI-approved kite
mark for secure Internet services. We further recommend that this
voluntary approach should be reinforced by an undertaking that
in the longer term an obligation will be placed upon ISPs to provide
a good standard of security as part of their regulated service.
3.68. We recommend that ISPs should be encouraged
as part of the kite mark scheme to monitor and detect "bad"
outgoing traffic from their customers.
3.69. We recommend that the "mere conduit"
immunity should be removed once ISPs have detected or been notified
of the fact that machines on their network are sending out spam
or infected code. This would give third parties harmed by infected
machines the opportunity to recover damages from the ISP responsible.
However, in order not to discourage ISPs from monitoring outgoing
traffic proactively, they should enjoy a time-limited immunity
when they have themselves detected the problem.
3.70. The uncertainty over the regulatory
framework for VoIP providers, particularly with regard to emergency
services, is impeding this emerging industry. We see no benefit
in obliging VoIP providers to comply with a regulatory framework
shaped with copper-based telephony in mind. We recommend instead
that VoIP providers be encouraged to provide a 999 service on
a "best efforts" basis reflecting the reality of Internet
traffic, provided that they also make clear to customers the limitations
of their service and the possibility that it may not always work
when it is needed.
11 Jonathan L Zittrain, "The Generative Internet", Harvard Law Review, 119 (2006), p 2029.
12 For instance SONAR (Symantec Online Network for Advanced Response).
13 See Appendix 5.
14 See Regulation 17 of the Electronic Commerce (EC Directive) Regulations 2002.