APPENDIX 5: VISIT TO THE UNITED STATES
Members of the Sub-Committee taking part in the visit
were Lord Broers (Chairman), Lord Harris of Haringey, Baroness
Hilton of Eggardon, Lord Howie of Troon, Lord Mitchell, Dr Richard
Clayton (Specialist Adviser) and Christopher Johnson (Clerk).
Washington DC, Monday 5 March
Federal Trade Commission
The Committee was welcomed by Hugh Stevenson, Associate
Director for International Consumer Protection, and colleagues
Katy Ratté, Nat Wood and Jennifer Leach. The FTC had around
1,100 staff, including some 300 in the Bureau of Consumer Protection.
It was noted that the US had no comprehensive, over-arching
data protection or privacy legislation. There was however a requirement
for all companies to put in place reasonable processes to assure
security of personal data; this approach was preferred to
the setting of detailed technical requirements. The assessment
of "reasonableness" was flexible, depending on the size
of the company, the sensitivity of data, and so on.
The role of the FTC was to monitor proactively the
security measures put in place by financial institutions (including
all companies providing financial services, but excluding the
major national banks, which were regulated by the Federal Reserve),
and to investigate specific complaints with regard to other companies.
The FTC had discretion to decide which complaints to pursue, based
on the seriousness of the issues raised. If companies did not
have "reasonable" processes in place, the FTC could
either make an order requiring improvements, or could seek civil
penalties. The FTC had yet to enter into litigation on the scope
of reasonableness, but voluntary enforcement orders had been entered
into by a number of companies, including Microsoft, with regard
to its Passport programme.
The FTC received over 450,000 complaints of identity
theft each year, and surveys put the total number of cases at
8-10 million a year in the US. Work to disaggregate ID theft from
simple card fraud was ongoing. The FTC now required a police report
to be filed, which in turn triggered investigation by financial
institutions. However, the numbers of cases investigated were low.
Data breach notification laws in over 30 states had
had a marked impact, driving many investigations, notably the
Choicepoint case, which resulted in the company paying $10 million
in civil penalties and $5 million in redress to customers. However,
the inconsistency between state laws created some difficulties,
and Congress was now looking at a federal data breach notification law.
On spam, the "Can-Spam" Act had provided
for suits by private individuals or companies, and Microsoft and
other companies had brought cases; the FTC itself had brought
around 100 cases. The approach was normally to focus on what spam
was advertising, and thus who profited from it, rather than seeking
to identify the source of spam emails themselves.
State Department
The Committee was welcomed by Mr Richard C Beaird,
Senior Deputy Co-ordinator for International Communications &
Information Policy. The role of the State Department was to co-ordinate
international initiatives on cybersecurity, such as the "Information
Society Dialogue" with the European Commission. The State
Department also advocated the Council of Europe's Convention on
Cybercrime, which the US had now ratified.
Co-ordinated action was difficult, given the asymmetry
between legal systems around the world. However, cybersecurity
was an increasingly high priority internationally. Bodies such
as the OECD, the International Telecommunication Union (ITU)
and Asia-Pacific Economic Cooperation (APEC) were engaging with
issues such as spam and malware, and with capacity building designed
to help less developed countries confront these problems. The
UK was a strong partner in such international initiatives.
The top priority was to develop domestic laws
that put people in jail. To that end, technical measures
to help identify sources of, for example, spam, would be valuable.
On mutual legal assistance, which figured in the
Council of Europe Convention, the US participated actively in
the work of the first UN Committee on police co-operation. However,
in pursuing cases internationally there had to be a balance between
pursuing criminality and protecting freedom of speech.
The Committee attended a lunch hosted by the Deputy
Head of Mission, Alan Charlton. Guests included Stephen Balkam,
CEO of the Internet Content Rating Association; Peter Fonash,
Department of Homeland Security; Liesyl Franz, Information Technology Association of America;
Michael R Nelson, Internet Society; and Andy Purdy, President
of DRA Enterprises, Inc.
Team Cymru
The Committee spoke to Jerry Martin, Research Fellow,
who said that Team Cymru had begun as a think-tank, before being
incorporated in 2005. It now employed a network of researchers
dedicated to supporting the Internet community in maintaining
security; it was funded by grants and a small number of commercial
contracts, but was non-profit making.
On one day, the preceding Saturday, Mr Martin had
detected over 7,000 malicious URLs, over half of these hosted
in China. These were identified through a database of malicious
code samples, currently being added to at an average rate of 6,200
a day. Of these samples around 28 percent were typically being
identified by anti-virus software; the information was then made
available to Symantec, and by the end of the month the average
detection rate had increased to 70 percent.
If all the examples of malicious code were to be
reported to the police, they would be overwhelmed. There were
legal processes in place, both nationally and internationally, to
investigate them; the problem was one of time and resources.
The FBI cybercrime division employed relatively few people. Well
qualified staff soon found they could earn a lot more in the private
sector, leading to large numbers of vacancies in government agencies.
Mr Martin then illustrated the working of the underground
economy in stolen identities, credit card details etc., using
examples from Internet Relay Chat (IRC) rooms.
The official reported loss to banks of $2.7 billion
a year was under-reported; there was an incentive in the
financial community to down-play the problem. Education of consumers
was not really a solution: you would never be able to stop
people from clicking on links to corrupt websites. The key for
banks and others was:
- To introduce two-factor authentication;
- To ensure that companies were familiar with all
their address space, rather than bolting on new areas, for instance
when acquiring new subsidiaries;
- To be more demanding of software manufacturers.
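The first of these recommendations can be sketched in outline. The following is a minimal illustration of the HMAC-based one-time-password scheme (RFC 4226) that underlies many two-factor tokens, using only the Python standard library; it illustrates the technique, not any particular bank's implementation.

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """One-time code per RFC 4226: HMAC-SHA1 over a counter, truncated."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation: low nibble picks the window
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, timestamp=None, step: int = 30) -> str:
    """Time-based variant: the counter is the current 30-second window."""
    t = int(time.time()) if timestamp is None else timestamp
    return hotp(secret, t // step)

# RFC 4226 test vector: counter 0 with this secret yields "755224".
code = hotp(b"12345678901234567890", 0)
```

The code changes every 30 seconds, so a phished password alone is not enough to log in; the attacker would also need the current token.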
Progress and Freedom Foundation
The Committee met Tom Lenard, Senior Vice President
for Research, and colleagues. Mr Lenard approached the issues
as an economist, recognising the huge benefits derived from the
Internet, and asking whether there was market failure or harm
to consumers, and whether government action was needed to remedy
any such problems.
The best available statistics (e.g. Javelin and the
Bureau of Justice) indicated that levels of identity theft had
on most measures been in decline in the last three years, and
that the overall problem was smaller than normally represented.
On the other hand, the retention of information by companies was
what often allowed them to identify anomalous transactions so
quickly, and so benefited consumers. Mr Lenard accepted that
the reliability of the available data was open to question, but
cautioned against assuming that a lack of data meant an increasing problem.
On the security of operating systems, companies such
as Microsoft and Apple were spending a huge amount on security,
and there was no evidence that new incentives were needed. Governments
were not well placed to decide levels of security, encryption
and so on. The FTC's approach of requiring reasonable standards
of security was better. In addition, the FTC had launched
major litigation, for instance against Choicepoint. These cases had
created a significant deterrent against private sector companies
persisting with poor security practices. However, Government was
almost certainly not spending enough on security, and this would
be an appropriate area to regulate.
On spam, the Can-Spam Act had had no effect on levels
of spam. Intervention on spam was technically difficult, but the
Internet was young and evolving new technical solutions. Government
intervention had not helped.
Washington DC, Tuesday 6 March
Department of Justice
The Committee met John Lynch, Deputy Chief, Computer
Crime and Intellectual Property, and colleagues Chris Painter
and Betty-Ellen Shave. The Department of Justice itself had around
40 attorneys working on cybercrime and intellectual property.
It also supported a network of 200 federal prosecutors around
the US specialising in high-tech crime, working closely with the
FBI and local law enforcement.
The FBI now had cybercrime as its number three priority,
after international terrorism and espionage. At the same time
the US, like all countries, lacked resources to deal with cybercrime;
in particular many local police forces had difficulty conducting
computer forensics. These problems were compounded by the loss
of qualified investigators to the private sector.
Moreover, there were no unified definitions or reporting
systems for cybercrime or identity theft, and statutes varied
from state to state. Victims who reported small cybercrimes to
local police, who lacked expertise, were unlikely to get anywhere.
This created particular problems in investigating small crimes (say,
under $1,000) which would not justify federal prosecutions.
However, if victims reported small crimes to the "IC3"
(Internet Crime Complaint Center), the FBI would "triage"
them, which meant there was a chance of linking up many small
cases so as to turn them into larger, potentially federal, cases.
The President had asked for a report on identity
theft, and the DoJ was co-operating with the FTC, FBI and Secret
Service in considering the issues. The report was likely to appear
in the next two or three months. The FTC was pressing for uniform
reporting procedures for ID theft, and this might well figure
in the report.
Reporting rates were low, and many crimes were swallowed
up by the credit card companies. The general feeling was that
law enforcement was not keeping up with cybercrime, and this appeared
to be having a damaging effect on the growth of e-commerce. While
there were prosecutions, only a small percentage of crimes ended
up in court. Whereas ten years ago cybercrime was the domain of
experts, now the general criminal, with no special abilities,
could commit crimes online.
The UK had been prominent in multi-lateral actions,
and was probably ahead of the US in protecting critical IT infrastructure.
However, whereas the US had ratified the Council of Europe convention,
it was still urging other states (including the UK) to ratify.
The creation of a 24/7 emergency network meant that law enforcement
officers from around 50 countries could at any time request assistance
from US experts; there was no guarantee that requests would be
granted, but they would be considered without delay.
As for the Mutual Legal Assistance and hot pursuit
provisions of the convention, the US was slower than some other
countries in closing down rogue websites. In particular, the 1st
Amendment, guaranteeing freedom of speech, dictated a cautious
approach. At the same time, law enforcement had developed good
relations with ISPs, who could close sites that breached their
terms and conditions.
The key recommendations were, first, for the UK to
ratify the Council of Europe convention, and, second, to increase
resources for law enforcement.
Verisign
The Committee was welcomed by Shane Tews, Senior
Washington Representative, who outlined the role of Verisign.
The company ran two of the thirteen top-level roots (the "A"
and "J" roots) of the Internet. It also supported the
database registry for the .com and .net domains. It employed just
under 4,000 people globally, and maintained servers around the
world. This allowed regional resolution of "bad traffic": in
effect, bad traffic emanating from, say, Russia, could be sunk
in a regional "gravity well", rather than slowing down
the Internet as a whole.
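The "gravity well" idea can be sketched as a routing policy: queries from regions currently flagged as attack sources are diverted to a regional sinkhole, while everyone else reaches the normal global service. All server names below are hypothetical; this is an illustration of the policy, not Verisign's implementation.

```python
# Hypothetical regional sinkholes; names are illustrative only.
SINKHOLES = {
    "eu-east": "sink-eu.example.net",
    "apac": "sink-apac.example.net",
}
GLOBAL_ANYCAST = "root.example.net"       # the normal, globally shared service
DEFAULT_SINK = "sink-default.example.net"

def route(source_region: str, flagged: set) -> str:
    """Pick the server that should absorb a query from this region."""
    if source_region in flagged:
        # "Gravity well": attack traffic is answered (and discarded)
        # close to its source, so the global service is unaffected.
        return SINKHOLES.get(source_region, DEFAULT_SINK)
    return GLOBAL_ANYCAST
```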
Verisign could not specifically identify the IP addresses
of the originators of bad traffic, such as spoof emails, but it
could identify the IP addresses of servers (in effect, the
wholesalers) and engage with them.
Personal Internet security could not be separated
from the integrity of the infrastructure as a whole. The volume
of bad traffic, much of it targeted ostensibly at individual users,
affected the entire network. The originators were variously organised
criminals, terrorists and rogue states. Secure, government-run
financial networks now handled around $3 trillion of traffic every
day. These networks did not interact directly with the public
Internet, but such transactions would not be possible if public
sites, such as the New York Stock Exchange, or the Bank of England,
were not operating. The Internet had to be viewed holistically;
the costs of insecurity were potentially huge.
The level of bad traffic (for instance, the
DOS attack on the .uk root server in February 2007) was now
peaking at 170 times the basic level of Internet traffic; by 2010
it was likely to be 500 times the basic level. Massive over-capacity
and redundancy was needed to allow enough headroom in the network
to accommodate such traffic. Verisign alone was now able to handle
four trillion resolutions per day on its section of the network,
some eight times the normal current volume across the entire network.
More broadly, Verisign was a private sector company,
in effect performing a public service in maintaining the network.
The Internet had not been designed to support the current level
of financial traffic; it had just happened that way. Authentication
of websites was a service offered by Verisign, and the process
of securing authentication for major companies such as Microsoft
was very thorough. But in the longer term the question would arise
of whether, and if so when, individuals would be prepared to pay
for authentication of Internet-based services, such as email,
which were currently free.
Internationally, certain states in eastern Europe
and Asia were turning a blind eye to organised crime operating
via the Internet from within their borders. Although the Council
of Europe convention was a huge step forward, it was essential
to engage local authorities and agencies in combating this phenomenon.
California, Wednesday 7 March
University of California, Berkeley Center for
Information Technology Research in the Interest of Society (CITRIS)
The Committee was welcomed by Gary Baldwin, Executive
Director of CITRIS. CITRIS had been established some six years
ago, on the initiative of former Governor Gray Davis. It was an
independent research centre, reporting directly to the President
of the University. A small amount of money, sufficient to cover
operating costs, came from the State of California. Funding for
research came from partner organisations in industry and federal
government (such as the National Science Foundation). Of the staff,
over half were from electrical engineering and computing; engineering,
other sciences, and social sciences, made up the remainder.
Professor Sastry said that the point of CITRIS was
to bring together technologists with experts in the social science
field to develop a co-ordinated approach to cybersecurity research.
CITRIS itself was an umbrella organisation, which sheltered a
number of different research priorities.
Many companies had made pledges (typically $1.5 million
a year) to support research, making good these pledges by buying
membership in particular research centres, such as TRUST, rather
than by contributing to a central pot. These centres, with 5-10
researchers, were fluid, normally breaking up and re-forming over
a five-year cycle.
CITRIS took the view that new technologies should
be put in the public domain. The results of research were published
and made available by means of free licensing agreements (in other
words, not open source). Industry partners had to leave their
intellectual property behind when engaging in CITRIS research
projects; however, they were free to make use of the results of
these projects to develop new products with their own IP.
TRUST (the Team for Research in Ubiquitous Secure
Technology) was one of the research centres, and organised its
work on three planes: component technologies; social challenges;
and the "integrative" layer between them. Issues investigated
included phishing and ID theft, with particular emphasis on the
collection of reliable data. Statistics were currently based largely
on self-selected surveys, and banks still regarded ID theft as
marginal. However, the growth rate was exponential, and in recent
years, through a "Chief Security Officer Forum", a number
of companies, such as Wells Fargo, Bank of America and Schwab,
were taking the issue more seriously.
TRUST had established a test-bed for network defence
systems, in which different kinds of attack could be simulated.
Technological transfer included anti-phishing products such as
SpoofGuard, PwdHash and SpyBlock.
CITRIS research centres were constantly looking for
international partners, and a symposium was being organised in
London in July. The question was raised as to whether British
universities should establish a similar research centre, in collaboration with CITRIS.
Dr Paxson outlined his research on detecting and collating
network intrusions. The goal of information security policy was
risk management. False positives and false negatives were the
Achilles' heel of all intrusion detection, and, scaled up, undermined
assessment of the risks. His laboratory focused on real Internet
traffic, rather than simulations, and in so doing detected from
the high 100s to low 1,000s of attacks each day.
Analysis of packets as they passed required highly
specialised hardware, which ISPs did not have access to. This
meant that ISPs were simply not in a position to filter Internet
traffic and achieve an adequate level of false positives and false negatives.
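The scaling problem described here is a matter of base rates: when genuine attacks are a tiny fraction of events, even an excellent detector produces mostly false alarms. A sketch with hypothetical numbers:

```python
def alert_precision(events: float, attack_rate: float,
                    detection_rate: float, false_positive_rate: float) -> float:
    """Fraction of raised alerts that correspond to real attacks."""
    attacks = events * attack_rate
    benign = events - attacks
    true_alerts = attacks * detection_rate
    false_alerts = benign * false_positive_rate
    return true_alerts / (true_alerts + false_alerts)

# Hypothetical: a billion events a day, one attack per million events,
# 99% detection, and a false-positive rate of just 0.1% still leaves
# roughly a thousand false alarms for every real attack flagged.
precision = alert_precision(1e9, 1e-6, 0.99, 0.001)
```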
Mass attacks were directed at large parts of the
network at once; they were not aimed at specific victims. Botnets were the
key problem; the cost of renting a compromised platform for
spamming was currently just 3-7 cents a week. The total number
of compromised machines was unknown; a guess would be around
five percent, or 10-20 million. There was no evidence to suggest
that some countries were significantly worse than others.
The research raised legal problems. One was the restriction
on wire tapping. More fundamental was the fact that a platform
that allowed incoming traffic but barred outbound traffic could
be easily finger-printed by the "bad guys"; but to allow
outbound traffic risked infecting other platforms, and could make
the centre liable for negligence.
Mr Hoofnagle noted that the US now had 34 state laws
on security breach notification, and a federal law covering the
Veterans' Agency. Within these there were various definitions
of what constituted a security breach, with the California law
the most demanding. In contrast, some states required evidence
of potential for harm. There was now pressure for a federal law
on security breach notification, which was likely by the end of
2007. It appeared that the FTC would be responsible for implementation.
The Center had collected 206 examples of notification
letters, and was coding them under various criteria. However,
the collection was by no means complete; only a few states
(around five) required centralised notification to a specified
regulator. These also required the use of standardised forms,
which were crucial to providing good data.
There was some evidence that the media had lost interest
in security breach notification, reducing the incentive to raise
security levels to avoid tarnishing company image. However, a
central reporting system, bringing together information on company
performance in a generally accessible form, would help counteract this.
Data on ID theft were also very poor. The Javelin
survey estimated 8 million cases in 2006, but relied on telephone
surveys. Online polling put the figure at nearer 15 million. Estimates
of the damage ranged from $48-270 billion in 2003. Data were also
lacking on "synthetic ID theft", where a stolen social
security number was combined with a made-up name. Assertions that
most ID theft was perpetrated by persons close to the victim (family
members etc.) were based on very small samples.
Professor Schwartz drew attention to the split in
the US, as in most countries, between law enforcement and intelligence
agencies. While there was good information on the former, little
was known about the latter.
There were two levels of law: constitutional and
statutory or regulatory. The main constitutional law derived from
the Fourth Amendment, on the requirement for a warrant for searches
and seizures, based on probable cause. Until 1967 there had been
no privacy for telecommunications, but at that point the Supreme
Court had established the requirement for a search warrant for
tapping, on the basis of the individual's "reasonable expectation
of privacy". This had since been curtailed by rulings that
the Fourth Amendment did not apply either to information held
by third parties (e.g. bank records) or to "non-content",
such as lists of numbers dialled.
Modern communications meant that ever more information
was being held by third parties, such as emails stored on servers.
In addition, information was no longer communicated in real time (as
telephone conversations had been in 1967), with the result that the
Fourth Amendment did not apply. Consequently there was
little protection under the US Constitution.
Ms McCormick drew attention to the need for the technology
companies that operate the network to lead in tackling the problems.
A common complaint was that universities were not training enough
graduates to support these companies, and the Science, Technology
and Society Center was therefore developing an industry-backed
security curriculum, with web-based modules covering such issues
as risk management, policy and law.
Around 85 percent of the critical infrastructure
was developed, owned and maintained by the private sector. The
Center was exploring how decisions were taken by the companies
involved, the roles of Chief Security Officers and Chief Privacy
Officers, how they were qualified, what sorts of technologies
they acquired, and how internal security policies were set. Security
and privacy were not profit-generating, but drew on resources
generated by other profit-making sectors. The Center was looking
at how security breach notification laws impacted on decision-making
in this area.
Finally, researchers were looking at the barriers,
in particular the difficulty of accessing network traffic data.
The US legal regime (e.g. the Stored Communications Act) was having
a chilling effect on research.
Electronic Frontier Foundation
The Committee met Gwen Hinze, Daniel O'Brien, Seth
Schoen and Lee Tien from the Electronic Frontier Foundation, a
not-for-profit organisation founded in 1990, with 13,000 paying
members, dedicated to representing innovators and supporting civil
liberties for the consumer on the Internet.
The focus of the EFF was increasingly on litigation
and education, rather than policy-making. The biggest case currently
being undertaken was a class-action lawsuit against AT&T for
their involvement in the National Security Agency's programme
of wire-tapping communications. The EFF employed 12 attorneys,
but also leveraged support from other organisations. Cases were
taken on a pro bono basis.
The EFF had three positive recommendations: to
focus on prosecuting real Internet crime; to explore possible
changes to incentive structures to address market failures in
the field of Internet security; to empower and educate users,
rather than following the emerging trend to lock down devices.
On the last of these, the EFF was concerned by the
increasing tendency to take control of systems away from their
users. While such control, exercised by, say, network operators,
might be exercised from benign motives, it effectively imposed
a software monopoly upon users, limiting innovation. At the same
time, insecurities often resided within operating systems and
applications themselves, so that the current focus on firewalls
and anti-virus software was misplaced. The key was to empower
and educate users to manage their own security intelligently,
rather than to adopt a paternalistic approach which would only
store up problems for the future.
There were many well-documented security problems
that the market had not fixed. New incentives were therefore needed.
Vendor liability risked encouraging companies to shift liability
to users, by exerting ever more control over end users, and the
EFF was therefore equivocal on the desirability of such a regime.
It would also impact on innovation, open source software, small
companies and so on. More research and analysis of new incentives were needed.
A significant percentage of computers had already
been compromised by organised crime. However, botnets were not
affecting end users directly, but were being used for spam and
DOS attacks. As a result end users needed more information, not
less, so that they could evaluate the position more intelligently.
They needed a reason to care.
The Committee attended a dinner hosted by Martin
Uden, the Consul General. The guests were Whit Diffie, CSO, Sun
Microsystems, John Gilmore, Founder, Electronic Frontier Foundation,
Jennifer Granick, Executive Director of the Stanford Center for
Internet and Society, and John Stewart, CSO, Cisco Systems.
California, Thursday 8 March
Silicon Valley Regional Computer Forensic Laboratory
The Committee was welcomed by Mr Chris Beeson, Director
of the Laboratory, and then heard a presentation from Special
Agent Shena Crowe of the FBI. She began by commenting on the availability
of data. The FTC led on ID theft, and individuals were required
to report theft to local police in the first instance. The FBI
ran the Internet Crime Complaint Center (IC3), and individuals
were encouraged to report offences to this site by other agencies
and police, but the police report was the fundamental requirement.
Reporting to IC3 was voluntary.
In 2006 complaints to IC3 reached 20,000 a month.
Losses reported in 2005 were $183.12 million, with median losses
of just $424. Over 62 percent of complaints related to online auction fraud.
Cybercrime was a maturing market. There was a lot
of money to be made, and although there were some individual criminals,
organised crime led the way. Underneath this level there were
many specialists in such areas as rootkits. Communications within
the criminal world were conducted through IRC (Internet Relay
Chat), P2P (Peer-to-Peer), and Tor (The Onion Router). Typically
first contacts would be made via IRC, and deals would then be
made in other fora. Team Cymru and other volunteer groups played
a critical part in monitoring this traffic; the FBI, as a
Government agency, could not lawfully monitor or collect such
data, whereas researchers were able to do so.
In terms of security, the key players were the industry
itself, the Department of Homeland Security, FBI, IT Information
Sharing and Analysis Centers (IT-ISAC, in which company security
specialists shared best practice), and the Secret Service. Within
the FBI the cybercrime division was established six years ago,
and staff and resources had in recent years shifted from conventional
criminal work to the top priorities of counter-intelligence, terrorism and cybercrime.
International action was difficult and often informal.
Requests for help could be ignored or subject to barter. There
were few reliable data on the main centres of organised cybercrime,
though Russia and China were commonly cited as major sources.
Security breach notification laws had been beneficial
in helping companies to normalise the issues. Rather than sweeping
breaches under the carpet they were now more likely to assist
investigations. However, the reality of investigations was that moving
from an attack on a particular target, to tracking down the drones
and the botnet, to reaching the source, could take months. Investigations
were not operating in digital time. Ms Crowe then took the Committee
through the various stages of one particular investigation, which
had taken about a year to complete.
ISPs were now beginning to sand-box infected computers
used to send spam and so on. However, the reality was that criminal
innovation was a step ahead of enforcement. In 2005 six major
US companies experienced theft of personal identifying information,
with insiders increasingly being implicated. These cases were
all reported to the FBI by the companies concerned.
Mr Beeson then told the Committee that there were
14 Regional Computer Forensic Laboratories. The volume of data
processed had increased from some 40 Tb in 2000 to over 1,400
Tb in 2005. Processing this volume of data required specialised
laboratories, focusing solely on computer forensics. The RCFLs
were set up in partnership with local law enforcement, who provided
personnel. In return, the RCFL would provide forensic analysis,
at no cost to the local police. Federal funding supported running
costs, such as premises and equipment.
Mr Beeson then gave the Committee a short guided
tour of the facility.
Apple
The Committee met Bud Tribble, Vice President, Software
Technology, and Don Rosenberg, Senior Vice President and General
Counsel. Dr Tribble noted that while in the 1980s no-one had anticipated
the security issues associated with the Internet, security was
now a top priority not just for Apple but for every other company
in the industry. In 2000 Apple had replaced its existing operating
system with a Unix-based system, overlaid with a usable
top layer to create a secure platform.
Security started with good design. Security had to
be easy to use, or else people would not use it. Apple went out
of its way not to ask users security questions to which they
would not know the answers. There was no simple fix for security,
no "seat belt" for Internet users, but overall security
continued to improve incrementally.
Asked about vendor liability, Dr Tribble argued that
there were many causes for, say, a virus infection: the virus
writer, the user who downloaded the virus, and so on. It was difficult
to assign responsibility or liability. The key was to incentivise
continuing innovation; it was not clear that vendor liability
would create such an incentive.
However, by taking decisions away from users Apple
was implicitly taking on more liability. The company took decisions
which could prevent users from downloading and running material;
indeed, on the iPhone it would not be possible to download any applications.
People had protested, and Microsoft systems certainly allowed
more freedom, but they also created more problems. Looking to
the future, Apple was conducting research into the possibility
of including a sand-box in which applications could be run securely,
but this was two or three years away.
Ultimately the market would decide. The problem was
that at present there was not enough transparency or information
within the market to enable consumers to take such decisions.
Security and usability had to be balanced. There were technical
fixes to security issues (PGP encryption, to address the
traceability of email, had been around for more than 10 years), but
they were not usable for general users.
Spam was a major issue. The reason there was so much
spam was that there was an economic incentive to create it. In
addition, Can-Spam had been ineffective: it was not enforceable,
and many of the spammers were operating outside the law anyway.
Apple used a filtering technology to filter out spam. Although
there were reports of Macs in botnets, they appeared to be very
rare, and the evidence was largely based on hearsay. The company
had yet to see a Mac botnet. The most fruitful avenues for dealing
with botnets appeared to be technologies that, first, prevented
bots getting onto end-user systems, and, second, detected bots
running and alerted users to the problem.
The latest Mac operating system, Leopard, would raise
the bar for security. Technologically it was on a par with Windows
Vista (assuming Vista did everything it was supposed to do), but
it was ahead on ease of use.
Cisco Systems
The Committee met Laura K Ipsen, VP of Global Policy
and Government Affairs, and John Stewart, VP and Chief Security
Officer. They argued that the industry was still inexperienced
in understanding what the Internet meant for society. Practice
varied: Microsoft had begun by focusing on usability, later on
reliability, and now on security. In this respect the market had
proved effective; the danger was that regulation would not
be able to keep up as effectively with the developing threats.
The market was very different to that for cars, where the technology,
and the risks, were very stable and well-known.
There were only six or seven operating system vendors,
and their security was improving. The challenge would be to reach
the thousands of application vendors, whose products were increasingly
targeted by the bad guys. The Government should focus on setting
and applying penalties for those who abused the system; the role
of industry should be to educate users. Time, and the development
of the younger generation, would solve many of the problems. At
the same time, standards of privacy would change.
Increasing volumes of data on the Internet were good
for Cisco's business, but the volumes of bad traffic carried a
cost in reducing the usability of many parts of the Internet.
On internal Cisco security, Mr Stewart confirmed that routers
did provide the facility of two-factor authentication, but that
this was only advised as best practice, not mandated. Cisco's
approach was to provide the capability, but not to dictate the
implementation by ISPs.
More broadly, the motives and incentives to fix security
problems were very involved. Most users did not know what a botnet
was. If they got a message saying they were linked to a botnet
they would just ring the helpline, so impacting on, for instance,
Apple's profits. The best approach was not to focus piecemeal
on technological risks, which were constantly changing, but to
track down and prosecute the criminals.
Cisco Systems hosted a lunch for the Committee and
the CyberSecurity Industry Alliance (CSIA). Attendees were Pat
Sueltz, Max Rayner, Matt Horsley and Amy Savage (all from SurfControl),
Ken Xie (Fortinet), Kit Robinson (Vontu), Adam Rak (Symantec)
and Thomas Varghese (Bharosa).
In discussion, attention was drawn to the number
of reports of Internet crime on the IC3 website, and it was argued
that this represented the tip of the iceberg. The only reliable
thing about the data was the rate of increase; the actual
figures were grossly under-reported. Overall, the position appeared
to be getting worse rather than better. Although there had been
no major outbreaks in the last year or two, this was attributed
to the fact that criminals increasingly chose to remain out of
sight, using botnets to make money rather than distributing high-profile
viruses.
Asked whether there was a down-side to security breach
notification laws, it was suggested that some companies might
not monitor breaches in order to avoid a duty to report them; the
law should include a duty to monitor as well as to report. In
addition, those receiving notifications should be given better
information on what to do about them. More broadly, the effect
of breach notification laws was seen as positive, but there was
a view that they should be extended to cover printed as well as
electronic material. Most security breaches remained physical,
for instance employees losing laptops etc. Finally, it was argued
that any such laws in the UK should not repeat the mistakes made
in some US states, by making it clear that the duty to notify
was universal, rather than being focused on UK citizens.
There was some discussion on overall responsibility
for security. On the one hand it was argued that too much responsibility
was being placed on end users, as if they were to be required
to boil or purify water to avoid being poisoned, when in fact
the infrastructure itself was the source of contamination. ISPs
in particular should take a greater role in filtering traffic.
On the other hand, it was argued that the analogy with water was
misleading, as there was no consensus in the Internet field on
what was "toxic".
eBay
Matt Carey, Chief Technology Officer, welcomed the
Committee. Rob Chesnut, Senior Vice President, Trust and Safety,
said that he had formerly been a federal prosecutor; several other
former federal law enforcement officers worked for the company.
He argued that eBay had a very strong incentive to improve security,
as the company's whole business model was based on trust and the
fact that customers had a good experience of the site.
Law enforcement was a key challenge: scammers might
be deterred if they thought there was a chance of going to jail.
The fact that Internet fraud crossed jurisdictions created difficulties,
and authorities in some countries simply weren't interested in
pursuing offenders. eBay devoted considerable resources to building
up relationships with law enforcement around the world, providing
advice, records and testimony as required. The company had played
a part in over 100 convictions in Romania alone.
eBay also reported all frauds to the IC3 website,
and encouraged customers to do the same; this meant that
the IC3 data (showing 63 percent of complaints related to online
auctions) were skewed. However, this reporting was essential to
allow individually small cases to be aggregated. In
addition, the company provided training to law enforcement, and
hotlines that officers could call.
The number one problem facing eBay was phishing,
which undermined confidence in the company and in e-commerce.
eBay was targeted because it had the highest number of account
holders, and therefore the best rate of return, and because holding
an eBay account generated trust, which the scammers could
make use of. eBay was working to make stolen accounts worthless,
by detecting them and locking them down. However, the victims
did not seem to learn from their mistakes: they would give
up account details time after time. Most cases involved cash payments,
e.g. via Western Union, rather than credit cards or PayPal.
The most worrying trend was the increased popularity
of file-sharing. People did not appreciate the risk that the bad
guys could then go on to search all the data in their personal
files for account details, passwords and so on.
The company's major recommendations would be as follows:
- Provision of better training for
- Diversion of resources within law enforcement
towards combating e-crime.
- Reappraisal of the penalties applied to those
convicted of e-crime.
- Relaxation of the laws of evidence, to make it easier for
victims in different jurisdictions to give affidavits or testimony.
- Aggregating of offences across jurisdictions.
- A requirement that money transfer companies prove
the ID of those using their services.
Redmond, Friday 9 March
Microsoft
Doug Cavit, Chief Security Strategist, drew attention
to the powerful economic motivation to encourage Internet use.
Security was key to this. At the same time, software development
differed from, say, car manufacture, in that software was adaptive: it
was not just a case of adding features at a fixed cost, but of
an incremental process of development and manufacture.
Asked whether ISPs could do more, he noted that most
ISPs currently isolated machines detected as belonging to botnets.
However, actually contacting owners to fix the problem was too
expensive. Microsoft offered a "malicious software removal
tool" (MSRT) free of charge, which had been available for
a year. Data on use were published.
The nature of the threat had changed. It was now
about making money and, to some extent, attacking national security.
Those behind the threat were expert and specialised. Attacks were
moving both up and down the "stack": exploiting
on the one hand vulnerabilities in the application layer, and
on the other working down through chips and drivers to hardware
exploits. As a result traditional defences, anti-virus software
and firewalls, were no longer adequateevery layer of the
system now had to be defended. MSRT data also showed that there
were now relatively few variants on viruses; on the other hand
there were thousands of variants on back-door or key-logger exploits,
designed to get around anti-malware programmes.
More broadly, the Microsoft platform had always been
designed to enable interoperability and innovation. This would
continue, though within Vista every effort had been made to ensure
that the prompts and questions for end-users were more transparent.
Kim Cameron, Identity and Access Architect, said
that he remained optimistic about the Internet. The more value
was transferred through the medium the more criminals would target
it, but the industry could stay on top of the problems. The major
companies were increasingly realising that they needed to work
together and with governments; solutions to the problems
were not purely technical. There had in many cases been a disconnect
between the technology industry and governments. In the UK, for
instance, the original, centralised proposals for ID cards had
been very unfortunate, and the movement towards a more decentralised,
compartmentalised system was very welcome.
It was possible to produce devices which were 100
percent secure. The problem came with the interaction between
those devices and their human users. There were things that users
should not have to know; the technical approach had to adapt
to them. For instance, Windows had translated complex and, to
most users, meaningless tasks into easily grasped visual analogies.
The key challenge he faced was to translate identity management
into similarly transparent visual terms. The image being used
in CardSpace was of a wallet, containing multiple identities,
from which, like credit cards, users could choose which one to
use in particular circumstances.
The Internet had been built without an identity layer,
and filling this hole retrospectively was a hugely complex task.
The need to take on this task had to be accepted across the industry,
and across national and political boundaries. Dr Cameron's
paper on the Laws of Identity sought to achieve this by setting
out key principles.
Emerging technologies such as RFID tags would have
many potentially dangerous applications. It was essential that
all such devices be set up in such a way that the individual had
a choice over whether or not to broadcast his or her individual
identity. The company was working on approaches to these
issues that were free of intellectual property restrictions, which
would be available for other companies to develop and plug into
their systems.
Sue Glueck, Senior Attorney, and Nicholas Judge,
Senior Security Program Manager, argued that security and privacy
were two sides of the same coin. As well as improving security
Microsoft had to invest in privacy, both to protect itself legally
and to make deployments more straightforward.
Microsoft's guidelines for developing privacy-aware
software and services had been made public in an effort to help
the computer industry, particularly smaller companies who could
not afford full-time privacy and security staff, use a
common set of rules and ways of doing things. In this way
some of the data breaches and other privacy problems that were
currently widespread could be avoided. The guidelines were available
for download at http://go.microsoft.com/fwlink/?LinkID=75045.
The company's key principle was that Microsoft customers
be empowered to control the collection, use and distribution of
their personal information. The company employed 250 staff to
implement this principle, assessing each feature of software at
an early stage of development against core privacy criteria. In
the case of Vista, there were around 520 teams working on features,
of which about 120 had privacy impacts. The requirement for privacy
drove around 30 significant design changes. The privacy team was
formerly seen as a nuisance, but increasingly designers and developers
had bought into the value of privacy.
On the use of language, messages were tested against
stringent usability criteria, including invented personae with
varying knowledge of computers. However, the team did not have
the resources to test messages against focus groups.
Questioned on the privacy implications of the Microsoft
phishing filter, it was noted that the data sent to Microsoft
were stripped of all log-in details and were only preserved for
10 days on a separate server.
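The mechanism for stripping log-in details was not described in the meeting; purely as an illustration of the kind of sanitisation referred to, a reported URL might be reduced to its scheme, host and path along the following lines (the function name and sample URL below are invented for this sketch):

```python
# Illustrative sketch only: remove embedded credentials and the query
# string from a URL before it is sent on for analysis. Not Microsoft's
# actual implementation; the sample URL is invented.
from urllib.parse import urlsplit, urlunsplit

def strip_login_details(url: str) -> str:
    """Keep only scheme, host[:port] and path; drop user:password
    credentials, the query string and any fragment."""
    parts = urlsplit(url)
    host = parts.hostname or ""
    if parts.port:
        host = f"{host}:{parts.port}"
    return urlunsplit((parts.scheme, host, parts.path, "", ""))

print(strip_login_details(
    "http://user:secret@phish.example/login.php?session=abc123"))
# -> http://phish.example/login.php
```

The query string and userinfo components are where session tokens and credentials typically travel, which is why a sketch like this drops them wholesale rather than trying to filter individual parameters.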
Aaron Kornblum, Senior Attorney, said that Microsoft's
Legal and Corporate Affairs Department had over 65 staff worldwide,
seeking to use civil litigation to enforce Internet safety rules.
The staff were in some cases recruited from government agencies,
such as the FBI, the Metropolitan Police etc., but outside counsel
were also used to bring cases.
Under federal and state laws ISPs could bring cases
against spammers on behalf of their customers. Microsoft, through
its ISP, MSN, had brought such cases.
In order to prevent phishing sites from using the Microsoft
identity, all newly registered domain names held by the registrars
were scanned against key text, such as "msn.com". As
a result of this work, along with a proactive approach to investigating,
prosecuting and taking down phishing sites, the number of spoof
MSN sites had fallen considerably. Prosecutions in such cases
were launched under trademark law.
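The scanning tooling itself was not described; as a minimal illustration of the key-text approach, a check of newly registered names against protected brand strings might look like the following (the key-text list and sample domains are invented for the example):

```python
# Illustrative sketch only: flag newly registered domain names that
# embed a protected brand string. The key-text list and the sample
# domains below are invented, not Microsoft's actual watch list.
KEY_TEXT = ["msn.com", "msn-", "hotmail"]

def flag_spoof_candidates(new_domains, key_text=KEY_TEXT):
    """Return the domains whose name contains any protected key text."""
    flagged = []
    for domain in new_domains:
        name = domain.lower()
        if any(key in name for key in key_text):
            flagged.append(domain)
    return flagged

sample = ["update-msn.com.example.net",
          "flowershop.example",
          "msn-security-check.example"]
print(flag_spoof_candidates(sample))
```

A real system would go further (homoglyph and typo variants, WHOIS data, hosting checks); simple substring matching is only the first sieve, producing candidates for human review before any trademark action.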
Partnerships with law enforcement were crucial, such
as "Digital PhishNet", set up in 2004. Investigations
were frequently worldwide, involving multiple lines of inquiry: for
instance, investigating where phished data were sent, where phishing
sites were hosted, and so on.
Looking forward, the key issues of concern were the
prevalence of botnets to distribute malicious code, and the introduction
of wireless technologies.
Linda Criddle, Look Both Ways
At a separate meeting, Linda Criddle drew attention
to five factors that increased the risks to personal safety online:
- Lack of knowledge;
- Unintentional exposure of (or by) others;
- Technological flaws;
- Criminal acts.
Software was not currently contributing to safety,
and in many cases was undermining it. Networking sites such as
MySpace or espinthebottle did not default to safe options, and encouraged
the disclosure of personal information, the use of real names,
and so on.
In addition, much content filtering technology only
filtered external content. For example MSN content filtering did
not filter the (often age-inappropriate) content of the MSN network
itself. This left users wholly exposed.
Products should not carry a default risk setting.
Wherever a choice was involved users should be fully apprised
of the risks so that informed choices could be made.