Memorandum by Alan Cox
This submission attempts to summarise aspects
of the open source community viewpoint on the questions asked
by the inquiry, as was requested. It represents the personal viewpoint
of the author based upon extensive experience and his position
within the community. Although the author is employed in this
field, this submission does not represent the viewpoint of his
employers and has not been reviewed by them.
This response attempts to explain how the "open
source" methods of software development used by projects
such as Linux and Firefox relate to personal Internet security.
Open source is a broad church, and to cover anything but the generalities
would require a book rather than a response.
The Open Source community, generally speaking,
is focused on two threats. The first is technical flaws in software
which create an opportunity for attacks on systems. The second
is attacks on the users themselves such as "phishing".
Both threats are rapidly evolving in terms of attacks and countermeasures.
As computer security improves, attacks appear to be shifting towards
the user, who is becoming the easier target.
The scale of the problem is difficult to measure
and the open source community does not generate detailed end user
data. It may also be misleading to think about it in a conventional
crime recording model. Unlike a burglar who discovers a flaw in
a common type of car lock, a software-based attack can go from
unknown to global within hours. This significantly changes the
threat model and the required response. In particular, the open
source community is sceptical of the long-term viability of virus
scanners. These depend upon a reaction from a vendor and an update
being issued before they can protect against a new virus. That
may be too late.
The community does not track data on the number
of users affected although data is tracked on the number of bug
fixes made that may involve security. However it is very hard
to relate bug fixes directly to actual incidents of user attack,
as most flaws are fixed before they are exploited.
Open source users are probably atypical in terms
of their understanding of the threat. There is a higher percentage
of technical users in the open source community. Nevertheless
there are some concerns within the open source security community
that some of the less well informed attitudes of end-users may
be problematic. In particular some users believe that open source
software is totally secure and there will never be a risk of viruses
or other problematic attacks.
The number of flaws in open source software
is lower than average. This has been measured by academics and
commercial organisations using tools which look for flaws in software.[5]
The public nature of the source code also allows extensive peer
review of the code for quality. More popular programs generally
get more review. The public nature of the code also makes it possible
to search through all the code for all the programs in a system.
This is important because a newly discovered flaw is often a mistake
that will have been repeated in many other places. Having access
to all the code allows screening on a large scale.
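The large-scale screening described above can be sketched as a simple pattern search across a whole source tree. The sketch below is illustrative only: the pattern chosen (a call to the unbounded C input function gets(), a classic source of buffer-overflow flaws) and the function name are the author's own illustration, not any distribution's actual tooling.

```python
import re
from pathlib import Path

# Illustrative pattern: calls to gets(), an unbounded C input
# function behind many historical buffer-overflow flaws.
FLAW_PATTERN = re.compile(r"\bgets\s*\(")

def scan_sources(root, suffixes=(".c", ".h")):
    """Return (path, line number, line text) for every suspect
    line found under the given source tree."""
    hits = []
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in suffixes:
            text = path.read_text(errors="ignore")
            for n, line in enumerate(text.splitlines(), 1):
                if FLAW_PATTERN.search(line):
                    hits.append((str(path), n, line.strip()))
    return hits
```

Pointing such a scan at the source of every program in a distribution is what makes it practical to find the many repeats of a newly discovered mistake.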
Developments in tools that identify flaws more
rapidly, and languages that make it harder to write insecure code,
are followed actively in both the open source community and the
proprietary sphere. Extensions to computer hardware that are useful
for security are also used; although many hardware features touted
by vendors as security features are mostly marketing, and not as
useful as the vendors would have the world believe.
Open source, particularly Linux, has also focused
on making users secure by default. Red Hat Linux shipped with
a built-in and automatically enabled firewall for years, something
Microsoft has finally followed. This has led to new attacks being
increasingly targeted against the web browser which must talk
through the firewall, rather than against the system itself. Each
step taken to improve security triggers a response of this nature.
Fixing software flaws is only one part of the
process. For these fixes to be useful, end-users must be able to
obtain them, verify they are correct and install them easily.
Open source systems use management tools to automate this process,
and digital signatures to verify that the code obtained is the
correct code. But non-broadband users face a huge barrier. Fixes
are not small, and the packaging methods are not currently optimal
either. Thus we face the same problem that proprietary vendors
face: users with limited connectivity are vulnerable to attack
because they lack the ability to update their system.
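The verification step mentioned above can be illustrated in miniature. Real distributions verify public-key signatures on packages, which also authenticates who published them; the hedged sketch below, with invented names, shows only the underlying idea of checking fetched bytes against a digest obtained through a trusted channel before installing anything.

```python
import hashlib

def verify_update(package_bytes, trusted_digest):
    """Return True only if the downloaded package matches the
    digest published through a trusted channel. Real package
    managers verify a digital signature instead, which proves
    both integrity and origin rather than integrity alone."""
    actual = hashlib.sha256(package_bytes).hexdigest()
    return actual == trusted_digest
```

A package tampered with in transit fails the check and is never installed; the user need not understand the mechanism for it to protect them.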
Attacks directed at the user of a computer are
much more problematic. Some defences are also hampered by US patent
concerns which prevent the deployment of certain technologies
which can help identify fake emails. In the UK there are also
concerns about libel risks that make it hard, if not impossible,
to keep the kind of databases needed to identify phishing attacks.
The open source community is following several
strands of work in this area. Good user interface design can help
to guide users to the correct choices. However there is a permanent
conflict between ease of use and security. This conflict is difficult
to resolve. Tools like SELinux implement security policies that
extend further than traditional access rights. With such tools
it becomes possible for a company to encode and enforce some company
rules in software instead of depending upon education, which for
a variety of reasons (notably that the rules are in the company's
interest, not the users') rarely works well. For example, it becomes
easier to control who can run downloaded files or install software.
This is important as it turns a security breach into a helpdesk
call inquiring why the user cannot perform the undesirable act.
It is a general opinion that the focus of attacks on confusing
and misleading the user will continue to grow as the potential
for attacks on software flaws decreases.
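The idea of encoding company rules in software rather than in training material can be sketched as a tiny policy check. An SELinux policy is far richer than this, and the role and action names below are invented purely for illustration; the point is only that the decision is made by software rather than by trusting the user to remember a rule.

```python
# Hypothetical rule table: which roles may perform which actions.
# The principle matches tools such as SELinux: policy is data
# enforced by the system, not advice given to the user.
POLICY = {
    "staff": {"read_mail", "browse_web"},
    "admin": {"read_mail", "browse_web",
              "install_software", "run_downloaded"},
}

def allowed(role, action):
    """Return True if the role's policy permits the action;
    unknown roles are denied everything by default."""
    return action in POLICY.get(role, set())
```

Under such a scheme a staff member attempting to run a downloaded file is simply refused, producing a helpdesk call rather than a compromise.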
The international nature of the Internet requires
international governance. Unfortunately the open source experience
of regulation of the Internet and technology in general has been
extremely poor. There is deep distrust of the establishment. The
EU in particular is generally seen as the tool of big industry,
lacking both transparency and control over lobbying.
Currently proposed and actual regulation affecting
the industry includes the EUCD, rules on encryption export, and
proposals to license computer security workers. These are likely
to have strong negative effects on the open source community.
Proposed regulation has already triggered responses that are not
those desired by government: open source cryptography tool developers
are completing software to render obsolete the government's proposed
legislation on access to encryption keys.
The biggest barriers affecting security in the
UK are probably:
  -  proposals in the EU for the adoption of software patents, making
it impossible for people to implement some features even when they
are critical to security; and
  -  the proposed updates to the Computer Misuse Act, which make
it unclear whether possessing a tool for breaking into a computer
is an offence even when such ownership is for the purpose of security
testing and software debugging and development. This will reduce
security testing and discourage people from working on security
(this is the area Lord Northesk has been attempting to correct).
It would be difficult to summarise the open
source community view on improving governance of the Internet,
as it spans such a wide political range.
[5]  B P Miller, D Koski, C P Lee, V Maganty, R Murthy,
A Natarajan and J Steidl, "Fuzz Revisited: A Re-examination
of the Reliability of UNIX Utilities and Services", Computer
Sciences Technical Report 1268, University of Wisconsin-Madison,
April 1995.