This is a House of Commons Committee report, with recommendations to government. The Government has two months to respond.
Science, Innovation and Technology Committee
Governance of artificial intelligence (AI)
Date Published: 31 August 2023
This is the report summary; read the full report.
Artificial intelligence (AI) has been the subject of public, private and research sector interest since the 1950s. However, since the emergence of so-called ‘large language models’ such as ChatGPT, it has become a general-purpose, ubiquitous technology, albeit not one that should be viewed as capable of supplanting humans in all areas of society and the economy.
AI models and tools are capable of processing increasing amounts of data, and this is already delivering significant benefits in areas such as medicine, healthcare, and education. They can find patterns where humans might not, improve productivity through the automation of routine processes, and power new, innovative consumer products. However, they can also be manipulated and can provide false information, and they do not always perform as one might expect in messy, complex environments such as the world we live in.
The recent rate of development has made debates regarding the governance and regulation of AI less theoretical, more significant, and more complex. It has also generated intense interest in how public policy can and should respond to ensure that the beneficial consequences of AI can be reaped whilst also safeguarding the public interest and preventing known potential harms, both societal and individual. There is a growing imperative to ensure governance and regulatory frameworks are not left irretrievably behind by the pace of technological innovation. Policymakers must take measures to safely harness the benefits of the technology and encourage future innovations, whilst providing credible protection against harm.
Our inquiry so far has led us to identify twelve challenges of AI governance that policymakers and the frameworks they design must meet.
1) The Bias challenge. AI can introduce or perpetuate biases that society finds unacceptable.
2) The Privacy challenge. AI can allow individuals to be identified and personal information about them to be used in ways beyond what the public wants.
3) The Misrepresentation challenge. AI can allow the generation of material that deliberately misrepresents someone’s behaviour, opinions or character.
4) The Access to Data challenge. The most powerful AI needs very large datasets, which are held by few organisations.
5) The Access to Compute challenge. The development of powerful AI requires significant compute power, access to which is limited to a few organisations.
6) The Black Box challenge. Some AI models and tools cannot explain why they produce a particular result, which is a challenge to transparency requirements.
7) The Open-Source challenge. Requiring code to be openly available may promote transparency and innovation; allowing it to be proprietary may concentrate market power but allow more dependable regulation of harms.
8) The Intellectual Property and Copyright challenge. Some AI models and tools make use of other people’s content: policy must establish the rights of the originators of this content, and these rights must be enforced.
9) The Liability challenge. If AI models and tools are used by third parties to do harm, policy must establish whether developers or providers of the technology bear any liability for harms done.
10) The Employment challenge. AI will disrupt the jobs that people do and that are available to be done. Policymakers must anticipate and manage the disruption.
11) The International Coordination challenge. AI is a global technology, and the development of governance frameworks to regulate its uses must be an international undertaking.
12) The Existential challenge. Some people think that AI is a major threat to human life: if that is a possibility, governance needs to provide protections for national security.
In March 2023, the UK Government set out its proposed “pro-innovation approach to AI regulation” in the form of a white paper. The white paper set out five principles to frame regulatory activity and to guide the future development and use of AI models and tools. These principles would not initially be put on a statutory footing; instead, they would be interpreted and translated into action by individual sectoral regulators, with assistance from central support functions.
The UK has a long history of technological innovation and regulatory expertise, which can help it forge a distinctive regulatory path on AI. The AI white paper should be welcomed as an initial effort to engage with this complex task, but its proposed approach already risks falling behind the pace of AI development. This threat is made more acute by the efforts of other jurisdictions, principally the European Union and the United States, to set international standards.
Our view is that a tightly focussed AI Bill in the next King’s Speech would help, not hinder, the Prime Minister’s ambition to position the UK as an AI governance leader. Without a serious, rapid and effective effort to establish the right governance frameworks, and to ensure a leading role in international initiatives, other jurisdictions will steal a march and the frameworks that they lay down may become the default, even if they are less effective than what the UK can offer.
A summit on AI safety, expected in November or December, will also be key to delivering the Prime Minister’s ambition. The challenges highlighted in our interim Report should form the basis for discussion, with a view to advancing a shared international understanding of both the challenges and the opportunities of AI. Invitations to the summit should therefore be extended to as wide a range of countries as possible. A forum should also be established for like-minded countries that share liberal, democratic values, to ensure mutual protection against those actors, state and otherwise, who are enemies of these values and would use AI to achieve their ends.