Governance of artificial intelligence (AI) – Report Summary

This is a House of Commons Committee report, with recommendations to government. The Government has two months to respond.

Summary

Since the publication of our interim Report examining the governance of artificial intelligence (AI) in August 2023, debates over how to regulate the development and deployment of AI have continued. These debates have often centred on the Twelve Challenges of AI Governance we identified in our interim Report.

This Report examines domestic and international developments in the governance and regulation of AI since the publication of our interim Report. It also revisits the Twelve Challenges of AI Governance we identified in our interim Report and suggests how they might be addressed by policymakers. Our conclusions and recommendations apply to whoever is in Government after the General Election.

We have sought to reflect the uncertainty that exists over many questions that are critical to the future shape of the UK’s AI governance framework: how the technology will develop, what the consequences will be of its increased deployment, whether as-yet hypothetical risks will be realised, and how policy can best keep pace with the rate of development in these and other areas.

These questions need to be answered over the longer term.

Perhaps the most far-reaching of the challenges that AI poses is how to deal with a technology which—in at least some of its variants—operates as a ‘black box’. That is to say, the basis of and reasoning for its recommendations may be strictly unknowable. Most public policy—and the scientific method—is based on being able to observe and validate the reasons why a particular decision is made, and to test transparently the soundness (or the ethics) of the connections that lead to a conclusion. In neural network-based AI that may not be possible, even though the predictive power of such models may nevertheless be very strong. A so-called ‘human in the loop’ may be unequal to the power and complexity of the AI model. In our recommendations we therefore emphasise a greater role for testing the outputs of such models as a means of assessing their power and acuity.

In the short term, it is important that the UK Government works to increase the level of public trust in AI—a technology that has already become a ubiquitous part of our everyday lives. If this public trust can be secured, we believe that AI can deliver on its significant promise, to complement and augment human activity.

The Government has articulated the case for AI: better public services, high quality jobs and a new era of economic growth driven by advances in AI capabilities. It has confirmed its intention to pursue the principles-based approach proposed in its March 2023 AI White Paper and examined in our interim Report. Five high-level principles—safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress—underpin the Government’s approach and have begun to be translated into sector-specific action by regulators.

A key theme of our Inquiry has been whether the Government should bring forward AI-specific legislation. Resolving this should be a priority for the next administration. We believe that the next Government should be ready to introduce new AI-specific legislation, should the current approach based on regulators’ existing powers and voluntary commitments by leading developers prove insufficient to address current and potential future harms associated with the technology.

The success of the UK’s approach to AI governance will be determined to a significant extent by the ability of our sectoral regulators to put the Government’s high-level principles into practice as AI continues to develop at pace. We have identified three factors that will influence their ability to deliver: powers, coordination and resourcing.

On powers, we welcome confirmation that the Government will undertake a regulatory gap analysis to determine whether regulators require new powers to respond properly to the growing use of AI, as recommended in our interim Report. Concluding this analysis and implementing its findings must be a priority for the next Government.

On coordination, the general-purpose nature of AI will, in some instances, involve overlapping regulatory remits and a possible lack of clarity over the responsibilities of different regulators. This could create confusion on the part of consumers, developers and deployers of the technology, as well as regulators themselves. The central steering committee that the Government has said it will establish should be empowered to provide guidance and, where necessary, direction to help regulators navigate any overlapping remits, whilst respecting their independence. The regulatory gap analysis should also put forward suggestions for delivering this coordination, including joint investigations, a streamlined process for regulatory referrals, and enhanced levels of information sharing.

On resourcing, the capacity of regulators is a concern. Ofcom, for example, must implement a broad new suite of powers conferred on it by the Online Safety Act 2023 while also formulating a comprehensive response to the deployment of AI across its regulatory ambit. Other regulators will be required to undertake resource-intensive investigations, and it is vital that they have both the powers and the resources to do so. We believe that the £10 million announced to support regulators in responding to the growing prevalence of AI is clearly insufficient to meet the challenge, particularly when compared to even the UK-only revenues of leading AI developers.

The AI Safety Institute, established in its current form following the AI Safety Summit at Bletchley Park in November 2023, is another key element of the UK’s AI governance framework. The Institute’s leadership has assembled an impressive and growing team of researchers and technical experts recruited from leading developers and academic institutions, helped shape a global dialogue on AI safety, and—whilst not a regulator—played a decisive role in shaping the UK’s regulatory approach to AI. However, the reported challenges the Institute has experienced in securing access to leading developers’ future models to undertake pre-deployment safety testing are, if accurate, a major concern. Whilst testing on already-available models is clearly a worthwhile undertaking, the release of future models without the promised independent assessment would undermine the Institute’s mission and its ability to secure public trust in the technology.

While international conversations about AI safety have generated a degree of consensus—and provided a notable point of engagement with China—no international standard on regulation has yet emerged. The UK has pursued a principles-based approach that works through existing sector regulators. The Biden-Harris administration in the United States has, through its Executive Order, issued greater direction to federal bodies and Government departments. The European Union, meanwhile, has agreed its AI Act, which takes a ‘horizontal’, risk-based approach, with AI uses categorised into four levels of risk, and specific requirements for general-purpose AI models. The AI Act will enter into force in phases between now and mid-2026.

Both the US and EU approaches to AI governance have their downsides. The former imposes requirements only on federal bodies and relies on voluntary commitments from developers. The latter has been criticised for a top-down, prescriptive approach and the potential for uneven implementation across different member states. The UK is entitled to pursue its own, distinct approach that draws on our track record of regulatory innovation and the biggest cluster of AI developers outside the US and China.

One area where lessons from elsewhere could be applied is in formulating responses to the Twelve Challenges of AI Governance proposed in our interim Report. We believe that all of these governance challenges still apply. We have proposed solutions to each of them in this Report to demonstrate what policymakers in Government should be doing.

These should not be viewed as definitive solutions to the challenges, but as provisional illustrations of what policy might be in a complex, rapidly developing area. They are summarised below.

1. The Bias Challenge. Developers and deployers of AI models and tools must not merely acknowledge the presence of inherent bias in datasets; they must take steps to mitigate its effects.

2. The Privacy Challenge. Privacy and data protection frameworks must account for the increasing capability and prevalence of AI models and tools, and ensure the right balance is struck.

3. The Misrepresentation Challenge. Those who use AI to misrepresent others, or allow such misrepresentation to take place unchallenged, must be held accountable.

4. The Access to Data Challenge. Access to data, and the responsible management of it, are prerequisites for a healthy, competitive and innovative AI industry and research ecosystem.

5. The Access to Compute Challenge. Democratising and widening access to compute is a prerequisite for a healthy, competitive and innovative AI industry and research ecosystem.

6. The Black Box Challenge. We should accept that the workings of some AI models are and will remain unexplainable and focus instead on interrogating and verifying their outputs.

7. The Open-Source Challenge. The question should not be ‘open’ or ‘closed’, but rather whether there is a sufficiently diverse and competitive market to support the growing demand for AI models and tools.

8. The Intellectual Property and Copyright Challenge. The Government should broker a fair, sustainable solution based around a licensing framework governing the use of copyrighted material to train AI models.

9. The Liability Challenge. Determining liability for AI-related harms is not just a matter for the courts—Government and regulators can play a role too.

10. The Employment Challenge. Education is the primary tool for policymakers to respond to the growing prevalence of AI, and to ensure workers can ask the right questions of the technology.

11. The International Coordination Challenge. A global governance regime for AI may be neither realistic nor desirable, even if there are economic and security benefits to be won from international co-operation.

12. The Existential Challenge. Existential AI risk may not be an immediate concern, but it should not be ignored, even if policy and regulatory activity should primarily focus on the here and now.