Science, Innovation and Technology Committee
Governance of artificial intelligence (AI)
Date Published: 16 November 2023
On 31 August 2023 the Science, Innovation and Technology Committee published its Ninth Report of Session 2022–23, The governance of artificial intelligence: interim report (HC 1769). The Government Response was received on 6 November 2023. The Response is appended to this Report.
The Government is grateful to the House of Commons Science, Innovation and Technology Committee for the recommendations in this interim report, shared as part of the inquiry into the Governance of Artificial Intelligence (AI).
The Government welcomes the Committee’s findings, and this document sets out a response to their recommendations. The ‘12 Challenges of AI’ evidenced in this interim report make clear the areas of risk for current and future applications of AI. The Government is considering these challenges carefully and working to establish a framework for AI Regulation and governance that supports the sustainable and responsible use of AI across sectors in the UK.
In October 2022, the Science, Innovation and Technology Committee, formerly known as the Science and Technology Committee, launched an inquiry into the Governance of AI to examine the effectiveness of AI governance and the Government’s proposals to establish a framework to regulate AI.
Since the launch of the Governance of AI inquiry, the Government has made significant steps to establish a suitable framework for the regulation of AI in the UK following the publication of the AI Regulation White Paper in March 2023.1 A range of work has taken place to begin implementing the commitments made in the white paper and to consult on proposals in the framework to effectively regulate AI, including to:
Further detail on the actions undertaken by the Government can be found in the response to the recommendations below.
Education policy must prioritise equipping children with the skills to succeed in a world where AI is ubiquitous: digital literacy and an ability to engage critically with the information provided by AI models and tools. (Paragraph 38)
The Government recognises that to be able to use AI effectively, including the ability to ascertain the accuracy and quality of outputs from generative AI models, children need to have a significant amount of background knowledge stored in their long-term memory. That is why all schools should teach an ambitious, knowledge-rich curriculum, using pedagogical approaches that support long-term knowledge retention.
Emerging technologies, including AI, have the potential to support this goal. The Government is committed to evidence-based policymaking and will ensure that a proven positive impact on pupil outcomes is the central criterion when considering guidance to schools on the use of AI. We are working with key partners and stakeholders to understand better the impact and opportunity of AI on education delivery and pupil outcomes. We put out a call for evidence on this which closed on 23 August, and we will publish the response later this autumn.
Evidence from cognitive science is clear that skills exist within subject disciplines, rather than being generic, and Government policy will therefore continue to promote the importance of knowledge-based curricula over skills- or competency-based curricula. The Government believes the best way to prepare young people for an uncertain future is to ensure they are taught a broad curriculum that gives them the knowledge and skills needed to understand the world and to progress to the next stage of education and work. This includes high standards of literacy and numeracy, which are the gateway to accessing the curriculum and fundamental to a child’s success at school. Literacy and numeracy skills are critical to young people being able to use AI and to being able to navigate a world in which AI has an increasing impact.
Since 2010, we have reformed the National Curriculum, GCSEs and A levels to promote the teaching of rich subject knowledge and to set world-class standards across all subjects. This includes the teaching of computing and digital literacy, which are both part of the National Curriculum. In the Schools White Paper, published last year, we committed to not make any changes to the National Curriculum for the remainder of this Parliament so we can further embed the 2014 curriculum reforms and provide stability for schools following the pandemic. To meet future challenges, we want to build on the success of our reforms and use the principles that drove them to change 16–19 education. On 4 October, the Prime Minister announced plans to introduce the Advanced British Standard (ABS)2 for 16–18-year-olds over the next decade. This is a new baccalaureate-style qualification that takes the best of A levels and T levels and brings them together into a single qualification and unified structure. The ABS will ensure students continue to study maths (and English) to age 18, raising the floor of attainment in these subjects, and offer greater breadth.
Computing was introduced as a statutory National Curriculum subject in 2014. The computing curriculum, taught from Key Stages 1 to 4, provides young people with the essential knowledge and skills to succeed as active participants in a digital world. This foundational knowledge will enable them to build more specialist expertise in the future and will help to meet the needs of the future digital economy in shortage areas such as programming. It replaced the previous ICT curriculum, which was widely regarded as outdated and as failing to prepare pupils for further study, employment or life in a world increasingly dependent upon technology.
For the UK to retain its position as a world-leading economy, we need to ensure people of all ages can develop skills that they, the country, and business need. Adult retraining and upskilling are an essential part of our plans to cement the UK’s status as a science and technology superpower by 2030. We are investing in adult education and skills so adults, at any age, can retrain or upskill to meet their potential.
We are continuing to expand Skills Bootcamps, which offer free, flexible courses of up to 16 weeks to support upskilling or reskilling. Skills Bootcamps give people the opportunity to build sector-specific skills, with the offer of a job interview with an employer on completion. Over 1,000 Skills Bootcamps are available across the country, offering training in STEM subjects such as AI, software development, cyber security, cloud engineering, data analytics, mechanical engineering, and engineering diagnostics. Skills Bootcamps are also delivering flexible training for new skills which support the green economy and can offer a pathway to an accelerated apprenticeship. To further support the demand for AI skills, we recently introduced the first AI Data Specialist Apprenticeship Standard at Level 7, a highly skilled role that champions AI and its applications and promotes the adoption of novel tools and technologies. Additionally, since 2020 the Department for Science, Innovation and Technology has funded £26 million worth of scholarships for groups underrepresented in the technology industry to undertake an AI and Data Science master’s conversion course.
We are also strengthening lifelong learning so that adults have the chance to upskill or retrain at any stage of their working life. The Lifelong Loan Entitlement (LLE) will be delivered from academic year 2025/26, providing individuals with a loan entitlement to the equivalent of four years of post-18 education (£37,000 in today’s fees). It will be available for both full years of study at higher technical and degree levels, as well as, for the first time, modules of high-value courses, regardless of whether they are provided in colleges or universities. Under this flexible skills system, people will be able to space out their studies and learn at a pace that is right for them, including choosing to build up their qualifications over time, within both Further Education and Higher Education providers. They will have a real choice in how and when they study to acquire new life-changing skills.
The Government’s approach to AI governance and regulation should address each of the twelve challenges we have outlined, both through domestic policy and international engagement. (Paragraph 88)
We welcome the Committee’s analysis of the challenges posed by AI and agree with the need for effective domestic governance and regulation, accompanied by international engagement and alignment, to ensure the UK can drive responsible, safe AI innovation and maintain public trust in AI.
Many of the challenges highlighted by the Committee (such as the bias challenge, liability challenge, intellectual property and copyright challenge) relate to risks arising from or exacerbated by foundation models or frontier AI models. In the AI Regulation White Paper published in March 2023, we set out how we intend to regulate AI through a principles-based approach, delivered through our existing expert regulators. We highlighted the need for further work on foundation models as part of our broader analysis of accountability throughout the AI lifecycle. We will be providing a wider update on our domestic regulatory approach to AI through our response to the AI Regulation White Paper later this year.
The AI Safety Summit on 1–2 November achieved a landmark agreement for a new international effort to unlock the benefits offered by AI, and will be the foundation on which future international action on AI safety is built. The Bletchley Declaration agreed an initial mutual understanding of frontier AI, including foundation models, and the risks associated with it. The Declaration also set out that countries will work in an inclusive manner to ensure human-centric, trustworthy and responsible AI that is safe.
Wider work on foundation models is ongoing and is being informed by the Frontier AI Taskforce, which offers vital insights and expertise and will play an integral part in the recently announced UK AI Safety Institute. The AI Safety Institute was announced as part of the AI Safety Summit, where the risks and challenges of foundation models and frontier AI were examined and discussed at an international level. Countries in attendance also agreed that Yoshua Bengio will lead delivery of a ‘State of the Science’ report, which will help build a shared scientific understanding of the capabilities and risks posed by frontier AI. The UK’s AI Safety Institute will play a vital role in leading this work, in partnership with countries around the world.
We are taking immediate steps on frontier AI safety to begin addressing many of the challenges raised by the Committee. At the AI Safety Summit, for the first time, the UK secured agreement from leading frontier AI companies to publish their safety policies prior to the Summit, and the UK published a document on Emerging Processes for Frontier AI Safety to inform the further development of frontier AI organisations’ safety policies. Senior government representatives from leading AI nations, and major AI organisations, also agreed a plan for safety testing of frontier AI models, which involves testing models both pre- and post-deployment for national security, safety and societal harms, and a shared ambition to invest in public sector capability to support this.
To begin to address the bias challenge, the Centre for Data Ethics and Innovation (CDEI) has launched a Fairness Innovation Challenge,3 which aims to drive the development of solutions to address bias and discrimination in AI systems and ensure alignment with the fairness regulatory principle proposed in the AI White Paper. The Challenge will be delivered with Innovate UK, and in partnership with key UK fairness regulators, the EHRC and the ICO.
To further address the challenges of AI, the CDEI and the Central Digital and Data Office have developed the Algorithmic Transparency Recording Standard,4 which establishes a standardised way for public sector organisations to proactively and openly publish information about how and why they are using algorithmic methods in decision-making. The template requires information on how teams have considered privacy, accuracy, bias, and more, in the development and deployment of their tools.
Through the white paper, the Government also emphasised the importance of active international engagement - both bilaterally and through multilateral fora, such as the Council of Europe, OECD, G7, Global Partnership on AI (GPAI), United Nations (UN), G20, and Standards Development Organisations (SDOs). We continue to recognise the need to shape the global development and governance of AI, achieving the right balance between responding to risks and maximising opportunities afforded by AI in doing so.
We have continued to promote cross-border interoperability and coherence between the different multilaterals, advocating for an inclusive, multistakeholder approach to ensure each forum contributes to the international AI landscape as effectively as possible.
The Government should, as part of its implementation of its proposals, undertake a gap analysis of the UK’s regulators, which considers not only resourcing and capacity, but whether any regulators require new powers to implement and enforce the principles outlined in the AI white paper. (Paragraph 104)
As part of our work to establish the UK’s AI regulatory framework, we are working closely with a range of regulators to make sure they have the skills, expertise and powers to deliver on our approach. We have also been engaging regulators to ensure that they are equipped to manage risks relating to AI, with multiple regulators beginning to take action in line with our proposed AI framework.
We are in the process of establishing a range of central support functions to enable regulators to understand the emerging risks and challenges posed by developments in AI. This includes a central regulatory coordination function, which, once established, will coordinate across regulators to identify potential overlaps and gaps in regulatory remits and support regulators in implementing the regulatory principles for AI. Through this function we will put appropriate governance structures in place to support exchange of information and will produce guidance to regulators to support them in their activity. We will provide further details on the steps we are taking in this area in our forthcoming response to the white paper consultation.
We are also continuing to explore options to address capability gaps within and across regulators. The AI Regulation White Paper set out potential approaches such as the creation of a common pool of expertise to support and expand knowledge sharing between regulators. We are considering the feasibility and effectiveness of options such as this to support regulators in implementing the white paper principles.
We have recently announced plans to work with the Digital Regulation Cooperation Forum to establish a multi-agency advice service known as the DRCF AI and Digital Hub.5 The hub will act as a single source of support for innovators of AI technologies, reducing the burden on innovators who would otherwise need to interact with multiple regulators simultaneously. The hub will also publish case studies of how innovators have been supported, spreading learning to companies facing similar issues. This advisory service will help innovators to comply with multiple regulatory regimes, in order to accelerate the roll-out of new technologies whilst informing the development of the AI regulation framework.
In its reply to this interim Report, and its response to the AI white paper consultation, the Government should confirm whether AI-specific legislation, such as the introduction of a requirement for regulators to pay due regard to the AI white paper principles, will be introduced in the next Parliament. It should also confirm what work has been undertaken across Government to explore the possible contents of such a Bill. (Paragraph 107)
In the AI Regulation White Paper, we said that we do not intend to introduce new legislation immediately. We are taking an evidence-based approach to regulation, establishing the Frontier AI Taskforce (and now the AI Safety Institute) to offer vital insights into the advanced capabilities of frontier AI and foundation models. Furthermore, we convened nations to examine frontier AI risks at the AI Safety Summit. Alongside this, we have been working with leading frontier AI companies who have now published their AI safety policies and we have developed the “Emerging Processes for Frontier AI Safety” document, providing information on how these companies can keep their models safe.
At the Summit, there was clear consensus that there is an important role for governments in holding frontier AI companies to account and that binding requirements may be needed. But rather than rushing to legislate, we want to learn about model capabilities and risks while carefully considering the frameworks for action. This is in line with the iterative approach to regulation we set out in the AI Regulation White Paper and will ensure we are putting in place the right measures at the right time.
The response to the AI Regulation White Paper will set out the Government’s position on the issues raised in the consultation and provide our latest assessment of the development of the UK’s AI regulatory framework. Our response to the Committee will not pre-empt that assessment: the white paper consultation response is the appropriate place for the Government to respond to feedback provided in that consultation.
We would like to take this opportunity to reassure the Committee that as part of our development of the white paper and since its publication, the Department for Science, Innovation and Technology has worked closely with other departments in the development of the UK’s regulatory approach for AI. This includes our work to review the regulatory landscape to support the principles-based approach we set out in the white paper and the establishment of the Central AI Risk Function to monitor developing risks from AI and coordinate mitigations. Taking such an evidence-based approach to AI regulation will put us in the best position to keep pace with this fast-moving technology, and we will set out further details on our next steps as part of the white paper consultation response.
The Government should confirm the Task Force’s full membership, terms of reference, and the first tranche of public sector pilot projects, in its reply to this interim Report. (Paragraph 108)
In the first progress update we published about the Taskforce, we announced the first seven expert board members and details of the first partnerships we have made. The Taskforce is the progenitor of the recently announced AI Safety Institute, which the Prime Minister announced on 26 October. As such, the Taskforce will transition into a new structure to establish the UK as a global hub for advanced safety research and enable the responsible development and deployment of this transformative technology.
Membership of the External Advisory Board includes:
The overarching objective of the Taskforce - to enable the safe and reliable development and deployment of advanced AI systems - has only become more pressing. The Taskforce will therefore become a permanent feature of the AI ecosystem.
The Institute will continue the Taskforce’s safety research and evaluations. The other core parts of the Taskforce’s mission will remain in DSIT as policy functions: identifying new uses for AI in the public sector; and strengthening the UK’s capabilities in AI.
On public sector pilot projects, the Government’s ambition is to use AI confidently and responsibly, where it matters most. The Government is committed to supporting departments with an ambitious approach to the adoption of AI to improve public services and boost productivity, and will release further details in due course.
Existing work to use AI in public services has been supported, with the Department for Education providing up to £2 million to Oak National Academy to improve and expand Artificial Intelligence tools for teachers and make curriculum content available to companies wanting to build AI edtech tools based on the English national curriculum. Another example is the Department of Health and Social Care, which is already seeing benefits from using AI to get support to those who need it. The Department used the technology to identify language indicating mental distress in public social media posts, and then signposted people to the NHS-endorsed ‘Every Mind Matters’ digital support hub, leading to a 25% increase in people accessing this vital service.
The challenges highlighted in our interim Report should form the basis for these important international discussions. (Paragraph 119)
We welcome the Committee’s analysis of the challenges posed by AI and agree on their importance, as well as the urgency of effective international action to tackle the risks identified. Active international engagement will continue to be a key priority for the Government, as we recognise the opportunity to both unlock AI’s benefits and address its challenges through bilateral and multilateral cooperation.
The UK will continue to play a proactive role in initiatives such as the G7 Hiroshima AI Process, OECD AI governance discussions, Council of Europe Committee on AI negotiations, through discussions at the UN and its respective bodies, as well as AI-specific fora such as the Global Partnership on AI and international Standards Development processes. Throughout this activity, we aim to ensure that the cross-border challenges (and opportunities) of AI are effectively addressed. We will also continue to build and deepen our bilateral partnerships and dialogues on these issues, such as through the US-UK Atlantic Declaration and the UK-Japan Hiroshima Accord.
Additionally, this November’s AI Safety Summit was the first global meeting of its type, bringing together countries, AI companies, academia, and civil society across a variety of perspectives to tackle the most significant risks at the frontier of AI.
As the Committee noted, we should seek to advance a shared international understanding of the challenges and opportunities of AI. We are pleased to have invited a wide range of countries to the Summit to secure agreement on the Bletchley Declaration, and to support the first-of-its-kind State of the Science Report to help build international consensus on the risks and capabilities of frontier AI. Countries and leading AI companies also agreed on the importance of bringing together the responsibilities of governments and frontier AI developers and agreed to a plan for safety testing at the frontier.
The discussions at this Summit, and the successful outcomes we have achieved, set a strong foundation to continue this international collaboration through both ongoing and future bilateral and multilateral partnerships.
The summit should aim to advance a shared international understanding of the challenges of AI—as well as its opportunities. Invitations to the summit should therefore be extended to as wide a range of countries as possible. Given the importance of AI to our national security there should also be a forum established for like-minded countries who share liberal, democratic values, to be able to develop an enhanced mutual protection against those actors—state and otherwise—who are enemies of these values. (Paragraph 120)
The UK believes that the risks posed by frontier AI are increasingly urgent. That is why the UK convened the inaugural AI Safety Summit. Over two days the Summit brought together approximately 150 representatives from across the globe, including a diverse set of government leaders and ministers, multilateral fora, industry, academia and civil society leaders. Countries attending agreed to the Bletchley Declaration on AI safety, a landmark agreement recognising a shared consensus on the opportunities and risks of AI, and the need for collaborative action on frontier AI safety.
Alongside discussions on risk, the Summit considered how we can unlock the opportunities that AI brings and showcased how safe development will enable AI to be used for good globally. The UK, alongside Canada, the USA, the Bill and Melinda Gates Foundation, and other partners, announced £80 million for a new AI for development collaboration, working with innovators and institutions across Africa to support the development of responsible AI.
DSIT also announced a new £100m fund for an AI Life Sciences Accelerator Mission to bring cutting edge AI to bear on some of the most pressing health challenges facing society.
DSIT recognises the value of discussing the protection of democratic values. At the Summit there was significant consideration of the risks from the integration of frontier AI into society, including the broad range of societal harms which may be created, and discussion of how disinformation and misinformation might challenge democracy. The impact of AI on democracy, human rights and the rule of law continues to be recognised as a key priority, and the UK is pleased to be working with other nations at the Council of Europe to negotiate the first intergovernmental treaty on AI, with respect to human rights, democracy, and the rule of law, recognising that both the technology and our shared values are global in nature.
The UK will continue to work with like-minded international partners on threats to democracy and elections, including threats from state actors, and those threats enhanced by AI. During the Summit, world leaders representing Australia, Canada, the European Union, France, Germany, Italy, Japan, the Republic of Korea, Singapore, the USA, and the UK, alongside industry leaders, recognised the importance of bringing together governments and actors developing AI within their countries, to collaborate on testing the next generation of AI models against a range of critical national security, safety and societal risks. The plan involves testing models both pre- and post-deployment, and a role for governments in testing, particularly for critical national security, safety and societal harms.
As an initial contribution to this new collaboration, the UK detailed its launch of the world’s first AI Safety Institute, which will build public sector capability to conduct safety testing and research into AI safety. In exploring all the risks, from social harms including bias and misinformation, through to the most extreme risks of all, including the potential for loss of control, the government will seek to make the work of the Safety Institute widely available, and has committed to work in partnership with other countries’ institutes, including the US.
The Summit is only the first part of an urgent global conversation on frontier AI safety. The Government therefore welcomes agreement from the Republic of Korea and France to continue the conversation in future Summits, and we look forward to continuing work with them and other international partners on this crucial topic.
The twelve Challenges of AI Governance which we have set out must be addressed by policymakers in all jurisdictions. Different administrations may choose different ways to do this. (Paragraph 123)
As AI technologies continue to develop at pace, so too do the frameworks designed to govern them. Thus far, different jurisdictions have opted to take different approaches to governing AI domestically.
Given the various approaches to governing AI in different jurisdictions, international interoperability is imperative, as we have set out in our response to recommendation six.
Regarding domestic action, we would also highlight that our AI regulation framework set out in the AI Regulation White Paper applies to the whole of the UK. AI is of course used in various sectors and impacts on a wide range of policy areas, some of which are reserved and some of which are devolved. We will continue to assess the implications of developments in AI technologies and consider the devolution impacts of AI regulation as our policy evolves.
Our approach to AI governance will not alter the current territorial arrangement of AI policy. We will continue to engage the devolved administrations as we develop our AI regulatory framework, and work with them as they develop strategies for those areas which are devolved. We will ensure that the Government’s pro-innovation approach enables all parts of the UK to benefit from the great opportunities of safe and responsible AI development.
We urge the Government to accelerate, not to pause, the establishment of a governance regime for AI, including whatever statutory measures as may be needed. (Paragraph 124)
We agree that establishing a governance regime for AI is vital to drive responsible, safe innovation in the UK. We have already taken steps to do this, publishing the AI Regulation White Paper in March 2023 which set out our proposals for a principles-based regulatory regime delivered through our existing expert regulators.
In the AI Regulation White Paper, we said that we do not intend to introduce new legislation at that stage, however, we are clear that we will take action to mitigate risks and support safe and responsible AI innovation as required. This is a fast-moving technology, and we will not rush to legislation, but ensure we put in place the right measures at the right time. As we have done throughout, we will develop our approach in close consultation with industry and civil society, maintaining a pro-innovation approach that means AI improves the lives of the British people. Through our response to the AI Regulation White Paper, we will provide an update on our approach.
1 AI Regulation White Paper https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach
2 Advanced British Standards, UK Government
3 CDEI, Fairness Innovation Challenge https://www.gov.uk/government/news/new-innovation-challenge-launched-to-tackle-bias-in-ai-systems
4 CDEI, Algorithmic Transparency Recording Standard https://www.gov.uk/government/publications/algorithmic-transparency-template
5 DRCF AI and Digital Hub https://www.gov.uk/government/news/new-advisory-service-to-help-businesses-launch-ai-and-digital-innovations