Large language models and generative AI

Summary of conclusions and recommendations

Future trends

1. Large language models (LLMs) will have impacts comparable to the invention of the internet. (Paragraph 28)

2. The UK must prepare for a period of heightened technological turbulence as it seeks to take advantage of the opportunities. (Paragraph 28)

Open or closed

3. Fair market competition is key to ensuring UK businesses are not squeezed out of the race to shape the fast-growing LLM industry. The UK has particular strengths in mid-tier businesses and will benefit most from a combination of open and closed source technologies. (Paragraph 40)

4. The Government should make market competition an explicit policy objective. This does not mean backing open models at the expense of closed, or vice versa. But it does mean ensuring regulatory interventions do not stifle low-risk open access model providers. (Paragraph 41)

5. The Government should work with the Competition and Markets Authority to keep the state of competition in foundation models under close review. (Paragraph 42)

6. The risk of regulatory capture is real and growing. External AI expertise is becoming increasingly important to regulators and Government, and industry links should be encouraged. But this must be accompanied by stronger governance safeguards. (Paragraph 48)

7. We recommend enhanced governance measures in DSIT and regulators to mitigate the risks of inadvertent regulatory capture and groupthink. This should apply to internal policy work, industry engagements and decisions to commission external advice. Options include metrics to evaluate the impact of new policies and standards on competition; embedding red teaming, systematic challenge and external critique in policy processes; more training for officials to improve technical know-how; and ensuring proposals for technical standards or benchmarks are published for consultation. (Paragraph 49)

8. The perception of conflicts of interest risks undermining confidence in the integrity of Government work on AI. Addressing this will become increasingly important as the Government brings more private sector expertise into policymaking. Some conflicts of interest are inevitable and we commend private sector leaders engaging in public service, which often involves incurring financial loss. But their appointment to powerful Government positions must be done in ways that uphold public confidence. (Paragraph 56)

9. We recommend the Government should implement greater transparency measures for high-profile roles in AI. This should include further high-level information about the types of mitigations being arranged, and a public statement within six months of appointment to confirm these mitigations have been completed. (Paragraph 57)

A pro-innovation strategy

10. Large language models have significant potential to benefit the economy and society if they are developed and deployed responsibly. The UK must not lose out on these opportunities. (Paragraph 65)

11. Some labour market disruption looks likely. Imminent and widespread cross-sector unemployment is not plausible, but there will inevitably be those who lose out. The pace of change also underscores the need for a credible strategy to address digital exclusion and help all sectors of society benefit from technological change. (Paragraph 66)

12. We reiterate the findings from our reports on the creative industries and digital exclusion: those most exposed to disruption from AI must be better supported to transition. The Department for Education and DSIT should work with industry to expand programmes to upskill and re-skill workers, and improve public awareness of the opportunities and implications of AI for employment. (Paragraph 67)

13. The Government is not striking the right balance between innovation and risk. We appreciate that recent advances have required rapid security evaluations, and we commend the AI Safety Summit as a significant achievement. But Government attention is shifting too far towards a narrow view of high-stakes AI safety. On its own, this will not drive the kind of widespread responsible innovation needed to benefit our society and economy. The Government must also recognise that long-term global leadership on AI safety requires a thriving commercial and academic sector to attract, develop and retain technical experts. (Paragraph 80)

14. The Government should set out a more positive vision for LLMs and rebalance towards the ambitions set out in the National AI Strategy and AI White Paper. It otherwise risks falling behind international competitors and becoming strategically dependent on a small number of overseas tech firms. The Government must recalibrate its political rhetoric and attention, provide more prominent progress updates on the ten-year National AI Strategy, and prioritise funding decisions to support responsible innovation and socially beneficial deployment. (Paragraph 81)

15. A diverse set of skills and people is key to striking the right balance on AI. We advocate expanded systems of secondments from industry, academia and civil society to support the work of officials, with appropriate guardrails as set out in Chapter 3. We also urge the Government to appoint a balanced cadre of advisers to the AI Safety Institute with expertise beyond security, including ethicists and social scientists. (Paragraph 82)

16. Recent Government investments in advanced computing facilities are welcome, but more is needed and the Government will struggle to afford the scale required to keep pace with cutting-edge international competitors. The Government should provide more incentives to attract private sector investment in compute. These should be structured to maximise energy efficiency. (Paragraph 92)

17. Equitable access will be key. UK Research and Innovation and DSIT must ensure that both researchers and SMEs are granted access to high-end computing facilities on fair terms to catalyse publicly beneficial research and commercial opportunity. (Paragraph 93)

18. The Government should take better advantage of the UK’s start-up potential. It should work with industry to expand spin-out accelerator schemes. These schemes could focus on areas of public benefit in the first instance. It should also remove barriers, for example by working with universities on providing attractive licensing and ownership terms, and unlocking funding across the business lifecycle to help start-ups grow and scale in the UK. (Paragraph 94)

19. The Government should also review UKRI’s allocations for AI PhD funding, in light of concerns that the prospects for commercial spin-outs are being negatively affected and that foreign influence in funding strategic sectors may grow as a result. (Paragraph 95)

20. A sovereign UK LLM capability could deliver substantial value if challenges around reliability, ethics, security and interpretability can be resolved. LLMs could in future benefit central departments and public services, for example, though it remains too early to consider using LLMs in high-stakes applications such as critical national infrastructure or the legal system. (Paragraph 105)

21. We do not recommend using an ‘off-the-shelf’ LLM or developing one from scratch: the former is too risky and the latter requires high-tech R&D efforts ill-suited to Government. But commissioning an LLM to high specifications and running it on internal secure facilities might strike the right balance. The Government might also make high-end facilities available to researchers and commercial partners to collaborate on applying LLM technology to national priorities. (Paragraph 106)

22. We recommend that the Government explore the options for and feasibility of acquiring a sovereign LLM capability. No option is risk-free, though commissioning external developers might work best. Any public sector capability would need to be designed to the highest ethical and security standards, in line with the recommendations made in this report. (Paragraph 107)

Risk

23. The most immediate security concerns from LLMs come from making existing malicious activities easier, rather than from qualitatively new risks. (Paragraph 128)

24. The Government should work with industry at pace to scale existing mitigations in the areas of cyber security (including systems vulnerable to voice cloning), child sexual abuse material, counter-terror, and counter-disinformation. It should set out progress and future plans in response to this report, with a particular focus on disinformation in the context of upcoming elections. (Paragraph 128)

25. The Government has made welcome progress on understanding AI risks and catalysing international co-operation. There is, however, no publicly agreed assessment framework, and shared terminology is limited. It is therefore difficult to judge the magnitude of the issues and priorities. (Paragraph 129)

26. The Government should publish an AI risk taxonomy and risk register. It would be helpful for this to be aligned with the National Security Risk Assessment. (Paragraph 129)

27. Catastrophic risks resulting in thousands of UK fatalities and tens of billions in financial damages are not likely within three years, though this cannot be ruled out as next-generation capabilities become clearer and open access models more widespread. (Paragraph 140)

28. There are, however, no warning indicators for a rapid and uncontrollable escalation of capabilities resulting in catastrophic risk. There is no cause for panic, but the implications of this intelligence blind spot deserve sober consideration. (Paragraph 141)

29. The AI Safety Institute should publish an assessment of engineering pathways to catastrophic risk and warning indicators as an immediate priority. It should then set out plans for developing scalable mitigations. (We set out recommendations on powers and take-down requirements in Chapter 7.) The Institute should further set out options for encouraging developers to build systems that are safe by design, rather than focusing on retrospective guardrails. (Paragraph 142)

30. There is a credible security risk from the rapid and uncontrollable proliferation of highly capable, openly available models which may be misused or malfunction. Banning them entirely would be disproportionate and likely ineffective. But a concerted effort is needed to monitor and mitigate the cumulative impacts. (Paragraph 148)

31. The AI Safety Institute should develop new ways to identify and track models once released, standardise expectations of documentation, and review the extent to which it is safe to publish the underlying software code, weights and training data for some types of model. (Paragraph 148)

32. It is almost certain that existential risks will not manifest within three years, and highly likely that they will not within the next decade. As our understanding of this technology grows and responsible development increases, we hope concerns about existential risk will decline. The Government retains a duty to monitor all eventualities. But this must not distract it from capitalising on opportunities and addressing more limited immediate risks. (Paragraph 155)

33. LLMs may amplify numerous existing societal problems and are particularly prone to discrimination and bias. The economic impetus to use them before adequate guardrails have been developed risks deepening inequality. (Paragraph 161)

34. The AI Safety Institute should develop robust techniques to identify and mitigate societal risks. The Government’s AI risk register should include a range of societal risks, developed in consultation with civil society. DSIT should also use its White Paper response to propose market-oriented measures which incentivise ethical development from the outset, rather than retrospective guardrails. Options include using Government procurement and accredited standards, as set out in Chapter 7. (Paragraph 162)

35. Further clarity on data protection law is needed. The Information Commissioner’s Office should work with DSIT to provide clear guidance on how data protection law applies to the complexity of LLM processes, including the extent to which individuals can seek redress if a model has already been trained on their data and released. (Paragraph 167)

36. The Department for Health and Social Care should work with NHS bodies to ensure that future-proof data protection provisions are embedded in licensing terms. This would help reassure patients, given the possibility of LLM businesses working with NHS data being acquired by overseas corporations. (Paragraph 168)

International context and lessons

37. The UK should continue to forge its own path on AI regulation, balancing rather than copying the EU, US or Chinese approaches. In doing so, the UK can strengthen its position in technology diplomacy and set an example to other countries facing similar decisions and challenges. (Paragraph 175)

38. International regulatory co-ordination will be key, but difficult and probably slow. Divergence appears more likely in the immediate future. We support the Government’s efforts to boost international co-operation, but it must not delay domestic action in the meantime. (Paragraph 178)

39. Extensive primary legislation aimed solely at LLMs is not currently appropriate: the technology is too new, the uncertainties too high and the risk of inadvertently stifling innovation too great. Broader legislation on AI governance may emerge in future, though this was outside the scope of our inquiry. (Paragraph 187)

40. Setting the strategic direction for LLMs and developing enforceable, pro-innovation regulatory frameworks at pace should remain the Government’s immediate priority. (Paragraph 187)

Making the White Paper work

41. We support the overall White Paper approach. But the pace of delivering the central support functions is inadequate. The regulatory support and co-ordination teams proposed in the March 2023 White Paper underpin its entire success. By the end of November 2023, regulators were unaware of the central function’s status and how it would operate. This slowness reflects prioritisation choices and undermines confidence in the Government’s commitment to the regulatory structures needed to ensure responsible innovation. (Paragraph 195)

42. DSIT should prioritise resourcing the teams responsible for regulatory support and co-ordination, and publish an update on staffing and policy progress in response to this report. (Paragraph 196)

43. Relying on existing regulators to ensure good outcomes from AI will only work if they are properly resourced and empowered. (Paragraph 201)

44. The Government should introduce standardised powers for the main regulators who are expected to lead on AI oversight, to ensure they can gather information relating to AI processes and conduct technical, empirical and governance audits. It should also ensure there are meaningful sanctions to provide credible deterrents against egregious wrongdoing. (Paragraph 201)

45. The Government’s central support functions should work with regulators at pace to publish cross-sector guidance on AI issues that fall outside individual sector remits. (Paragraph 202)

46. Model developers bear some responsibility for the products they are building, particularly given the foreseeable risk of harm from misuse and the limited information available to customers about how the base model works. But how far such liability extends remains unclear. (Paragraph 209)

47. The Government should ask the Law Commission to review legal liability across the LLM value chain, including open access models. The Government should provide an initial position, and a timeline for establishing further legal clarity, in its White Paper response. (Paragraph 209)

48. We welcome the commitments from model developers to engage with the Government on safety. But it would be naïve to believe voluntary agreements will suffice in the long term as increasingly powerful models proliferate across the world, including in states which already pose a threat to UK security objectives. (Paragraph 218)

49. The Government should develop mandatory safety tests for high-risk, high-impact models. This must include an expectation that the results will be shared with the Government (and regulators if appropriate), and clearly defined powers to require compliance with safety recommendations, suspend model release, and issue market recall or platform take-down notices in the event of a credible threat to public safety. (Paragraph 219)

50. The scope and benchmarks for high-risk, high-impact testing should involve a combination of metrics that can adapt to fast-moving changes. They should be developed by the AI Safety Institute through engagement with industry, regulators and civil society. It is imperative that these metrics do not impose undue market barriers, particularly to open access providers. (Paragraph 220)

51. Accredited standards and auditing practices are key. They would help catalyse a domestic AI assurance industry, support business clarity and empower regulators. (Paragraph 226)

52. We urge the Government and regulators to work with partners at pace on developing accredited standards and auditing practices for LLMs (noting that these must not be tick-box exercises). A consistent approach to publishing key information on model cards would also be helpful. (Paragraph 226)

53. The Government should then use the public sector procurement market to encourage responsible AI practices by requiring bidders for relevant contracts to demonstrate compliance with high standards. (Paragraph 227)

Copyright

54. LLMs may offer immense value to society. But that does not warrant the violation of copyright law or its underpinning principles. We do not believe it is fair for tech firms to use rightsholder data for commercial purposes without permission or compensation, and to gain vast financial rewards in the process. There is compelling evidence that the UK benefits economically, politically and societally from upholding a globally respected copyright regime. (Paragraph 245)

55. The application of the law to LLM processes is complex, but the principles remain clear. The point of copyright is to reward creators for their efforts, prevent others from using works without permission, and incentivise innovation. The current legal framework is failing to ensure these outcomes occur, and the Government has a duty to act. It cannot sit on its hands for the next decade until sufficient case law has emerged. (Paragraph 246)

56. In response to this report, the Government should publish its view on whether copyright law provides sufficient protections to rightsholders, given recent advances in LLMs. If this identifies major uncertainty, the Government should set out options for updating legislation to ensure copyright principles remain future-proof and technologically neutral. (Paragraph 247)

57. The voluntary IPO-led process is welcome and valuable. But debate cannot continue indefinitely. (Paragraph 249)

58. If the process remains unresolved by Spring 2024, the Government must set out options and prepare to resolve the dispute definitively, including legislative changes if necessary. (Paragraph 249)

59. The IPO code must ensure creators are fully empowered to exercise their rights, whether on an opt-in or opt-out basis. Developers should make it clear whether their web crawlers are being used to acquire data for generative AI training or for other purposes. This would help rightsholders make informed decisions, and reduce risks of large firms exploiting adjacent market dominance. (Paragraph 252)

60. The Government should encourage good practice by working with licensing agencies and data repository owners to create expanded, high-quality data sources at the scales needed for LLM training. The Government should also use its procurement market to encourage good practice. (Paragraph 256)

61. The IPO code should include a mechanism for rightsholders to check training data. This would provide assurance about the level of compliance with copyright law. (Paragraph 259)

© Parliamentary copyright 2024