Large language models and generative AI

Executive summary

The world faces an inflection point on AI. Large language models (LLMs) will introduce epoch-defining changes comparable to the invention of the internet. A multi-billion pound race is underway to dominate this market. The victors will wield unprecedented power to shape commercial practices and access to information across the world. Our inquiry examined trends over the next three years and identified priority actions to ensure this new technology benefits people, our economy and society.

We are optimistic about this new technology, which could bring huge economic rewards and drive ground-breaking scientific advances.

Capturing the benefits will require addressing risks. Many are formidable, including credible threats to public safety, societal values, open market competition and UK economic competitiveness.

Far-sighted, nuanced and speedy action is therefore needed to catalyse innovation responsibly and mitigate risks proportionately. We found room for improvement in the Government’s priorities, policy coherence and pace of delivery.

We support the Government’s overall approach and welcome its successes in positioning the UK among the world’s AI leaders. This extensive effort should be congratulated. But the Government has recently pivoted too far towards a narrow focus on high-stakes AI safety. On its own this will not deliver the broader capabilities and commercial heft needed to shape international norms. The UK cannot hope to keep pace with international competitors without a greater focus on supporting commercial opportunities and academic excellence. A rebalance is therefore needed, involving a more positive vision for the opportunities and a more deliberate focus on near-term risks.

Concentrated market power and regulatory capture by vested interests also require urgent attention. The risk is real and growing. It is imperative for the Government and regulators to guard against these outcomes by prioritising open competition and transparency.

We have even deeper concerns about the Government’s commitment to fair play around copyright. Some tech firms are using copyrighted material without permission, reaping vast financial rewards. The legalities of this are complex but the principles remain clear. The point of copyright is to reward creators for their efforts, prevent others from using works without permission, and incentivise innovation. The current legal framework is failing to ensure these outcomes occur and the Government has a duty to act. It cannot sit on its hands for the next decade and hope the courts will provide an answer.

There is a short window to steer the UK towards a positive outcome. We recommend the following:

  • Prepare quickly: The UK must prepare for a period of protracted international competition and technological turbulence as it seeks to take advantage of the opportunities provided by LLMs.
  • Guard against regulatory capture: There is a major race emerging between open and closed model developers. Each is seeking a beneficial regulatory framework. The Government must make market competition an explicit AI policy objective. It must also introduce enhanced governance and transparency measures in the Department for Science, Innovation and Technology (DSIT) and the AI Safety Institute to guard against regulatory capture.
  • Treat open and closed arguments with care: Open models offer greater access and competition, but raise concerns about the uncontrollable proliferation of dangerous capabilities. Closed models offer more control but also more risk of concentrated power. A nuanced approach is needed. The Government must review the security implications at pace while ensuring that any new rules support rather than stifle market competition.
  • Rebalance strategy towards opportunity: The Government’s focus has skewed too far towards a narrow view of AI safety. It must rebalance, or else it will fail to take advantage of the opportunities from LLMs, fall behind international competitors and become strategically dependent on overseas tech firms for a critical technology.
  • Boost opportunities: We call for a suite of measures to boost computing power and infrastructure, skills, and support for academic spinouts. The Government should also explore the options for and feasibility of developing a sovereign LLM capability, built to the highest security and ethical standards.
  • Support copyright: The Government should prioritise fairness and responsible innovation. It must resolve disputes definitively (including through updated legislation if needed); empower rightsholders to check whether their data has been used without permission; and invest in large, high-quality training datasets to encourage tech firms to use licensed material.
  • Address immediate risks: The most immediate security risks from LLMs arise from making existing malicious activities easier and cheaper. These pose credible threats to public safety and financial security. Faster mitigations are needed in cyber security, counter-terrorism, child sexual abuse material and disinformation. Better assessments and guardrails are also needed to tackle societal harms around discrimination, bias and data protection.
  • Review catastrophic risks: Catastrophic risks (above 1,000 UK deaths and tens of billions in financial damages) are not likely within three years but cannot be ruled out, especially as next-generation capabilities come online. There are, however, no agreed warning indicators for catastrophic risk. There is no cause for panic, but this intelligence blind spot requires immediate attention. Mandatory safety tests for high-risk, high-impact models are also needed: relying on voluntary commitments from a few firms would be naïve and would leave the Government unable to respond to the sudden emergence of dangerous capabilities. Wider concerns about existential risk (posing a global threat to human life) are exaggerated and must not distract policymakers from more immediate priorities.
  • Empower regulators: The Government is relying on sector regulators to deliver the White Paper objectives but is being too slow to give them the tools. Speedier resourcing of Government-led central support teams is needed, alongside investigatory and sanctioning powers for some regulators, cross-sector guidelines, and a legal review of liability.
  • Regulate proportionately: The UK should forge its own path on AI regulation, learning from but not copying the US, EU and China. In doing so the UK can maintain strategic flexibility and set an example to the world, though it must put the groundwork in place first. The immediate priority is to develop accredited standards and common auditing methods at pace to ensure responsible innovation, support business adoption, and enable meaningful regulatory oversight.

© Parliamentary copyright 2024