Large language models and generative AI

Chapter 1: The Goldilocks problem

Our inquiry

1. The world is facing an inflection point in its approach to artificial intelligence (AI). Rapid advances in large language models (LLMs) have generated extensive discussion about the future of technology and society. Some believe the developments are over-hyped. Others worry we are building machines that will one day far outstrip our comprehension and, ultimately, control.

2. We launched this inquiry to examine likely trajectories for LLMs over the next three years and the actions required to ensure the UK can respond to opportunities and risks in time. We focused on LLMs as a comparatively contained case study of the issues associated with generative AI. We focused on what is different about this technology and sought to build on rather than recap the extensive literature on AI.1

3. We took evidence from 41 expert witnesses, reviewed over 900 pages of written evidence, held roundtables with small and medium-sized businesses hosted by the software firm Intuit, and visited Google and UCL Business.2 We were assisted by our specialist adviser Professor Michael Wooldridge, Professor of Computer Science at the University of Oxford. We are grateful to all who participated in our inquiry.

The challenge

4. Large language models are likely to introduce some epoch-defining changes. Capability leaps which eclipse today’s state-of-the-art models are possible within the next three years. It is highly likely that openly available models with increasingly advanced capabilities will proliferate. In the right hands, LLMs may drive major boosts in productivity and deliver ground-breaking scientific insights. In the wrong hands, they make malicious activities easier and may lay the groundwork for qualitatively new risks.3

5. The businesses that dominate the LLM market will have unprecedented powers to shape access to information and commercial practices across the world. At present, US tech firms lead the field, though that may not hold true forever. The UK, alongside allies and partners, must carefully consider the implications of ceding commercial advantage to states which do not share our values.4 We believe there are strong domestic and foreign policy arguments favouring an approach that supports (rather than stifles) responsible innovation to benefit consumers and preserve our societal values.5

6. The revolution in frontier AI will take place outside Government. But the work involved in building and releasing models will take place in specific geographies, not least because developers will need access to energy, compute and consumers. National governments and regulators will therefore play a central role in shaping what kinds of companies are allowed to flourish. The most successful will wield extensive power. Professor Neil Lawrence, DeepMind Professor of Machine Learning at the University of Cambridge, believed governments have a rare moment of “steerage”, and that decisions taken now will have ramifications far into the future.6

7. Getting this steerage right will be difficult. It is common for technological developments to outpace policy responses (as well as to raise ethical questions). But the latest advances in foundation models suggest this divide is becoming acute and will continue to widen.7 This presents difficulties for governments seeking to harness the technology for good. Too much early intervention and they risk introducing laws akin to the ‘Red Flag Act’ of 1865, which required someone to walk in front of the new motorcars waving a red flag.8 This did not age well. But too much caution around sensible rules is also harmful: seatbelts were invented in 1885, but drivers were not required to wear them until 1983.9

8. Solving this ‘Goldilocks’ problem of getting the balance right between innovation and risk, with limited foresight of market developments, will be one of the defining challenges for the current generation of policymakers. Our report proposes a series of recommendations to help the Government, regulators and industry navigate the challenges ahead.


1 See for example Artificial Intelligence Committee, AI in the UK: ready, willing and able? (Report of Session 2017–19, HL Paper 100); Science, Innovation and Technology Committee, The governance of artificial intelligence: interim report (Ninth Report, Session 2022–23, HC 1769); DSIT, ‘Frontier AI’ (25 October 2023): https://www.gov.uk/government/publications/frontier-ai-capabilities-and-risks-discussion-paper [accessed 8 January 2024]; and Department for Digital, Culture, Media and Sport, National AI Strategy, CP 525 (September 2021): https://assets.publishing.service.gov.uk/media/614db4d1e90e077a2cbdf3c4/National_AI_Strategy_-_PDF_version.pdf [accessed 25 January 2024].

2 See Appendix 4.

3 Q 3 (Dr Jean Innes and Ian Hogarth), written evidence from the Alan Turing Institute (LLM0081) and DSIT (LLM0079)

4 Written evidence from Andreessen Horowitz (LLM0114)

5 Written evidence from Google and Google DeepMind (LLM0095), Meta (LLM0093), Microsoft (LLM0087), the Market Research Society (LLM0088), Oxford Internet Institute (LLM0074) and Andreessen Horowitz (LLM0114)

7 Q 2 (Dr Jean Innes) and written evidence from the Open Data Institute (LLM0083)

8 The Open University, ‘The Red Flag Act’: https://law-school.open.ac.uk/blog/red-flag-act [accessed 20 December 2023]

9 Department for Transport and Stephen Hammond MP, ‘Thirty years of seatbelt safety’ (January 2013): https://www.gov.uk/government/news/thirty-years-of-seatbelt-safety [accessed 20 December 2023]



