AI in the UK: No Room for Complacency Contents

Chapter 2: Living with artificial intelligence

8. Since the publication of the Select Committee on Artificial Intelligence’s report in April 2018, investment in, and focus on, the United Kingdom’s approach to artificial intelligence has grown significantly. In 2015, the UK saw £245 million invested in AI. By 2018, this had increased to over £760 million. In 2019 this was £1.3 billion.10 Artificial intelligence has been deployed in the UK in a range of fields—from agriculture and healthcare, to financial services, through to customer service, retail and logistics. It is being used to help tackle the COVID-19 pandemic,11 but is also being used to underpin facial recognition technology, deep fakes,12 and other ethically challenging uses. The adage from John McCarthy that “as soon as it works no one calls it AI any more”13 continues to ring true: AI has become such a prevalent feature of modern life that it is not always clear when, and how, it is being used. It is all the more important that we understand its opportunities and risks.

Public understanding and data

9. The Select Committee concluded in 2018 that:

“Artificial intelligence is a growing part of many people’s lives and businesses. It is important that members of the public are aware of how and when artificial intelligence is being used to make decisions about them, and what implications this will have for them personally. This clarity, and greater digital understanding, will help the public experience the advantages of AI, as well as to opt out of using such products should they have concerns.”14

10. The need to ensure that the public is well-versed in the opportunities and risks involved in artificial intelligence, and the data they share which is used to inform such systems, remains essential. The onset of the COVID-19 pandemic has increased the use of technology in everyday life, as well as its application by the Government. In particular, the collection and sharing of sensitive personal data has been a cornerstone of the national, and international, response to the pandemic.

11. Professor Michael Wooldridge, Head of Department and Professor of Computer Science at the University of Oxford and Programme Director for Artificial Intelligence at the Alan Turing Institute, said that “data and privacy” was one of the risks for artificial intelligence in the next five years.15 He added that since the Select Committee’s original inquiry in 2017 “we have seen endless examples, every week, of data abuse. Here is the thing: for current AI techniques to work with you, they need data about you. …That remains a huge challenge. Society has not yet found its equilibrium in this new world of big data and ubiquitous computing.”16 Other witnesses agreed. Dr Daniel Susskind, a Fellow in Economics at Balliol College, Oxford, said:

“Before the pandemic, there was a very lively public debate about issues of data privacy and security … At the start of the pandemic, a ‘do what it takes’ mentality took hold with respect to developing technologies to help us to track and trace the virus. Technology companies around the world were given huge discretion to collect smartphone data, bank account statements, CCTV footage and so on in a way that would have been unimaginable eight or nine months ago.”17

12. While acknowledging that this was needed for the challenge facing society in that moment, Dr Susskind said “There is an important task in the months to come, once the pandemic starts to come to an end, in reining back the discretion and power we have granted to technology companies and, indeed, to states around the world.”18

13. Professor Dame Wendy Hall, Regius Professor of Computer Science, University of Southampton, co-authored a review with Jérôme Pesenti in 2017 as part of the Government’s Digital Strategy. The review, announced in March 2017, published its report on 15 October 2017.19 The Hall-Pesenti Review made 18 recommendations on how to make the UK the best place in the world for businesses developing AI. Professor Hall told us that they “made data trusts the first recommendation in our review.”20 She said:

“… there are issues in how we use personal data to do what companies and government need to do to analyse situations and to develop AI. How companies share data, where the asset is recorded and the legal and ethical framework in which we share data in all circumstances are major societal issues. The UK is in the lead in this space. We need to keep the impetus up there as well.”21

14. In June 2020, a survey by the Department for Business, Energy and Industrial Strategy (BEIS) found that 28 per cent of people said they were positive about AI, while 20 per cent felt negative about it. A greater number of people said they were neither positive nor negative (44 per cent), with a further eight per cent saying they did not know.22

15. Caroline Dinenage MP, the Minister for Digital and Culture, told us that “the public feel deeply suspicious of some parts of AI. There seems to be no rhyme or reason as to how we will embrace some aspects of it and not others, on that aspect of trust.”23 She told us that the Government wants “to ensure the public understand AI, its powers, its limitations and its opportunities, but also its risks”.24 She also highlighted the work of the AI Council “because it has a specific working group dedicated to getting the narrative about AI right.”25

16. Roger Taylor, Chair of the Centre for Data Ethics and Innovation (CDEI),26 told us that there was a “need to educate and engage with the public to understand what is acceptable in the way these technologies are used. This is particularly important in areas of government use of these technologies.”27

17. The 2017 Hall-Pesenti Review also recommended that the “Government should work with industry and experts to establish a UK AI Council to help coordinate and grow AI in the UK.”28 The recommendation was based on the perceived need to facilitate engagement between industry, academia, Government and the public, as “AI in the UK will need to build trust and confidence in AI enabled complex systems.”29 In the Industrial Strategy, published in November 2017, the Government announced that it was taking forward this recommendation, and “working with industry to establish an industry-led AI Council that can take a leadership role across sectors.”30 The membership of the AI Council was announced in May 2019. It is unclear why there was such a delay in getting the Council appointed.

18. It is clear that there is a risk that the momentum the Government has built in developing its approach to AI may be lost. As the deployment and use of AI systems, and the wider sharing of data, accelerate, the public’s understanding of that technology, and their ability to give informed consent, could be left behind.

19. Artificial intelligence is a complicated and emotive subject. The increase in reliance on technology caused by the COVID-19 pandemic has highlighted the opportunities and risks associated with the use of technology, and in particular, data. It is no longer enough to expect the general public to learn passively about AI and how their data is used. Active steps must be taken to explain to the general public the use of their personal data by AI. Greater public understanding is essential for the wider adoption of AI, and also to enable challenge to any organisation deploying AI in an ethically unsound manner.

20. The Government must lead the way on actively explaining how data is being used. Being passive in this regard is no longer an option. The general public are more sophisticated than the Government gives them credit for in their understanding of where data can and should be used and shared, and where it should not. The development of policy to safeguard the use of data, such as data trusts, must pick up pace, otherwise it risks being left behind by technological developments. This work should be reflected in the National Data Strategy.

21. The AI Council, as part of its brief from Government to focus on exploring how to develop and deploy safe, fair, legal and ethical data-sharing frameworks, must make sure it is informing such policy development in a timely manner, and the Government must make sure it is listening to the Council’s advice. The AI Council should take into account the importance of public trust in AI systems, and ensure that developers are developing systems in a trustworthy manner. Furthermore, the Government needs to build upon the recommendations of the Hall-Pesenti Review, as well as the work done by the Open Data Institute, in conjunction with the Office for AI and Innovate UK, to develop and deploy data trusts as envisaged in the Hall-Pesenti Review.


Ethics
22. An ethical framework for the development and use of AI became a key focus of the Select Committee’s report. The Committee recommended:

“… that a cross-sector ethical code of conduct, or ‘AI code’, suitable for implementation across public and private sector organisations which are developing or adopting AI, be drawn up and promoted by the Centre for Data Ethics and Innovation, with input from the AI Council and the Alan Turing Institute, with a degree of urgency. In some cases, sector-specific variations will need to be created, using similar language and branding. Such a code should include the need to have considered the establishment of ethical advisory boards in companies or organisations which are developing, or using, AI in their work. In time, the AI code could provide the basis for statutory regulation, if and when this is determined to be necessary.”31

The Committee proposed five overarching principles around which such a code could be built, providing a foundation for an ethical standard of AI for industry, Government, developers and consumers.

23. Dr Susskind explained how the debate on ethical AI is shifting from a discussion of broad ethical principles to the operationalisation of ethics in developing “practical advice and guidance to the companies and engineers developing these systems and technologies.”32

24. Since the publication of the Committee’s report a large number of companies and organisations have produced their own ethical AI codes of conduct. Although we welcome this progress, we believe a solely self-regulatory approach to ethical standards risks a lack of uniformity and enforceability. Professor Hall told us “we need to develop quite simple frameworks and audit arrangements for companies using AI that can be very simply applied.”33 Ms Kind took this further and said “we need to work out how to apply them in practice and shore up public trust and confidence along the way.”34

25. Professor Hall told us “we have to self-regulate”,35 whereas Ms Carly Kind, the Director of the Ada Lovelace Institute (an independent research institute and deliberative body with a remit to ensure data and AI work for people and society), said that “self-regulation and internal ethics processes have not kept up and have not proved to be sufficient to ensure accountability and public trust.”36 She went on to say that the Ada Lovelace Institute hears “time and time again from members of the public that their trust in technologies is contingent on external oversight of those technologies.”37 She emphasised the importance of the role of regulators, ombudsmen and other scrutiny measures. We heard from Simon McDougall, the Deputy Information Commissioner, about the guidance which the Information Commissioner’s Office (ICO), as a regulator, has published on topics such as AI explainability and an AI auditing framework, and on existing tools such as data protection impact assessments.38

26. In the debate surrounding ethical AI, we often discuss the technology of AI systems rather than the human involvement in the process. Professor Wooldridge told us a key barrier to ethical AI is complacency: “the assumption that the technology must be doing something better than a human being, is very dangerous. It could be doing some things better than a human being, but we need that human in the loop.”39 Ms Kind also emphasised the consideration of this human element of AI and told us “we need to think about not only making the technology comport with ethical principles, but the humans using the technology and the system as a whole.”40

27. Caroline Dinenage MP told us the Government “take our responsibility in relation to the ethical handling of data and artificial intelligence incredibly seriously.”41 She highlighted the Data Ethics Framework42 and other such products which have been produced to support the Government and the civil service in determining how to manage, use and look after the public’s data. This includes guidance published by the Government Digital Service and the Office for AI in partnership with The Alan Turing Institute on ‘Understanding artificial intelligence ethics and safety’.43

28. This guidance, though applicable to the public sector, is not a foundation for a countrywide ethical framework which developers could apply, the public could understand and the country could offer as a template for global use. Caroline Dinenage MP told us they “feel the legal instruments and mechanisms are sufficient for now, but [they] are keen to watch how industry develops.”44

29. In 2018 the Committee believed that the UK was in prime position to lead on the ethical development of AI, and recommended:

“… that the Government convene a global summit in London by the end of 2019, in close conjunction with all interested nations and governments, industry (large and small), academia, and civil society, on as equal a footing as possible. The purpose of the global summit should be to develop a common framework for the ethical development and deployment of artificial intelligence systems. Such a framework should be aligned with existing international governance structures.”45

30. This year, when asked for an update on a global summit, the Government stated in its letter to Lord McFall:

“The Government is working with the tech community on London Tech Week, an annual event during spring (delayed to September in 2020 due to Covid-19). London Tech Week is open to all countries, organisations and societies, with a range of events, activities and conferences.

The UK is a signatory of the OECD Recommendation on AI, the G20 non-binding principles on AI, and is an active member in multilateral fora including UNESCO, the Council of Europe and the International Telecommunications Union.”46

31. In June 2020, the UK also became a founding member of the Global Partnership on Artificial Intelligence (GPAI) which is “an international and multistakeholder initiative to guide the responsible development and use of AI, grounded in human rights, inclusion, diversity, innovation, and economic growth.”47 These various memberships demonstrate the UK’s commitment to collaborate on the development and use of ethical AI, but it is yet to take on a leading role.

32. Since the Committee’s report was published, the conversation around ethics and AI has evolved. There is a clear consensus that ethical AI is the only sustainable way forward. Now is the time to move that conversation on from what the ethics are to how to instil them in the development and deployment of AI systems.

33. The Government must lead the way on the operationalisation of ethical AI. There is a clear role for the CDEI in leading those conversations both nationally and internationally. The CDEI, and the Government with them, should not be afraid to challenge the unethical use of AI by other governments or organisations.

34. The CDEI should establish and publish national standards for the ethical development and deployment of AI. National standards will provide an ingrained approach to ethical AI, and ensure consistency and clarity on the practical standards expected for the companies developing AI, the businesses applying AI, and the consumers using AI. These standards should consist of two frameworks, one for the ethical development of AI, including issues of prejudice and bias, and the other for the ethical use of AI by policymakers and businesses. These two frameworks should reflect the different risks and considerations at each stage of AI use.


Jobs and skills
35. The Committee concluded in 2018:

“The labour market is changing, and further significant disruption to that market is expected as AI is adopted throughout the economy. As we move into this unknown territory, forecasts of AI’s growing impact—jobs lost, jobs enhanced and new jobs created—are inevitably speculative. There is an urgent need to analyse or assess, on an ongoing basis, the evolution of AI in the UK, and develop policy responses.”48

36. We asked our witnesses in October 2020 about the impact of AI on the labour market, in particular in the light of the COVID-19 pandemic. Dr Susskind said “the pandemic has increased or is likely to increase the threat of automation. There are various reasons for this. One is less to do with the fact that we find ourselves in a pandemic and more that we find ourselves in a recession. Evidence suggests, particularly in the US, that recessions are the moment when automation often picks up.”49

37. Dr Susskind said that the pandemic “has created a very strong incentive to automate the work people do. A machine, after all, does not fall ill. It does not have to self-isolate to protect customers or co-workers. It will not have to take time off work.”50 Professor Hall disagreed, arguing that the COVID-19 pandemic had potentially slowed down companies in a move towards greater use of AI and automation:

“Generally, the rate at which the forecast saw the jobs going was too great. Over the last few years, we have seen that it takes a long time. Some companies are not even digital, let alone using AI. It is really hard sometimes for management to introduce that type of technology and it does not happen that quickly. The jury is out on how that will play out.”51

38. Professor Wooldridge said that “AI will change the nature of work.”52 He told us that “AI will become embedded in everything we do. It will not necessarily make huge numbers of people redundant, but it will make people redundant.”53

39. When asked about how prepared the United Kingdom is to respond to changes in the labour market, Professor Hall said: “on skills, we are nowhere near ready.”54 She also said that “the Government funded a number of skills programmes. They have been successfully launched, but we need to keep that going. There is not just a money impetus in this; we will have to keep that steady and increase it, if anything.”55

40. A report by Microsoft in August 2020 underscored these concerns. That report found that:

“Only 17 per cent of UK employees say they have been part of re-skilling efforts (far less than the 38 per cent globally), and only 32 per cent of UK employees feel their workplace is doing enough to prepare them for AI (well below global average of 42 per cent)”.56

41. This inertia is a concern. Furthermore, a problem remains with the general digital skills base in the UK. Estimates vary, but around 10 per cent of UK adults were non-internet users in 2018.57 A Lloyds Bank survey in 2019 found that 19 per cent of individuals lacked basic digital skills, such as using a web browser.58 The most common reason for people not going online is lack of interest. In the UK, disparities in internet use exist based on age, location, socioeconomic status and whether a person has a disability. For example, more than half of people aged over 75 do not go online, and older people form the largest proportion of non-internet users.59 The COVID-19 pandemic will have thrown these issues into sharp relief.

42. The Select Committee made a number of recommendations on preparing the UK public for the widespread adoption of AI, including:

(a) expanding the National Retraining Scheme;

(b) restoring the wider social and ethical aspects of computer science and artificial intelligence to the computing curriculum;

(c) enabling teachers to gain additional expertise in technology and related areas; and

(d) calling on the Government to outline its plans to tackle any potential societal or regional inequality caused by AI.60

43. It is imperative that the Government takes steps to ensure that the digital skills of the UK are brought up to speed, as well as to ensure that people have the opportunity to reskill and retrain to be able to adapt to the evolving labour market caused by AI.

National Retraining Scheme

44. The National Retraining Scheme, announced in the Government’s 2017 Autumn Budget, aimed to help people “re-skill and up-skill as the economy changes, including as a result of automation.”61 On 13 October 2020, the Government announced that the Scheme would be integrated with the new National Skills Fund.62 But what did the Scheme achieve in the interlude? A paper by the Department for Education,63 published the same day as the integration of the Scheme and Fund was announced, found that 3,600 people had accessed the first part of the Scheme—the ‘Get help to retrain’ service.64 This had been piloted in six areas. The pace, scale and ambition of the Scheme does not match the challenge facing many people working in the UK. It will be imperative for the Government to learn the lessons of the Scheme in the operation of the National Skills Fund, and to move much more swiftly.

45. There is no clear sense of the impact AI will have on jobs. It is however clear that there will be a change, and that complacency risks people finding themselves underequipped to participate in the employment market of the near future.

46. As and when the COVID-19 pandemic recedes and the Government has to address the economic impact of it, the nature of work will change and there will be a need for different jobs. This will be complemented by opportunities for AI, and the Government and industry must be ready to ensure that retraining opportunities take account of this. In particular the AI Council should identify the industries most at risk, and the skills gaps in those industries. A specific training scheme should be designed to support people to work alongside AI and automation, and to be able to maximise its potential.

Public trust and regulation

47. In 2018 the Committee concluded:

“Blanket AI-specific regulation, at this stage, would be inappropriate. We believe that existing sector-specific regulators are best placed to consider the impact on their sectors of any subsequent regulation which may be needed. We welcome that the Data Protection Bill and GDPR65 appear to address many of the concerns of our witnesses regarding the handling of personal data, which is key to the development of AI. The Government Office for AI, with the Centre for Data Ethics and Innovation, needs to identify the gaps, if any, where existing regulation may not be adequate. The Government Office for AI must also ensure that the existing regulators’ expertise is utilised in informing any potential regulation that may be required in the future and we welcome the introduction of the Regulators’ Pioneer Fund.”66

48. This regulator-led approach is the current Government position. Lorna Gratton, Director of Digital and Technology Policy, told us:

“The approach we have been taking across government is that the sectors are best placed to identify the regulation needed in their sphere. We have typically not left it to the sectors to do, but they have the best understanding of what is needed, particularly in financial services. The regulator for the relevant sector has responsibility for determining what is needed in the sector and can draw on central resource from government to help understand that.”67

49. The burden this places on regulators was recognised by Mr Taylor who said regulators: “need to upskill. There is variation between regulators. Some of them are moving at pace and addressing this … In other areas, there is much more to do. It varies between industry associations.”68 As Mr Taylor acknowledged “[t]his technology is developing very rapidly. It is no surprise that regulators and regulation have to move at speed to catch up.”69 Although plainly work is needed, it is clear many regulators have taken an active role in explaining the regulations in place and providing relevant, practical guidance for their sector.70

50. It is evident that regulatory gaps remain which need to be addressed. The Court of Appeal recently determined that there are “fundamental deficiencies”71 in the existing legal framework for facial recognition technology. Social media was also raised by our witnesses as a gap in regulation. Ms Kind acknowledged the regulators in this area are “showing a real willingness to work together”.72 The ICO has worked with the Competition and Markets Authority (CMA) to set up the Digital Regulation Cooperation Forum, which addresses areas of overlap between regulators in tackling online harm. This demonstrates the positive and coordinated work being carried out by sector-specific regulators in order to address potential regulatory gaps. However, Ms Kind asked “whether we need some type of framework to enable a more overarching look at online platforms and the use of AI there.”73

51. Since the publication of the Committee’s report the CDEI has been established, and its terms of reference include identifying gaps in the regulatory framework. In June 2020 the CDEI published its AI Barometer,74 which looks at five key sectors (criminal justice, health and social care, financial services, energy and utilities, and digital and social media) and identifies the opportunities, risks, barriers and potential regulatory gaps. This Barometer can be used to better inform policymakers of the risks posed by AI and any need for regulation. When asked about the AI Barometer’s findings, Caroline Dinenage MP told us that “we have a way to go to address these challenges, but we are on the right lines.”75

52. The risk-based analysis has also been seen in work on AI regulation in Europe. In February 2020 the European Commission published a White Paper aiming to create a unified approach to the regulation of and investment in AI. The Commission has determined that a risk-based approach to new AI regulation balances the need to regulate against the burden it creates, especially for SMEs.76 The D9+ group77 have called for a proportionate approach to risk-based regulation considering both the impact and probability of the risk, in order to ensure that not all AI is considered high risk.78 The Council of Europe’s Ad Hoc Committee on AI79 is similarly producing a feasibility study on the regulation of AI, in the drafting of which the UK is participating. The way we assess regulatory need will determine whether the UK system takes advantage of opportunities created by AI.

53. Running parallel to the work of the CDEI, the Ministerial Working Group on Future Regulation has been established, and it recommended the development of the White Paper on Regulation for the Fourth Industrial Revolution,80 which was published in June 2019. Among the measures it included was the establishment of the Regulatory Horizons Council.

54. Unlike the CDEI, which focuses specifically on the use of AI, the Regulatory Horizons Council (RHC) considers all industries and innovation. Mr Taylor highlighted the “very clear overlap between what the CDEI is doing and what the Regulatory Horizons Council is doing, which is where regulation and AI meet.”82 The RHC joins an already crowded landscape when it comes to the organisations and bodies carrying out work in relation to AI and regulation. However, Mr Taylor told us the two bodies are co-ordinating in order to ensure that work is not duplicated or too disparate, and said he is “confident that will work.”83

55. Mr Taylor told us that rather than regulatory gaps, “the more significant gap is understanding how to make sense of our existing laws, regulations and ethical standards.”84 Mr Taylor cautioned that this gap in understanding will form a barrier to how AI develops.

56. Regulatory gaps are a cause for concern; however, there appeared to be consensus that, within the existing regulatory framework, there is no desire “to rush to legislate now.”85 As Ms Kind pointed out, “we are still understanding where regulation would be useful and appropriate.”86 Mr McDougall told us “[t]he regulatory framework itself, while there is always room for improvement, is broadly applicable to the challenges we are facing with AI.”87

57. Beyond its specific purpose, regulation could also play a role in establishing public trust in AI. Ms Kind said we should consider:

“what role regulation could play in shoring up social licence for these new technologies and creating a sense of social responsibility on the part of both public and private sector entities when deploying them … Before it is deployed on the public at large there needs to be some kind of quality assurance, circuit breaker or mechanism to validate that a piece of technology is ready for public deployment.”88

58. The Select Committee’s 2018 report recommended that:

“Industry should take the lead in establishing voluntary mechanisms for informing the public when artificial intelligence is being used for significant or sensitive decisions in relation to consumers. This industry-led approach should learn lessons from the largely ineffective AdChoices scheme. The soon-to-be established AI Council, the proposed industry body for AI, should consider how best to develop and introduce these mechanisms.”89

59. All our witnesses from DCMS and BEIS agreed that transparency is essential in building public trust.90 Mechanisms such as the recommended kitemark would better inform the public when AI is being used. Lorna Gratton told us that we also need to help “the public understand the current regulation framework with GDPR and the Equality Act, to give them confidence that there are already things in place to protect them and regulate the use of AI”.91 A simple kitemark mechanism may not go far enough, as consumers do not know what standards the technology has had to meet in order to be deployed. Carly Kind shared her concern that for AI systems “to make a difference to our society, they need to enjoy indelible public trust. Unless they get a stamp of approval through a regulatory mechanism, I worry that that will not happen and their benefits will not be realised.”92

60. The challenges posed by the development and deployment of AI cannot currently be tackled by cross-cutting regulation. The understanding by users and policymakers needs to be developed through a better understanding of risk and how it can be assessed and mitigated. Sector-specific regulators are better placed to identify gaps in regulation, and to learn about AI and apply it to their sectors. The CDEI and Office for AI can play a cross-cutting role, along with the ICO, to provide that understanding of risk and the necessary training and upskilling for sector-specific regulators.

61. The ICO must develop a training course for use by regulators to ensure that their staff have a grounding in the ethical and appropriate use of public data and AI systems, and their opportunities and risks. It will be essential for sector-specific regulators to be in a position to evaluate those risks, to assess ethical compliance, and to advise their sectors accordingly. Such training should be prepared with input from the CDEI, Office for AI and Alan Turing Institute. The uptake by regulators should be monitored by the Office for AI. The training should be prepared and rolled out by July 2021.

10 Tech Nation, UK Tech for a changing world: Tech Nation Report 2020 (17 March 2020): [accessed 9 December 2020]

11 In November 2020 it was reported that the Medicines and Healthcare products Regulatory Agency had engaged Genpact UK to develop an AI tool to sift through the high volume of reports of adverse reactions to COVID-19 vaccines. ‘UK plans to use AI to process adverse reactions to Covid vaccines’, Financial Times (1 November 2020): [accessed 9 December 2020]

12 AI applications which mimic real people speaking in video format.

13 Bertrand Meyer, ‘John McCarthy’, Communications of the ACM (28 October 2011): [accessed 17 November 2020]

14 Select Committee on Artificial Intelligence, AI in the UK: ready, willing and able? (Report of Session 2017–19, HL Paper 100), para 58

15 Q 2 (Professor Wooldridge)

16 Q 2 (Professor Wooldridge)

17 Q 2 (Dr Susskind)

18 Q 2 (Dr Susskind)

19 Professor Dame Wendy Hall and Jérôme Pesenti, Growing the artificial intelligence industry in the UK (15 October 2017): [accessed 9 December 2020]

20 Q 2 (Professor Dame Wendy Hall). A data trust is an entity which would monitor and supervise the sharing of datasets between organisations and companies. The Hall-Pesenti Review emphasised that these trusts would “not be a legal entity or institution, but rather a set of relationships underpinned by a repeatable framework, compliant with parties’ obligations, to share data in a fair, safe and equitable way.”

21 Q 2 (Professor Dame Wendy Hall)

22 Department for Business, Energy and Industrial Strategy, BEIS Public Attitudes Tracker (June 2020): [accessed 27 November 2020]

23 Q 17 (Caroline Dinenage MP)

24 Q 17 (Caroline Dinenage MP)

25 Q 17 (Caroline Dinenage MP)

26 The Centre for Data Ethics and Innovation is tasked by Her Majesty’s Government to connect policymakers, industry, civil society, and the public to develop the right governance regime for data-driven technologies.

27 Q 8 (Roger Taylor)

28 Professor Dame Wendy Hall and Jérôme Pesenti, Growing the artificial intelligence industry in the UK (15 October 2017): [accessed 9 December 2020], p 5

29 Ibid.

30 HM Government, Industrial Strategy: Building a Britain fit for the future (November 2017), p 39: [accessed 9 December 2020]

31 Select Committee on Artificial Intelligence, AI in the UK: ready, willing and able? (Report of Session 2017–19, HL Paper 100), para 420

32 Q 3 (Dr Susskind)

33 Q 3 (Professor Dame Wendy Hall)

34 Q 8 (Carly Kind)

35 Q 3 (Professor Dame Wendy Hall)

36 Q 8 (Carly Kind)

37 Q 8 (Carly Kind)

38 Q 8 (Simon McDougall)

39 Q 3 (Professor Wooldridge)

40 Q 8 (Carly Kind)

41 Q 16 (Caroline Dinenage MP)

42 Government Digital Service, Data Ethics Framework (16 September 2020): [accessed 9 December 2020]

43 Government Digital Service and Office for Artificial Intelligence, Understanding artificial intelligence ethics and safety (10 June 2019): [accessed 9 December 2020]

44 Q 17 (Caroline Dinenage MP)

45 Select Committee on Artificial Intelligence, AI in the UK: ready, willing and able? (Report of Session 2017–19, HL Paper 100), para 403

46 Letter from the Minister for Science, Research and Innovation to the Senior Deputy Speaker on the Select Committee on Artificial Intelligence, 14 August 2020:

48 Select Committee on Artificial Intelligence, AI in the UK: ready, willing and able? (Session 2017–19, HL Paper 100), para 231

49 Q 4 (Dr Susskind)

50 Q 4 (Dr Susskind)

51 Q 4 (Professor Dame Wendy Hall)

52 Q 4 (Professor Wooldridge)

53 Q 4 (Professor Wooldridge)

54 Q 4 (Professor Dame Wendy Hall)

55 Q 2 (Professor Dame Wendy Hall)

56 Microsoft, AI Skills in the UK (August 2020): [accessed 9 December 2020]

57 Lloyds Bank, UK Consumer Digital Index 2019 (June 2020): [accessed 9 December 2020]

58 Ibid.

59 Ibid.

60 Select Committee on Artificial Intelligence, AI in the UK: ready, willing and able? (Report of Session 2017–19, HL Paper 100), paras 236, 251, 258 and 276

61 Department for Business, Energy and Industrial Strategy, Industrial Strategy: Building a Britain fit for the future (November 2017), p 41: [accessed 9 December 2020]

62 HL Deb, 13 October 2020, HLWS502

63 Department for Education, National Retraining Scheme: Key Findings Paper (October 2020): [accessed 9 December 2020]

64 The service allowed users to identify and input their current skills and, based on these, offered suggestions for training and alternative employment. The service could then direct users to vacancies in their area based on the suggestions provided.

65 The General Data Protection Regulation was not then in force.

66 Select Committee on Artificial Intelligence, AI in the UK: ready, willing and able? (Report of Session 2017–19, HL Paper 100), para 386

67 Q 17 (Lorna Gratton)

68 Q 9 (Roger Taylor)

69 Q 9 (Roger Taylor)

70 Such as the ICO AI auditing framework: [accessed 9 December 2020]

71 R (Bridges) v Chief Constable of South Wales Police & Ors [2020] EWCA Civ 1058, 11 August 2020, para 91

72 Q 9 (Carly Kind)

73 Q 9 (Carly Kind)

74 Centre for Data Ethics and Innovation, AI Barometer Report (June 2020): [accessed 9 December 2020]

75 Q 17 (Caroline Dinenage MP)

76 European Commission, White Paper On Artificial Intelligence - A European approach to excellence and trust (19 February 2020) COM(2020) 65 final, p 17 [accessed 15 December 2020]

77 The D9+ is a group of like-minded countries characterised by their similar approaches to digital issues.

78 Position paper on behalf of Denmark, Belgium, the Czech Republic, Finland, France, Estonia, Ireland, Latvia, Luxembourg, the Netherlands, Poland, Portugal, Spain and Sweden, Innovative and trustworthy AI: Two sides of the same coin (8 October 2020): [accessed 9 December 2020]

79 Council of Europe, CAHAI - Ad hoc Committee on Artificial Intelligence (23 September 2020): [accessed 9 December 2020]

80 Department for Business, Energy and Industrial Strategy, Regulation for the Fourth Industrial Revolution, CP 111, June 2019: [accessed 9 December 2020]

81 Letter from the Minister for Science, Research and Innovation to the Senior Deputy Speaker on the Select Committee on Artificial Intelligence, 14 August 2020:

82 Q 12 (Roger Taylor)

83 Q 12 (Roger Taylor)

84 Q 9 (Roger Taylor)

85 Q 9 (Carly Kind)

86 Q 9 (Carly Kind)

87 Q 9 (Simon McDougall)

88 Q 9 (Carly Kind)

89 Select Committee on Artificial Intelligence, AI in the UK: ready, willing and able? (Report of Session 2017–19, HL Paper 100), para 59

90 Q 17 (Caroline Dinenage MP, Amanda Solloway MP, Lorna Gratton)

91 Q 17 (Lorna Gratton)

92 Q 9 (Carly Kind)

© Parliamentary copyright 2020