Algorithms in decision-making


1. Algorithms have been used to aid decision-making for centuries and pre-date computers.1 At its core, an algorithm is a set of instructions, usually applied to solve a well-defined problem. In the last few years, however, “we have witnessed an exponential growth in the use of automation to power decisions that impact our lives and societies”.2 An increase in digital data, the number of businesses with access to large datasets, and the advent of a new family of algorithms utilising ‘machine learning’ and artificial intelligence (AI) have driven an increase in algorithmic decision-making (see Box 1). This has spurred huge investment in the area, such as the recently announced AI Sector Deal “worth almost £1 billion”,3 including “£603 million in newly allocated funding”.4

Box 1: Machine learning and artificial intelligence algorithms

Although there is no single agreed definition of AI,5 there are similarities between many of those being used.6 Broadly, AI is “a set of statistical tools and algorithms that combine to form, in part, intelligent software” enabling “computers to simulate elements of human behaviour such as learning, reasoning and classification”.7

Often confused with AI, ‘machine learning’ algorithms are a narrower subset of this technology. They describe “a family of techniques that allow computers to learn directly from examples, data, and experience, finding rules or patterns that a human programmer did not explicitly specify”.8 In contrast to conventional algorithms, whose instructions are fully coded by a programmer, machine learning algorithms are given only an objective; how they achieve it is left to their own learning.

We use the term ‘machine learning algorithms’ in this report, although we recognise that many use it interchangeably with ‘AI algorithms’.
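The contrast drawn in Box 1 can be sketched in code: in a conventional algorithm the decision rule is written by the programmer, whereas a machine learning algorithm derives its rule from labelled examples. This illustration is ours, not drawn from the evidence we received; the loan-approval scenario, function names and figures are hypothetical.

```python
# Conventional algorithm: the decision rule is hand-written by the programmer.
def approve_loan_coded(income: float) -> bool:
    return income >= 30_000  # threshold fixed in the code

# Machine learning (in miniature): only the objective is given — classify the
# training examples as accurately as possible. The rule (here, an income
# threshold) is found from the data rather than written by hand.
def learn_threshold(examples: list[tuple[float, bool]]) -> float:
    candidates = sorted(income for income, _ in examples)
    best, best_correct = candidates[0], -1
    for t in candidates:
        # Count how many labelled examples this candidate threshold gets right.
        correct = sum((income >= t) == label for income, label in examples)
        if correct > best_correct:
            best_correct, best = correct, t
    return best

# Hypothetical labelled examples: (applicant income, loan approved?)
examples = [(18_000, False), (24_000, False), (35_000, True), (52_000, True)]
threshold = learn_threshold(examples)

def approve_loan_learned(income: float) -> bool:
    return income >= threshold
```

On this toy data the learner settles on a threshold of £35,000 without that figure ever appearing in the program text, which is the sense in which the rule was “not explicitly specified” by a programmer.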

2. The availability of ‘big data’ and increased computational power are allowing algorithms to identify patterns in that data.9 The Royal Society explained that because “machine learning offers the possibility of extending automated decision-making processes, allowing a greater range and depth of decision-making without human input”, the potential uses are vast and the field continues to grow “at an unprecedented rate”.10 Thanks to “cheaper computing power”, as Google put it, “the benefits of algorithmic decision-making will become ever more broadly distributed and […] new use cases will continue to emerge.”11

3. The range of different industries in which machine learning is already being put to use includes finance (including access to loans and insurance), the legal sector, the criminal justice system, education, and healthcare, as well as recruitment decisions and targeting adverts on social media,12 and there are plans for driverless vehicles to be on public roads in the UK in the near future.13 Hetan Shah from the Royal Statistical Society believed that “it is best to understand this as a ubiquitous technology and to think of it almost as a public infrastructure”.14 The Royal Academy of Engineering believed that as more data are generated, an increase in the use of machine learning algorithms will allow organisations to consider a much broader range of datasets or inputs than was previously possible, providing “an opportunity for better decision-making by combining human and machine intelligence in a smart way”.15 Algorithms driven by machine learning bring certain risks as well as benefits. The first fatal collision involving an autonomous car, in March 2018, has placed these technologies under heightened scrutiny16 and led to the suspension of self-driving car tests by Uber.17 There has been recent controversy (currently being examined by the Digital, Culture, Media & Sport Committee’s inquiry into Fake News18) over the use of algorithms by Cambridge Analytica to identify characteristics of Facebook users to help target political campaign messaging19—a test case which reinforces the need for effective data protection regulation (see Chapter 4).

4. Our predecessor Committee undertook work relevant to algorithms. It reported on ‘Big Data’ in 2016, examining opportunities from the proliferation of big data and its associated risks.20 It recommended the creation of what it called a “Council of Data Ethics”.21 The Committee envisaged such a body being responsible for “addressing the growing legal and ethical challenges associated with balancing privacy, anonymisation, security and public benefit”.22 In response the Government agreed to establish such a body, which would “address key ethical challenges for data science and provide technical research and thought leadership on the implications of data science across all sectors”.23 In 2016 the Committee also examined the implications of the then recently approved General Data Protection Regulation (GDPR), which becomes operational in May 2018 and is being transposed into UK law through the Data Protection Bill. Our predecessor Committee’s subsequent report on ‘Robotics and AI’, also in 2016, reiterated its call for a data ethics council and recommended that a standing ‘Commission on AI’ be established at the Alan Turing Institute, focused on “establishing principles to govern the development and application of AI techniques, as well as advising the Government of any regulation required on limits to its progression”.24 The Alan Turing Institute wrote to the Committee later in 2016, welcoming the Committee’s recommendation.25

5. The Government’s response, in 2017, was that work on this front was being conducted by the Royal Society and the British Academy.26 However, while the subsequent report from these institutions, on ‘Data management and use: Governance in the 21st century’, rehearsed important principles around data protection, it did not tackle algorithms more generally.27 In 2017, the Nuffield Foundation announced its intention to establish, in partnership with other bodies, a ‘Convention on Data Ethics and Artificial Intelligence’ to promote and support data practices that are “trustworthy, understandable, challengeable, and accountable”.28

6. In last year’s Industrial Strategy White Paper, the Government announced the establishment of “an industry-led AI Council” supported by “a new government Office for AI”, to “champion research and innovation”; take advantage of advanced data analytics; and “promote greater diversity in the AI workforce”.29 The White Paper also announced that the UK would take:

an international leadership role by investing £9m in a new ‘Centre for Data Ethics & Innovation’. This world-first advisory body will review the existing governance landscape and advise the government on how we can enable and ensure ethical, safe and innovative uses of data including AI.30

Margot James MP, the Minister for Digital and the Creative Industries, told the House that the proposed new Centre “will advise the Government and regulators on how they can strengthen and improve the way that data and AI are governed, as well as supporting the innovative and ethical use of that data”.31 In April 2018, the Government launched its AI Sector Deal and announced that an Interim Centre for Data Ethics & Innovation “will start work on key issues straightaway and its findings will be used to inform the final design and work programme of the permanent Centre, which will be established on a statutory footing in due course. A public consultation on the permanent Centre will be launched soon.”32

7. The Government’s proposed Centre for Data Ethics & Innovation is a welcome initiative. It will occupy a critically important position, alongside the Information Commissioner’s Office, in overseeing the future development of algorithms and the ‘decisions’ they make. The challenge will be to secure a framework which facilitates and encourages innovation but which also maintains vital public trust and confidence.

8. Many of the issues raised in this report will require close monitoring, to ensure that the oversight of machine learning-driven algorithms continues to strike an appropriate and safe balance between recognising the benefits (for healthcare and other public services, for example, and for innovation in the private sector) and the risks (for privacy and consent, data security and any unacceptable impacts on individuals). As we discuss in this report, the Government should ensure that these issues are at the top of the new body’s remit and agenda.

9. The Government plans to put the Centre for Data Ethics & Innovation on a statutory footing. When it does so, it should set it a requirement to report annually to Parliament on the results of its work, to allow us and others to scrutinise its effectiveness. Although the terms of the Government’s proposed consultation on the Centre for Data Ethics & Innovation have yet to be announced, we anticipate our report feeding into that exercise.

Our inquiry

10. Against the background of its earlier inquiries into Big Data and AI, our predecessor Committee also launched an inquiry into algorithms in decision-making. The case for that inquiry was made by Dr Stephanie Mathisen from Sense about Science as part of her evidence to the Committee’s ‘My Science Inquiry’ initiative, which had sought scrutiny suggestions from the public.33 That inquiry was launched in February 2017 but ceased when the 2017 General Election was called. We decided subsequently to continue the inquiry. We received 31 submissions (78 including those from the previous inquiry) and took oral evidence from 21 witnesses, including academics in the field, think-tanks, industry and public sector organisations using algorithms, the Information Commissioner, and the Minister for Digital and the Creative Industries, Margot James MP. In addition, we held a private, introductory seminar on algorithms in October 2017, with speakers from the Alan Turing Institute and from Facebook and SAP, a software developer.34 We would like to thank everyone who contributed to our inquiry. In April 2018 the House of Lords Select Committee on Artificial Intelligence published its report.35 We have taken its conclusions on board where relevant to our inquiry.

11. Dr Stephanie Mathisen, in her call for an algorithms inquiry, raised the question of “the extent to which algorithms can exacerbate or reduce biases” as well as “the need for decisions made by algorithms to be challenged, understood and regulated”.36 Such issues echo our predecessor Committee’s concerns, albeit then expressed in the context of Big Data and AI. It is now more than two years since that Committee called for an oversight body to monitor and address such issues. Our report is intended to identify the themes and challenges that the proposed Centre for Data Ethics & Innovation should address as it begins its work. Specifically, in Chapter 2 we look at how algorithms rely on ‘data sharing’ and their potential for bias and discrimination. In Chapter 3 we explore ways of achieving accountability and transparency for algorithms. In Chapter 4 we consider the regulatory environment, in the light of the Cambridge Analytica case and imminent implementation of the EU General Data Protection Regulation.

1 Q6 [Professor Nick Jennings]

5 Science and Technology Committee, Fifth Report of Session 2016–17, Robotics and artificial intelligence, HC 145, para 4

6 See also: Shane Legg and Marcus Hutter. “A Collection of Definitions of Intelligence”, Frontiers in Artificial Intelligence and Applications, Vol.157 (2007), pp 17–24

7 Transpolitica (ROB0044) para 1.4

9 Q4 [Prof Louise Amoore]

11 Google (ADM0016) para 2.5

12 The Royal Academy of Engineering (ALG0046); Guardian News and Media (ADM0001); Q2

13 Autumn Budget, November 2017, para 4.15

14 Q8

15 The Royal Academy of Engineering (ALG0046) para 8

18 Digital, Culture, Media and Sport Committee, ‘Fake News,’ accessed 4 April 2018

19 “Facebook bans political data company Cambridge Analytica”, Financial Times, 17 March 2018

20 Science and Technology Committee, Fourth Report of Session 2015–16, The big data dilemma HC 468

21 The big data dilemma, HC 468, para 102

22 Ibid.

23 Science and Technology Committee, Fifth Special Report of Session 2015–16, The big data dilemma: Government Response to the Committee’s Fourth Report of Session 2015–16, HC 992, para 57

24 Science and Technology Committee, Fifth Report of Session 2016–17, Robotics and artificial intelligence, HC 145, para 73

25 Letter from Alan Turing Institute on a ‘Commission on Artificial Intelligence’, 21 October 2016

26 Science and Technology Committee, Fifth Special Report of Session 2016–17, Robotics and artificial intelligence: Government Response to the Committee’s Fifth Report of Session 2016–17, HC 896. Also see: The Royal Society, ‘Data management and use: Governance in the 21st century - a British Academy and Royal Society project’, accessed 21 March 2018

27 Joint report by the British Academy and Royal Society, ‘Data management and use: Governance in the 21st century’, June 2017

28 Nuffield Foundation, ‘Data Ethics and Artificial Intelligence’, accessed 21 March 2018

29 Industrial Strategy, November 2017

30 Ibid.

31 Data Protection Bill Committee, 22 March 2018, col 330

33 Science and Technology Committee, Ninth Report of Session 2016–17, Future Programme: ‘My Science Inquiry’, HC 859, para 6

34 SAP, ‘Company Information,’ accessed 13 April 2018

35 House of Lords Select Committee on Artificial Intelligence, Report of Session 2017–19, AI in the UK: ready, willing and able?, HL Paper 100

36 Science and Technology Committee, Ninth Report of Session 2016–17, Future Programme: ‘My Science Inquiry’, HC 859, para 6

Published: 23 May 2018