45. To the extent that algorithms affect people and the use of personal data, there must be accountability for their application, and those affected are entitled to transparency over the results and how they are arrived at.
46. As the Information Commissioner explained, “accountability requires someone to be responsible”, but where responsibility for algorithms should lie can be uncertain. Nesta highlighted the need to identify who is responsible if anything goes wrong “where decisions are made by both algorithms and people”. The problem, as the European Commission has acknowledged, is that “the developer of algorithmic tools may not know their precise future use and implementation” while the individuals who are “implementing the algorithmic tools for applications may, in turn, not fully understand how the algorithmic tools operate”.
47. Dr Pavel Klimov of the Law Society’s Technology and the Law Group was wary of placing full responsibility on the user of an algorithm because strict liability may put “innocent users […] at risk of having to answer for the losses that, on the normal application of legal principles, […] they will not be liable for”. Dr Adrian Weller of the Alan Turing Institute believed that we already “have an existing legal framework for dealing with situations where you need to assign accountability” and considered that “we may want to assign strict liability in certain settings, but it is going to require careful thought to make sure that the right incentives are in place to lead to the best outcome for society.” On the other hand, Professor Alan Winfield, Professor of Robot Ethics at the University of the West of England, told the House of Lords Select Committee on Artificial Intelligence that “we need to treat AI as an engineered system that is held to very high standards of provable safety”, and that:
It is the designers, manufacturers, owners and operators who should be held responsible, in exactly the same way that we attribute responsibility for failure of a motor car, for instance. If there turns out to be a serious problem, generally speaking the responsibility is the manufacturers’.
48. The Royal Academy of Engineering told us that “issues of governance and accountability will need to be considered in the design and development of [algorithmic] systems so that incorrect assumptions about the behaviour of users—or designers—are avoided”. While the submissions to our inquiry agreed that accountability was necessary, the preferred means of achieving it varied. The Upturn and Omidyar Network reported that many ways of achieving accountability which are “fairer, more interpretable, and more auditable” are being explored but that they “remain largely theoretical today”. We examine the scope for some of those potential accountability mechanisms below.
49. Dr Sandra Wachter of the Oxford Internet Institute emphasised that standards are a prerequisite for developing a system of accountability. The Information Commissioner suggested that “codes of conduct may be drawn up by trade associations or bodies representing specific sectors in order to assist the proper application of the GDPR”, before being “approved by the ICO” and “monitored by an ICO-accredited body”. Nesta favoured the establishment of “some general principles” which “guide behaviours, understanding, norms and rules”.
50. There are examples of standards and principles in the field already. The Cabinet Office has published a ‘Data Science Ethics Framework’ for civil servants using data in policy-making. In the private sector, Amazon, DeepMind/Google, Facebook, IBM and Microsoft developed their ‘Partnership on AI’ in 2016 “to address opportunities and challenges with AI to benefit people and society”, with eight tenets which include “working to protect the privacy and security of individuals” and “striving to understand and respect the interests of all parties that may be impacted by AI advances”. The industry-led ‘Asilomar principles’ include principles addressing research funding on the ethics of AI, transparency, privacy, and shared prosperity. The Association for the Advancement of Artificial Intelligence and the Association for Computing Machinery have developed professional codes of ethics for the development of computer science. The Institute of Electrical and Electronics Engineers, a standard-setting body, has begun work to define “ethical concerns and technical standards related to autonomous systems”. MIT Technology Review has put forward five principles for algorithm developers. The House of Lords Committee on AI also suggests an AI code comprising “five overarching principles”, calling for “intelligibility and fairness” in AI’s use “for the common good”, as well as the principle that AI should not be used to “diminish the data rights or privacy of individuals, families or communities”.
51. Despite these efforts, however, the Upturn and Omidyar Network worried that “the use of automated decisions is far outpacing the evolution of frameworks to understand and govern them”. Our Government witnesses told us that they were giving consideration to the ‘Asilomar principles’. While there is currently no unified framework for the private sector, they hoped that the Centre for Data Ethics & Innovation would be able to help the issues “coalesce around one set of standards”.
52. Audit is also key to building trust in algorithms. The Oxford Internet Institute explained how audit can create a “procedural record to […] help data controllers to meet accountability requirements by detecting when decisions harm individuals and groups, by explaining how they occurred, and under what conditions they may occur again”. The Centre for Intelligent Sensing told us that audits could “probe the system with fictitious data generated by sampling from UK demographic data, or by a company’s own anonymised customer data [… and] counterfactually vary the effects”. Auditors could then evaluate the outputs, Google explained, “to provide an indicator of whether it might be producing negative or unfair effects”.
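The probing approach described here can be sketched in a few lines of code. Everything below is illustrative: the model, the sampled distributions and the thresholds are invented stand-ins, not drawn from any evidence we received:

```python
import random

random.seed(0)  # reproducible fictitious data

def hypothetical_credit_model(age, income, postcode_group):
    # Invented stand-in for the system under audit; a real audit
    # would query the deployed model instead.
    return income > 21000 or (age > 30 and postcode_group == "A")

def audit_approval_rates(model, n=10000):
    """Probe the model with fictitious applicants sampled from assumed
    demographic distributions, then compare approval rates by group."""
    rates = {}
    for group in ("A", "B"):
        approved = 0
        for _ in range(n):
            age = random.randint(18, 90)
            income = random.gauss(26000, 8000)
            if model(age, income, group):
                approved += 1
        rates[group] = approved / n
    return rates

rates = audit_approval_rates(hypothetical_credit_model)
# A large gap between the groups' approval rates is the kind of
# "indicator of negative or unfair effects" an auditor would flag.
```

Counterfactually varying a single attribute, as the Centre for Intelligent Sensing suggests, amounts to re-running the probe with only `postcode_group` changed.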
53. A challenge with machine learning algorithms, highlighted by the Institute of Mathematics and its Applications, is that “there is no guarantee that an online algorithm will remain unbiased or relevant”. When Google Flu Trends was launched in 2008, its use of search queries to predict the spread of flu outbreaks closely matched the surveillance data from the US Centers for Disease Control, but it was reported that it then ran into difficulties when media coverage prompted flu-related searches by people who were not ill. The Institute of Mathematics believed that wholly online algorithms would need their data “updated and fully revalidated”. The Information Commissioner called for “data scientists to find innovative ways of building in auditability, to allow an on-going internal review of algorithmic behaviour”.
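A minimal sketch of such on-going review, with invented figures throughout, might compare the live rate of a key input against the rate seen when the model was last validated:

```python
def drift_alert(baseline_rate, live_counts, tolerance=0.5):
    """Flag the model for revalidation when the live rate of a key
    input departs from the validated baseline by more than `tolerance`
    as a relative change. The threshold here is arbitrary."""
    live_rate = live_counts["flu_related"] / live_counts["total"]
    relative_change = abs(live_rate - baseline_rate) / baseline_rate
    return relative_change > tolerance

# Invented baseline: 2% of searches were flu-related at validation time.
baseline = 0.02

quiet_week = {"flu_related": 210, "total": 10000}   # ~2.1%: no alert
media_spike = {"flu_related": 700, "total": 10000}  # 7%: revalidate
```

A media-driven surge of searches by people who are not ill, of the kind that reportedly confounded Google Flu Trends, would trip the alert and prompt the full revalidation the Institute of Mathematics calls for.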
54. Professor Daniel Neyland of Goldsmiths, University of London believed that certificates and third-party ‘seals’ for algorithms that are audited could help address “the contemporary limitations of accountability and transparency in algorithmic systems”, particularly if such seals are publicised. The Alan Turing Institute told us that certification or seals could be used to signify “algorithms whose design, development, and/or deployment have produced fair, reasonable, and non-discriminatory outcomes”. The GDPR provides for certification of algorithms in terms of their privacy protections, and the Information Commissioner believed that “seals can help to inform people about the data protection compliance of a particular product or service”. The ICO was “currently looking into how certification schemes can be set up and managed in practice”.
55. Ethics boards can be used “to apply ethical principles and assess difficult issues that can arise in the creation and use of algorithms in decision-making”, the Information Commissioner’s Office told us, and can aid transparency by publishing their deliberations so that “the development of the algorithm is openly documented”. TechUK cautioned, however, that ethics boards “could be seen as a burden for UK SMEs that stand to benefit the most from automated decision-making technologies”.
56. Setting principles and ‘codes’, establishing audits of algorithms, introducing certification of algorithms, and charging ethics boards with oversight of algorithmic decisions should all play their part in identifying and tackling bias in algorithms. With the growing proliferation of algorithms, such initiatives are urgently needed. The Government should immediately task the Centre for Data Ethics & Innovation to evaluate these various tools and advise on which to prioritise and on how they should be embedded in the private sector as well as in government bodies that share their data with private sector developers. Given the international nature of digital innovation, the Centre should also engage with like-minded organisations in other comparable jurisdictions in order to develop and share best practice.
57. Algorithm accountability is often framed in terms of openness and transparency, and the ability to challenge and scrutinise the decisions reached using algorithms. Although full details are not yet available of the recent NHS breast screening programme failure, where women aged between 68 and 71 were not sent screening appointments, it is possible that if the flaw was a relatively straightforward “coding error”, as the Health Secretary put it, then making the algorithm’s code more widely available might have allowed the error to be spotted much sooner. Transparency would be more of a challenge, however, where the algorithm is driven by machine learning rather than fixed computer coding. Dr Pavel Klimov of the Law Society’s Technology and the Law Group explained that, in a machine learning environment, the problem with such algorithms is that “humans may no longer be in control of what decision is taken, and may not even know or understand why a wrong decision has been taken, because we are losing sight of the transparency of the process from the beginning to the end”. Rebecca MacKinnon from think-tank New America has warned that “algorithms driven by machine learning quickly become opaque even to their creators, who no longer understand the logic being followed”. Transparency is important, and particularly so when critical consequences are at stake. As the Upturn and Omidyar Network have put it, where “governments use algorithms to screen immigrants and allocate social services, it is vital that we know how to interrogate and hold these systems accountable”. Liberty stressed the importance of transparency for those algorithmic decisions which “engage the rights and liberties of individuals”.
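Purely by way of illustration of how a simple coding error can silently exclude an age band (the actual cause of the screening failure has not been published, and the eligibility rule below is invented for the sketch):

```python
def invited_intended(age):
    # Intended policy for the sketch: invite women aged 50 up to
    # their 71st birthday.
    return 50 <= age < 71

def invited_buggy(age):
    # A single wrong constant quietly excludes the oldest cohort.
    return 50 <= age < 68

missed = [age for age in range(50, 75)
          if invited_intended(age) and not invited_buggy(age)]
# missed == [68, 69, 70]: a gap of exactly the kind that wider code
# availability or independent audit could surface sooner.
```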
58. Transparency, Nesta told us, could lead to greater acceptability of algorithmic decisions. But transparency can take different forms—how an algorithmic decision is arrived at, or visibility of the workings inside the ‘black box’. The Human Rights, Big Data and Technology Project suggested that transparency needs “to be considered at each stage in the algorithmic decision-making process, and in the process as a whole”. Several submissions indicated that those deploying algorithms should be able to explain their decisions in terms that the individuals affected can understand.
59. Transparency inside the ‘black box’ may be of practical use only to some because, as Dr M-H. Carolyn Nguyen of Microsoft put it, it “takes a lot of data scientists to understand exactly what is going on”. And even then, Dr Janet Bastiman told us, “given the complex nature of these decision-making algorithms, even if the full structure, weighting, and training data were published for an end-user, it is unlikely that they would be able to understand and challenge the output from the algorithm”. Where algorithms are based on machine learning, Professor Louise Amoore of Durham University wondered whether full transparency was possible “even to those who have designed and written them”. Even if such difficulties could be overcome, University College London warned that “a central tension with making algorithms completely open is that many are trained on personal data, and some of this private data might be discoverable if we release the algorithmic models”.
60. Hetan Shah of the Royal Statistical Society nevertheless highlighted the recent attempts by New York City Council to require the code for all city agencies’ algorithms to be published. Professor Nick Jennings of the Royal Academy of Engineering, however, drew attention to the issue of ‘adversarial machine learning’ where individuals “know the way a machine-learning algorithm works and so you try to dupe it to believe something and come to a particular set of conclusions; then you can exploit the fact that you know that it has been mis-trained”. When Google originally published its PageRank algorithm nearly 20 years ago, for example, spammers gamed the search algorithm by paying each other for links and so undermined the algorithm’s effectiveness.
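The PageRank episode can be reproduced in miniature. The sketch below runs a textbook power-iteration PageRank over a toy web graph, then adds a ‘link farm’ pointing at a spam page; the spam page’s score rises even though no genuine endorsement exists:

```python
def pagerank(links, damping=0.85, iters=100):
    """Textbook power-iteration PageRank; `links` maps each page to
    its outbound links."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for p, outs in links.items():
            targets = outs or pages  # dangling pages spread rank evenly
            for q in targets:
                new[q] += damping * rank[p] / len(targets)
        rank = new
    return rank

honest_web = {"a": ["b"], "b": ["a"], "spam": ["a"]}
gamed_web = dict(honest_web)
gamed_web.update({f"farm{i}": ["spam"] for i in range(5)})

before = pagerank(honest_web)["spam"]
after = pagerank(gamed_web)["spam"]
# `after` exceeds `before`: reciprocal or paid links inflate the spam
# page's score, which is how early search rankings were gamed.
```

Publishing an algorithm thus gives both scrutineers and adversaries the same leverage, which is the tension Professor Jennings describes.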
61. The Alan Turing Institute told us that “two of the biggest hurdles to a ‘right of explanation’ (paragraph 62) are trade secrets and copyright concerns”. While patents have traditionally been used to balance the interests of society and inventors, academics at Oxford and Nottingham universities questioned how that balance might be struck in the age of machine learning. Microsoft told us that it wanted the Government to “broaden the UK’s copyright exception on text and data mining, bringing it into line with that of the USA, Japan and Canada, and ensuring that the UK is well placed to be at the forefront of data analytics”. Dr Janet Bastiman worried that “Since the intellectual property in machine learned systems is encapsulated in the structure, weighting, and input data that comprise the final algorithm, any legislation requiring clear transparency of the algorithm itself could have negative impact on the commercial viability of private sector institutions using this technology.” As Future Advocacy has recently explained, this may result in designers opting for less accurate but easier-to-explain models—a particular concern where healthcare algorithms are affected. Others cautioned against “letting commercial interests supersede the rights of people to obtain information about themselves”. The Upturn and Omidyar Network pointed out that France is the only country that has explicitly required disclosure of the source code of government-developed algorithms, under its open record laws.
62. While many of our submissions advocated a ‘right to explanation’, the Royal Statistical Society did not think that wider “standards of algorithmic transparency can be legislatively set, as the specifics of technology, algorithms and their application vary so much”. The think-tank Projects by IF emphasised that “transparency is more useful with context” and, comparing industries, the Royal Statistical Society found “important differences in the level of pressure to explain data science and statistical approaches”. Projects by IF concluded that “how a service explains its workings to users will be different to how it explains its workings to auditors.”
63. We heard about various ways, some in use and others in development, of facilitating a ‘right to explanation’. Hetan Shah saw scope in ‘counterfactual explanations’, an approach that Dr Wachter told us avoids having to open the black box. She gave as an example where a loan application is rejected and the algorithm “would tell you what would have needed to be different in order to get the loan and give you some grounds to contest it […] This might be that if your income were £15,000 higher you would have been accepted.” Google thought that better ‘data visualisation tools’ could also help, showing “key metrics relating to an algorithm’s functioning without going into the full complexity, akin to the way that car dashboards have gauges for speed, oil pressure, and so on”. TechUK similarly highlighted ‘interactive visualisation systems’. In 2017 the US Department of Defense announced funding for thirteen projects examining different approaches to making algorithms more transparent, including through visualisation tools. Machine learning might in the future also be used itself to explain other algorithms.
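Dr Wachter’s income example can be sketched directly. The decision rule and search step below are invented; a real counterfactual search would range over many features of a trained model:

```python
def hypothetical_loan_model(income, debt):
    # Invented decision rule standing in for a trained model.
    return income - 2 * debt >= 30000

def income_counterfactual(model, income, debt, step=500, limit=100000):
    """Find the smallest income increase (to the nearest `step`) that
    flips a rejection into an approval. The search needs access only
    to the model's decisions, not to its internals."""
    if model(income, debt):
        return 0  # already approved; nothing to explain
    extra = step
    while extra <= limit:
        if model(income + extra, debt):
            return extra
        extra += step
    return None  # no income change alone would flip the decision

needed = income_counterfactual(hypothetical_loan_model, 25000, 5000)
# needed == 15000: "if your income were £15,000 higher you would
# have been accepted", in the terms of Dr Wachter's example.
```

Because the search treats the model as a black box, such explanations can coexist with the trade-secret concerns discussed at paragraph 61.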
64. Whatever form transparency takes, Projects by IF emphasised that “services based on the outcome of an algorithm need to empower users to raise a complaint” if a decision is in dispute. Oxford Internet Institute believed that “the rapid spread of automated decision-making into sensitive areas of life, such as health insurance, credit scoring or recruiting, demands that we do better in allowing people to understand how their lives are being shaped by algorithms”. IBM thought it was important that explanations uncover how algorithms “interpreted their input” as well as “why they recommended a particular output”.
65. Oxford Internet Institute highlighted that the ‘right to explanation’ is omitted from the GDPR’s Article 22 (paragraph 73), and appears only in a non-legally-binding recital, which serves as guidance. University College London wanted a meaningful ‘right to explanation’ strengthened, to include ‘semi-automated’ as well as the ‘automated’ decisions that are covered by the GDPR. A ‘right to information’ in the Data Protection Bill gives the data subject the right “to obtain from the [data] controller, on request, knowledge of the reasoning underlying the processing” of any decision, but only in connection with intelligence services data processing. The Bill has no wider ‘right to explanation’ for the UK, nor one that could be applied to all decisions rather than just to those in the intelligence field. In France, digital-economy minister Mounir Mahjoubi recently said that its government should not use any algorithm whose decisions cannot be explained.
66. Transparency must be a key underpinning for algorithm accountability. There is a debate about whether that transparency should involve sharing the workings of the algorithm ‘black box’ with those affected by the algorithm and the individuals whose data have been used, or whether (because such information will not be widely understood) an ‘explanation’ should be provided instead. Where disclosure of the inner workings of privately-developed public-service algorithms would present their developers with commercial or personal-data confidentiality issues, the Government and the Centre for Data Ethics & Innovation should explore with the industries involved the scope for using the proposed ‘data trust’ model to make that data available in suitably de-sensitised format. While we acknowledge the practical difficulties with sharing an ‘explanation’ in an understandable form, the Government’s default position should be that explanations of the way algorithms work are published when the algorithms in question affect the rights and liberties of individuals. That will make it easier for the decisions produced by algorithms also to be explained. The Centre for Data Ethics & Innovation should examine how explanations of the way algorithms work can be required to be of sufficient quality for a reasonable person to be able to challenge the ‘decision’ of the algorithm—an issue we explore further in Chapter 4. Where algorithms might significantly adversely affect the public or their rights, we believe that the answer is a combination of explanation and as much transparency as possible.
67. The ‘right to explanation’ is a key part of achieving accountability. We note that the Government has not gone beyond the GDPR’s non-binding provisions, and that individuals are not currently able formally to challenge the results of all algorithmic decisions or, where appropriate, to seek redress for their impacts. The scope for such safeguards should be considered by the Centre for Data Ethics & Innovation and the ICO in the review of the operation of the GDPR that we advocate in Chapter 4.
160 Nesta ()
161 Council of Europe, The human rights dimensions of automated data processing techniques (in particular algorithms) and possible regulatory implications (October 2017), p 37
164 Oral evidence taken before the House of Lords Artificial Intelligence Committee on 17 October 2017, HL (2017–18) 100, [Professor Alan Winfield]
165 Royal Academy of Engineering ()
166 Upturn and Omidyar Network, Public Scrutiny of Automated Decisions: Early Lessons and Emerging Methods, p 8
168 Information Commissioner’s Office ()
169 Nesta ()
171 Microsoft () para 13
172 TechUK () para 72
174 TechUK () para 74
175 IEEE, ‘IEEE Standards Association Introduces Global Initiative for Ethical Considerations in the Design of Autonomous Systems,’ 5 April 2016; Upturn and Omidyar Network, Public Scrutiny of Automated Decisions: Early Lessons and Emerging Methods, p 7
176 TechUK () para 73
177 These principles are: 1) Artificial intelligence should be developed for the common good and benefit of humanity. 2) Artificial intelligence should operate on principles of intelligibility and fairness. 3) Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities. 4) All citizens have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence. 5) The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.
179 Upturn and Omidyar Network, Public Scrutiny of Automated Decisions: Early Lessons and Emerging Methods, p 7
180 Q375 [Oliver Buckley]
181 TechUK () para 4
182 Oxford Internet Institute ()
183 Centre for Intelligent Sensing ()
184 Google () para 3.17
185 Institute of Mathematics and its Applications () para 38
187 Institute of Mathematics and its Applications () para 38
188 Information Commissioner’s Office ()
189 Professor Daniel Neyland, Goldsmiths ()
190 Alan Turing Institute ()
191 Information Commissioner’s Office ()
192 Information Commissioner’s Office ()
193 TechUK () para 76
194 Daniel Neyland, “Bearing accountable witness to the ethical algorithmic system”, Science, Technology and Human Values, Vol 41 (2016), pp 50–76
195 HC Deb, 2 May 2018,
197 Mark Gardiner () but this is referencing a quote made by Rebecca MacKinnon, director of the Ranking Digital Rights project at New America
198 Upturn and Omidyar Network, Public Scrutiny of Automated Decisions: Early Lessons and Emerging Methods, p 3
199 Liberty ()
200 Nesta ()
201 Human Rights, Big Data and Technology Project () para 25
202 Projects by IF ()
204 Dr Janet Bastiman ()
205 Professor Louise Amoore, Durham University, () para 2.3
206 University College London ()
207 Q15 (subsequently amended to the creation of a task force to provide recommendations on how automated decision systems may be shared by the public)
209 Peter A. Hamilton, Google-bombing: Manipulating the PageRank Algorithm, p 3
210 Alan Turing Institute ()
211 Horizon Digital Economy Research Institute, University of Nottingham, and the Human Centred Computing group, University of Oxford () para 19
212 Microsoft () para 14; The current copyright exception permits researchers with legal access to a copyrighted work to make copies “for the purpose of computational analysis”, allowing the use of “automated analytical techniques to analyse text and data for patterns, trends and other useful information”. However, this exception exists only for non-commercial research, preventing companies such as Microsoft from commercialising their algorithms (Intellectual Property Office, Guidance, Exceptions to Copyright (November 2014))
213 Dr Janet Bastiman ()
214 Future Advocacy & the Wellcome Trust, Ethical, Social, and Political Challenges of Artificial Intelligence in Health (April 2018), p 32
215 Horizon Digital Economy Research Institute, University of Nottingham, and the Human Centred Computing group, University of Oxford () para 18
216 Upturn and Omidyar Network, Public Scrutiny of Automated Decisions: Early Lessons and Emerging Methods, p 24
217 Aire (ALG0066); University College London () para 28; Alan Turing Institute (); The Royal Statistical Society () para 2.5; et al
218 The Royal Statistical Society ()
219 Projects by IF ()
220 The Royal Statistical Society ()
221 Projects by IF ()
225 Google () para 3.17
226 TechUK () para 81
227 TechUK (); “The U.S. Military Wants Its Autonomous Machines to Explain Themselves”, MIT Technology Review, 14 March 2017
228 A recent experiment aimed at explaining an AI system involved running another AI system in parallel, which monitored patterns in people narrating their experiences of playing a computer game. These patterns in the human explanations were learnt by the parallel AI system, and then applied to provide their own explanations. See Osbert Bastani, Carolyn Kim and Hamsa Bastani, “Interpretability via Model Extraction”; and “The unexamined mind”, The Economist, 17 February 2018.
229 Projects by IF () para 6.3
230 Oxford Internet Institute ()
231 IBM () para 5
232 Oxford Internet Institute ()
233 University College London () para 28
234 “Humans may not always grasp why AIs act. Don’t panic”, The Economist, 15 February 2018
Published: 23 May 2018