Algorithms in decision-making

4 The Centre for Data Ethics & Innovation, research and the regulatory environment

68. The Government, after some false starts, has now made a commitment to establish an oversight and ethics body—the planned ‘Centre for Data Ethics & Innovation’ (paragraph 6). Many submissions to our inquiry identified a need for continuing research, which might be a focus for the work of the new body. The Royal Society, like our predecessor Committee, argued that “progress in some areas of machine learning research will impact directly on the social acceptability of machine learning applications”. It recommended that research funding bodies encourage studies into “algorithm interpretability, robustness, privacy, fairness, inference of causality, human-machine interactions, and security”.235 University College London advised that the Government should invest in “interdisciplinary research around how to achieve meaningful algorithmic transparency and accountability from social and technical perspectives”.236 The think tank Future Advocacy wanted more Government research on transparency and accountability and supported more ‘open data’ initiatives (paragraph 24).237 TechUK suggested creating what it called a ‘UK Algorithmic Transparency Challenge’ to “encourage UK businesses and academia to come up with innovative ways to increase the transparency of algorithms”.238 In April, the Government announced plans to spend £11 million on research projects “to better understand the ethical and security implications of data sharing and privacy breaches”.239

69. We welcome the announcement made in the AI Sector Deal to invest in research tackling the ethical implications of AI. The Government should liaise with the Centre for Data Ethics & Innovation and with UK Research & Innovation to encourage sufficient UKRI-funded research on how algorithms can realise their potential benefits while mitigating their risks, and on the tools needed to make them more widely accepted, including tools to address bias and measures for accountability and transparency (as we discussed in Chapters 2 and 3).

70. Our inquiry has also identified other key areas which, we believe, should be prominent in the Centre’s early work. It should, as we described in Chapter 2, examine the biases built into algorithms—to identify, for example, how better ‘training data’ can be used; how unjustified correlations can be avoided where more meaningful causal relationships are discernible; and how algorithm development teams can be established that include a sufficiently wide cross-section of society, or of the groups that might be affected by an algorithm. The new body should also, we recommend, evaluate accountability tools—principles and ‘codes’, audits of algorithms, certification of algorithm developers, and charging ethics boards with oversight of algorithmic decisions—and advise on how they should be embedded in the private sector as well as in government bodies that share their data with private sector developers (Chapter 3).

71. There are also important and urgent tasks that the Centre for Data Ethics & Innovation should address around the regulatory environment for algorithms; this work requires priority because of the Cambridge Analytica case, because of uncertainty about how the General Data Protection Regulation (GDPR) will address the issues around the use of algorithms, and because of the widespread and rapidly growing application of algorithms across the economy.

72. Cambridge Analytica allegedly harvested personal data from Facebook accounts without consent.240 Through a personality quiz app, set up by an academic at the University of Cambridge, 270,000 Facebook users purportedly gave their consent to their data being used. However, the app also took the personal data of those users’ ‘friends’ and contacts—in total at least 87 million individuals. It has been reported that firms linked to Cambridge Analytica used these data to target campaign messages and sought to influence voters in the 2016 EU Referendum, as well as elections in the US and elsewhere.241 The Information Commissioner242 and the Electoral Commission243 have been investigating the Cambridge Analytica case.

‘Automated’ decisions

73. The GDPR will have a bearing on the way algorithms are developed and used, because they involve the processing of data. Article 22 of the GDPR prohibits many uses of data processing (including for algorithms) where that processing is ‘automated’ and the ‘data subject’ objects. It stipulates that:

The data subject shall have the right not to be subject to a decision based solely on automated processing, including ‘profiling’, which produces legal effects concerning him or her or similarly significantly affects him or her.244

The ICO explained that “unless it’s (i) a trivial decision, (ii) necessary for a contract or (iii) authorised by law, organisations will need to obtain explicit consent to be able to use algorithms in decision-making”. They believed that the GDPR provides “a powerful right which gives people greater control over automated decisions made about them”.245 The Minister saw this as a positive step, explaining that:

People must be informed if decisions are going to be made by algorithms rather than human management. Companies must make them aware of that.246

The Data Protection Bill provides a right to be informed, requiring data controllers to “notify the data subject in writing that a [significant] decision has been taken based solely on automated processing”. This is to be done “as soon as reasonably practicable”. If the data subject then exercises their right to opt out, the Bill also allows the individual to request either that the decision be reconsidered or that a “new decision that is not based solely on automated processing” be taken. However, this is limited to decisions ‘required or authorised by law’ and would be unavailable for the vast majority of decisions.

74. Dr Sandra Wachter of the Oxford Internet Institute told us that what constituted a ‘significant’ effect under the GDPR was “a very complicated and unanswered question”.247 Guidance from the relevant GDPR working party, an independent European advisory body on data protection and privacy, explained that:

The decision must have the potential to significantly influence the circumstances, behaviour or choices of the individuals concerned. At its most extreme, the decision may lead to the exclusion or discrimination of individuals.248

Ultimately, Dr Wachter told us, “it will depend on the individual circumstances of the individual”.249

75. Silkie Carlo, then from Liberty, had concerns about the law-enforcement derogations, which she believed should not apply to decisions affecting human rights: “The GDPR allows member states to draw their own exemption. Our exemptions have been applied in a very broad way for law enforcement processing and intelligence service processing in particular. That is concerning.”250 Others have criticised the fact that it is the data subject themselves who will have to “discern and assess the potential negative outcomes of an automated decision” when the “algorithms underlying these decisions are often complex and operate on a random-group level”.251

76. The restriction of Article 22 of the GDPR to decisions ‘based solely on automated processing’ concerned the Institute of Mathematics and its Applications. They highlighted that many algorithms may “in principle only be advisory”, and therefore not ‘automated’, “but the human beings using it may in practice just rubber-stamp its ‘advice’, so in practice it’s determinative”.252 University College London was similarly concerned that decisions may be effectively ‘automated’ because of “human over-reliance on machines or the perception of them as objective and/or neutral”, while the protections of Article 22 would “fall away”.253 Professor Kate Bowers of the UCL Jill Dando Institute worried similarly that “people could just pay lip service to the fact that there is a human decision” involved in algorithmic processes.254

77. The GDPR working party on Article 22 recommended that “unless there is ‘meaningful human input’, a decision should still be considered ‘solely’ automated. This requires having individuals in-the-loop who a) regularly change decisions; and b) have the authority and competence organisationally to do so without being penalised.”255 Durham Constabulary told us that its HART algorithm (paragraph 21) only “supports decision-making for the custody officer”256 and that a human always remains in the loop. It was running a test of the algorithm’s reliability by comparing its results against police officers making unaided decisions in parallel.257

78. The sort of algorithm used in the Cambridge Analytica case would be effectively prohibited when the GDPR’s ‘automated’ processing provisions become effective in May 2018 if, as has been reported, the algorithm was used to target political campaign messages without human intervention.

Consent

79. Even if future use of the Cambridge Analytica algorithm would not be regarded as ‘automated’, and therefore a potentially allowable use of data, it would have to satisfy the requirements of the GDPR on consent.

80. The GDPR seeks to embed ‘privacy by design’ by addressing data protection when designing new data-use systems.258 The ICO told us that “in data protection terms, transparency means that people should be given some basic information about the use of their personal data, such as the purpose of its use and the identity of the organisation using it.”259 The GDPR addresses online ‘terms and conditions’ clauses which are often used to get consent. As our predecessor Committee explained, the way these are used has significant shortcomings.260 In our current inquiry too, Dr Sandra Wachter of the Oxford Internet Institute pointed out that few people would go through “hundreds of pages” of terms and conditions, and she instead preferred to see an “understandable overview of what is going to happen to your data while you are visiting a service”.261 The Minister, Margot James, also acknowledged the importance of “active consent”, and emphasised the introduction of opt-outs in the GDPR as a mechanism for achieving this.262 Our predecessor Committee highlighted the potential of “simple and layered privacy notices to empower the consumer to decide exactly how far they are willing to trust each data-holder they engage with”.263 In our inquiry, Dr Pavel Klimov suggested that such ‘layered notices’ could be helpful, giving certain critical information up-front and then allowing the user to click further if they want to learn more, including policies on sharing data with third-parties.264

81. Algorithmic technology might itself be used in the future to provide transparency and consent, by notifying data subjects when their data are used in other algorithms. DeepMind told us that they were working on a ‘verifiable data audit’ project using digital ledgers (‘blockchains’) to give people cryptographic proof that their data are being used in particular ways.265
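
The general mechanism behind such a ‘verifiable data audit’ can be illustrated with a tamper-evident, append-only log in which each record of a data use includes a cryptographic hash of the previous record, so that any retrospective edit becomes detectable. The sketch below is purely illustrative: the class, field names and workflow are assumptions made for this example, not DeepMind’s actual design.

```python
import hashlib
import json
from datetime import datetime, timezone

# Illustrative sketch only: a tamper-evident, append-only log of data uses,
# in the spirit of a 'verifiable data audit'. All names and fields are
# assumptions for illustration, not DeepMind's actual design.
class AuditLog:
    def __init__(self):
        self.entries = []           # each entry carries the hash of the previous one
        self._last_hash = "0" * 64  # genesis value for the first entry

    def record_use(self, data_subject_id, purpose, processor):
        """Append a record of one use of a data subject's data."""
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "data_subject": data_subject_id,
            "purpose": purpose,
            "processor": processor,
            "prev_hash": self._last_hash,
        }
        # Hash the entry (which includes the previous hash) to chain the log.
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)
        self._last_hash = entry["hash"]
        return entry["hash"]

    def verify(self):
        """Recompute the chain; any retrospective edit breaks verification."""
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if entry["prev_hash"] != prev or entry["hash"] != expected:
                return False
            prev = entry["hash"]
        return True

log = AuditLog()
log.record_use("subject-42", "model training", "research-pipeline")
log.record_use("subject-42", "clinical alerting", "deployment-service")
assert log.verify()  # True while the log is intact; False if any entry is altered
```

A data subject, or an auditor acting on their behalf, could then be given the chain of hashes and check that the published record of uses of their data has not been quietly rewritten after the fact.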

82. In the meantime, privacy and consent remain critical issues for algorithms—just as they are (as our predecessor Committee found) for compiling profiles of people from diverse ‘big data’ datasets—because personal data are not always sufficiently anonymised. As our previous Committee highlighted, the risk with ‘big data’ analytics has been that data anonymisation can be undone as datasets are brought together.266 Such risks apply equally to algorithms that look for patterns across datasets, although Dr M-H. Carolyn Nguyen of Microsoft argued that anonymisation could still play a part in deterring privacy abuse provided it is backed up by privacy laws.267
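
The re-identification risk arises because datasets that are individually ‘anonymised’ often share quasi-identifiers—such as postcode, year of birth and sex—which uniquely pin down individuals once the datasets are joined. The toy example below uses entirely synthetic data and illustrative names to show how a simple join on such fields can re-attach names to supposedly anonymous records.

```python
# Toy illustration with entirely synthetic data: two datasets that are each
# name-free or 'anonymised' can be linked on shared quasi-identifiers
# to re-identify individuals.

# Dataset A: 'anonymised' health records (names removed, quasi-identifiers kept)
health_records = [
    {"postcode": "DH1 3LE", "birth_year": 1985, "sex": "F", "diagnosis": "asthma"},
    {"postcode": "SW1A 1AA", "birth_year": 1972, "sex": "M", "diagnosis": "diabetes"},
]

# Dataset B: a public or commercial dataset that still carries names
marketing_list = [
    {"name": "A. Example", "postcode": "DH1 3LE", "birth_year": 1985, "sex": "F"},
    {"name": "B. Sample", "postcode": "SW1A 1AA", "birth_year": 1972, "sex": "M"},
]

QUASI_IDENTIFIERS = ("postcode", "birth_year", "sex")

def link(anonymised, named, keys=QUASI_IDENTIFIERS):
    """Join two datasets on shared quasi-identifiers."""
    index = {tuple(r[k] for k in keys): r for r in named}
    matches = []
    for record in anonymised:
        hit = index.get(tuple(record[k] for k in keys))
        if hit is not None:
            matches.append({**record, "name": hit["name"]})
    return matches

# Each supposedly anonymous health record is now attached to a name again.
for reidentified in link(health_records, marketing_list):
    print(reidentified)
```

In practice the linkage can be fuzzier (approximate matching across several fields), which is why anonymisation on its own, without the legal backstop Dr Nguyen described, offers only limited protection.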

83. Cambridge Analytica’s use of personal data, if it took place in the UK as has been alleged, would not have met the requirements for consent, even under the existing (pre-GDPR) regime. While it harvested the personal data of at least 87 million users, only the 270,000 individuals who were participants in the initial ‘personality survey’ were asked for consent.268 The provisions of the GDPR will apply where the ‘data processor’ or the data processing itself is in EU countries (or in the UK through the Data Protection Bill), or if individuals (‘data subjects’) are in the EU/UK.

84. In situations where consent is obtained, there is a problem of the power imbalance between the individual and the organisation seeking consent. According to the Information Commissioner, “we are so invested” in digital services that “we become dependent on a service that we can’t always extricate ourselves from”.269 This is especially true where, through acquisitions, companies restrict alternative services, as the Information Commissioner went on to say. The £11 million of research announced in the AI Sector Deal (paragraph 6) is intended to better understand the “ethical […] implications of data sharing”.270

Data protection impact assessments

85. To help identify bias in data-driven decisions, which we examined in Chapter 2, the GDPR requires ‘data protection impact assessments’. Article 35 of the GDPR, reflected in the Data Protection Bill, states:

Where a type of [data] processing […] is likely to result in a high risk to the rights and freedoms of natural persons, the [data] controller shall, prior to the processing, carry out an assessment of the impact of the envisaged processing operations on the protection of personal data.271

Elizabeth Denham, the Information Commissioner, expected such impact assessments to be produced “when they are building AI or other technological systems that could have an impact on individuals”.272 According to the GDPR working party, these impact assessments offer “a process for building and demonstrating compliance”,273 and the Information Commissioner hoped that they would “force the organisation to think through carefully what data are going into an AI system, how decisions are going to be made and what the output is.”274 Because of “commercial sensitivities”, however, she would not be “promoting the need to publish” the assessments.275

86. It is arguable whether those Facebook users who completed the personality questionnaire that Cambridge Analytica subsequently used to target campaigning material gave their full, informed consent. It is clear, however, that the millions of people receiving material because their data were included in the algorithm as ‘friends’ or contacts of those completing the questionnaire did not give their consent. Recently, in dealing with the Cambridge Analytica controversy, Facebook has begun to provide its customers with an explicit opportunity to allow or disallow apps that use their data. Every 90 days, users will be prompted to go through the Facebook Login process, where they can specify their data permissions.276 Whether Facebook or Cambridge Analytica would have undertaken a ‘data protection impact assessment’ to meet the requirements of the GDPR is impossible to know. It appears to us, however, that had they completed such an assessment they would have concluded that the algorithm would have been ‘likely to result in a high risk to the rights and freedoms’ of the individuals affected.

The Information Commissioner’s powers

87. Enforcement of key features of the GDPR will fall on the shoulders of the Information Commissioner.277 Professor Louise Amoore of Durham University expressed misgivings that “the ICO was only able to ask questions about how the data was being used and not the form of analysis that was taking place”.278 In a December 2017 speech, however, the Information Commissioner said that the ICO’s duties were “wide and comprehensive, and not merely a complaints-based regulator. […] My office is here to ensure fairness, transparency and accountability in the use of personal data on behalf of people in the UK”.279 The GDPR will provide the Information Commissioner with greater powers, including under Article 58 to undertake data protection audits, as well as the right to obtain all personal data necessary for the ICO’s investigations and to secure access to any premises required.280 The GDPR will also give the ICO the power to ban data processing operations, and to issue much more significant financial penalties than under the existing regulations.281

88. Under the GDPR, however, the ICO cannot compel companies to make their data available. In March 2018 the Information Commissioner issued a “Demand for Access to records and data in the hands of Cambridge Analytica”, but had to secure a High Court warrant to gain access to the data when the company did not comply.282 The delay in the ICO’s access led some to question the powers of the Information Commissioner to quickly obtain ‘digital search warrants’.283 In her submission to the Data Protection Bill Committee in March 2018, the Information Commissioner wrote:

Under the current Data Protection Act (DPA 1998), non-compliance with an Information Notice is a criminal offence, punishable by a fine in the Magistrate’s Court. However, the court cannot compel compliance with the Information Notice or issue a disclosure order. This means, that although the data controller can receive a criminal sanction for non-compliance, the Commissioner is still unable to obtain the information she needs for her investigation.284

She complained that the inability to compel compliance with an Information Notice meant that investigations have “no guarantee of success” and “may affect outcomes as it proves impossible to follow essential lines of enquiry”. She contrasted this with her previous role as the Information and Privacy Commissioner for British Columbia, where she had a power “to compel the disclosure of documents, records and testimony from data controllers and individuals, and failure to do so was a contempt of court”. As a result, she called for the Data Protection Bill to “provide a mechanism to require the disclosure of requested information under her Information Notice powers”. In her opinion, “Failure to do this will have an adverse effect on her investigatory and enforcement powers.”285 Addressing these challenges, the Government subsequently amended the Bill to increase the Information Commissioner’s powers, enabling the courts to compel compliance with Information Orders and making it an offence to “block” or otherwise withhold the required information.286

89. Algorithms, and the ways in which data are used, have developed considerably since the Information Commissioner’s Office was set up. To accommodate this new landscape, Hetan Shah called “for Government to sort out its funding model”.287 The Government has since “announced a new charging structure” requiring large organisations to pay a higher fee, reflecting the higher risk they represent.288

90. The provisions of the General Data Protection Regulation will provide helpful protections for those affected by algorithms and those whose data are subsumed in algorithm development, although how effective those safeguards are in practice will have to be tested when they become operational later this spring. While there is, for example, some uncertainty about how some of its provisions will be interpreted, they do appear to offer important tools for regulators to insist on meaningful privacy protections and more explicit consent. The Regulation provides an opt-out for most ‘automated’ algorithm decisions, but there is a grey area that may leave individuals unprotected—where decisions might be indicated by an algorithm but are only superficially reviewed or adjusted by a ‘human in the loop’, particularly where that human intervention is little more than rubber-stamping the algorithm’s decision. While we welcome the inclusion in the Data Protection Bill of the requirement for data controllers to inform individuals when an automated algorithm produces a decision, it is unfortunate that it is restricted to decisions ‘required or authorised by law’. There is also a difficulty in individuals exercising their right to opt out of such decisions if they are unaware that they have been the subject of an entirely automated process in the first place.

91. The Centre for Data Ethics & Innovation and the ICO should keep the operation of the GDPR under review in so far as it governs algorithms, and report to Government by May 2019 on areas where the UK’s data protection legislation might need further refinement. They should start with a more immediate review of the lessons of the Cambridge Analytica case. We welcome the amendments made to the Data Protection Bill which give the ICO the powers it sought in relation to its Information Notices, avoiding the delays it experienced in investigating the Cambridge Analytica case. The Government should also ensure that the ICO is adequately funded to exercise these new powers. The Government, along with the ICO and the Centre for Data Ethics & Innovation, should continue to monitor how terms and conditions rules under the GDPR are being applied, to ensure that personal data are protected and that consumers are effectively informed, acknowledging that it is predominantly algorithms that use those data.

92. ‘Data protection impact assessments’, required under the GDPR, will be an essential safeguard. The ICO and the Centre for Data Ethics & Innovation should encourage the publication of the assessments (in summary form if needed to avoid any commercial confidentiality issues). They should also consider whether the legislation provides sufficient powers to compel data controllers to prepare impact assessments, and to improve them if the ICO and the Centre believe the assessments to be inadequate.

Sector regulation

93. There is a wider issue for the Centre for Data Ethics & Innovation to consider early in its work, we believe, about any role it might have in providing regulatory oversight to complement the ICO’s remit.

94. Nesta advocated the establishment of “some general principles around accountability, visibility and control” but applied with “plenty of flexibility”. They believed that it was now time “to start designing new institutions”.289 The Financial Services Consumer Panel also wanted “a framework in place for supervision and enforcement as algorithmic decision making continues to play an increasing role in the financial services sector”.290 The Royal Society concluded that: “The volumes, portability, nature, and new uses of data in a digital world raise many challenges for which existing data access frameworks do not seem well equipped. It is timely to consider how best to address these novel questions via a new framework for data governance.”291

95. There was a range of views in our inquiry on the relative benefits of a general overarching oversight framework and a sector-specific framework. Nesta doubted the effectiveness of “well intentioned private initiatives” which would be “unlikely to have the clout or credibility to deal with the more serious potential problems”.292 The Royal Society favoured sectoral regulation:

While there may be specific questions about the use of machine learning in specific circumstances, these should be handled in a sector-specific way, rather than via an overarching framework for all uses of machine learning.293

They noted that the impact of algorithms which affect “buying or listening recommendations” matters less than that of those filtering what “appears to me as news, or affect how I am evaluated by others”.294 Similarly, Professor Kate Bowers of the UCL Jill Dando Institute believed that algorithms are context specific and that there is “a different set of risks and issues from the point of view of the degree to which they expose individuals”.295 These arguments suggest sectoral regulation as opposed to a single regulator—a view supported by Elizabeth Denham, the Information Commissioner, who did not think that we need “an AI regulator”,296 but was nevertheless bringing sector regulators together “to talk about AI systems”.297 This is a role that could be taken by the newly created Centre for Data Ethics and Innovation, a view also shared by the Minister, Margot James.298

96. In contrast to this sectoral approach, the Oxford Internet Institute proposed “an AI Watchdog, or a trusted and independent regulatory body” which would be “equipped with the proper expertise (spanning ideally law, ethics, to computer science), resources and auditing authority (to make inspections) to ensure that algorithmic decision making is fair, unbiased and transparent”.299 In a similar vein, Microsoft favoured “all aspects of society, including government, academia and business [… coming] together to create a set of shared principles by which to guide the use of algorithms and AI”,300 although not necessarily leading to overarching regulation. Nesta wanted an advisory body to “guide behaviours, understanding, norms and rules”, without “formal regulatory powers of approval or certification” but instead with “strong powers of investigation and of recommendation”.301

97. The Centre for Data Ethics & Innovation and the Information Commissioner should review the extent of algorithm oversight by each of the main sector-specific regulators, and use the results to guide those regulators to extend their work in this area as appropriate. The Information Commissioner should also assess, on the back of that work, whether the ICO needs greater powers to perform its regulatory oversight role where sector regulators do not see this as a priority.


235 The Royal Society (ADM0021)

236 University College London (ALG0050)

237 Future Advocacy (ALG0064)

238 TechUK (ADM0003)

240 ‘FTC to question Facebook over Cambridge Analytica data scandal’, Financial Times, 20 March 2018; New York Times, ‘Facebook’s Surveillance Machine’, 19 March 2018

243 Electoral Commission, FOI release, May 2017

244 (EU) 2016/679, Article 22

245 Information Commissioner’s Office (ALG0038)

246 Q371

247 Q62

248 Article 29 Data Protection Working Party, Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679 (October 2017), p 10

249 Q62

250 Q52

251 University of Leuven Centre for IT and IP, The Right not to be Subject to Automated Decision-Making: The role of explicit consent, 2 August 2016

252 Institute of Mathematics and its Applications (ADM0008) para 8

253 University College London (ADM0010) para 5

254 Q175

255 University College London (ADM0010) para 6

256 Q150

257 Sheena Urwin, Head of Criminal Justice, Durham Constabulary (ADM0032)

258 EU GDPR, ‘GDPR Key Changes’, accessed 20 March 2018

259 Information Commissioner’s Office (ALG0038)

260 Science and Technology Committee, Fourth Report of Session 2015–16, The big data dilemma, HC 468

261 Q57

262 Q365

263 Science and Technology Committee, Fourth Report of Session 2015–16, The big data dilemma, HC 468, para 66

264 Q66

265 Q245 [Dr Dominic King]

266 The big data dilemma, HC 468; The Royal Society (ALG0056)

267 Q139

271 (EU) 2016/679, Article 35

272 Q303

273 Article 29 Data Protection Working Party, Processing is “likely to result in a high risk” for the purposes of Regulation 2016/679 (April 2017), p 4

274 Q303

275 Q304

276 Facebook - Developer News, ‘User Access Token Changes’, 09 April 2018

277 Information Commissioner’s Office (ALG0038)

278 Q27

280 (EU) 2016/679, Article 58

281 (EU) 2016/679, Article 83; Information Commissioner’s Office (ALG0038)

283 Financial Times, ‘UK data watchdog still seeking Cambridge Analytica warrant’, 20 March 2018

284 ‘Information Commissioner’s Office (DPB05)’, Data Protection Bill, Public Bill Committee, March 2018

285 ‘Information Commissioner’s Office (DPB05)’, Data Protection Bill, Public Bill Committee, March 2018

286 ‘HL Bill 104 Commons amendments’, 9 May 2018, pp 6–7

287 Q44

289 Nesta (ALG0059)

290 Financial Services Consumer Panel (ALG0039)

291 The Royal Society (ALG0056)

292 Nesta (ALG0059)

293 The Royal Society (ADM0021)

294 Mark Gardiner (ALG0068). See also “Should algorithms be regulated?”, IT Pro, 19 December 2016

295 Q156

296 Q298

297 Q300

298 Q378

299 Oxford Internet Institute (ALG0031)

300 Microsoft (ALG0072) para 11

301 Nesta (ALG0059)




Published: 23 May 2018