68.The Government, after some false starts, has now made a commitment to establish an oversight and ethics body—the planned ‘Centre for Data Ethics & Innovation’ (paragraph 6). Many submissions to our inquiry identified a need for continuing research, which might be a focus for the work of the new body. The Royal Society, like our predecessor Committee, argued that “progress in some areas of machine learning research will impact directly on the social acceptability of machine learning applications”. It recommended that research funding bodies encourage studies into “algorithm interpretability, robustness, privacy, fairness, inference of causality, human-machine interactions, and security”. University College London advised that the Government should invest in “interdisciplinary research around how to achieve meaningful algorithmic transparency and accountability from social and technical perspectives”. The think tank Future Advocacy wanted more Government research on transparency and accountability and supported more ‘open data’ initiatives (paragraph 24). TechUK suggested that what it called a ‘UK Algorithmic Transparency Challenge’ should be created to “encourage UK businesses and academia to come up with innovative ways to increase the transparency of algorithms”. In April, the Government announced plans to spend £11 million on research projects “to better understand the ethical and security implications of data sharing and privacy breaches”.
69.We welcome the announcement made in the AI Sector Deal to invest in research tackling the ethical implications around AI. The Government should liaise with the Centre for Data Ethics & Innovation and with UK Research & Innovation (UKRI) to ensure that sufficient UKRI-funded research is undertaken on how algorithms can realise their potential benefits while mitigating their risks, and on the tools necessary to make them more widely accepted, including tools to address bias and potential accountability and transparency measures (as we discussed in Chapters 2 and 3).
70.Our inquiry has also identified other key areas which, we believe, should be prominent in the Centre’s early work. It should, as we described in Chapter 2, examine the biases built into algorithms—to identify, for example, how better ‘training data’ can be used; how unjustified correlations are avoided when more meaningful causal relationships are discernible; and how algorithm developer teams should be established which include a sufficiently wide cross-section of society, or of the groups that might be affected by an algorithm. The new body should also, we recommend, evaluate accountability tools—principles and ‘codes’, audits of algorithms, certification of algorithm developers, and charging ethics boards with oversight of algorithmic decisions—and advise on how they should be embedded in the private sector as well as in government bodies that share their data with private sector developers (Chapter 3).
71.There are also important and urgent tasks that the Centre for Data Ethics & Innovation should address around the regulatory environment for algorithms; work which requires priority because of the Cambridge Analytica case, uncertainty about how the General Data Protection Regulation (GDPR) will address the issues around the use of algorithms, and because of the widespread and rapidly growing application of algorithms across the economy.
72.Cambridge Analytica allegedly harvested personal data from Facebook accounts without consent. Through a personality quiz app, set up by an academic at the University of Cambridge, 270,000 Facebook users purportedly gave their consent to their data being used. However, the app also took the personal data of those users’ ‘friends’ and contacts—in total at least 87 million individuals. It has been reported that firms linked to Cambridge Analytica used these data to target campaign messages and sought to influence voters in the 2016 EU Referendum, as well as elections in the US and elsewhere. The Information Commissioner and the Electoral Commission have been investigating the Cambridge Analytica case.
73.The GDPR will have a bearing on the way algorithms are developed and used, because they involve the processing of data. Article 22 of the GDPR prohibits many uses of data processing (including for algorithms) where that processing is ‘automated’ and the ‘data subject’ objects. It stipulates that:
The data subject shall have the right not to be subject to a decision based solely on automated processing, including ‘profiling’, which produces legal effects concerning him or her or similarly significantly affects him or her.
The ICO explained that “unless it’s (i) a trivial decision, (ii) necessary for a contract or (iii) authorised by law, organisations will need to obtain explicit consent to be able to use algorithms in decision-making”. They believed that the GDPR provides “a powerful right which gives people greater control over automated decisions made about them”. The Minister saw this as a positive step, explaining that:
People must be informed if decisions are going to be made by algorithms rather than human management. Companies must make them aware of that.
The Data Protection Bill provides a right to be informed, requiring data controllers to “notify the data subject in writing that a [significant] decision has been taken based solely on automated processing”. This is to be done “as soon as reasonably practicable”. If the data subject then exercises their right to opt-out, the Bill also allows the individual to request either that the decision be reconsidered or that a “new decision that is not based solely on automated processing” is considered. However, this is limited to decisions ‘required or authorised in law’ and would be unavailable for the vast majority of decisions.
74.Dr Sandra Wachter of the Oxford Internet Institute told us that what constituted a ‘significant effect’ in the GDPR was “a very complicated and unanswered question”. Guidance from the relevant GDPR working party, an independent European advisory body on data protection and privacy, explained that:
The decision must have the potential to significantly influence the circumstances, behaviour or choices of the individuals concerned. At its most extreme, the decision may lead to the exclusion or discrimination of individuals.
Ultimately, Dr Wachter told us, “it will depend on the individual circumstances of the individual”.
75.Silkie Carlo, then from Liberty, had concerns about the law-enforcement derogations, which she believed should not apply to decisions affecting human rights: “The GDPR allows member states to draw their own exemption. Our exemptions have been applied in a very broad way for law enforcement processing and intelligence service processing in particular. That is concerning.” Others have criticised the fact that it is the data subject themselves that will have to “discern and assess the potential negative outcomes of an automated decision” when the “algorithms underlying these decisions are often complex and operate on a random-group level”.
76.The restriction of Article 22 of the GDPR to decisions ‘based solely on automated processing’ concerned the Institute of Mathematics and its Applications. They highlighted that many algorithms may “in principle only be advisory”, and therefore not ‘automated’, “but the human beings using it may in practice just rubber-stamp its ‘advice’, so in practice it’s determinative”. University College London was similarly concerned that decisions may be effectively ‘automated’ because of “human over-reliance on machines or the perception of them as objective and/or neutral”, while the protections of Article 22 would “fall away”. Professor Kate Bowers of the UCL Jill Dando Institute worried similarly that “people could just pay lip service to the fact that there is a human decision” involved in algorithmic processes.
77.The GDPR working party on Article 22 recommended that “unless there is ‘meaningful human input’, a decision should still be considered ‘solely’ automated. This requires having individuals in-the-loop who a) regularly change decisions; and b) have the authority and competence organisationally to do so without being penalised.” Durham Constabulary told us that its HART algorithm (paragraph 21) only “supports decision-making for the custody officer” and that a human always remains in the loop. It was running a test of the algorithm’s reliability by comparing its results against police officers making unaided decisions in parallel.
78.The sort of algorithm used in the Cambridge Analytica case would be effectively prohibited when the GDPR’s ‘automated’ processing provisions become effective in May 2018 if, as has been reported, the algorithm was used to target political campaign messages without human intervention.
79.Even if future use of the Cambridge Analytica algorithm would not be regarded as ‘automated’, and therefore a potentially allowable use of data, it would have to satisfy the requirements of the GDPR on consent.
80.The GDPR seeks to embed ‘privacy by design’ by addressing data protection when designing new data-use systems. The ICO told us that “in data protection terms, transparency means that people should be given some basic information about the use of their personal data, such as the purpose of its use and the identity of the organisation using it.” The GDPR addresses online ‘terms and conditions’ clauses which are often used to get consent. As our predecessor Committee explained, the way these are used has significant shortcomings. In our current inquiry too, Dr Sandra Wachter of the Oxford Internet Institute pointed out that few people would go through “hundreds of pages” of terms and conditions, and she instead preferred to see an “understandable overview of what is going to happen to your data while you are visiting a service”. The Minister, Margot James, also acknowledged the importance of “active consent”, and emphasised the introduction of opt-outs in the GDPR as a mechanism for achieving this. Our predecessor Committee highlighted the potential of “simple and layered privacy notices to empower the consumer to decide exactly how far they are willing to trust each data-holder they engage with”. In our inquiry, Dr Pavel Klimov suggested that such ‘layered notices’ could be helpful, giving certain critical information up-front and then allowing the user to click further if they want to learn more, including policies on sharing data with third-parties.
81.Algorithm technology might in the future be used itself to provide transparency and consent by notifying data subjects when their data are used in other algorithms. DeepMind told us that they were working on a ‘verifiable data audit’ project using digital ledgers (‘blockchains’) to give people cryptographic proof that their data are being used in particular ways.
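The ‘verifiable data audit’ approach DeepMind described rests on a general technique: an append-only log in which each entry’s cryptographic hash incorporates the hash of the previous entry, so that any retrospective alteration of a recorded data use becomes detectable. The sketch below is an illustration of that hash-chaining principle only, with invented event data; it is not DeepMind’s implementation.

```python
import hashlib
import json

class AuditLedger:
    """Append-only log of data-use events. Each entry's hash chains to the
    previous entry's hash, so tampering with any recorded event later
    invalidates every subsequent hash and is detectable on verification."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []
        self.prev_hash = self.GENESIS

    def record(self, event: dict) -> str:
        # Canonical serialisation so the same event always hashes identically.
        payload = json.dumps(event, sort_keys=True)
        entry_hash = hashlib.sha256((self.prev_hash + payload).encode()).hexdigest()
        self.entries.append({"event": event, "hash": entry_hash})
        self.prev_hash = entry_hash
        return entry_hash

    def verify(self) -> bool:
        # Recompute the whole chain; any edited event breaks the links.
        prev = self.GENESIS
        for entry in self.entries:
            payload = json.dumps(entry["event"], sort_keys=True)
            if hashlib.sha256((prev + payload).encode()).hexdigest() != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

ledger = AuditLedger()
ledger.record({"subject": "record-123", "purpose": "model training"})
ledger.record({"subject": "record-456", "purpose": "evaluation"})
print(ledger.verify())           # chain intact

ledger.entries[0]["event"]["purpose"] = "advertising"  # tamper with history
print(ledger.verify())           # tampering detected
```

A data subject (or auditor) holding only the latest hash can therefore check that the log of uses of their data has not been quietly rewritten.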
82.In the meantime, privacy and consent remain critical issues for algorithms—just as they are (as our predecessor Committee found) for compiling profiles of people from diverse ‘big data’ datasets—because personal data are not always sufficiently anonymised. As our previous Committee highlighted, the risk with ‘big data’ analytics has been that data anonymisation can be undone as datasets are brought together. Such risks apply equally to algorithms that look for patterns across datasets, although Dr M-H. Carolyn Nguyen of Microsoft argued that anonymisation could still play a part in deterring privacy abuse provided it is backed up by privacy laws.
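The re-identification risk described above arises because ‘anonymised’ records often retain quasi-identifiers (such as postcode and year of birth) that also appear in other, public datasets. The following sketch, using entirely invented data, shows how a simple join on those shared fields can undo the anonymisation:

```python
# 'Anonymised' health records: names removed, but quasi-identifiers retained.
medical = [
    {"postcode": "DH1 3LE", "birth_year": 1975, "diagnosis": "diabetes"},
    {"postcode": "SW1A 1AA", "birth_year": 1990, "diagnosis": "asthma"},
]

# A separate, openly available dataset (e.g. an electoral register)
# containing the same quasi-identifiers alongside names.
register = [
    {"name": "A. Smith", "postcode": "DH1 3LE", "birth_year": 1975},
    {"name": "B. Jones", "postcode": "SW1A 1AA", "birth_year": 1990},
]

# Joining the two datasets on the shared fields re-identifies individuals.
reidentified = [
    {"name": p["name"], "diagnosis": m["diagnosis"]}
    for m in medical
    for p in register
    if (p["postcode"], p["birth_year"]) == (m["postcode"], m["birth_year"])
]
print(reidentified)
# [{'name': 'A. Smith', 'diagnosis': 'diabetes'},
#  {'name': 'B. Jones', 'diagnosis': 'asthma'}]
```

If the quasi-identifier combination is unique (or nearly so) within a population, removing names alone offers little protection, which is why anonymisation needs to be assessed against the other datasets an adversary might hold.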
83.Cambridge Analytica’s use of personal data, if used in the UK as has been alleged, would not have met the requirements for consent, even under the existing (pre-GDPR) regime. While it harvested the personal data of at least 87 million users, only the 270,000 individuals who were participants in the initial ‘personality survey’ were asked for consent. The provisions of the GDPR will be applied where the ‘data processor’ or the data processing itself is in EU countries (or in the UK through the Data Protection Bill), or if individuals (‘data subjects’) are in the EU/UK.
84.In situations where consent is obtained, there is a problem of the power imbalance between the individual and the organisation seeking consent. According to the Information Commissioner, “we are so invested” in digital services that “we become dependent on a service that we can’t always extricate ourselves from”. This is especially true where, through acquisitions, companies restrict alternative services, as the Information Commissioner went on to say. The £11 million of research announced in the AI sector deal (paragraph 6) is intended to better understand the “ethical […] implications of data sharing”.
85.To help identify bias in data-driven decisions, which we examined in Chapter 2, the GDPR requires ‘data protection impact assessments’. Article 35 of the GDPR, reflected in the Data Protection Bill, states:
Where a type of [data] processing […] is likely to result in a high risk to the rights and freedoms of natural persons, the [data] controller shall, prior to the processing, carry out an assessment of the impact of the envisaged processing operations on the protection of personal data.
Elizabeth Denham, the Information Commissioner, expected such impact assessments to be produced “when they are building AI or other technological systems that could have an impact on individuals”. According to the GDPR working party, these impact assessments offer “a process for building and demonstrating compliance”, and the Information Commissioner hoped that they would “force the organisation to think through carefully what data are going into an AI system, how decisions are going to be made and what the output is.” Because of “commercial sensitivities”, however, she would not be “promoting the need to publish” the assessments.
86.It is arguable whether those Facebook users who completed the personality questionnaire, that Cambridge Analytica subsequently used to target campaigning material, gave their full, informed consent. It is clear, however, that the millions of people receiving material because their data were included in the algorithm as ‘friends’ or contacts of those completing the questionnaire did not give their consent. Recently, in dealing with the Cambridge Analytica controversy, Facebook has begun to provide its customers with an explicit opportunity to allow or disallow apps that use their data. Every 90 days, users of apps that rely on Facebook Login will be prompted to review and confirm the data permissions they have granted. Whether Facebook or Cambridge Analytica would have undertaken a ‘data protection impact assessment’ to meet the requirements of the GDPR is impossible to know. It appears to us, however, that had they completed such an assessment they would have concluded that the algorithm would have been ‘likely to result in a high risk to the rights and freedoms’ of the individuals affected.
87.Enforcement of key features of the GDPR will fall on the shoulders of the Information Commissioner. Professor Louise Amoore of Durham University expressed misgivings that “the ICO was only able to ask questions about how the data was being used and not the form of analysis that was taking place”. In a December 2017 speech, however, the Information Commissioner said that the ICO’s duties were “wide and comprehensive, and not merely a complaints-based regulator. […] My office is here to ensure fairness, transparency and accountability in the use of personal data on behalf of people in the UK”. The GDPR will provide the Information Commissioner with greater powers, including under Article 58 to undertake data protection audits, as well as the right to obtain all personal data necessary for the ICO’s investigations and to secure access to any premises required. The GDPR will also give the ICO the power to ban data processing operations, and to issue much more significant financial penalties than under the existing regulations.
88.Under the GDPR, however, the ICO cannot compel companies to make their data available. In March 2018 the Information Commissioner issued a “Demand for Access to records and data in the hands of Cambridge Analytica”, but had to secure a High Court warrant to gain access to the data when the company did not oblige. The delay in the ICO’s access led some to question the powers of the Information Commissioner to quickly obtain ‘digital search warrants’. In her submission to the Data Protection Bill Committee in March 2018, the Information Commissioner wrote:
Under the current Data Protection Act (DPA 1998), non-compliance with an Information Notice is a criminal offence, punishable by a fine in the Magistrate’s Court. However, the court cannot compel compliance with the Information Notice or issue a disclosure order. This means, that although the data controller can receive a criminal sanction for non-compliance, the Commissioner is still unable to obtain the information she needs for her investigation.
She complained that the inability to compel compliance with an Information Notice meant that investigations have “no guarantee of success”, which “may affect outcomes as it proves impossible to follow essential lines of enquiry”. She contrasted this with her previous role as the Information and Privacy Commissioner for British Columbia where she had a power “to compel the disclosure of documents, records and testimony from data controllers and individuals, and failure to do so was a contempt of court”. As a result, she called for the Data Protection Bill to “provide a mechanism to require the disclosure of requested information under her Information Notice powers”. In her opinion, “Failure to do this will have an adverse effect on her investigatory and enforcement powers.” Addressing these challenges, the Government subsequently amended the Bill to increase the Information Commissioner’s powers; enabling the courts to compel compliance with Information Orders and making it an offence to “block” or otherwise withhold the required information.
89.Algorithms, and the ways in which data are used, have developed considerably since the Information Commissioner’s Office was set up. To accommodate this new landscape, Hetan Shah called “for Government to sort out its funding model”. The Government has since “announced a new charging structure” requiring large organisations to pay a higher fee, representative of their higher risk.
90.The provisions of the General Data Protection Regulation will provide helpful protections for those affected by algorithms and those whose data are subsumed in algorithm development, although how effective those safeguards are in practice will have to be tested when they become operational later this spring. While there is, for example, some uncertainty about how some of its provisions will be interpreted, they do appear to offer important tools for regulators to insist on meaningful privacy protections and more explicit consent. The Regulation provides an opt-out for most ‘automated’ algorithm decisions, but there is a grey area that may leave individuals unprotected—where decisions might be indicated by an algorithm but are only superficially reviewed or adjusted by a ‘human in the loop’, particularly where that human intervention is little more than rubber-stamping the algorithm’s decision. While we welcome the inclusion in the Data Protection Bill of the requirement for data controllers to inform individuals when an automated algorithm produces a decision, it is unfortunate that it is restricted to decisions ‘required or authorised by law’. There is also a difficulty in individuals exercising their right to opt-out of such decisions if they are unaware that they have been the subject of an entirely automated process in the first place.
91.The Centre for Data Ethics & Innovation and the ICO should keep the operation of the GDPR under review as far as it governs algorithms, and report to Government by May 2019 on areas where the UK’s data protection legislation might need further refinement. They should start with a more immediate review of the lessons of the Cambridge Analytica case. We welcome the amendments made to the Data Protection Bill which give the ICO the powers it sought in relation to its Information Notices, avoiding the delays it experienced in investigating the Cambridge Analytica case. The Government should also ensure that the ICO is adequately funded to exercise these new powers. The Government, along with the ICO and the Centre for Data Ethics & Innovation, should continue to monitor how terms and conditions rules under the GDPR are being applied to ensure that personal data is protected and that consumers are effectively informed, acknowledging that it is predominantly algorithms that use those data.
92.‘Data protection impact assessments’, required under the GDPR, will be an essential safeguard. The ICO and the Centre for Data Ethics & Innovation should encourage the publication of the assessments (in summary form if needed to avoid any commercial confidentiality issues). They should also consider whether the legislation provides sufficient powers to compel data controllers to prepare impact assessments, and to improve them if the ICO and the Centre believe the assessments to be inadequate.
93.There is a wider issue for the Centre for Data Ethics & Innovation to consider early in its work, we believe, about any role it might have in providing regulatory oversight to complement the ICO’s remit.
94.Nesta advocated the establishment of “some general principles around accountability, visibility and control” but applied with “plenty of flexibility”. They believed that it was now time “to start designing new institutions”. The Financial Services Consumer Panel also wanted “a framework in place for supervision and enforcement as algorithmic decision making continues to play an increasing role in the financial services sector”. The Royal Society concluded that: “The volumes, portability, nature, and new uses of data in a digital world raise many challenges for which existing data access frameworks do not seem well equipped. It is timely to consider how best to address these novel questions via a new framework for data governance.”
95.There was a range of views in our inquiry on the relative benefits of a general overarching oversight framework and a sector-specific framework. Nesta doubted the effectiveness of “well intentioned private initiatives” which would be “unlikely to have the clout or credibility to deal with the more serious potential problems”. The Royal Society favoured sectoral regulation:
While there may be specific questions about the use of machine learning in specific circumstances, these should be handled in a sector-specific way, rather than via an overarching framework for all uses of machine learning.
They noted that the impact of algorithms which affect “buying or listening recommendations” matters less than that of those filtering what “appears to me as news, or affect how I am evaluated by others”. Similarly, Professor Kate Bowers of the UCL Jill Dando Institute believed that algorithms are context specific and that there is “a different set of risks and issues from the point of view of the degree to which they expose individuals”. These arguments suggest sectoral regulation as opposed to having a single regulator—a view supported by Elizabeth Denham, the Information Commissioner, who did not think that we need “an AI regulator”, but was nevertheless bringing sector regulators together “to talk about AI systems”. This is a role that could be taken by the newly created Centre for Data Ethics and Innovation, a view also shared by the Minister, Margot James.
96.In contrast to this sectoral approach, the Oxford Internet Institute proposed “an AI Watchdog, or a trusted and independent regulatory body” which would be “equipped with the proper expertise (spanning ideally law, ethics, to computer science), resources and auditing authority (to make inspections) to ensure that algorithmic decision making is fair, unbiased and transparent”. In a similar vein, Microsoft favoured “all aspects of society, including government, academia and business [… coming] together to create a set of shared principles by which to guide the use of algorithms and AI”, although not necessarily leading to overarching regulation. Nesta wanted an advisory body to “guide behaviours, understanding, norms and rules”, without “formal regulatory powers of approval or certification” but instead “strong powers of investigation and of recommendation”.
97.The Centre for Data Ethics & Innovation and the Information Commissioner should review the extent of algorithm oversight by each of the main sector-specific regulators, and use the results to guide those regulators to extend their work in this area as appropriate. The Information Commissioner should also make an assessment, on the back of that work, of whether it needs greater powers to perform its regulatory oversight role where sector regulators do not see this as a priority.
235 The Royal Society
236 University College London
237 Future Advocacy
238 TechUK
239 HM Government, ‘Tech sector backs British AI industry with multi million pound investment’, 26 April 2018
240 ‘FTC to question Facebook over Cambridge Analytica data scandal’, FT, 20 March 2018; New York Times, ‘Facebook’s Surveillance Machine’, 19 March 2018
241 ‘The great British Brexit robbery: how our democracy was hijacked’, The Guardian, 7 May 2017
242 ICO, ‘The Information Commissioner opens a formal investigation into the use of data analytics for political purposes’, May 2017; ICO, ‘Update on ICO investigation into data analytics for political purposes’, December 2017
244 GDPR, Article 22
245 Information Commissioner’s Office
248 Article 29 Data Protection Working Party, Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679 (October 2017), p 10
251 University of Leuven Centre for IT and IP, The Right not to be Subject to Automated Decision-Making: The role of explicit consent, 2 August 2016
252 Institute of Mathematics and its Applications, para 8
253 University College London, para 5
255 University College London, para 6
257 Sheena Urwin, Head of Criminal Justice, Durham Constabulary
259 Information Commissioner’s Office
265 Q245 [Dr Dominic King]
268 ‘FTC to question Facebook over Cambridge Analytica data scandal’, Financial Times, 20 March 2018
269 ‘Why we should worry about WhatsApp accessing our personal information’, The Guardian, 10 November 2016
270 HM Government, ‘Tech sector backs British AI industry with multi million pound investment’, 26 April 2018
271 GDPR, Article 35
273 Article 29 Data Protection Working Party, Processing is “likely to result in a high risk” for the purposes of Regulation 2016/679 (April 2017), p 4
277 Information Commissioner’s Office
279 Speech by the Information Commissioner at the TechUK Data Ethics Summit, 13 December 2017
280 GDPR, Article 58
281 GDPR, Article 83; Information Commissioner’s Office
282 ‘ICO statement: investigation into data analytics for political purposes’. Accessed: 24 March 2018
283 Financial Times, ‘UK data watchdog still seeking Cambridge Analytica warrant’, 20 March 2018
288 Information Commissioner’s Office, ‘New model announced for funding the data protection work of the Information Commissioner’s Office’, 21 February 2018
289 Nesta
290 Financial Services Consumer Panel
291 The Royal Society
292 Nesta
293 The Royal Society
299 Oxford Internet Institute
300 Microsoft, para 11
301 Nesta
Published: 23 May 2018