Algorithms in decision-making

Conclusions and recommendations

Introduction

1. The Government’s proposed Centre for Data Ethics & Innovation is a welcome initiative. It will occupy a critically important position, alongside the Information Commissioner’s Office, in overseeing the future development of algorithms and the ‘decisions’ they make. The challenge will be to secure a framework which facilitates and encourages innovation but which also maintains vital public trust and confidence. (Paragraph 7)

2. Many of the issues raised in this report will require close monitoring, to ensure that the oversight of machine learning-driven algorithms continues to strike an appropriate and safe balance between recognising the benefits (for healthcare and other public services, for example, and for innovation in the private sector) and the risks (for privacy and consent, data security and any unacceptable impacts on individuals). As we discuss in this report, the Government should ensure that these issues are at the top of the new body’s remit and agenda. (Paragraph 8)

3. The Government plans to put the Centre for Data Ethics & Innovation on a statutory footing. When it does so, it should require the Centre to report annually to Parliament on the results of its work, to allow us and others to scrutinise its effectiveness. Although the terms of the Government’s proposed consultation on the Centre for Data Ethics & Innovation have yet to be announced, we anticipate our report feeding into that exercise. (Paragraph 9)

Applications and bias

4. Algorithms are being used in an ever-growing number of areas, in ever-increasing ways. They are bringing big changes in their wake: from better medical diagnoses to driverless cars, and within central government, where there are opportunities to make public services more effective and achieve long-term cost savings. They are also moving into areas where the benefits to those applying them may not be matched by the benefits to those subject to their ‘decisions’: in some aspects of the criminal justice system, for example, and in algorithms that use social media datasets. Algorithms, like ‘big data’ analytics, need data to be shared across previously unconnected areas, to find new patterns and new insights. (Paragraph 29)

5. The Government should play its part in the algorithms revolution in two ways. First, it should continue to make public sector datasets available, not just for ‘big data’ developers but also for algorithm developers. We welcome the Government’s proposals for a ‘data trusts’ approach to mirror its existing ‘open data’ initiatives. Secondly, the Government should produce, publish, and maintain a list of where algorithms with significant impacts are being used within central government, along with projects underway or planned for public service algorithms, to aid not just private sector involvement but also transparency. The Government should identify a ministerial champion to provide government-wide oversight of such algorithms, where they are used by the public sector, and to co-ordinate departments’ approaches to the development and deployment of algorithms and to partnerships with the private sector. (Paragraph 30)

6. Algorithms need data, and their effectiveness and value tend to increase as more data are used and as more datasets are brought together. The Government could do more to realise some of the great value that is tied up in its databases, including in the NHS. In striking deals for data access, it should negotiate both for the improved public service delivery it seeks and for transparency, rather than simply accepting what developers offer in return. The Crown Commercial Service should commission a review, from the Alan Turing Institute or other expert bodies, to set out a procurement model for algorithms developed with private sector partners which fully realises the value for the public sector. The Government should explore how the proposed ‘data trusts’ could be fully developed as a forum for striking such algorithm partnering deals. These are urgent requirements because partnership deals are already being struck without the benefit of comprehensive national guidance for this evolving field. (Paragraph 31)

7. Algorithms, in looking for and exploiting data patterns, can sometimes produce flawed or biased ‘decisions’, just as human decision-making is often an inexact endeavour. As a result, algorithmic decisions may disproportionately discriminate against certain groups, and such discrimination is as unacceptable as any existing ‘human’ discrimination. Algorithms, like humans, can produce biased results, even unintentionally. When algorithms involve machine learning, they ‘learn’ their patterns from ‘training data’ which may be incomplete or unrepresentative of those subsequently affected by the resulting algorithm. That can result, for example, in race or gender discrimination in recruitment processes. The patterns that algorithms rely on may be good correlations but may not reflect a reliable causal relationship, and that can have important consequences if people are discriminated against as a result (such as in offender rehabilitation decisions). Algorithms may work from incomplete data, so that, for example, some people do not receive favourable financial credit decisions. Algorithm development teams may not include a sufficiently wide cross-section of society (or of the groups that might be affected by an algorithm) to ensure that a wide range of perspectives is reflected in their work. These biases need to be tackled by the industries involved and by the regulatory environment being introduced by the GDPR, and safeguards against bias should be a critical element of the remit of the Centre for Data Ethics & Innovation. (Paragraph 44)
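
The mechanism can be illustrated with a deliberately simplified sketch. The data, group labels and threshold rule below are hypothetical, constructed purely for illustration: a toy ‘recruitment model’ that faithfully learns from historically skewed hiring records will reproduce that skew when scoring new applicants.

    # Hypothetical historical hiring records: (experience_years, group, hired).
    # Group B candidates were historically hired at a lower rate than group A
    # candidates with the same experience.
    training_data = [
        (2, "A", False), (4, "A", True), (5, "A", True), (7, "A", True),
        (2, "B", False), (4, "B", False), (5, "B", False), (7, "B", True),
    ]

    def learn_thresholds(records):
        """Learn, per group, the lowest experience level that was ever hired.
        This mimics a model that minimises error on the biased labels."""
        thresholds = {}
        for years, group, hired in records:
            if hired:
                thresholds[group] = min(years, thresholds.get(group, years))
        return thresholds

    model = learn_thresholds(training_data)
    print(model)  # {'A': 4, 'B': 7} -- the historical bias is now a 'rule'

    # Two equally experienced new applicants receive different 'decisions':
    for group in ("A", "B"):
        print(group, "hired:", 5 >= model[group])  # A: True, B: False

The point is not the simplicity of the rule but that nothing in the training step can distinguish a genuine causal signal from an inherited prejudice; richer machine learning models absorb skewed labels in just the same way.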

Accountability and transparency

8. Setting principles and ‘codes’, establishing audits of algorithms, introducing certification of algorithms, and charging ethics boards with oversight of algorithmic decisions should all play their part in identifying and tackling bias in algorithms. With the growing proliferation of algorithms, such initiatives are urgently needed. The Government should immediately task the Centre for Data Ethics & Innovation with evaluating these various tools and advising on which to prioritise and on how they should be embedded in the private sector as well as in government bodies that share their data with private sector developers. Given the international nature of digital innovation, the Centre should also engage with like-minded organisations in comparable jurisdictions in order to develop and share best practice. (Paragraph 56)

9. Transparency must be a key underpinning of algorithm accountability. There is a debate about whether that transparency should involve sharing the workings of the algorithm ‘black box’ with those affected by the algorithm and with the individuals whose data have been used, or whether (because such information will not be widely understood) an ‘explanation’ should be provided instead. Where disclosure of the inner workings of privately developed public-service algorithms would present their developers with commercial or personal-data confidentiality issues, the Government and the Centre for Data Ethics & Innovation should explore with the industries involved the scope for using the proposed ‘data trust’ model to make that data available in a suitably de-sensitised format. While we acknowledge the practical difficulties of sharing an ‘explanation’ in an understandable form, the Government’s default position should be that explanations of the way algorithms work are published when the algorithms in question affect the rights and liberties of individuals. That will make it easier for the decisions produced by algorithms also to be explained. The Centre for Data Ethics & Innovation should examine how explanations of the way algorithms work can be required to be of sufficient quality that a reasonable person is able to challenge the ‘decision’ of the algorithm. Where algorithms might significantly adversely affect the public or their rights, we believe that the answer is a combination of explanation and as much transparency as possible. (Paragraph 66)
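
One possible form such an explanation could take, sketched here with entirely hypothetical rules and field names, is a decision routine that returns the specific reasons that produced its result, each concrete enough for the person affected to verify or dispute.

    def credit_decision(applicant):
        """Return an automated decision together with the reasons behind it."""
        reasons = []
        if applicant["missed_payments"] > 2:
            reasons.append(f"{applicant['missed_payments']} missed payments in "
                           "the last year (limit: 2)")
        monthly_repayment = applicant["requested_credit"] / 12
        if applicant["income"] < 3 * monthly_repayment:
            reasons.append("monthly income below three times the monthly repayment")
        decision = "declined" if reasons else "approved"
        return decision, reasons

    decision, reasons = credit_decision(
        {"missed_payments": 3, "income": 1500, "requested_credit": 12000})
    print(decision)         # declined
    for reason in reasons:  # each reason is specific enough to be contested
        print("-", reason)

An explanation of this kind does not expose the ‘black box’ itself, but it gives the affected individual something concrete to challenge, which illustrates the quality threshold discussed above.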

10. The ‘right to explanation’ is a key part of achieving accountability. We note that the Government has not gone beyond the GDPR’s non-binding provisions, and that individuals are not currently able formally to challenge the results of all algorithmic decisions or, where appropriate, to seek redress for the impacts of such decisions. The scope for such safeguards should be considered by the Centre for Data Ethics & Innovation and the ICO in the review of the operation of the GDPR that we advocate. (Paragraph 67)

The Centre for Data Ethics & Innovation, research and the regulatory environment

11. We welcome the announcement made in the AI Sector Deal of investment in research tackling the ethical implications of AI. The Government should liaise with the Centre for Data Ethics & Innovation and with UK Research & Innovation to encourage sufficient UKRI-funded research on how algorithms can realise their potential benefits while mitigating their risks, and on the tools necessary to make them more widely accepted, including tools to address bias and measures for accountability and transparency. (Paragraph 69)

12. The provisions of the General Data Protection Regulation will provide helpful protections for those affected by algorithms and for those whose data are subsumed in algorithm development, although how effective those safeguards are in practice will have to be tested when they become operational later this spring. While there is, for example, some uncertainty about how some of its provisions will be interpreted, they do appear to offer important tools for regulators to insist on meaningful privacy protections and more explicit consent. The Regulation provides an opt-out from most ‘automated’ algorithmic decisions, but there is a grey area that may leave individuals unprotected: where decisions are indicated by an algorithm but are only superficially reviewed or adjusted by a ‘human in the loop’, particularly where that human intervention is little more than rubber-stamping the algorithm’s decision. While we welcome the inclusion in the Data Protection Bill of the requirement for data controllers to inform individuals when an automated algorithm produces a decision, it is unfortunate that it is restricted to decisions ‘required or authorised by law’. There is also a difficulty in individuals exercising their right to opt out of such decisions if they are unaware that they have been the subject of an entirely automated process in the first place. (Paragraph 90)

13. The Centre for Data Ethics & Innovation and the ICO should keep the operation of the GDPR under review in so far as it governs algorithms, and report to Government by May 2019 on areas where the UK’s data protection legislation might need further refinement. They should start with a more immediate review of the lessons of the Cambridge Analytica case. We welcome the amendments made to the Data Protection Bill which give the ICO the powers it sought in relation to its Information Notices, avoiding the delays it experienced in investigating the Cambridge Analytica case. The Government should also ensure that the ICO is adequately funded to exercise these new powers. The Government, along with the ICO and the Centre for Data Ethics & Innovation, should continue to monitor how terms-and-conditions rules under the GDPR are being applied, to ensure that personal data are protected and that consumers are effectively informed, acknowledging that it is predominantly algorithms that use those data. (Paragraph 91)

14. ‘Data protection impact assessments’, required under the GDPR, will be an essential safeguard. The ICO and the Centre for Data Ethics & Innovation should encourage the publication of the assessments (in summary form if needed to avoid any commercial confidentiality issues). They should also consider whether the legislation provides sufficient powers to compel data controllers to prepare impact assessments, and to improve them if the ICO and the Centre believe the assessments to be inadequate. (Paragraph 92)

15. The Centre for Data Ethics & Innovation and the Information Commissioner should review the extent of algorithm oversight by each of the main sector-specific regulators, and use the results to guide those regulators to extend their work in this area as appropriate. The Information Commissioner should also assess, on the back of that work, whether the ICO needs greater powers to perform its regulatory oversight role where sector regulators do not see this as a priority. (Paragraph 97)

Published: 23 May 2018