Algorithms in decision-making

Summary

Algorithms have long been used to aid decision-making, but in recent years the growth of ‘big data’ and ‘machine learning’ has driven a marked increase in algorithmic decision-making. Algorithms now inform decisions in finance, the legal sector, the criminal justice system, education and healthcare, as well as in recruitment, lending and the targeting of adverts on social media, and there are plans for autonomous vehicles on public roads in the UK.

The case for our inquiry was made by Dr Stephanie Mathisen from Sense about Science, who raised the question of “the extent to which algorithms can exacerbate or reduce biases” as well as “the need for decisions made by algorithms to be challenged, understood and regulated”. Such issues echo our predecessor Committee’s concerns during their inquiries into Big Data and Artificial Intelligence. Now, more than two years have elapsed since that Committee called for an oversight body to monitor and address such issues. Our report identifies the themes and challenges that the newly established ‘Centre for Data Ethics & Innovation’ should address as it begins its work.

Our report comes as the EU General Data Protection Regulation (GDPR) takes effect, and in the wake of the recent controversy centred on the algorithm used by Cambridge Analytica to target political campaign messaging, a test case which reinforces the need for effective data protection regulation.

Algorithms need data, and their effectiveness and value tend to increase as more data are used and as more datasets are brought together. The Government should play its part in the algorithms revolution by continuing to make public sector datasets available, not just for ‘big data’ developers but also for algorithm developers, through new ‘data trusts’. The Government should also produce, maintain and publish a list of where algorithms with significant impacts are being used within central government, along with projects underway or planned for public service algorithms, to aid not just private sector involvement but also transparency. The Government should identify a ministerial champion to provide government-wide oversight of such algorithms, where they are used by the public sector, and to co-ordinate departments’ approaches to the development and deployment of algorithms and to partnerships with the private sector.

The Government could do more to realise some of the great value tied up in its databases, including in the NHS, by negotiating for the improved public service delivery it seeks from these arrangements and for transparency, rather than simply accepting what developers offer in return for data access. The Crown Commercial Service should commission a review, from the Alan Turing Institute or other expert bodies, to set out a procurement model for algorithms developed with private sector partners which fully realises their value for the public sector. The Government should explore how the proposed ‘data trusts’ could be fully developed as a forum for striking such algorithm partnering deals. These are urgent requirements because partnership deals are already being struck without the benefit of comprehensive national guidance for this evolving field.

Algorithms, in looking for and exploiting data patterns, can sometimes produce flawed or biased ‘decisions’, just as human decision-making is often an inexact endeavour. As a result, an algorithmic decision may disproportionately affect certain groups. The Centre for Data Ethics & Innovation should examine such algorithmic biases: how to improve the ‘training data’ algorithms use; how unjustified correlations can be avoided where more meaningful causal relationships should be discernible; and how algorithm development teams can be assembled to include a sufficiently wide cross-section of society, or of the groups that might be affected by an algorithm. The new body should also evaluate accountability tools, such as principles and ‘codes’, audits of algorithms, certification of algorithm developers, and the charging of ethics boards with oversight of algorithmic decisions, and advise on how these should be embedded in the private sector as well as in government bodies that share their data with private sector developers. Given the international nature of digital innovation, the Centre should also engage with like-minded organisations in comparable jurisdictions to develop and share best practice.
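One of the accountability tools mentioned above, an audit of an algorithm’s decisions, can be illustrated with a minimal sketch. The Python below uses wholly hypothetical decision data and an illustrative 0.8 threshold (borrowed from the US ‘four-fifths’ rule of thumb; neither the data nor the threshold comes from the report) to show how an audit might compare an algorithm’s approval rates across groups.

```python
# Minimal sketch of a bias audit: compare an algorithm's approval rates
# across groups. Data, group labels and the 0.8 threshold are hypothetical.
from collections import defaultdict

# Each record pairs a (hypothetical) group label with the algorithm's
# binary decision: 1 = approved, 0 = declined.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals = defaultdict(int)
approvals = defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    approvals[group] += outcome

# Per-group approval rates, and the ratio of the worst to the best rate.
rates = {g: approvals[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())

print(f"approval rates: {rates}")
status = "flag for review" if ratio < 0.8 else "within illustrative threshold"
print(f"disparate-impact ratio: {ratio:.2f} ({status})")
```

An audit of this kind only surfaces a disparity; deciding whether the disparity is justified, and what should follow from it, is precisely the sort of question the Centre will need to address.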

Transparency must be a key underpinning of algorithm accountability. There is a debate about whether that transparency should involve sharing the workings of the algorithm ‘black box’ with those affected by the algorithm and the individuals whose data have been used, or whether providing an ‘explanation’ of the decision is sufficient. While we acknowledge the practical difficulties of sharing such information in an understandable form, the default should be that algorithms which affect the public are transparent. The Centre for Data Ethics & Innovation and the Information Commissioner’s Office (ICO) should examine the scope for individuals to challenge the results of all significant algorithmic decisions that affect them and, where appropriate, to seek redress for the impacts of such decisions. Where algorithms might significantly and adversely affect the public or their rights, we believe the answer is a combination of explanation and as much transparency as possible.
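The difference between exposing a model’s full workings and offering an ‘explanation’ can be made concrete with a small sketch. The Python below, using hypothetical feature names, weights and applicant values that are not drawn from the report, shows one simple form an explanation could take for a linear scoring model: the signed contribution of each input to the decision.

```python
# Minimal sketch of a per-decision 'explanation' for a deliberately simple
# linear scoring model. Feature names, weights and values are hypothetical.
weights = {"income": 0.5, "years_at_address": 0.3, "missed_payments": -0.8}
applicant = {"income": 1.2, "years_at_address": 4.0, "missed_payments": 2.0}

# Each feature's signed contribution to the final score, ranked by
# magnitude, is one candidate form an 'explanation' could take.
contributions = {name: weights[name] * applicant[name] for name in weights}
score = sum(contributions.values())

print(f"decision score: {score:+.2f}")
ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
for name, value in ranked:
    print(f"  {name}: {value:+.2f}")
```

For an opaque model such contributions would have to be approximated rather than read off directly, which is one reason the choice between full transparency and explanation is not straightforward.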

Overall, the GDPR will provide helpful protections for those affected by algorithms and those whose data are subsumed in algorithm development, including more explicit consent requirements, although there remains some uncertainty about how some of its provisions will be interpreted. The challenge will be to secure a framework which facilitates and encourages innovation but which also maintains vital public trust and confidence. The Centre for Data Ethics & Innovation and the ICO should keep the operation of the GDPR under review in so far as it governs algorithms, and report to Government by May 2019 on areas where the UK’s data protection legislation might need further refinement. They should start more immediately with a review of the lessons of the Cambridge Analytica case. We welcome the amendments made to the Data Protection Bill which give the ICO the powers it sought in relation to its Information Notices, avoiding the delays it experienced in investigating the Cambridge Analytica case. The Government should also ensure that the ICO is adequately funded to exercise these new powers. The Government, along with the ICO and the Centre for Data Ethics & Innovation, should continue to monitor how terms and conditions rules under the GDPR are being applied, to ensure that personal data are protected and that consumers are effectively informed, acknowledging that it is predominantly algorithms that use those data.

‘Data protection impact assessments’, required under the GDPR, will be an essential safeguard. The ICO and the Centre for Data Ethics & Innovation should encourage their publication. They should also consider whether the legislation provides sufficient powers to compel data controllers to prepare adequate impact assessments.

There are also important tasks for the Centre for Data Ethics & Innovation around the regulatory environment for algorithms. It should review the extent of algorithm oversight by each of the main sector-specific regulators, and use the results to guide those regulators to extend their work in this area as needed. The Information Commissioner’s Office should also assess, on the back of that work, whether it needs greater powers to perform its regulatory oversight role where sector-specific regulators do not see this as a priority.

The Government plans to put the Centre for Data Ethics & Innovation on a statutory footing. When it does so, it should set it a requirement to report annually to Parliament on the results of its work, to allow us and others to scrutinise its effectiveness.

Published: 23 May 2018