On 23 May 2018 the Committee published its Fourth Report of Session 2017–19, Algorithms in decision-making [HC 351]. The response from the Government, and an accompanying letter from Margot James MP, were received on 23 July 2018 and are appended below.
Thank you for your Committee’s report on Algorithms in Decision Making, and your letter of 10 July to the Secretary of State about the recommendations made in relation to the Centre for Data Ethics and Innovation. I enclose the Government response to your report.
The Government takes the challenges posed by the deployment of algorithms in decision making extremely seriously. Since I gave evidence to the Committee the Data Protection Bill has been granted Royal Assent, and GDPR has come into force. This has bolstered protections for individuals who may be affected by decisions made about them by algorithms.
I am pleased that we have also now launched a public consultation on the role, objectives and activities of the Centre for Data Ethics and Innovation, and announced the appointment of Roger Taylor as the Chair, who I understand you will be meeting in September. Following Roger’s appointment we are now appointing a board with the range of multi-disciplinary expertise needed to drive the Centre’s work programme forward.
Your Committee’s report will inform our thinking both as we finalise the terms of reference for the Centre, and as we discuss the initial work programme with the Centre’s Chair, Roger Taylor. I can assure you that the Centre will be asked to consider the broader ethical issues, beyond the handling of data, that the development and use of algorithms poses. Thank you again for your ongoing engagement with these important issues.
23 July 2018
1. The Department for Digital, Culture, Media and Sport welcomes the House of Commons Science and Technology Select Committee’s Report, Algorithms in Decision Making.
2. The Government agrees with the Select Committee that the use of algorithms in decision making presents huge opportunities, but that we must carefully monitor their impact. We will do this by keeping the implementation of the General Data Protection Regulation (GDPR) under review, and through the Centre for Data Ethics and Innovation. The Centre will identify the measures needed to strengthen and improve the way data and AI are used and regulated. This will include articulating best practice and advising on how we address potential gaps in regulation.
3. Shortly after the Select Committee report was published, a public consultation on the role, objectives and activities of the Centre was launched. The Select Committee’s report will make a valuable contribution to our consultation exercise, and the recommendations will be taken into consideration as we develop the design of the Centre and its initial work programme, ahead of the first board meeting in the autumn.
4. The Committee has highlighted a number of important areas where the use of algorithms in decision making has caused concern, including in relation to transparency and fairness. The Government has identified transparency and fairness as two of the key themes within which the Centre could undertake projects to strengthen the governance of data and AI uses. The Centre, in dialogue with the Government, and drawing on responses to the consultation, will carefully prioritise and scope the specific projects within its initial work programme.
The Government’s proposed Centre for Data Ethics & Innovation is a welcome initiative. It will occupy a critically important position, alongside the Information Commissioner’s Office, in overseeing the future development of algorithms and the ‘decisions’ they make. The challenge will be to secure a framework which facilitates and encourages innovation but which also maintains vital public trust and confidence. (Paragraph 7)
5. The Centre for Data Ethics and Innovation, working closely with existing regulators including the ICO, will aim to enable safe and ethical innovation in the use of data and AI, including the use of algorithms. Through its recommendations the Centre will seek to maximise the ability and incentives for all organisations, commercial and public, to deliver new innovative products and services through data and AI, and to ensure those innovations are developed and deployed responsibly. In doing so the Centre will help to build trust in, and demand for, new data-driven and AI-based innovations.
Many of the issues raised in this report will require close monitoring, to ensure that the oversight of machine learning-driven algorithms continues to strike an appropriate and safe balance between recognising the benefits (for healthcare and other public services, for example, and for innovation in the private sector) and the risks (for privacy and consent, data security and any unacceptable impacts on individuals). As we discuss in this report, the Government should ensure that these issues are at the top of the new body’s remit and agenda. (Paragraph 8)
6. The Government agrees that these issues should be high priorities for the Centre, as described in the consultation on the Centre and associated terms of reference.
7. There are also a number of mechanisms for the oversight of data issues in central government. The Data Advisory Board and the Data Leaders Network are the senior cross-government groups that provide strategic oversight of how government uses data in public service delivery, and we expect the Centre will engage with these groups as appropriate.
The Government plans to put the Centre for Data Ethics & Innovation on a statutory footing. When it does so, it should set it a requirement to report annually to Parliament on the results of its work, to allow us and others to scrutinise its effectiveness. Although the terms of the Government’s proposed consultation on the Centre for Data Ethics & Innovation have yet to be announced, we anticipate our report feeding into that exercise. (Paragraph 9)
8. Once established, the Centre for Data Ethics and Innovation must build confidence by operating in a way that is transparent and open. In the consultation on the Centre, the Government proposes that the Centre will make its recommendations to Government public at the point of delivery, and publish a report presenting its overall assessment of the governance landscape for data and AI once per Parliament.
9. The consultation proposes that the Centre adopts these procedures for reporting its findings and recommendations as soon as it is established, and ahead of it being placed on a statutory footing.
Algorithms are being used in an ever-growing number of areas, in ever-increasing ways. They are bringing big changes in their wake; from better medical diagnoses to driverless cars, and within central government where there are opportunities to make public services more effective and achieve long-term cost savings. They are also moving into areas where the benefits to those applying them may not be matched by the benefits to those subject to their ‘decisions’—in some aspects of the criminal justice system, for example, and algorithms using social media datasets. Algorithms, like ‘big data’ analytics, need data to be shared across previously unconnected areas, to find new patterns and new insights. (Paragraph 29)
The Government should play its part in the algorithms revolution in two ways. It should continue to make public sector datasets available, not just for ‘big data’ developers but also algorithm developers. We welcome the Government’s proposals for a ‘data trusts’ approach to mirror its existing ‘open data’ initiatives. Secondly, the Government should produce, publish, and maintain a list of where algorithms with significant impacts are being used within Central Government, along with projects underway or planned for public service algorithms, to aid not just private sector involvement but also transparency. The Government should identify a ministerial champion to provide government-wide oversight of such algorithms, where they are used by the public sector, and to co-ordinate departments’ approaches to the development and deployment of algorithms and partnerships with the private sector. (Paragraph 30)
10. The Government fully supports safe and responsible innovative uses of data. Data sharing frameworks such as data trusts could operate using a regulatory sandbox model to facilitate exploratory analyses and offer new insights. The GDPR, which was brought into UK law through the Data Protection Act 2018, will support automated processing.
11. Not all data trusts will necessarily pertain to personal data. For example, they may concern data that parties want to keep secure, such as commercially sensitive data, data that could put the public at risk if it fell into the wrong hands, or pre-commercial intellectual property.
12. The Government has recently published an updated Data Ethics Framework, which provides principles and guidance for using algorithms in the public sector. We will be working to ensure widespread uptake of the framework across sectors to encourage responsible use of algorithms in decision-making. Principle 6 of the Data Ethics Framework says ‘Make your work transparent and be accountable’. In line with this principle, those working with data and algorithms in the public sector are encouraged to be transparent about the tools, data and algorithms used to conduct their work, working in the open where possible. The Data Ethics Framework is accompanied by the Data Ethics Workbook, which helps teams interrogate their methodological and ethical decisions when using data and algorithms to inform policy and service design.
13. The Government recognises that there is work to be done to ensure the quality of published data is of the highest calibre, including that it is in a commonly accessible and machine readable format and conforms to metadata standards, both of which reduce friction in access and use, including by the AI community.
14. In December 2017 the Prime Minister wrote to Departments asking them to improve the quantity and quality of the data that they make open. We recognise that continued engagement of Departments on this agenda is crucial. There are examples of innovative organisational models providing access to restricted data, such as geospatial or education data, through data labs. Government departments should continue to engage with such models, and with best practice on publishing high-quality open data, to provide access to more, better quality, timely and machine readable open data where the data is non-personal.
Algorithms need data, and their effectiveness and value tends to increase as more data are used and as more datasets are brought together. The Government could do more to realise some of the great value that is tied up in its databases, including in the NHS, and negotiate for the improved public service delivery it seeks from the arrangements and for transparency, and not simply accept what the developers offer in return for data access. The Crown Commercial Service should commission a review, from the Alan Turing Institute or other expert bodies, to set out a procurement model for algorithms developed with private sector partners which fully realises the value for the public sector. The Government should explore how the proposed ‘data trusts’ could be fully developed as a forum for striking such algorithm partnering deals. These are urgent requirements because partnership deals are already being struck without the benefit of comprehensive national guidance for this evolving field. (Paragraph 31)
15. The Crown Commercial Service (CCS) regularly reviews, with its customers in central government, the wider public sector and other stakeholders, where there might be a need for commercial agreements to facilitate procurement of common goods and services, and opportunities to maximise public value from working with the private sector. CCS will explore the points raised by the Committee and engage with relevant organisations involved in technology and data science, including the Alan Turing Institute, as it develops its category strategies in this area. Data trusts may offer one mechanism to support the development of these arrangements.
Algorithms, in looking for and exploiting data patterns, can sometimes produce flawed or biased ‘decisions’—just as human decision-making is often an inexact endeavour. As a result, algorithmic decisions may disproportionately discriminate against certain groups, and such discrimination is as unacceptable as any existing ‘human’ discrimination. Algorithms, like humans, can produce bias in their results, even if unintentional. When algorithms involve machine learning, they ‘learn’ the patterns from ‘training data’ which may be incomplete or unrepresentative of those who may be subsequently affected by the resulting algorithm. That can result, for example, in race or gender discrimination in recruitment processes. The patterns that algorithms rely on may be good correlations but may not in fact show a reliable causal relationship, and that can have important consequences if people are discriminated against as a result (such as in offender rehabilitation decisions). Algorithms may have incomplete data so that, for example, some people do not get favourable financial credit decisions. Algorithm developer teams may not include a sufficiently wide cross-section of society (or the groups that might be affected by an algorithm) to ensure a wide range of perspectives is subsumed in their work. These biases need to be tackled by the industries involved and by the regulatory environment being introduced by the GDPR, and safeguards against bias should be a critical element of the remit of the Centre for Data Ethics & Innovation. (Paragraph 44)
16. The Government agrees with the Committee’s recommendation. The consultation on the Centre has identified fairness, and the challenges posed by the transmission of biases within algorithms, as one of the six themes within which the Centre could undertake projects to strengthen the governance of data and AI uses. The Centre, in dialogue with the Government, will carefully prioritise and scope the specific projects within its initial work programme, and these will be announced once finalised. The Government is considering how it can best support transparency around the use of algorithms and the use of data more generally. It will set out its plans in the National Data Strategy.
Setting principles and ‘codes’, establishing audits of algorithms, introducing certification of algorithms, and charging ethics boards with oversight of algorithmic decisions, should all play their part in identifying and tackling bias in algorithms. With the growing proliferation of algorithms, such initiatives are urgently needed. The Government should immediately task the Centre for Data Ethics & Innovation to evaluate these various tools and advise on which to prioritise and on how they should be embedded in the private sector as well as in government bodies that share their data with private sector developers. Given the international nature of digital innovation, the Centre should also engage with other like-minded organisations in other comparable jurisdictions in order to develop and share best practice. (Paragraph 56)
17. As set out in the consultation, the Government expects the Centre to support industry and the public sector by identifying, sharing and building on best practice, including (but not limited to) mechanisms such as codes of conduct, standards and principles. The Centre will also address novel issues and make recommendations where best practice does not currently exist. This could include agreeing and articulating codes of conduct, standards and principles.
18. The Government also expects the Centre to engage internationally, including by identifying global opportunities to collaborate on cross-jurisdictional questions of data and AI governance, as well as by formulating governance measures that have international traction and credibility.
Transparency must be a key underpinning for algorithm accountability. There is a debate about whether that transparency should involve sharing the workings of the algorithm ‘black box’ with those affected by the algorithm and the individuals whose data have been used, or whether (because such information will not be widely understood) an ‘explanation’ is provided. Where disclosure of the inner workings of privately-developed public-service algorithms would present their developers with commercial or personal-data confidentiality issues, the Government and the Centre for Data Ethics & Innovation should explore with the industries involved the scope for using the proposed ‘data trust’ model to make that data available in suitably de-sensitised format. While we acknowledge the practical difficulties with sharing an ‘explanation’ in an understandable form, the Government’s default position should be that explanations of the way algorithms work should be published when the algorithms in question affect the rights and liberties of individuals. That will make it easier for the decisions produced by algorithms also to be explained. The Centre for Data Ethics & Innovation should examine how explanations for how algorithms work can be required to be of sufficient quality to allow a reasonable person to be able to challenge the ‘decision’ of the algorithm. Where algorithms might significantly adversely affect the public or their rights, we believe that the answer is a combination of explanation and as much transparency as possible. (Paragraph 66)
19. The consultation on the Centre recognises the challenges posed by the use of algorithms in terms of their explainability and transparency, and has proposed this as a key theme that the Centre will need to consider. The consultation invited views on priority projects for the Centre’s work programme, across this and other themes.
20. The Data Ethics Framework puts a strong emphasis on the need for Government decision-making to be transparent, accountable and interpretable. This is the case whether the algorithm is built in Government or procured. When designing or building predictive technology, the seven data ethics principles must be adhered to. The Data Ethics Workbook: working with suppliers contains questions to guide public servants when procuring software.
The ‘right to explanation’ is a key part of achieving accountability. We note that the Government has not gone beyond the GDPR’s non-binding provisions and that individuals are not currently able to formally challenge the results of all algorithm decisions or where appropriate to seek redress for the impacts of such decisions. The scope for such safeguards should be considered by the Centre for Data Ethics & Innovation and the ICO in the review of the operation of the GDPR that we advocate. (Paragraph 67)
21. The Centre will have an ongoing role in reviewing the adequacy of existing regulatory frameworks and identifying any further measures that may be needed to supplement and strengthen those frameworks.
22. The Information Commissioner will be keeping the operation of the Data Protection Act and the GDPR under review, and can raise any issues that may arise in this or other areas. In addition, the Data Protection Act, as with all new primary legislation, will be subject to post-legislative scrutiny three to five years after receiving Royal Assent. Any review of the Act will also cover the GDPR.
We welcome the announcement made in the AI Sector Deal to invest in research tackling the ethical implications around AI. The Government should liaise with the Centre for Data Ethics & Innovation and with UK Research & Innovation, to encourage sufficient UKRI-funded research to be undertaken on how algorithms can realise their potential benefits but also mitigate their risks, as well as the tools necessary to make them more widely accepted including tools to address bias and potential accountability and transparency measures. (Paragraph 69)
23. The Government will liaise with UKRI on the extent to which current research funding will address how the benefits and risks of algorithms can be managed, and on whether further funding should be allocated to this important area. In addition, the Engineering and Physical Sciences Research Council (EPSRC, part of UKRI) has developed a Framework for Responsible Innovation, which can provide guidance for the whole research community in this area.
The provisions of the General Data Protection Regulation will provide helpful protections for those affected by algorithms and those whose data are subsumed in algorithm development, although how effective those safeguards are in practice will have to be tested when they become operational later this spring. While there is, for example, some uncertainty about how some of its provisions will be interpreted, they do appear to offer important tools for regulators to insist on meaningful privacy protections and more explicit consent. The Regulation provides an opt-out for most ‘automated’ algorithm decisions, but there is a grey area that may leave individuals unprotected—where decisions might be indicated by an algorithm but are only superficially reviewed or adjusted by a ‘human in the loop’, particularly where that human intervention is little more than rubber-stamping the algorithm’s decision. While we welcome the inclusion in the Data Protection Bill of the requirement for data controllers to inform individuals when an automated algorithm produces a decision, it is unfortunate that it is restricted to decisions ‘required or authorised by law’. There is also a difficulty in individuals exercising their right to opt out of such decisions if they are unaware that they have been the subject of an entirely automated process in the first place. (Paragraph 90)
24. The Government is committed to reviewing the effectiveness of the safeguards in Section 14 of the Data Protection Act, should there be a deficiency in the application of the rules.
25. The Act has been designed to allow for secondary legislation to be used in the future to ensure the UK’s data protection legal framework can continue to adapt and align with changes in technology, society and the manner in which data can be processed by organisations, while providing individuals with rights and protection over the processing of their personal data.
The Centre for Data Ethics & Innovation and the ICO should keep the operation of the GDPR under review as far as it governs algorithms, and report to Government by May 2019 on areas where the UK’s data protection legislation might need further refinement. They should start with a more immediate review of the lessons of the Cambridge Analytica case. We welcome the amendments made to the Data Protection Bill which give the ICO the powers it sought in relation to its Information Notices, avoiding the delays it experienced in investigating the Cambridge Analytica case. The Government should also ensure that the ICO is adequately funded to exercise these new powers. The Government, along with the ICO and the Centre for Data Ethics & Innovation, should continue to monitor how terms and conditions rules under the GDPR are being applied to ensure that personal data is protected and that consumers are effectively informed, acknowledging that it is predominantly algorithms that use those data. (Paragraph 91)
26. The Government is committed to making sure that the Information Commissioner has sufficient resources to carry out her statutory duties, and the Government will continue to work closely with her to keep her enforcement powers under review. As part of the Information Commissioner’s general obligation to monitor and enforce the application of the GDPR, she may monitor how the terms and conditions rules are being applied.
27. The Centre will also have an ongoing role in scrutinising and evaluating the adequacy of overarching regulatory frameworks. This will include identifying any gaps, responding to shifting uses of data, and advising the Government on how to address them.
28. The Government welcomes the investigation of the independent regulator. It is essential that people are confident their personal data will be protected and used in an appropriate way. We have laws in place to ensure companies collect and use data appropriately. We are committed to supporting the Information Commissioner in her ongoing independent investigation into whether Facebook data was acquired and used illegally by Cambridge Analytica.
29. This is a live and ongoing independent investigation by the Commissioner, with a number of legal proceedings underway. We continue to expect all the organisations involved to cooperate fully with the ICO. The Government will of course review all the recommendations of the Information Commissioner in her interim report, and we look forward to seeing her full report once it is completed.
‘Data protection impact assessments’, required under the GDPR, will be an essential safeguard. The ICO and the Centre for Data Ethics & Innovation should encourage the publication of the assessments (in summary form if needed to avoid any commercial confidentiality issues). They should also consider whether the legislation provides sufficient powers to compel data controllers to prepare impact assessments, and to improve them if the ICO and the Centre believe the assessments to be inadequate. (Paragraph 92)
30. One of the key safeguards in the GDPR, which ensures that people’s fundamental rights, including the right to privacy, are protected, is the requirement to produce impact assessments. Where the processing of an individual’s personal data is high risk, organisations are required to produce an impact assessment. Where the ICO considers a Data Protection Impact Assessment to be inadequate, she may ask data controllers to improve it.
The Centre for Data Ethics & Innovation and the Information Commissioner should review the extent of algorithm oversight by each of the main sector-specific regulators, and use the results to guide those regulators to extend their work in this area as appropriate. The Information Commissioner should also make an assessment, on the back of that work, of whether it needs greater powers to perform its regulatory oversight role where sector regulators do not see this as a priority. (Paragraph 97)
31. The Centre will work closely with regulators to identify where the ethical and governance challenges posed by the use of data and AI, including algorithms, go beyond current practice and law, and to determine how and in what ways those challenges can best be addressed.
32. This is a complex area with shared responsibility between Government, regulators, industry and consumer groups. The Information Commissioner is independent of Government and reports directly to Parliament. The Information Commissioner has identified Artificial Intelligence, big data and machine learning as priority areas to be monitored, and she has made commitments in her ‘Information Rights Strategic Plan 2017–2021’ to help promote transparency in these areas.
UKRI is a new body that brings together the seven Research Councils, Innovate UK, and a new organisation, Research England.
Published: 10 September 2018