347.Over the course of our inquiry the Government has made a series of announcements regarding artificial intelligence, mostly in the Industrial Strategy and in response to the Hall-Pesenti Review. These policies are nascent, but welcome. They represent the Government’s commitment to AI, with a focus on the AI development sector in the UK. We believe these policies are a good base upon which to build, and for the Government to show strong domestic leadership on AI. Our recommendations in this chapter focus on the action the Government can take to maximise the potential of AI for the UK, and to minimise its risks.
348.Much of the Government’s recent policy focus has related to the announcement of a series of new AI-related bodies.
349.The Hall-Pesenti Review recommended that the “Government should work with industry and experts to establish a UK AI Council to help coordinate and grow AI in the UK.” The recommendation was based on the perceived need to facilitate engagement between industry, academia, Government and the public, as “AI in the UK will need to build trust and confidence in AI-enabled complex systems”.
350.In the Industrial Strategy, the Government announced that they were taking forward this recommendation, and “working with industry to establish an industry-led AI Council that can take a leadership role across sectors”. It was announced that the Council would be supported by a new Government Office for AI. The Industrial Strategy stated that both these bodies would:
351.It was not clear from the announcements how these new bodies would be constituted, or how they might function. We asked Matt Hancock MP who would be represented on the AI Council. He told us “there has to be small and medium-sized business representation but also users and developers”. Dr Pesenti told us the Council “should be a mix of industry, academia and Government”, and that it should not be too big, with “one or two representatives” each of large companies and of start-ups in AI. He said that membership should be “UK-centric”, although he “would put international companies on it. You could have people with an interest in the UK who are part of these companies”. He also told us that he envisaged the Council reporting annually on the progress made by the UK against agreed metrics, and playing a role in ensuring that the aims of policies are being delivered.
352.Matt Hancock MP told us the Government Office for AI would be a “joint unit between BEIS and DCMS to ensure that we are joined up at the central Government level”. The Minister said “we think it will be resourced by civil servants reporting directly to Ministers … It is the team that will manage this policy development and architecture”.
353.As we have previously discussed, the Industrial Strategy also announced a new Centre for Data Ethics and Innovation. This would be a “world-first advisory body” which would review the current “governance landscape” and advise the Government on “ethical, safe and innovative uses of data, including AI”. The Centre would engage with industry to establish data trusts, and there would be wide consultation as to the remit of the Centre in due course. The Prime Minister reaffirmed these ambitions in her speech to the World Economic Forum on 25 January 2018.
354.Matt Hancock MP told us that the Centre “will not be a regulatory body, but it will provide the leadership that will shape how artificial intelligence is used”. The Minister said the Government wanted “to ensure that the adoption of AI is accompanied, and in some cases led, by a body similarly set up not just with technical experts who know what can be done but with ethicists who understand what should be done so that the gap between those two questions is not omitted”. The Minister cited the Human Fertilisation and Embryology Authority as an example of how this can be an effective approach (see Box 12), and said “it is incredibly important to ensure that society moves at the same pace as the technology, because this technology moves very fast”.
The Human Fertilisation and Embryology Authority (HFEA) was established as a non-departmental public body by the Human Fertilisation and Embryology Act 1990, and came into being on 1 August 1991. The Human Fertilisation and Embryology Act 2008 updated the role of the HFEA. The 1990 Act was the result of a report into the issues surrounding in vitro fertilisation (IVF) by a committee chaired by Baroness Warnock.
Baroness Harding of Winscombe cited the work of the Warnock Committee as an example of where “we have had the moral and ethical debate as technology was developing and the possibilities the technology offered began to emerge”, and said that the work of the Committee “settled public opinion, set the framework for balanced regulation in the UK and enabled the UK to benefit—citizens and businesses—from the development of the technology”.
355.The remit of the Centre which the Minister outlined to us was extensive, and included:
356.The data trusts, the development of which the Centre could be charged with, are another recommendation of the Hall-Pesenti Review, which said:
“To facilitate the sharing of data between organisations holding data and organisations looking to use data to develop AI, Government and industry should deliver a programme to develop data trusts—proven and trusted frameworks and agreements—to ensure exchanges are secure and mutually beneficial”.
357.The Review envisaged the data trusts as being “a set of relationships underpinned by a repeatable framework, compliant with parties’ obligations, to share data in a fair, safe and equitable way”. Dr Pesenti told us that “one size definitely does not fit all when you share data” and the Review’s concept of data trusts was to establish a trusted way to facilitate the “sharing of data among multiple parties.” He told us that phase one of establishing the trusts would entail trialling agreements and working out what could work as a template. Phase two would involve looking to reach a point where: “You, as a person, do not give your data to an organisation. Even when you go to a website, when they collect that data, they do not put it in their organisation. They do not own that data, but they put it in a trust, where it is very visible and clear how that data will be used”.
358.This is an extensive, and potentially challenging, remit for any organisation, let alone a newly established one. The Minister himself warned us that he did not want to “encumber it with so much at the start that we are trying to boil the ocean”. In January 2018, the Government advertised for a Chair for an interim Centre, the role of which would be to “start work on key issues straight away”, with its findings to be “used to inform the final design and work programme of the permanent Centre”. The advertisement also noted that the Government intends to establish the Centre “on a statutory footing in due course”. The citing of the HFEA as an example, and the decision to place the Centre on a statutory basis, suggest that the Centre will not be dissimilar to a regulator. Care must be taken not to do this inadvertently. The plans for the Centre at this stage can, of course, be considered as a possible blueprint for a regulator if, and when, one is necessary.
359.At the same time as the announcement of the creation of the Centre for Data Ethics and Innovation, the AI Council, and the Government Office of AI, it was announced that the Alan Turing Institute would become the national research centre for AI, and would be supporting Turing Fellowships, doctoral studentships linked to the Institute and focused on AI-related issues. This was another of the Hall-Pesenti Review’s recommendations.
360.The Alan Turing Institute is a national centre for the study of data science, established in 2015. In March 2014 the Government announced they were dedicating £42 million to establish the Alan Turing Institute; the Institute also receives funding from its founding universities, donations and grants.
361.Matt Hancock MP explained to us the need for the national centre for AI alongside the other AI bodies being established. The Minister said that the Alan Turing Institute was “essentially the champion of artificial intelligence research” and that as such did “not want the university-led champion of AI research also to be the body that does the thinking on the ethics and framework, because … we want the Alan Turing Institute to be able to take on industrial sponsorship … and work directly for corporates in developing AI”.
362.We asked Dr Pesenti why the Alan Turing Institute was recommended as the national centre. Dr Pesenti told us it was because of the Institute’s name: “Turing is one of the most recognised names in AI and it is a great legacy to what has been done here. You cannot just go around it. You are not going to create a new institute that is Turing II”. Dr Pesenti also told us that “there is scepticism in the industry that the institute is where it should be, in terms of efficacy and delivery, so it is really important for an institute to step up”. We asked Lord Henley whether the Alan Turing Institute had the capacity to act in the role envisaged for it. Lord Henley said “where we do not really know what will happen, it is best to let a thousand things bloom so that the Government, as long as they remain nimble, can respond in the appropriate way”. Dr Barber, who is a Turing Fellow at the Alan Turing Institute, told us that the Institute had the appetite to focus more on AI, but that the “level of commitment would require financial support and it cannot easily be done with the current resources at the Institute”.
363.The UK is not the only country to have taken steps to establish national centres for artificial intelligence research. In 2017, the Canadian government’s ‘Pan-Canadian Artificial Intelligence Strategy’ looked to establish “nodes of scientific excellence in Canada’s three major centres for artificial intelligence in Edmonton, Montreal and Toronto-Waterloo”. The most prominent of these currently is the Vector Institute, which will be hosted by the University of Toronto and funded by the governments of Ontario and Canada (C$90 million) and the private sector (C$80 million). Geoff Hinton, an early pioneer of deep learning, will be the Vector Institute’s chief scientific advisor. There is much potential for the Alan Turing Institute to learn from the experiences of other national centres for AI, and we urge the Institute to develop a relationship with the new centres being established in Canada, and elsewhere, in order to meet the challenge with which it has been presented. The work of the German Research Center for Artificial Intelligence (DFKI), with which we spoke, offers another source of potential advice, as a much longer-standing centre (having been founded in 1988).
364.We welcome the enthusiasm, and speed, with which the Government has met the emergence of artificial intelligence. However, we agree with Nicola Perrin, who told us that “a proliferation of different bodies has been proposed” and that “there needs to be much greater clarity in the governance landscape”. She suggested that it “would be very helpful” if we could “give clarity over what is needed rather than suggesting yet another new body”. This was before the announcements of the bodies above were made in the 2017 Budget and Industrial Strategy. We have given consideration to this when making our own recommendations (see Appendix 9).
365.Many of our witnesses called for an AI-specific national strategy. Professor Hall, speaking to us prior to the publication of her review with Dr Pesenti, said “I hope when you see the review you will think there are the beginnings of that strategy there”. Dr Taylor wanted a “comprehensive strategy around how the country is going to benefit from its exploitation”. Sage said “without a clear AI strategy for social good the huge potential benefits of AI will not be felt across society at large”. It is clear to us that, with the Government so actively engaged with AI, and with the number of institutions that could now be involved in shaping and developing AI policy, a clear framework is required.
366.Artificial intelligence’s potential is an opportunity the Government is embracing. The Government’s recent enthusiasm and responsiveness to artificial intelligence in the UK is to be welcomed. We have proposed a number of recommendations for strengthening recent policy announcements, based on the extensive evidence we have received as a Committee. We encourage the Government to continue to be proactive in developing policies to harness the potential of AI and mitigate the risks. We do, however, urge the Government to ensure that its approach is focused and that it provides strategic leadership—there must be a clear roadmap for success. Policies must be produced in concert with one another, and with existing policy. Industry and the public must be better informed about the announcements, and sufficient detail provided from the outset.
367.The pace at which this technology will grow is unpredictable, and the policy initiatives have been many. To avoid policy being too reactive, and to prevent the new institutions from overlapping and conflicting with one another, we recommend that the Government Office for AI develop a national policy framework for AI, to be in lockstep with the Industrial Strategy, and to be overseen by the AI Council. Such a framework should include policies related to the recommendations of this report, and be accompanied, where appropriate, by a long-term commitment to such policies in order to realise the benefits. It must also be clear within Government who is responsible around the Cabinet table for the direction and ownership of this framework and the AI-related policies which fall within it.
368.The roles and remits of the new institutions must be clear, if they are to be a success. The public and the technology sector in the UK must know who to turn to for authoritative advice when it comes to the development and use of artificial intelligence. To ensure public confidence, it must also be clear who to turn to if there are any complaints about how AI has been used, above and beyond the matters relating to data use (which falls within the Information Commissioner’s remit).
369.We recommend that the Government Office for AI should act as the co-ordinator of the work between the Centre for Data Ethics and Innovation, the GovTech Catalyst team and the national research centre for artificial intelligence (the Alan Turing Institute), as well as the AI Council it is being established to support. It must also take heed of the work of the more established bodies in this area, such as the Information Commissioner’s Office and the Competition and Markets Authority. The work programmes of all the new AI-specific institutions should be subject to agreement with one another, on a quarterly basis, and should take into account the work taking place across Government in this area, as well as the recommendations from Parliament, regulators, and the work of the devolved assemblies and governments. The UK has a thriving AI ecosystem, and the Government Office for AI should seek to inform its work programme through wide public consultation as it develops Government policy with regard to artificial intelligence. The programme should be publicly available for scrutiny.
370.We welcome the new focus for the Alan Turing Institute as the national research centre for artificial intelligence. We want it to be able to fulfil this role, and believe it has the potential to do so. As such, the new focus must not simply be a matter of rebranding. The successful institutes in Canada and Germany, such as the Vector Institute and the German Research Center for Artificial Intelligence, offer valuable lessons as to how a national research centre should be operated.
371.The Government must ensure that the Alan Turing Institute’s funding and structure is sufficient for it to meet its new expanded remit as the UK’s national research centre for AI. In particular, the Institute’s current secondment-based staffing model should be assessed to ensure its suitability, and steps taken to staff the Institute appropriately to meet the demands now placed upon it.
372.The role the Centre for Data Ethics and Innovation is to play in reviewing the current governance landscape is a challenging one. Many of our witnesses commented on the desirability (or otherwise) of regulation, including whether blanket AI-specific regulation is required. Our witnesses also commented on whether a specific regulator was required, or whether the existing regulatory landscape was sufficient.
373.Witnesses fell into three broad camps: those who considered existing laws could do the job; those who thought that action was needed immediately; and those who proposed a more cautious and staged approach to regulation. Those who said that no new AI-specific regulation was required did so on the basis that existing laws and regulations could adequately regulate the development and use of AI. techUK, which represents over 950 companies, told us that “the concerns regarding the use of AI technologies … are focused around how data is being used in these systems” and that it was “important to remember that the current data protection legal framework is already sufficient”, and that the GDPR would further strengthen that framework. Further, techUK advocated a cautious approach to other areas where regulation might be required:
“Where there are other concerns about how AI is developing these need to be fully identified, understood and discussed before determining whether regulation or legislation has a role to play”.
374.The Law Society of England and Wales told us “that there is no obvious reason why the growth of AI and the use of data would require further legislation or regulation”. They added: “AI is still relatively in its infancy and it would be advisable to wait for its growth and development to better understand its forms, the possible consequences of its use, and whether there are any genuine regulatory gaps”. Professor Robert Fisher et al said “most AI is embedded in products and systems, which are already largely regulated and subject to liability legislation. It is therefore not obvious that widespread new legislation is needed”. Professor Bradley Love, a Turing Fellow at the Alan Turing Institute, agreed with this, and said “existing laws and regulations may adequately cover AI” and that “we already have laws that cover faulty products, as well as the release of computer code (e.g. viruses) that are intended to harm the general public”.
375.Many others agreed, and some referred to this as a ‘technology-neutral’ approach: “regulation needs to be independent of technology change and focused on how risk is managed, safety assured and how the outcomes of people using services are fulfilled”. The Online Dating Association said “the pace of change can make the regulation of technologies, such as AI, very difficult. However, the outputs can be more clearly covered”. They cited the UK’s strong data protection regime, and that “consumer law provides protections around contracts, unfair behaviours, advertising, payments and other critical areas” as examples of an outcomes-focused approach.
376.Those arguing for a more cautious approach told us that poorly thought through regulation could have unintended consequences, including the stifling of development, innovation and competitiveness. Professor Love told us that there was a risk that “AI specific regulation could reduce innovation and competitiveness for UK industry” as the competitive advantage gained by using artificial intelligence might be outweighed by regulatory burdens. Dr Reger said:
“Governments in general—the UK Government might be an exception, and I hope they are—like to regulate. AI technology does not need regulation because it is a competitive race and the faster the United Kingdom progresses in that race, the better it is for the country”.
377.Kemp Little LLP said that “the pace of change in technology means that overly prescriptive or specific legislation struggles to keep pace and can almost be out of date by the time it is enacted”, and that lessons from regulating previous technologies suggested that a “strict and detailed legal requirements approach is unhelpful”. Many other witnesses expressed similar concerns at the possible detrimental effect of premature regulation.
378.Baker McKenzie, an international law firm, recommended a “proactive, principles-led intervention, based on a sound understanding of the issues and technology, careful consideration and planning” rather than reactive regulation, put in place after something goes wrong. They recommended that “the right regulatory approach … is staged and considered” and the Government should “facilitate ethical (as opposed to legal) frameworks for the development of AI technologies” to support self-regulation in industry.
379.There were those who argued for immediate action and regulation, mostly in order to avoid unintended consequences. Dr Morley and Dr Lawrence said “there is an urgent need for the Government to produce policies and regulations that address the emergence of AI and the involvement of corporations in their creation and operation”. The Foundation for Responsible Robotics said “we need to act now to prevent the perpetuation of injustice” and that at present “there are no guarantees of unbiased performance” for algorithms. Bristows LLP argued that regulation could help with the adoption of the technology, and said:
“It has long been considered that public trust in new technologies is directly affected by the amount of regulation that is put in place and so industries such as the aviation industry are often cited as examples where robust regulation increases public trust in an otherwise inherently risky process”.
380.Few witnesses, however, gave any clear sense to us as to what specific regulation should be considered.
381.We heard a number of persuasive arguments against AI-specific regulation. Professor Reed told us that he “would not have any one-size-fits-all answer” to the question about AI regulation and that it is “inappropriate and impossible to attempt to produce a regulatory regime which applies to all AIs”. Dr Jerry Fishenden, Visiting Professor at the University of Surrey, said “if only ‘AI’ software is regulated, some industries, companies, suppliers etc. may decide to stop labelling their systems ‘AI’ to avoid such regulation—another disadvantage of such an arbitrary distinction”. Professor Wooldridge said “AI-specific legislation is not the right way to go. I would look at our existing data protection legislation and ask what AI adds into this mix that we need to start thinking about”.
382.We were told by the BBC that “the rapid development of AI requires lawmakers and regulators to keep any AI framework under review and up-to-date”. Some of our witnesses called for a new, specific artificial intelligence regulator to do this. The Observatory for Responsible Research and Innovation in ICT (ORBIT) said “it will be difficult to regulate AI via straightforward legislation, given the volatile and dynamic nature of this technology” and that it “seems reasonable to establish an AI regulator that oversees the technology, contributes to the development of standards and best practice and is empowered to enforce such standards”. Big Brother Watch agreed, and called for “independent oversight of AI in the form of a regulatory or supervisory body to provide legal and technical scrutiny of AI technology and algorithms”.
383.Other witnesses disagreed, arguing that existing regulators were sufficient. Javier Ruiz Diaz, Policy Director, Open Rights Group, said “a new regulator may end up overlapping with many other regulators”, and that instead “we need to get many regulators to be AI informed and to be able to incorporate AI into their work”. Olivier Thereaux said “it feels premature to have a regulator for AI. It is probably more useful right now to recognise that AI is going to be used across many sectors, many of which already have regulators”. Andrew de Rozairo, Kriti Sharma, and James Luke, Chief Technology Officer for the Public Sector and Master Inventor, IBM, all agreed that no new regulator was required, as AI was so intertwined with other business practices that existing regulators would suffice. The Government said “AI will create new challenges for regulation in the future, and it is important for all sector regulators to be part of the adaption of systems where required”.
384.Of the existing regulators, AI presents the most pressing issues to the Information Commissioner’s Office (ICO), given the current importance of data to the machine learning techniques used in AI, and the Data Protection Bill’s proposed changes to the UK’s data protection regime. The ICO upholds information rights in the public interest, promoting openness by public bodies and data privacy for individuals. Elizabeth Denham told us that the GDPR, and the resulting Data Protection Bill:
“ … gives us a huge step forward in requiring companies and public bodies to think and to focus on what they are doing with machines, machine learning and artificial intelligence, and to consider the rights of individuals, document that and stand ready to account for the decisions that they have made. The Information Commissioner has the ability to look at those decisions. Individuals have the right to challenge those decisions. We have taken a couple of giant steps forward”.
385.There is no doubt in our mind that, as the development and use of artificial intelligence grows in the UK, the ICO will have a pivotal role to play in the regulation of the data underpinning such growth. The Information Commissioner said the ICO’s involvement in the GDPR and changes to data protection law, as well as its involvement in giving (and developing) advice, felt “a little like changing a tyre on a moving car”. It is essential that the ICO (and other regulators) have the capacity, and support, to fulfil their roles sufficiently. This was recognised by Matt Hancock, who told us that “the ICO is an incredibly important part of getting the new Data Protection Act into place and supporting companies through the changes, and we need to make sure that the Information Commissioner has all the support that she needs to do that”. In the Autumn Budget 2017, the Government announced the establishment of the Regulators’ Pioneer Fund “to help unlock the potential of emerging technologies”, which would have £10 million to help regulators develop innovative approaches for getting products and services to market.
386.Blanket AI-specific regulation, at this stage, would be inappropriate. We believe that existing sector-specific regulators are best placed to consider the impact on their sectors of any subsequent regulation which may be needed. We welcome the fact that the Data Protection Bill and the GDPR appear to address many of the concerns of our witnesses regarding the handling of personal data, which is key to the development of AI. The Government Office for AI, with the Centre for Data Ethics and Innovation, needs to identify the gaps, if any, where existing regulation may not be adequate. The Government Office for AI must also ensure that the existing regulators’ expertise is utilised in informing any potential regulation that may be required in the future, and we welcome the introduction of the Regulators’ Pioneer Fund.
387.The additional burden this could place on existing regulators may be substantial. We recommend that the National Audit Office’s advice is sought to ensure that existing regulators, in particular the Information Commissioner’s Office, are adequately and sustainably resourced.
388.Dr Pesenti was clear that the Hall-Pesenti Review was intended to be a first step, and that there were clear metrics in the recommendations of the Review for their implementation to be monitored. Dr Andrew Blick said that AI could raise serious questions over the accountability of Ministers and the Government more generally, and require new constitutional models and adjustments to ensure that decisions are properly scrutinised. He suggested that Parliament might consider establishing a committee for artificial intelligence oversight, akin to the Commons Public Accounts Committee. This would be entrusted with “monitoring across government whether artificial intelligence was operating in accordance with the policy objectives it was directed towards, and was doing so effectively, and in accordance with prescribed norms”. Whilst we do not intend to recommend the establishment of a permanent Parliamentary committee, we do agree with the sentiment that artificial intelligence policy must be scrutinised and the Ministers with responsibility held accountable.
389.One of the central lessons we learnt from historic Government policy on artificial intelligence in the United Kingdom was that a lack of clear short- and long-term objective setting for policies in this field can lead to the potential benefits not being realised. Furthermore, a lack of evaluation of these objectives, and of reassessment as the technology grows and develops, could prevent the policies from reacting to the uncertain nature of AI.
390.We have made it clear in this report that the growth in the development and use of artificial intelligence offers a significant opportunity for the United Kingdom. There are benefits for society as a whole, the chance to be a world leader in the development of AI, and the potential to shape this emerging technology so that the possible risks are avoided. However, this opportunity will be missed if the Government’s commitment to its policies is not sincere.
391.It is essential that policies towards artificial intelligence are suitable for the rapidly changing environment which they are meant to support. For the UK to be able to realise the benefits of AI, the Government’s policies, underpinned by a co-ordinated approach, must be closely monitored and must react to feedback from academia and industry where appropriate. Policies should be benchmarked and tracked against appropriate international comparators. The Government Office for AI has a clear role to play here, and we recommend that progress against the recommendations of this report, the Government’s AI-specific policies within the Industrial Strategy, and other related policies, be reported on an annual basis to Parliament.
392.Throughout this report we have discussed the relative strengths and weaknesses of AI development in the UK, but questions still remain regarding Britain’s distinctive role in the wider world of AI. The Government has stated in its recent Industrial Strategy White Paper that it intends for the UK to be “at the forefront of the AI and data revolution”. What this means in practice is open to interpretation.
393.Some of our respondents appeared to take this at face value, and made comparisons with the United States and China, especially in terms of funding. For example, Nvidia drew attention to the large investments in AI being made in these countries, including the $5 billion investment announced by the Tianjin state government in China, and the estimated $20–30 billion investments in AI research from Baidu and Google. Balderton Capital emphasised the “many billions of funding” being invested in AI and robotics in China and the US, and argued that the UK Government needed to invest more in academic research to ensure that the UK “remains a global leader in the field”. Microsoft also highlighted the disparities in computer science education, noting that “in a year when China and India each produced 300,000 computer science graduates, the UK produced just 7,000”. Ocado commented favourably on China’s relative lack of regulation, observing that “less legislation around the application of technology is fuelling faster experimentation and innovation, including when it comes to the use of data and AI”, and argued that the UK needed to be careful not to over-regulate by comparison.
394.However, it was more commonly suggested that it was not plausible to expect the UK to be able to compete, at least in terms of investment, with the US and China. Dr Pesenti stated this most clearly when he told us that “the UK, because it is smaller, is not going to be the best at everything in AI”, but believed there could still be a unique and important role for the UK on the AI world stage, if it was to be “nimble, clever and move quickly”. Indeed, we were greatly impressed by the focus and clarity of Canada’s and Germany’s national strategies when we spoke with Dr Alan Bernstein, President and CEO of CIFAR, and Professor Wolfgang Wahlster, CEO and Scientific Director of the DFKI. Dr Bernstein focused on the Pan-Canadian AI Strategy’s bid to attract talented AI developers and researchers back to Canada from the United States, while Professor Wahlster emphasised that Germany was focusing on AI for manufacturing. We also received evidence from the governments of Japan and the Republic of Korea, informing us of their focus on AI in areas such as manufacturing and robotics, and consumer goods.
395.There are encouraging signs that the UK Government is beginning to think in these terms, and is starting to focus on the concept of ethical AI development and application. Matt Hancock MP was clear that there are “gaps across the world that no one has yet filled”, and that it was particularly important to ensure that “we have the structures in place to harness the potential of this technology to improve the lot of humanity”. The Industrial Strategy reaffirms this stance, stating that “we will lead the world in safe and ethical use of data and artificial intelligence giving confidence and clarity to citizens and business”. In January 2018, the Prime Minister said at the World Economic Forum in Davos that she wanted to establish “the rules and standards that can make the most of artificial intelligence in a responsible way”, and emphasised that the Centre for Data Ethics and Innovation would work with international partners on this project, and that the UK would be joining the World Economic Forum’s new council on artificial intelligence, which aims to help shape global governance in the area.
396.Indeed, we are also aware of the growing international interest in the governance of AI in recent years—witnesses mentioned the 2016 IEEE AI & Ethics Summit, and the 2017 AI for Good Summit, for example. The European Parliament’s interest in this area was also frequently mentioned, and many witnesses were clear that the UK should continue to work with the EU on this area even after Brexit. Thomas Cheney, a researcher in space law at Sunderland University, even called for “a global coordination effort via the United Nations, as was done at the beginning of the Space Age”. ORBIT brought these points together when they stated:
“The development of new technologies is not a national matter. The leading tech companies are international players that can easily change jurisdiction. Any intervention by the UK with the aim to render AI beneficial must seek close international cooperation, in the first instance with the EU”.
They further mentioned the Council of Europe’s proposals for close cooperation between themselves, the EU and UNESCO to develop a harmonised legal framework and regulatory mechanisms at the international level. We also received direct evidence of the appetite for greater international co-operation on AI matters. For example, the government of China told us of its hope “to promote closer communication and co-operation” between the UK and China on AI.
397.However, there are also countervailing trends which are less encouraging. When we visited the Leverhulme Centre for the Future of Intelligence, their researchers warned of a potential ‘AI arms race’ emerging, as various countries seek to develop more sophisticated AI, and potentially disregard concerns around safety and ethics in the process. Last year, Russian President Vladimir Putin’s speech on AI attracted attention worldwide, when he observed that alongside “colossal opportunities” it also brought “threats that are difficult to predict”, and that “whoever becomes the leader in this sphere will become the ruler of the world”. However, he also emphasised that, should Russia become a leader, “we will share this know-how with the entire world, the same way we share our nuclear technologies today”. A number of witnesses suggested to us that China’s relative lack of interest in moderating the use of data by the state and private sector is giving it a competitive advantage relative to more privacy-conscious western nations.
398.On the basis of the evidence we have received, we are convinced that vague statements about the UK ‘leading’ in AI are unrealistic and unhelpful, especially given the vast scale of investment in AI by both the USA and China. By contrast, countries such as Germany and Canada are developing cohesive strategies which take account of their circumstances and seek to play to their strengths as a nation. The UK can either choose to actively define a realistic role for itself with respect to AI, or be relegated to the role of a passive observer.
399.We believe it is very much in the UK’s interest to take a lead in steering the development and application of AI in a more co-operative direction, and away from this riskier and ultimately less beneficial vision of a global ‘arms race’. The kind of AI-powered future we end up with will ultimately be determined by many countries, whether by collaboration or competition, and whatever the UK decides for itself will be for naught if the rest of the world moves in a different direction. It is therefore imperative that the Government, and its many internationally-respected institutions, facilitate this global discussion and put forward its own practical ideas for the ethical development and use of AI.
400.We should take advantage of the demand for considered and joined-up ethical principles and frameworks for the development and use of AI in democratic societies. The United States is unlikely to take this role. Not only does the current administration appear relatively uninterested in AI, having taken a cautious stance on international leadership more generally, but the overwhelming dominance of a few powerful technology companies in the development of AI makes it less likely that a truly democratic debate of equals, encompassing the state, the private sector, universities and the public, will emerge there. Similarly, China shows few signs of wishing to limit the purview of the state or state-supported companies in utilising AI for alarmingly intrusive purposes.
401.The UK therefore has a unique opportunity to forge a distinctive role for itself as a pioneer in ethical AI, which would play to our particular blend of national assets. Alongside a very strong tradition of computer science research in our universities, we also have world-leading humanities departments, who can provide invaluable insight and context regarding the ethical and societal implications of AI. Furthermore, our successful AI start-up sector is enhanced by its close proximity to associated areas of business, most notably our thriving fintech sector, which can serve as practical testbeds for ethical AI development. We have some of the world’s foremost law firms, legal experts and civic institutions, all of which can help enshrine the values and principles we arrive at in robust legal and civic mechanisms where necessary. And finally, we have world-respected institutions such as the BBC, alongside a long history of international diplomacy, engagement and leadership, which will be necessary if we are to help convene, guide and shape the international debates which need to happen in this area.
402.The transformative potential of artificial intelligence for society, at home and abroad, requires active engagement by one and all. The Government has an opportunity at this point in history to shape the development and deployment of artificial intelligence to the benefit of everyone. The UK’s strengths in law, research, financial services and civic institutions mean it is well placed to help shape the ethical development of artificial intelligence and to do so on the global stage. To be able to demonstrate such influence internationally, the Government must ensure that it is doing everything it can for the UK to maximise the potential of AI for everyone in the country.
403.We recommend that the Government convene a global summit in London by the end of 2019, in close conjunction with all interested nations and governments, industry (large and small), academia, and civil society, on as equal a footing as possible. The purpose of the global summit should be to develop a common framework for the ethical development and deployment of artificial intelligence systems. Such a framework should be aligned with existing international governance structures.
404.While the precise impact of AI across society, politics and the economy remains uncertain, it is generally not disputed that it will have some effect on all three. If these changes are poorly handled, public confidence in artificial intelligence could be undermined. The public are entitled to be reassured that AI will be used in their interests, and will not be used to exploit or manipulate them, and many organisations and companies are eager to confirm these hopes and assuage these concerns.
405.We heard from a number of companies who are developing and publishing their own principles for the ethical development and use of AI, as well as about a number of other ethics-orientated initiatives. In January 2017 the chief executive officer (CEO) of IBM, Ginni Rometty, proposed three core principles for designing and developing AI at IBM, focused on ensuring: that AI is used to augment, rather than replace, human labour; that AI systems are designed to be transparent; and that workers and citizens are properly trained and educated in the use of AI products and services. Sage, who have been developing AI-powered accounting software, announced ‘five core principles’ for developing AI for business in June 2017, which focused on diversity, transparency, accessibility, accountability and augmenting rather than replacing human labour. SAP, Nvidia and others told us of similar initiatives. DeepMind has taken this one stage further, recently launching their ‘Ethics & Society’ unit, which we were told would help them “explore and understand the real world impacts of AI”, and aims to “help technologists put ethics into practice, and to help society anticipate and direct the impact of AI so that it works for the benefit of all”.
406.The Market Research Society told us that the “use of ethics boards and ethics reviews committees and processes within a self-regulatory framework will be important tools”. Eileen Burbidge also saw clear commercial benefits in this approach for the companies concerned.
407.There are a number of organisations now attempting to devise similar ethical initiatives, often at an international level. Over the course of 2017 the Partnership on AI, an international, pan-industry organisation which aims to bring together researchers, academics, businesses and policymakers, started to take shape. Alongside a number of companies, including Google, Facebook, IBM, Microsoft and Amazon, the Partnership includes a number of university departments and non-governmental organisations (NGOs). It has announced a range of initiatives, including the establishment of a number of working groups to research and formulate best practices, a fellowship programme aimed at assisting NGOs, and a series of AI Grand Challenges aimed at using AI to address long-term societal issues.
408.The Institute of Electrical and Electronics Engineers (IEEE) also told us of their efforts to develop a set of internationally accepted ethical principles, with their design manual, Ethically Aligned Design, and their IEEE P7000 series of ethically oriented standards. Closer to home, the British Standards Institution also told us of their similar efforts, which led to the publication of their Guide to the ethical design and application of robots and robotic systems (BS 8611:2016). Most recently, Nesta produced a draft outline of ten principles for the public sector use of algorithmic decision making. Finally, the Nuffield Foundation have announced the creation of an independent Convention on Data Ethics and Artificial Intelligence, bringing together various experts to examine ethical issues on a rolling basis, which they intend to convene by the end of 2018.
409.While these efforts are to be encouraged, it was stressed to us that there is still a lack of co-ordination, especially at the national level. Andrew de Rozairo told us that, in his view, there was a need for a “multi-stakeholder dialogue” on a national basis in the UK, and pointed to SAP’s engagement with these approaches in France and Germany. Likewise, Kriti Sharma said that while she had taken Sage’s ethical principles to 1,500 AI developers in London, she believed that more needed to be done to ensure that industry shared and collaborated on these principles “because this will not work if it is just Sage, SAP or IBM doing it in silos alone”. She believed there was a role for Government to help facilitate this collaboration, and help “identify that ultimate checklist and then share it with the industry bodies and have the executives and boards consider that as the things they need to care about”.
410.There are also questions over how companies will translate these principles into practice, and the degree of accountability which companies and organisations will face if they violate them. For example, when we asked Dr Timothy Lanfear how Nvidia ensured their own workforce was aware of their ethical principles, and how they ensured compliance, he admitted that he struggled to answer the question, because “as a technologist it is not my core thinking”. It is unlikely Dr Lanfear is alone in this, and mechanisms must be found to ensure the current trend for ethical principles does not simply translate into a meaningless box-ticking exercise. Dr Lynch was altogether more sceptical, arguing “there is no ability to create voluntary measures in this area, because there is no agreement and precedent for what is and is not acceptable—there are many open questions and these will be taken in different ways by different people”. He believed that only legal frameworks and appropriate regulation would suffice. On the other hand, Eileen Burbidge took the view that proper ethical governance made good business sense:
“AI companies or the companies employing AI technology, to the extent they demonstrate they have ethics boards, review their policies and understand their principles, will be the ones to attract the clients, the customers, the partners and the consumers more readily than others that do not or are not as transparent about that”.
411.The pace at which the Government has approached these issues has been varied. In some respects, it has been ahead of the curve, and in May 2016 Matt Hancock MP announced the first Data Science Ethical Framework for public consultation, which outlined six ‘key principles’ intended to guide the work of public sector data scientists:
412.The Framework was developed with advice from the ICO, who confirmed that it could form the basis for Data Protection Impact Assessments (see Box 13), as would be required by the EU’s General Data Protection Regulation (GDPR). However, since its announcement, its development has lacked a sense of urgency, with very little reference to it in the intervening years. Dr Jerry Fishenden, an expert on digital government, while welcoming the original draft, noted in July 2017 that “since the launch it’s unclear what the status of the framework is. There are no indications of any consultation taking place, or resulting improvements, on the website … the execution since its launch lacks credibility”.
Data protection impact assessments (DPIAs) are a new requirement of the GDPR (and likely the Data Protection Bill). DPIAs are intended as a tool to help organisations identify the most effective way to comply with their data protection obligations and meet individuals’ expectations of privacy. According to the ICO, one must carry out a DPIA when using new technologies and the processing is likely to result in a high risk to the rights and freedoms of individuals. This means organisations and companies which choose to deploy AI systems in the near future will likely have to produce them. A DPIA should include: a description of the processing operations and their purposes; an assessment of the necessity and proportionality of the processing; an assessment of the risks to the rights and freedoms of individuals; and the measures envisaged to address those risks.
413.More recently a number of announcements have been made in this area. In November 2017 the Government Digital Service announced they were updating the Framework, based on feedback from the British Academy, the Royal Society and Essex County Council. The Government also announced the creation of the Centre for Data Ethics and Innovation, described as aiming “to enable and ensure safe, ethical and ground-breaking innovation in AI and data-driven technologies”. This “world-first advisory body” will work with “Government, regulators and industry to lay the foundations for AI adoption”. Finally, in January 2018, the Government launched a new Digital Charter, described as a “rolling programme of work to agree norms and rules for the online world and put them into practice”, which would aim to ensure that people have “the same rights and expect the same behaviour online as [they] do offline”. ‘Data and artificial intelligence ethics and innovation’ will constitute one of the seven elements of this work programme.
414.From all we have seen, we believe this area is not lacking good will, but there is a lack of awareness and co-ordination, which is where Government involvement could help. It is also clear to us that this is not only an ethical matter, but also good business sense. The evidence we have received suggests that some individuals, organisations and public services are reluctant to share data with companies because they do not know what is acceptable—a particular concern after the Royal Free London NHS Foundation Trust’s deal with DeepMind (see Box 8).
415.A core set of widely recognised ethical principles, which companies and organisations deploying AI sign up to in the form of a code, could be useful in this context. The Digital Charter may yet turn into this, although given the slow development of the Government’s Data Science Ethical Framework, there are reasons to be sceptical. Nevertheless, whether it is positioned within the broader framework of the Digital Charter or independent of it, there is a need for clear and understandable guidelines governing the applications to which AI may be put, between businesses, public organisations and individual consumers.
416.These guidelines should be developed with substantive input from the Centre for Data Ethics and Innovation, the AI Council and the Alan Turing Institute. They should include both organisation-level considerations, as well as questions and checklists for those designing, developing and utilising AI at an operational level, alongside concrete examples of how this should work in practice.
417.As a starting point in this process, we suggest five overarching principles for an AI Code:
(1)Artificial intelligence should be developed for the common good and benefit of humanity.
(2)Artificial intelligence should operate on principles of intelligibility and fairness.
(3)Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
(4)All citizens have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
(5)The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.
418.Furthermore, while we do not see this having a statutory basis, at least initially, consumers in particular should be able to trust that someone external to the companies and organisations which adopt these principles has some measure of oversight regarding their adherence to them. An appropriate organisation, such as the Centre for Data Ethics and Innovation, could be assigned to oversee the adherence of signed-up organisations and companies to this code, and offer advice on how to improve where necessary. In more extreme cases, they may even consider withdrawing this endorsement. To organisations and businesses, it would provide a clear, consistent and interoperable framework for their activities, while for citizens and consumers, it would provide a recognised and trustworthy brand, reassuring them across the multiple domains of their life that they were getting a fair deal from AI.
419.Many organisations are preparing their own ethical codes of conduct for the use of AI. This work is to be commended, but it is clear that there is a lack of wider awareness and co-ordination, where the Government could help. Consistent and widely-recognised ethical guidance, which companies and organisations deploying AI could sign up to, would be a welcome development.
420.We recommend that a cross-sector ethical code of conduct, or ‘AI code’, suitable for implementation across public and private sector organisations which are developing or adopting AI, be drawn up and promoted by the Centre for Data Ethics and Innovation, with input from the AI Council and the Alan Turing Institute, with a degree of urgency. In some cases, sector-specific variations will need to be created, using similar language and branding. Such a code should include the need to have considered the establishment of ethical advisory boards in companies or organisations which are developing, or using, AI in their work. In time, the AI code could provide the basis for statutory regulation, if and when this is determined to be necessary.
470 Appendix 9 to this report shows which of our recommendations is relevant to which newly established AI-related body.
471 , p 5
472 , p 39
474 (Matt Hancock MP)
475 (Dr Jérôme Pesenti)
479 (Matt Hancock MP)
481 , p 40
483 Prime Minister Theresa May, Speech at the World Economic Forum in Davos, Switzerland, 25 January 2018: [accessed 1 March 2018]
484 (Matt Hancock MP)
487 Written evidence from Baroness Harding of Winscombe ()
488 (Matt Hancock MP)
489 , p 4
491 (Dr Jérôme Pesenti)
492 (Matt Hancock MP)
493 Centre for Public Appointments, ‘Interim centre for Data Ethics and Innovation: Chair’ (25 January 2018): [accessed 1 February 2018]
495 (Matt Hancock MP)
496 (Dr Jérôme Pesenti)
498 (Lord Henley)
499 (Dr David Barber)
500 CIFAR, ‘Pan-Canadian Artificial Intelligence Strategy Overview’ (30 March 2017): [accessed 29 January 2018]
501 (Nicola Perrin)
503 (Professor Dame Wendy Hall)
504 (Dr Mark Taylor)
505 Written evidence from Sage ()
506 See written evidence from The Alan Turing Institute (); Professor Robert Fisher , Professor Alan Bundy, Professor Simon King, Professor David Robertson, Dr Michael Rovatsos, Professor Austin Tate and Professor Chris Williams (); Electronic Frontier Foundation (); techUK (); Arm (); Deep Science Ventures () and Professor Chris Reed ()
507 Written evidence from techUK ()
509 Written evidence from the Law Society of England and Wales ()
511 Written evidence from Professor Robert Fisher, Professor Alan Bundy, Professor Simon King, Professor David Robertson, Dr Michael Rovatsos, Professor Austin Tate and Professor Chris Williams ()
512 Written evidence from The Alan Turing Institute ()
513 Written evidence from SCAMPI Research Consortium, City, University of London ()
514 Written evidence from Online Dating Association ()
516 Written evidence from The Alan Turing Institute ()
517 (Dr Joseph Reger)
518 Written evidence from Kemp Little LLP ()
519 See written evidence from Electronic Frontier Foundation (); Professor Chris Reed () and Professor Robert Fisher, Professor Alan Bundy, Professor Simon King, Professor David Robertson, Dr Michael Rovatsos, Professor Austin Tate and Professor Chris Williams ()
520 Written evidence from Baker McKenzie ()
522 Written evidence from Dr Sarah Morley and Dr David Lawrence ()
523 Written evidence from the Foundation for Responsible Robotics ()
524 Written evidence from Bristows LLP ()
525 (Professor Chris Reed) and written evidence from Professor Chris Reed ()
526 Written evidence from Dr Jerry Fishenden ()
527 (Professor Michael Wooldridge)
528 Written evidence from the BBC ()
529 Written evidence from ORBIT The Observatory for Responsible Research and Innovation in ICT ()
530 Written evidence from Big Brother Watch ()
531 (Javier Ruiz Diaz)
532 (Olivier Thereaux)
533 (Andrew de Rozairo, Kriti Sharma, James Luke)
534 Written evidence from HM Government ()
535 (Elizabeth Denham)
536 (Elizabeth Denham)
537 (Matt Hancock MP)
538 , p 49
539 (Dr Jérôme Pesenti)
540 Written evidence from Dr Andrew Blick ()
541 , p 36
542 Written evidence from NVIDIA ()
543 Written evidence from Balderton Capital (UK) LLP ()
544 Written evidence from Microsoft ()
545 Written evidence from Ocado Group plc ()
546 Written evidence from Will Crosthwait (); The Economic Singularity Supper Club () and Department of Computer Science University of Liverpool ()
547 (Dr Jérôme Pesenti)
548 (Dr Alan Bernstein) and (Professor Wolfgang Wahlster)
549 Written evidence from Government of Japan () and Government of the Republic of Korea ()
550 (Matt Hancock MP)
551 , p 40
552 Prime Minister Theresa May, Speech at Davos 2018, 25 January 2018: [accessed 1 February 2018]
553 Written evidence from IEEE European Public Policy Initiative Working Group on ICT () and Amnesty International ()
554 Written evidence from Mr Thomas Cheney ()
555 Written evidence from ORBIT The Observatory for Responsible Research and Innovation in ICT ()
556 Written evidence from the Government of China ()
557 See Appendix 5.
558 Written evidence from Information Systems Audit and Control Association (ISACA) London Chapter ()
559 Written evidence from Simul Systems Ltd (); Ocado Group plc () and Will Crosthwait ()
560 Financial technology, or fintech, refers to any technology used to support or enable banking and financial services.
561 Alison DeNisco Rayome, ‘3 guiding principles for ethical AI, from IBM CEO Ginni Rometty’, TechRepublic (17 January 2017): [accessed 5 February 2018]
562 Written evidence from Sage ()
563 Written evidence from DeepMind ()
564 Written evidence from the Market Research Society ()
565 (Eileen Burbidge)
566 ‘Partnership on AI strengthens its network of partners and announces first initiatives’, Partnership on AI (16 May 2017): [accessed 5 February 2018]
567 Written evidence from IEEE European Public Policy Initiative Working Group on ICT ()
568 Written evidence from British Standards Institution ()
569 Eddie Copeland, ‘10 principles for public sector use of algorithmic decision making’, Nesta blog (21 February 2018): [accessed 1 March 2018]
570 Written evidence from The Alan Turing Institute ()
571 (Andrew de Rozairo)
572 (Kriti Sharma)
574 (Dr Timothy Lanfear)
575 Supplementary written evidence from Dr Mike Lynch ()
576 (Eileen Burbidge)
577 Cabinet Office, Data Science Ethical Framework (19 May 2016): [accessed 31 January 2018]
579 Dr Jerry Fishenden, ‘Improving data science ethics’, New tech observations from the UK (5 July 2017): [accessed 31 January 2018]
580 Sarah Gates, ‘Updating the Data Science Ethical Framework’, Government Digital Service blog (27 November 2017): [accessed 31 January 2018]
581 , p 4
583 Department for Digital, Culture, Media and Sport, Digital Charter (25 January 2018): [accessed 31 January 2018]
585 Written evidence from medConfidential (); Future Advocacy (); Doteveryone () and Royal Statistical Society ()