AI in the UK: ready, willing and able? Contents

Summary of conclusions and recommendations

Engaging with artificial intelligence

General understanding, engagement and public narratives

1.The media provides extensive and important coverage of artificial intelligence, which can occasionally be sensationalist. It is not for the Government or other public organisations to intervene directly in how AI is reported on, nor to attempt to promote an entirely positive view among the general public of its possible implications or impact. Instead, the Government must understand the need to build public trust and confidence in the use of artificial intelligence, and to explain the risks. (Paragraph 50)

Everyday engagement with AI

2.Artificial intelligence is a growing part of many people’s lives and businesses. It is important that members of the public are aware of how and when artificial intelligence is being used to make decisions about them, and what implications this will have for them personally. This clarity, and greater digital understanding, will help the public to experience the advantages of AI, and to opt out of using such products should they have concerns. (Paragraph 58)

3.Industry should take the lead in establishing voluntary mechanisms for informing the public when artificial intelligence is being used for significant or sensitive decisions in relation to consumers. This industry-led approach should learn lessons from the largely ineffective AdChoices scheme. The soon-to-be established AI Council, the proposed industry body for AI, should consider how best to develop and introduce these mechanisms. (Paragraph 59)

Designing artificial intelligence

Access to, and control of, data

4.The Government plans to adopt the Hall-Pesenti Review recommendation that ‘data trusts’ be established to facilitate the ethical sharing of data between organisations. However, under the current proposals, individuals who have their personal data contained within these trusts would have no means by which they could make their views heard, or shape the decisions of these trusts. We therefore recommend that as data trusts are developed under the guidance of the Centre for Data Ethics and Innovation, provision should be made for the representation of people whose data is stored, whether this be via processes of regular consultation, personal data representatives, or other means. (Paragraph 82)

5.Access to data is essential to the present surge in AI technology, and there are many arguments to be made for opening up data sources, especially in the public sector, in a fair and ethical way. Although a ‘one-size-fits-all’ approach to the handling of public sector data is not appropriate, many SMEs in particular are struggling to gain access to large, high-quality datasets, making it extremely difficult for them to compete with the large, mostly US-owned technology companies, who can purchase data more easily and are also large enough to generate their own. In many cases, public datasets, such as those held by the NHS, are more likely to contain data on more diverse populations than their private sector equivalents, and more control can be exercised before they are released. (Paragraph 83)

6.We recommend that wherever possible and appropriate, and with regard to its potential commercial value, publicly-held data be made available to AI researchers and developers. In many cases, this will require Government departments and public organisations making a concerted effort to digitise their records in unified and compatible formats. Data trusts will play an important role in the release of this data, subject to appropriate anonymisation measures where necessary. (Paragraph 84)

7.We support the approach taken by Transport for London, who have released their data through a single point of access, where the data is available subject to appropriate terms and conditions and with controls on privacy. The Centre for Data Ethics and Innovation should produce guidance on similar approaches. The Government Office for AI and GovTech Catalyst should work together to ensure that the data for which there is demand is made available in a responsible manner. (Paragraph 85)

8.We acknowledge that open data cannot be the last word in making data more widely available and usable, and can often be too blunt an instrument for facilitating the sharing of more sensitive or valuable data. Legal and technical mechanisms for strengthening personal control over data, and preserving privacy, will become increasingly important as AI becomes more widespread through society. Mechanisms for enabling individual data portability, such as the Open Banking initiative, and data sharing concepts such as data trusts, will spur the creation of other innovative and context-appropriate tools, eventually forming a broad spectrum of options between total data openness and total data privacy. (Paragraph 86)

9.We recommend that the Centre for Data Ethics and Innovation investigate the Open Banking model, and other data portability initiatives, as a matter of urgency, with a view to establishing similar standardised frameworks for the secure sharing of personal data beyond finance. They should also work to create, and incentivise the creation of, alternative tools and frameworks for data sharing, control and privacy for use in a wide variety of situations and contexts. (Paragraph 87)

10.Increasingly, public sector data has value. It is important that public organisations are aware of the commercial potential of such data. We recommend that the Information Commissioner’s Office work closely with the Centre for Data Ethics and Innovation in the establishment of data trusts, and help to prepare advice and guidance for data controllers in the public sector to enable them to estimate the value of the data they hold, in order to make best use of it and negotiate fair and evidence-based agreements with private-sector partners. The values contained in this guidance could be based on precedents where public data has been made available and subsequently generated commercial value for public good. The Information Commissioner’s Office should have powers to review the terms of significant data supply agreements being contemplated by public bodies. (Paragraph 88)

Intelligible AI

11.Based on the evidence we have received, we believe that achieving full technical transparency is difficult, and possibly even impossible, for certain kinds of AI systems in use today, and would in any case not be appropriate or helpful in many cases. However, there will be particular safety-critical scenarios where technical transparency is imperative, and regulators in those domains must have the power to mandate the use of more transparent forms of AI, even at the potential expense of power and accuracy. (Paragraph 99)

12.We believe that the development of intelligible AI systems is a fundamental necessity if AI is to become an integral and trusted tool in our society. Whether this takes the form of technical transparency, explainability, or indeed both, will depend on the context and the stakes involved, but in most cases we believe explainability will be a more useful approach for the citizen and the consumer. This approach is also reflected in new EU and UK legislation. We believe it is not acceptable to deploy any artificial intelligence system which could have a substantial impact on an individual’s life, unless it can generate a full and satisfactory explanation for the decisions it will take. In cases such as deep neural networks, where it is not yet possible to generate thorough explanations for the decisions that are made, this may mean delaying their deployment for particular uses until alternative solutions are found. (Paragraph 105)

13.The Centre for Data Ethics and Innovation, in consultation with the Alan Turing Institute, the Institute of Electrical and Electronics Engineers, the British Standards Institution and other expert bodies, should produce guidance on the requirement for AI systems to be intelligible. The AI development sector should seek to adopt such guidance and to agree upon standards relevant to the sectors within which they work, under the auspices of the AI Council. (Paragraph 106)

Addressing prejudice

14.We are concerned that many of the datasets currently being used to train AI systems are poorly representative of the wider population, and AI systems which learn from this data may well make unfair decisions which reflect the wider prejudices of societies past and present. While many researchers, organisations and companies developing AI are aware of these issues, and are starting to take measures to address them, more needs to be done to ensure that data is truly representative of diverse populations, and does not further perpetuate societal inequalities. (Paragraph 119)

15.Researchers and developers need a more developed understanding of these issues. In particular, they need to ensure that, wherever possible, data is pre-processed so that it is balanced and representative, that their teams are diverse and representative of wider society, and that the production of data engages all parts of society. Alongside questions of data bias, researchers and developers need to consider biases embedded in the algorithms themselves—human developers set the parameters for machine learning algorithms, and the choices they make will intrinsically reflect the developers’ beliefs, assumptions and prejudices. The main ways to address these kinds of biases are to ensure that developers are drawn from diverse gender, ethnic and socio-economic backgrounds, and are aware of, and adhere to, ethical codes of conduct. (Paragraph 120)

16.We recommend that a specific challenge be established within the Industrial Strategy Challenge Fund to stimulate the creation of authoritative tools and systems for auditing and testing training datasets to ensure they are representative of diverse populations, and to ensure that when used to train AI systems they are unlikely to lead to prejudicial decisions. This challenge should be established immediately, and encourage applications by spring 2019. Industry must then be encouraged to deploy the tools which are developed and could, in time, be regulated to do so. (Paragraph 121)

Data monopolies

17.While we welcome the investments made by large overseas technology companies in the UK economy, and the benefits they bring, the increasing consolidation of power and influence by a select few risks damaging the continuation, and development, of the UK’s thriving home-grown AI start-up sector. The monopolisation of data demonstrates the need for strong ethical, data protection and competition frameworks in the UK, and for continued vigilance from the regulators. We urge the Government, and the Competition and Markets Authority, to review proactively the use and potential monopolisation of data by the big technology companies operating in the UK. (Paragraph 129)

Developing artificial intelligence

Investment in AI development

18.The UK AI development sector has flourished largely without attempts by the Government to determine its shape or direction. This has resulted in a flexible and innovative grassroots start-up culture, which is well positioned to take advantage of the unpredictable opportunities that could be afforded by AI. The investment environment for AI businesses must be able to cope with this uncertainty, and be willing to take the risks required to seize the chances AI offers. (Paragraph 135)

19.We welcome the changes announced in the Autumn Budget 2017 to the Enterprise Investment and Venture Capital Trust schemes which encourage innovative growth, and we believe they should help to boost investment in UK-based AI companies. The challenge for start-ups in the UK is the lack of investment available with which to scale up their business. (Paragraph 150)

20.To ensure that AI start-ups in the United Kingdom have the opportunity to scale up, without having to look for off-shore investment, we recommend that a proportion of the £2.5 billion investment fund at the British Business Bank, announced in the Autumn Budget 2017, be reserved as an AI growth fund for SMEs with a substantive AI component, and be specifically targeted at enabling such companies to scale up. Further, the Government should consult on the need to improve access to funding within the UK for SMEs with a substantive AI component looking to scale their business. (Paragraph 151)

21.To guarantee that companies developing AI can continue to thrive in the UK, we recommend that the Government review the existing incentives for businesses operating in the UK who are working on artificial intelligence products, and ensure that they are adequate, properly promoted to companies, and designed to assist SMEs wherever possible. (Paragraph 152)

Turning academic research into commercial potential

22.The UK has an excellent track record of academic research in the field of artificial intelligence, but there is a long-standing issue with converting such research into commercially viable products. (Paragraph 159)

23.To address this we welcome, and strongly endorse, the recommendation of the Hall-Pesenti Review, which stated “universities should use clear, accessible and where possible common policies and practices for licensing IP and forming spin-out companies”. We recommend that the Alan Turing Institute, as the National Centre for AI Research, should develop this concept into concrete policy advice for universities in the UK, looking to examples from other fields and from other nations, to help start to address this long-standing problem. (Paragraph 160)

Improving access to skilled AI developers

24.We welcome the expanded public funding for PhD places in AI and machine learning, as well as the announcement that an industry-funded master’s degree programme is to be developed. However, we believe that more needs to be done to ensure that the UK has the pipeline of skills it requires to maintain its position as one of the best countries in the world for AI research. (Paragraph 168)

25.We recommend that the funding for PhD places in AI and machine learning be further expanded, with the financial burden shared equally between the public and private sector through a PhD matching scheme. We believe that the Doctoral Training Partnership scheme and other schemes where costs are shared between the private sector, universities and research councils should be examined, and the number of industry-sponsored PhDs increased. (Paragraph 169)

26.We further recommend that short (3–6 months) post-graduate conversion courses be developed by the Alan Turing Institute, in conjunction with the AI Council, to reflect the needs of the AI development sector. Such courses should be suitable for individuals in other academic disciplines looking to transfer into AI development and design or to have a grounding in the application of AI in their discipline. These should be designed so as to enable anyone to retrain at any stage of their working lives. (Paragraph 170)

27.We recommend that the Government ensures that publicly-funded PhDs in AI and machine learning are made available to a diverse population, more representative of wider society. To achieve this, we call for the Alan Turing Institute and Government Office for AI to devise mechanisms to attract more female and ethnic minority students from academic disciplines which require similar skillsets, but have more representative student populations, to participate in the Government-backed PhD programme. (Paragraph 174)

28.We acknowledge the considerable scepticism of at least some technology companies who believe that the apprenticeship levy is of little use to them, despite the success that others in the sector have had with apprenticeships. The Government should produce clear guidance on how the apprenticeship levy can be best deployed for use in the technology sector, in particular in SMEs and start-ups. (Paragraph 175)

29.The Government’s announcement that it will increase the annual number of Tier 1 (exceptional talent) visas from 1,000 to 2,000 per year is welcome. While top-tier PhD researchers and designers are required, a thriving AI development sector is also dependent on access to those able to implement artificial intelligence research, whose occupations may fall short of the exceptional talent requirements. (Paragraph 181)

30.We are concerned that the number of workers provided for under the Tier 1 (exceptional talent) visa scheme will be insufficient, and the eligibility requirements too demanding, for the needs of UK companies and start-ups. We recommend that the number of visas available for AI researchers and developers be increased by, for example, adding machine learning and associated skills to the Tier 2 Shortage Occupations List. (Paragraph 182)

Maintaining innovation

31.We believe that the Government must commit to underwriting, and where necessary replacing, funding for European research and innovation programmes, after we have left the European Union. (Paragraph 188)

32.The state has an important role in supporting AI research through the research councils and other mechanisms, and should be mindful to ensure that the UK’s advantages in AI R&D are maintained. There is a risk that the current focus on deep learning is distracting attention away from other aspects of AI research, which could contribute to the next big advances in the field. The Government and universities have an important role to play in supporting diverse sub-fields of AI research, beyond the now well-funded area of deep learning, in order to ensure that the UK remains at the cutting edge of AI developments. (Paragraph 191)

Working with artificial intelligence

Productivity

33.We support the Government’s belief that artificial intelligence offers an opportunity to improve productivity. However, to meet this potential for the UK as a whole, the AI Council must take a role in enabling AI to benefit all companies (big and small) and ensuring they are able to take advantage of existing technology, and in turn of future technology. It will be important that the Council identifies accelerators and obstacles to the use of AI to improve productivity, and advises the Government on the appropriate course of action to take. (Paragraph 199)

34.We welcome the Government’s intentions to upgrade the nation’s digital infrastructure, as far as they go. However, we are concerned that these plans do not have enough impetus behind them to ensure that the digital foundations of the country are in place in time to take advantage of the potential artificial intelligence offers. We urge the Government to consider further substantial public investment to ensure that everywhere in the UK is included within the rollout of 5G and ultrafast broadband, as this should be seen as a necessity. (Paragraph 203)

Government adoption, and procurement, of artificial intelligence

35.The Government’s leadership in the development and deployment of artificial intelligence must be accompanied by action. We welcome the announcement of the GovTech Catalyst and hope that it can open the doors of Whitehall to the burgeoning AI development sector in the UK. We also endorse the recommendation of the Hall-Pesenti Review aimed at encouraging greater use of AI in the public sector. (Paragraph 215)

36.To ensure greater uptake of AI in the public sector, and to leverage the Government’s position as a customer in the UK, we recommend that public procurement regulations are reviewed and amended to ensure that UK-based companies offering AI solutions are invited to tender and given the greatest opportunity to participate. The Crown Commercial Service, in conjunction with the Government Digital Service, should review the Government Service Design Manual and the Technology Code of Practice to ensure that the procurement of AI-powered systems designed by UK companies is encouraged and incentivised, and done in an ethical manner. (Paragraph 216)

37.We also encourage the Government to be bold in its approach to the procurement of artificial intelligence systems, and to encourage the development of possible solutions to public policy challenges through limited speculative investment and support which helps businesses convert ideas into prototypes, in order to determine whether their solutions are viable. The value to the taxpayer of the AI systems which are successfully deployed will compensate for any money lost in supporting the development of other tools. (Paragraph 217)

38.Finally, with respect to public procurement, we recommend the establishment of an online bulletin board for the advertisement of challenges which the Government Office for AI and the GovTech Catalyst have identified from across Government and the wider public sector where there could be the potential for innovative tech- and AI-based solutions. (Paragraph 218)

Impact on the labour market

39.The labour market is changing, and further significant disruption to that market is expected as AI is adopted throughout the economy. As we move into this unknown territory, forecasts of AI’s growing impact—jobs lost, jobs enhanced and new jobs created—are inevitably speculative. There is an urgent need to analyse or assess, on an ongoing basis, the evolution of AI in the UK, and develop policy responses. (Paragraph 231)

National Retraining Scheme

40.The UK must be ready for the disruption that AI will bring to the way in which we work. We support the Government’s interest in developing adult retraining schemes, as we believe that AI will disrupt a wide range of jobs over the coming decades, and both blue- and white-collar jobs which exist today will be put at risk. It will therefore be important to encourage and support workers as they move into the new jobs and professions we believe will be created as a result of new technologies, including AI. The National Retraining Scheme could play an important role here, and must ensure that the recipients of retraining schemes are representative of the wider population. Industry should assist in the financing of the National Retraining Scheme by matching Government funding. This partnership would help to increase the number of people who can access the scheme and better identify the skills required. Such an approach must reflect the lessons learned from the execution of the Apprenticeship Levy. (Paragraph 236)

Living with artificial intelligence

Education and artificial intelligence

41.It is clear to us that there is a need to improve digital understanding and data literacy across society, as these are the foundations upon which knowledge about AI is built. This effort must be undertaken collaboratively by public sector organisations, civil society organisations (such as the Royal Society) and the private sector. (Paragraph 249)

42.The evidence suggests that recent reforms to the computing curriculum are a significant improvement on the ICT curriculum, although it is still too early to say what the final results of this will be. The Government must be careful not to expand computing education at the expense of arts and humanities subjects, which hone the creative, contextual and analytical skills which will likely become more, not less, important in a world shaped by AI. (Paragraph 250)

43.We are, however, concerned to learn that the wider social and ethical implications, present in the computing curriculum as originally proposed, are now absent from it. We recommend that the wider social and ethical aspects of computer science and artificial intelligence be restored throughout the curriculum, in the form originally proposed. (Paragraph 251)

44.While we welcome the measures announced in the Autumn Budget 2017 to increase the number of computer science teachers in secondary schools, a greater sense of urgency and commitment is needed from the Government if the UK is to meet the challenges presented by AI. (Paragraph 257)

45.The Government must ensure that the National Centre for Computing is rapidly created and adequately resourced, and that there is support for the retraining of teachers with associated skills and subjects such as mathematics. In particular, Ofsted should ensure that schools are making additional time available to teachers to enable them to train in new technology-focused aspects of the curriculum. We also urge the Government to make maximum use across the country of existing lifelong learning facilities for the training and regular retraining of teachers and other AI experts. (Paragraph 258)

46.Supplementary to the Hall-Pesenti Review, the Government should explore ways in which the education sector, at every level, can play a role in translating the benefits of AI into a more productive and equitable economy. (Paragraph 259)

Impact on social and political cohesion

47.There are many social and political impacts which AI may have, quite aside from people’s lives as workers and consumers. AI makes the processing and manipulating of all forms of digital data substantially easier, and given that digital data permeates so many aspects of modern life, this presents both opportunities and unprecedented challenges. As discussed earlier in our report, there is a rapidly growing need for public understanding of, and engagement with, AI to develop alongside the technology itself. The manipulation of data in particular will be a key area for public understanding and discussion in the coming months and years. (Paragraph 265)

48.We recommend that the Government and Ofcom commission research into the possible impact of AI on conventional and social media outlets, and investigate measures which might counteract the use of AI to mislead or distort public opinion as a matter of urgency. (Paragraph 266)

Inequality

49.The risk of greater societal and regional inequalities emerging as a consequence of the adoption of AI and advances in automation is very real, and while the Government’s proposed policies on regional development are to be welcomed, we believe more needs to be done in this area. We are not yet convinced that basic income schemes will prove to be the answer, but we watch Scotland’s experiments with interest. (Paragraph 275)

50.Everyone must have access to the opportunities provided by AI. The Government must outline its plans to tackle any potential societal or regional inequality caused by AI, and this must be explicitly addressed as part of the implementation of the Industrial Strategy. The Social Mobility Commission’s annual State of the Nation report should include the potential impact of AI and automation on inequality. (Paragraph 276)

Healthcare and artificial intelligence

51.Maintaining public trust over the safe and secure use of their data is paramount to the successful widespread deployment of AI and there is no better exemplar of this than personal health data. There must be no repeat of the controversy which arose between the Royal Free London NHS Foundation Trust and DeepMind. If there is, AI will not be adopted in the NHS, its benefits will not be realised, and innovation could be stifled. (Paragraph 300)

52.The data held by the NHS could be considered a unique source of value for the nation. It should not be shared lightly, but when it is, it should be done in a manner which allows for that value to be recouped. We are concerned that the current piecemeal approach taken by NHS Trusts, whereby local deals are struck between AI developers and hospitals, risks the data being inadvertently undervalued. It also risks NHS Trusts exposing themselves to inadequate data sharing arrangements. (Paragraph 301)

53.We recommend that a framework for the sharing of NHS data should be prepared and published by the end of 2018 by NHS England (specifically NHS Digital) and the National Data Guardian for Health and Care. This should be prepared with the support of the ICO and the clinicians and NHS Trusts which already have experience of such arrangements (such as the Royal Free London and Moorfields Eye Hospital NHS Foundation Trusts), as well as the Caldicott Guardians. This framework should set out clearly the considerations needed when sharing patient data in an appropriately anonymised form, the precautions needed when doing so, and an awareness of the value of that data and how it is used. It must also take account of the need to ensure SME access to NHS data, and ensure that patients are made aware of the use of their data and given the option to opt out. (Paragraph 302)

54.Many organisations in the United Kingdom are not taking advantage of existing technology, let alone ready for new technology such as artificial intelligence. The NHS is, perhaps, the most pressing example of this. The development, and eventual deployment, of AI systems in healthcare in the UK should be seen as a collaborative effort, with both the NHS and the AI developer able to benefit. To release the value of the data held, we urge the NHS to digitise its current practices and records, in consistent formats, by 2022 to ensure that the data it holds does not remain inaccessible and the possible benefits to society unrealised. (Paragraph 303)

Mitigating the risks of artificial intelligence

Legal liability

55.In our opinion, it is possible to foresee a scenario where AI systems may malfunction, underperform or otherwise make erroneous decisions which cause harm. In particular, this might happen when an algorithm learns and evolves of its own accord. It was not clear to us, nor to our witnesses, whether new mechanisms for legal liability and redress in such situations are required, or whether existing mechanisms are sufficient. (Paragraph 317)

56.Clarity is required. We recommend that the Law Commission consider the adequacy of existing legislation to address the legal liability issues of AI and, where appropriate, recommend to Government appropriate remedies to ensure that the law is clear in this area. At the very least, this work should establish clear principles for accountability and intelligibility. This work should be completed as soon as possible. (Paragraph 318)

Criminal misuse of artificial intelligence and data

57.The potential for well-meaning AI research to be used by others to cause harm is significant. AI researchers and developers must be alive to the potential ethical implications of their work. The Centre for Data Ethics and Innovation and the Alan Turing Institute are well placed to advise researchers on the potential implications of their work, and the steps they can take to ensure that such work is not misused. However, we believe additional measures are required. (Paragraph 328)

58.We recommend that universities and research councils providing grants and funding to AI researchers must insist that applications for such money demonstrate an awareness of the implications of the research and how it might be misused, and include details of the steps that will be taken to prevent such misuse, before any funding is provided. (Paragraph 329)

59.We recommend that the Cabinet Office’s final Cyber Security Science & Technology Strategy take into account the risks as well as the opportunities of using AI in cybersecurity applications, and applications more broadly. In particular, further research should be conducted into methods for protecting public and private datasets against any attempts at data sabotage, and the results of this research should be turned into relevant guidance. (Paragraph 333)

Autonomous weapons

60.Without agreed definitions we could easily find ourselves stumbling through a semantic haze into dangerous territory. The Government’s definition of an autonomous system used by the military as one where it “is capable of understanding higher-level intent and direction” is clearly out of step with the definitions used by most other governments. This position limits both the extent to which the UK can meaningfully participate in international debates on autonomous weapons and its ability to take an active role as a moral and ethical leader on the global stage in this area. Fundamentally, it also hamstrings attempts to arrive at an internationally agreed definition. (Paragraph 345)

61.We recommend that the UK’s definition of autonomous weapons should be realigned to be the same as, or similar to, that used by the rest of the world. To produce this definition the Government should convene a panel of military and AI experts to agree a revised form of words. This should be done within eight months of the publication of this report. (Paragraph 346)

Shaping artificial intelligence

Leading at home

62.Artificial intelligence’s potential is an opportunity the Government is embracing. The Government’s recent enthusiasm and responsiveness to artificial intelligence in the UK is to be welcomed. We have proposed a number of recommendations for strengthening recent policy announcements, based on the extensive evidence we have received as a Committee. We encourage the Government to continue to be proactive in developing policies to harness the potential of AI and mitigate the risks. We do, however, urge the Government to ensure that its approach is focused and that it provides strategic leadership—there must be a clear roadmap for success. Policies must be produced in concert with one another, and with existing policy. Industry and the public must be better informed about the announcements, and sufficient detail provided from the outset. (Paragraph 366)

63.The pace at which this technology will grow is unpredictable, and the policy initiatives have been many. To avoid policy being too reactive, and to prevent the new institutions from overlapping and conflicting with one another, we recommend that the Government Office for AI develop a national policy framework for AI, to be in lockstep with the Industrial Strategy, and to be overseen by the AI Council. Such a framework should include policies related to the recommendations of this report, and be accompanied, where appropriate, by a long-term commitment to such policies in order to realise the benefits. It must also be clear within Government who is responsible around the Cabinet table for the direction and ownership of this framework and the AI-related policies which fall within it. (Paragraph 367)

64.The roles and remits of the new institutions must be clear, if they are to be a success. The public and the technology sector in the UK must know who to turn to for authoritative advice when it comes to the development and use of artificial intelligence. To ensure public confidence, it must also be clear who to turn to if there are any complaints about how AI has been used, above and beyond the matters relating to data use (which falls within the Information Commissioner’s remit). (Paragraph 368)

65.We recommend that the Government Office for AI should act as the co-ordinator of the work between the Centre for Data Ethics and Innovation, the GovTech Catalyst team and the national research centre for Artificial Intelligence Research (the Alan Turing Institute), as well as the AI Council it is being established to support. It must also take heed of the work of the more established bodies which have done work in this area, such as the Information Commissioner’s Office and the Competition and Markets Authority. The work programmes of all the new AI-specific institutions should be subject to agreement with one another, on a quarterly basis, and should take into account the work taking place across Government in this area, as well as the recommendations from Parliament, regulators, and the work of the devolved assemblies and governments. The UK has a thriving AI ecosystem, and the Government Office for AI should seek to inform its work programme through wide public consultation as it develops Government policy with regard to artificial intelligence. The programme should be publicly available for scrutiny. (Paragraph 369)

66.We welcome the new focus for the Alan Turing Institute as the national research centre for artificial intelligence. We want it to be able to fulfil this role, and believe it has the potential to do so. As such, the new focus must not simply be a matter of rebranding. The successful institutes in Canada and Germany, such as the Vector Institute and the German Research Center for Artificial Intelligence, offer valuable lessons as to how a national research centre should be operated. (Paragraph 370)

67.The Government must ensure that the Alan Turing Institute’s funding and structure is sufficient for it to meet its new expanded remit as the UK’s national research centre for AI. In particular, the Institute’s current secondment-based staffing model should be assessed to ensure its suitability, and steps taken to staff the Institute appropriately to meet the demands now placed upon it. (Paragraph 371)

Regulation and regulators

68.Blanket AI-specific regulation, at this stage, would be inappropriate. We believe that existing sector-specific regulators are best placed to consider the impact on their sectors of any subsequent regulation which may be needed. We welcome that the Data Protection Bill and GDPR appear to address many of the concerns of our witnesses regarding the handling of personal data, which is key to the development of AI. The Government Office for AI, with the Centre for Data Ethics and Innovation, needs to identify the gaps, if any, where existing regulation may not be adequate. The Government Office for AI must also ensure that the existing regulators’ expertise is utilised in informing any potential regulation that may be required in the future and we welcome the introduction of the Regulator’s Pioneer Fund. (Paragraph 386)

69.The additional burden this could place on existing regulators may be substantial. We recommend that the National Audit Office’s advice is sought to ensure that existing regulators, in particular the Information Commissioner’s Office, are adequately and sustainably resourced. (Paragraph 387)

Assessing policy outcomes

70.It is essential that policies towards artificial intelligence are suitable for the rapidly changing environment which they are meant to support. For the UK to be able to realise the benefits of AI, the Government’s policies, underpinned by a co-ordinated approach, must be closely monitored and react to feedback from academia and industry where appropriate. Policies should be benchmarked and tracked against appropriate international comparators. The Government Office for AI has a clear role to play here, and we recommend that progress against the recommendations of this report, the Government’s AI-specific policies within the Industrial Strategy and other related policies, be reported on an annual basis to Parliament. (Paragraph 391)

A vision for Britain in an AI world

71.The transformative potential of artificial intelligence for society, at home and abroad, requires active engagement by one and all. The Government has an opportunity at this point in history to shape the development and deployment of artificial intelligence to the benefit of everyone. The UK’s strengths in law, research, financial services and civic institutions mean it is well placed to help shape the ethical development of artificial intelligence and to do so on the global stage. To be able to demonstrate such influence internationally, the Government must ensure that it is doing everything it can for the UK to maximise the potential of AI for everyone in the country. (Paragraph 402)

72.We recommend that the Government convene a global summit in London by the end of 2019, in close conjunction with all interested nations and governments, industry (large and small), academia, and civil society, on as equal a footing as possible. The purpose of the global summit should be to develop a common framework for the ethical development and deployment of artificial intelligence systems. Such a framework should be aligned with existing international governance structures. (Paragraph 403)

An AI Code

73.Many organisations are preparing their own ethical codes of conduct for the use of AI. This work is to be commended, but it is clear that there is a lack of wider awareness and co-ordination, where the Government could help. Consistent and widely-recognised ethical guidance, which companies and organisations deploying AI could sign up to, would be a welcome development. (Paragraph 419)

74.We recommend that a cross-sector ethical code of conduct, or ‘AI code’, suitable for implementation across public and private sector organisations which are developing or adopting AI, be drawn up and promoted by the Centre for Data Ethics and Innovation, with input from the AI Council and the Alan Turing Institute, with a degree of urgency. In some cases, sector-specific variations will need to be created, using similar language and branding. Such a code should include the need to have considered the establishment of ethical advisory boards in companies or organisations which are developing, or using, AI in their work. In time, the AI code could provide the basis for statutory regulation, if and when this is determined to be necessary. (Paragraph 420)

© Parliamentary copyright 2018