AI in the UK: ready, willing and able?

Chapter 8: Mitigating the risks of artificial intelligence

304.In the course of our inquiry we encountered several serious issues associated with the use of artificial intelligence that require careful thought, and deliberate policy, from the Government. These include determining legal liability in cases where a decision taken by an algorithm has an adverse impact on someone’s life; the potential criminal misuse of artificial intelligence and data; and the use of AI in autonomous weapons systems.

Legal liability

305.The emergence of any new technology presents a challenge for the existing legal and regulatory framework. This challenge may be made most apparent by the widespread development and use of artificial intelligence. Cooley (UK) LLP told us that “as artificial intelligence technology develops, it will challenge the underlying basis of legal obligations according to present concepts of private law (whether contractual or tortious)”.417

306.A serious issue which witnesses brought to our attention was who should be held accountable for decisions made or informed by artificial intelligence. This could be a decision about whether to grant a mortgage, a diagnosis of illness, or a decision taken by an automated vehicle on the road.

307.Arm, a multinational semiconductor and software design company, asked: “what happens when a genuine AI machine makes a decision which results in harm? In such cases unravelling the machine’s thought processes may not be straightforward”.418 The IEEE’s European Public Policy Initiative Working Group on ICT told us that one of the major legal issues which needed to be addressed was the establishment of “liability of industry for accidents involving autonomous machines” because “this poses a challenge to existing liability rules where a legal entity (person or company) is ultimately responsible when something goes wrong”.419

308.Our witnesses explained why addressing the question of legal liability was so important. The Royal College of Radiologists said “legal liability is often stated as a major societal hurdle to overcome before widespread adoption of AI becomes a reality”.420 Dr Mike Lynch said a legal liability framework and insurance were “vital to allow these systems to actually be used. If insurance and legal liability are not sorted out this will be a great hindrance to the technology being adopted”.421 We agree with our witnesses in this regard. Unless a clear understanding of the legal liability framework is reached, and steps taken to adjust such a framework if proven necessary, it is foreseeable that both businesses and the wider public will not want to use AI-powered tools.

309.Our witnesses considered whether new mechanisms for legal liability and redress were needed in the event that AI systems malfunction, underperform or otherwise make erroneous decisions that cause individuals harm. Kemp Little LLP told us that our current legal system looks to establish liability based on standards of behaviour that could reasonably be expected, and looks to establish the scope of liability based on the foreseeability of an outcome from an event.422 They told us “AI challenges both of these concepts in a fundamental way”.423 This is because of the difficulties which exist in understanding how a decision has been arrived at by an AI system. Kemp Little LLP also suggested that “the law needs to consider what it wants the answers to be to some of these questions on civil and criminal liabilities/responsibilities and how the existing legal framework might not generate the answers the law would like”.424 In contrast, Professor Reed thought the existing legal mechanisms worked: “The law will find a solution. If we have a liability claim, the law will find somebody liable or not liable”.425 Professor Reed did, however, also tell us that some of the questions asked to identify liability “may be answerable only by obtaining information from the designers who are from a different country. It will make litigation horribly expensive, slow, and very difficult”.426

310.Professor Karen Yeung, then Professor of Law and Director of the Centre for Technology, Ethics, Law and Society, Dickson Poon School of Law, King’s College London, said that she did “not think that our existing conceptions of the liability and responsibility have yet adapted” and “that if it comes to court the courts will have to find a solution, but somebody will have been harmed already”.427 Professor Yeung told us “it is in the interests of industry and the general public to clarify and provide assurance that individuals will not suffer harm and not be uncompensated”.428 Paul Clarke, Ocado, said:

“AI definitely raises all sorts of new questions to do with accountability. Is it the person or people who provided the data who are accountable, the person who built the AI, the person who validated it, the company which operates it? I am sure much time will be taken up in courts deciding on a case-by-case basis until legal precedence is established. It is not clear. In this area this is definitely a new world, and we are going to have to come up with some new answers regarding accountability”.429

311.Others from industry did not agree. Dr Mark Taylor, Dyson, told us that he does “not foresee a situation with our products where we would fall outside the existing legislation” as anything Dyson sells complies with the laws and regulations of the market in which they sell them.430 Dr Joseph Reger, Chief Technology Officer for Europe, the Middle East, India and Africa at Fujitsu, told us that “we need a legal system that keeps up … because these products are hitting the market already and therefore the questions of liability, responsibility and accountability need to have a new definition very soon”.431 It is clear to us, therefore, that the issue of liability needs to be addressed as soon as possible, in order to ensure that it is neither a barrier to widespread adoption, nor decided too late for the development of much of this technology.

312.euRobotics highlighted the work of the Committee on Legal Affairs (JURI) in the European Parliament in this area. JURI established a Working Group on legal questions related to the development of robotics and artificial intelligence in the European Union on 20 January 2015. The resulting report, Civil Law Rules on Robotics, was published on 27 January 2017.432 The report made recommendations to the European Commission and called for EU-wide rules for robotics and artificial intelligence, in order to fully exploit their economic potential and to guarantee a standard level of safety and security.

313.Amongst its many recommendations, JURI called for draft legislation to clarify liability issues (in particular for driverless cars), and for a mandatory insurance scheme and supplementary fund to compensate victims of accidents involving self-driving cars. The Commission was also asked to consider giving legal status to robots, in order to establish who is liable if they cause damage. On 16 February 2017, the European Parliament adopted JURI’s report.433

314.Our witnesses also raised the issue of legal personality—the term used to establish which entities have legal rights and obligations, and which can do such things as enter into contracts or be sued434—for artificial intelligence.435 A group of academic witnesses said “it cannot be ignored that the development of AI and of robotics may produce also the need to legislate about whether they should have legal personality”.436

315.Dr Sarah Morley and Dr David Lawrence, both of Newcastle University Law School, said “the decision to award legal status to AI will have many ramifications for legal responsibility and for issues such as legal liability”.437 On the other hand, Dr Morley and Dr Lawrence told us, “if AI are not awarded legal personality then the Government will need to decide who takes legal responsibility for these technologies, be it the developers (companies) or the owners”.438 They also said that there may be issues for criminal liability.439

316.Professor Yeung said that the issue of whether or not algorithms should have legal personality “must be driven by how you envisage the distribution of loss, liability and responsibility more generally”.440 Professor Yeung told us that the nature of the compensation system was also important, and that a negligence-based system, relying on a chain of causation—the series of events used to assess liability for damages—would be broken by “the lack of reasonable foresight” offered by an algorithm. Professor Reed was less concerned about this particular issue, and told us “you never apply law to technology; you always apply law to humans and the way they use technology, so there will always be someone who is using the algorithm on whom responsibility can be placed”.441

317.In our opinion, it is possible to foresee a scenario where AI systems may malfunction, underperform or otherwise make erroneous decisions which cause harm. In particular, this might happen when an algorithm learns and evolves of its own accord. It was not clear to us, nor to our witnesses, whether new mechanisms for legal liability and redress in such situations are required, or whether existing mechanisms are sufficient.

318.Clarity is required. We recommend that the Law Commission consider the adequacy of existing legislation to address the legal liability issues of AI and, where appropriate, recommend remedies to the Government to ensure that the law is clear in this area. At the very least, this work should establish clear principles for accountability and intelligibility. This work should be completed as soon as possible.

Criminal misuse of artificial intelligence and data

319.There was some concern amongst our witnesses that AI could super-charge conventional cyber-attacks, and may indeed already be doing so, as well as facilitating cyber-attacks on an entirely new scale.

320.There is some debate within the cybersecurity community as to whether hackers are already using AI for offensive purposes. At the recent Black Hat USA 2017 cybersecurity conference, a poll found that 62% of attendees believed that machine learning was already being deployed by hackers.442

321.The Future of Humanity Institute highlighted the potential use of AI for ‘spear phishing’, a kind of cyber-attack where an email is tailored to a specific individual, organisation or business, usually with the intent to steal data or install malware on a target computer or network.443 Using AI, this normally labour-intensive form of cyber-attack could be automated, thereby substantially increasing the number of individuals or organisations that can be targeted.

322.AI systems can also have particular vulnerabilities which do not exist in more conventional systems. The field of ‘adversarial AI’ is a growing area of research, whereby researchers, armed with an understanding of how AI systems work, attempt to fool other AI systems into making incorrect classifications or decisions. In recent years, image recognition systems in particular have been shown to be susceptible to these kinds of attacks. For example, it has been shown that pictures, or even three-dimensional models or signs, can be subtly altered in such a way that they remain indistinguishable from the originals, but fool AI systems into recognising them as completely different objects.444
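By way of illustration, the sketch below shows the general mechanism behind one well-known class of such attacks, the ‘fast gradient sign’ method, in which an image is nudged in the direction that most increases a classifier’s error. It is a minimal sketch only, assuming an off-the-shelf pre-trained PyTorch image classifier; the model, the perturbation budget and the input format are illustrative assumptions rather than details drawn from the evidence we received.

```python
# Minimal illustrative sketch only: a 'fast gradient sign' style perturbation
# against an assumed off-the-shelf, pre-trained PyTorch image classifier.
# The model, the perturbation budget (epsilon) and the input format are
# hypothetical choices for demonstration, not details taken from evidence.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(pretrained=True).eval()  # assumed pre-trained classifier

def fgsm_perturb(image, true_label, epsilon=0.01):
    """Return a copy of `image` nudged so that the classifier is more likely to err.

    image: tensor of shape (1, 3, H, W) with values in [0, 1]
    true_label: tensor of shape (1,) holding the correct class index
    epsilon: maximum per-pixel change, kept small so the image looks unchanged
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Move each pixel a tiny step in the direction that increases the model's error.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

In published demonstrations of this kind, a perturbation this small is typically imperceptible to a human viewer, yet can be enough to change the class the model predicts.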

323.In written evidence, the Reverend Dr Lyndon Drake gave the following examples:

“ … an ill-intentioned person might display a printed picture to a self-driving car with the result that the car crashes. Or someone might craft internet traffic that gives an automated weapons system the impression of a threat, resulting in an innocent person’s death. Of course, both of these are possible with non-machine learning systems too (or indeed with human decision-makers), but with non-machine learning approaches the reasoning involved can be interrogated, recovered, and debugged. This is not possible with many machine learning systems”.445

324.Adversarial AI also has implications for AI-enabled approaches to cybersecurity. While we heard from a number of witnesses who argued that AI was already helping to prevent cyber-attacks, some researchers have argued that AI-powered cybersecurity systems might be tricked into allowing malware through firewalls.446 NCC Group, a cybersecurity company, told us that the black box nature of most machine learning-based products in use today, which prevents humans understanding much about how data is being processed, means adversaries have:

“ … a myriad of vectors available to attempt the manipulation of data that might ultimately affect operations. In addition, a growing number of online resources are available to support adversarial machine learning tasks … We believe that it is inevitable that attackers will start using AI and machine learning for offensive operations. Tools are becoming more accessible, datasets are becoming bigger and skills are becoming more widespread, and once criminals decide that it is economically rational to use AI and machine learning in their attacks, they will”.447

325.However, it is not yet clear how serious this problem is likely to be in real-world scenarios. Most examples to date have not been considered ‘robust’: while they may fool an AI system from a particular angle, the effect is usually lost if the image is rotated or zoomed slightly. Recent experiments, however, have shown the possibility of creating more robust attacks. Many AI developers have started to take account of adversarial attacks, and in some cases are developing possible countermeasures. When we asked Professor Chris Hankin, Director of the Institute for Security Science and Technology at Imperial College, about the implications of this, he informed us that “at the moment certainly, AI is not the only answer we should be thinking about for defending our systems”.448

326.During our visit to Cambridge, researchers from the Leverhulme Centre for the Future of Intelligence said that many developments in AI research have a wide range of applications, which can be put to good use but can equally be abused or misused. They claimed that AI researchers can often be naïve about the possible applications of their research. They suggested that a very small percentage (around 1%) of AI research, concerning applications with a high risk of misuse, should not be published, on the grounds that the risks outweighed the benefits. As much AI research, even in some cases from large corporations, is published on an open access or open source basis, this would contravene the general preference for openness among many AI researchers.

327.When we put this to Dr Mark Briers, Strategic Programme Director for Defence and Security, Alan Turing Institute, he said that “there is an ethical responsibility on all AI researchers to ensure that their research output does not lend itself to obvious misuse and to provide mitigation, where appropriate”.449 However, drawing on the example of 3D printing, where the same technology that can illicitly produce firearms is also producing major medical advances, he believed that “principles and guidelines, as opposed to definitive rules” were more appropriate. Professor Hankin also agreed with this approach, noting precedents in cybersecurity, where many vendors provide ‘bug bounties’ to incentivise the disclosure of security vulnerabilities in computer systems by researchers and other interested parties, so that they can be patched before knowledge of their existence is released into the public domain.450

328.The potential for well-meaning AI research to be used by others to cause harm is significant. AI researchers and developers must be alive to the potential ethical implications of their work. The Centre for Data Ethics and Innovation and the Alan Turing Institute are well placed to advise researchers on the potential implications of their work, and the steps they can take to ensure that such work is not misused. However, we believe additional measures are required.

329.We recommend that universities and research councils providing grants and funding to AI researchers must insist that applications for such money demonstrate an awareness of the implications of the research and how it might be misused, and include details of the steps that will be taken to prevent such misuse, before any funding is provided.

330.Witnesses also told us that the potential misuse of AI should be considered in terms of the data fed into these systems. The subject of adversarial attacks illustrates more widely how misleading data can also harm the integrity of AI systems. In Chapter 3 we considered the issue of how biased datasets can lead an AI system to the wrong conclusions, but systems can also be corrupted on purpose. As 10x Future Technology put it, “as data increases in value as a resource for training artificial intelligences, there will be new criminal activities that involve data sabotage: either by destroying data, altering data, or injecting large quantities of misleading data”.451 NCC Group said: “If attackers can taint data used at training or operation phases, unless we are able to identify the source of any of those taints (which could be akin to finding a needle in a haystack), it might be extraordinarily difficult to prosecute criminals using traditional means”.452
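The following sketch illustrates how even crude data sabotage of this kind can degrade an AI system. It is a minimal, hypothetical example: it uses a synthetic dataset and a simple scikit-learn classifier, and relabels a fraction of one class in the training data to stand in for the injection of misleading data; the dataset, model and poisoning rates are assumptions chosen purely for demonstration.

```python
# Minimal illustrative sketch only: the effect of deliberately mislabelled
# ("poisoned") training records on a simple classifier. The synthetic dataset,
# the model and the poisoning rates are assumptions chosen for demonstration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_with_poisoning(poison_fraction):
    """Relabel a fraction of one class in the training data; report clean test accuracy."""
    rng = np.random.default_rng(0)
    y_poisoned = y_train.copy()
    class_zero = np.where(y_train == 0)[0]
    flipped = rng.choice(class_zero, size=int(poison_fraction * len(class_zero)), replace=False)
    y_poisoned[flipped] = 1  # the attacker's preferred label
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)

for fraction in (0.0, 0.2, 0.4):
    print(f"{fraction:.0%} of one class mislabelled -> clean test accuracy {accuracy_with_poisoning(fraction):.2f}")
```

Because the sabotage is one-sided, the classifier is pushed towards the attacker’s preferred label and its performance on clean data deteriorates as the poisoned fraction grows, even though no code has been altered; the attack surface is the data itself.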

331.There are a number of possible solutions to these issues. NCC Group highlighted recent research on countering adversarial attacks by devising means to detect and reject ‘dangerous data’ before it can reach the classification mechanisms of an AI system.453 However, they believed it was more important to ensure that the data used to train and operate AI systems were not put at risk of interference in the first place, and “to counter such risks, clear processes and mechanisms need to be in place by which AI applications carefully vet and sanitise their respective data supply chains, particularly where data originates from untrusted sources, such as the Internet and end-users”.454 They suggested that mandatory third-party validation of AI systems should be considered, in order to periodically check their effectiveness, especially in the case of cybersecurity systems which are safeguarding other systems.
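The kind of vetting step NCC Group describe could take many forms; one simple possibility is sketched below, in which an anomaly detector fitted on data from a trusted source is used to quarantine incoming records that look unlike anything previously seen, before they reach the training pipeline. The detector, the contamination parameter and the placeholder data are assumptions for illustration; in practice such a check would sit alongside provenance, schema and label validation rather than replace them.

```python
# Minimal illustrative sketch only: quarantining anomalous records before they
# enter a training pipeline. The detector, the contamination parameter and the
# placeholder data are assumptions; real-world vetting would also cover
# provenance, schema and label checks on the data supply chain.
import numpy as np
from sklearn.ensemble import IsolationForest

def vet_batch(trusted_data, incoming_batch, contamination=0.05):
    """Split incoming_batch into (accepted, quarantined) rows.

    A detector fitted on data from a trusted source flags records in the new
    batch that look unlike anything seen before, so they can be held back for
    human review rather than silently ingested.
    """
    detector = IsolationForest(contamination=contamination, random_state=0)
    detector.fit(trusted_data)
    flags = detector.predict(incoming_batch)  # +1 = looks normal, -1 = anomalous
    return incoming_batch[flags == 1], incoming_batch[flags == -1]

# Hypothetical usage with random placeholder data standing in for a real feed.
rng = np.random.default_rng(0)
trusted = rng.normal(size=(500, 8))
incoming = np.vstack([rng.normal(size=(95, 8)), rng.normal(loc=8.0, size=(5, 8))])
accepted, quarantined = vet_batch(trusted, incoming)
print(f"accepted {len(accepted)} records, quarantined {len(quarantined)}")
```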

332.We note these concerns, and are surprised that the Cabinet Office’s recently published Interim Cyber Security Science & Technology Strategy, while making reference to the opportunities for deploying AI in cybersecurity contexts, does not make any mention of the associated risks.455 This is particularly important given the current push for more Government data to be opened up for public use, and we are convinced that measures must be put in place to ensure that the integrity and veracity of this data is not compromised, and that advice is provided to the private sector to ensure their datasets are similarly protected against malicious use.

333.We recommend that the Cabinet Office’s final Cyber Security Science & Technology Strategy take into account the risks as well as the opportunities of using AI in cybersecurity applications, and applications more broadly. In particular, further research should be conducted into methods for protecting public and private datasets against any attempts at data sabotage, and the results of this research should be turned into relevant guidance.

Autonomous weapons

334.Perhaps the most emotive and high-stakes area of AI development today is its use for military purposes. While we have not explored this area with the thoroughness and depth that only a full inquiry into the subject could provide, there were particular aspects which needed acknowledging, even in brief. The first distinction that was raised by witnesses was between the relatively uncontroversial use of AI for non-violent military applications, such as logistics and strategic planning, and its use in autonomous weapons, or so-called ‘killer robots’. The former uses, while representing an area of substantial innovation and growth at the moment, were deemed uncontentious by all of our witnesses and the issues they present appear to be broadly in alignment with the ethical aspects of civilian AI deployment.456 As such, we have chosen to focus exclusively on the issue of autonomous weaponry.

335.We quickly discovered that defining the concept of autonomous weaponry with any precision is fraught with difficulty. The term cannot simply be applied to any weapon which makes use of AI; indeed, as one respondent pointed out, “no modern anti-missile system would be possible without the use of AI systems”.457 Most witnesses used the term to describe weapons which autonomously or semi-autonomously target or deploy violent force, but within this there are many shades of grey, and our witnesses disagreed with one another when it came to describing autonomous weapons in use or development. For example, Dr Alvin Wilby, Vice-President of Research, Technical and Innovation, Thales, suggested that the Israeli Harpy drone, which is “capable of loitering over an area and deciding which target to go for”, and has already been deployed in armed conflict, would count as an autonomous weapon.458 But Professor Noel Sharkey, Professor of Robotics and Artificial Intelligence, University of Sheffield, disputed this definition, arguing that the Harpy drone was a relatively simple computational system, and demonstrated the extent to which “the term AI is running out of control” within the arms industry.459 Professor Sharkey also highlighted how arms manufacturers had a tendency to play up the sophistication and autonomy of their products in marketing, and downplay them when scrutinised by international bodies such as the United Nations (UN).

336.It was generally agreed that the level of human control or oversight over these weapons was at the heart of the issue. While it is now common to simply refer to there being a ‘human in the loop’ with many semi-autonomous weapons in use or active development, it emerged that this could mean many things. Professor Sharkey outlined a number of different levels of autonomy.460 This ranged from ‘fire-and-forget’ missiles, such as the Brimstone missile used by UK armed forces, which have a pre-designated target but can change course to seek this out; through to more autonomous systems with a brief ‘veto period’, such as the US Patriot missile system, in which a human can override the automated decision; and finally fully autonomous weapons, which seek out their own targets without human designation.

Box 10: UK Government definitions of automated and autonomous systems

The Ministry of Defence (MoD) most recently defined autonomous weapons in official guidance on unmanned aircraft systems in September 2017, and has made a relatively unusual distinction between automated and autonomous systems.

Automated system

In the unmanned aircraft context, an automated or automatic system is one that, in response to inputs from one or more sensors, is programmed to logically follow a predefined set of rules in order to provide an outcome. Knowing the set of rules under which it is operating means that its output is predictable.

Autonomous system

An autonomous system is capable of understanding higher-level intent and direction. From this understanding and its perception of its environment, such a system is able to take appropriate action to bring about a desired state. It is capable of deciding a course of action, from a number of alternatives, without depending on human oversight and control, although these may still be present. Although the overall activity of an autonomous unmanned aircraft will be predictable, individual actions may not be.

Source: Ministry of Defence, Unmanned aircraft systems (12 September 2017): https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/673940/doctrine_uk_uas_jdp_0_30_2.pdf [accessed 7 February 2018]

337.The distinctions, whilst technical, take on a greater significance given the current moves to place restrictions on autonomous weapons under international law. At a meeting of experts, convened by the UN, in April 2016, 94 countries recommended beginning formal discussions about lethal autonomous weapons systems. The talks are to consider whether these systems should be restricted under the Convention on Certain Conventional Weapons, a disarmament treaty that has regulated or banned several other types of weapons, including incendiary weapons and blinding lasers. In November 2017, 86 countries participated in a meeting of the UN’s Convention on Certain Conventional Weapons Group of Governmental Experts. Twenty-two countries now support a prohibition on fully autonomous weapons, including, most recently, Brazil, Uganda and Iraq.461

338.In September 2017, the MoD issued updated guidance stating that “the UK does not possess fully autonomous weapon systems and has no intention of developing them. Such systems are not yet in existence and are not likely to be for many years, if at all”.462 It is important to note that the UK distinguishes between ‘autonomous’ and ‘automated’ military systems (see Box 10).

339.The Government has also opposed the proposed international ban on the development and use of autonomous weapons. The Government argues that existing international human rights law is adequate, and that the UN Convention on Certain Conventional Weapons currently allows for adequate scrutiny of automated and autonomous weapons under its mechanisms for legal weapons review.463

340.Professor Sharkey argued that requiring an autonomous weapon system to be “aware and show intention”, as stated in MoD guidance, was to set the bar so high that it was effectively meaningless.464 He also told us that it was “out of step” with the rest of the world, a point seemingly acknowledged by the MoD in their guidance, which states that “other countries and industry often have very different definitions or use the terms [autonomous and automated] interchangeably”.465

341.In practice, this lack of semantic clarity could lead the UK towards an ill-considered drift into increasingly autonomous weaponry. Professor Sharkey noted that BAE Systems has described its Taranis unmanned vehicle as ‘autonomous’, and that this capacity has been “widely tested in Australia”.466 On the other hand, Major Kitty McKendrick, speaking in her capacity as a visiting fellow at Chatham House, argued that she would not consider systems that have been told to “look for certain features [and identify targets] on that basis” as “genuinely autonomous”, because they are acting “in accordance with a predictable program”.467

Box 11: Definitions of lethal autonomous weapons systems used by other countries

The following are definitions of lethal autonomous weapons systems (LAWS) used by other countries.

Austria

Autonomous weapons systems (AWS) are weapons that, in contrast to traditional inert arms, are capable of functioning with a lesser degree of human manipulation and control, or none at all.

France

LAWS should be understood as implying a total absence of human supervision, meaning there is absolutely no link (communication or control) with the military chain of command. The delivery platform of a LAWS would be capable of moving, adapting to its land, marine or aerial environments and targeting and firing a lethal effector (bullet, missile, bomb, etc.) without any kind of human intervention or validation.

The Holy See

An autonomous weapon system is a weapon system capable of identifying, selecting and triggering action on a target without human supervision.

Italy

LAWS are systems that make autonomous decisions based on their own learning and rules, and that can adapt to changing environments independently of any pre-programming; they could select targets and decide when to use force, and would be entirely beyond human control.

The Netherlands

A weapon that, without human intervention, selects and attacks targets matching certain predefined characteristics, following a human decision to deploy the weapon on the understanding that an attack, once launched, cannot be stopped by human intervention.

Norway

Weapons that would search for, identify and attack targets, including human beings, using lethal force without any human operator intervening.

Switzerland

AWS are weapons systems that are capable of carrying out tasks governed by international humanitarian law, in partial or full replacement of a human in the use of force, notably in the targeting cycle.

USA

A weapon system that, once activated, can select and engage targets without further intervention by a human operator.

Source: Written evidence from Professor Noel Sharkey (AIC0248)

342.While the definitions in Box 11, which mostly represent NATO-member countries, vary in their wording, none would appear to set the bar as high as the UK. All of these definitions focus on the level of human involvement in supervision and target setting, and do not require “understanding higher-level intent and direction”, which could be taken to mean at least some level of sentience.

343.When we asked Matt Hancock MP about the UK’s definition, he said:

“There is not an internationally agreed definition of lethal autonomous weapons systems. We think that the existing provisions of international humanitarian law are sufficient to regulate the use of weapons systems that might be developed in the future. Of course, having a strong system and developing it internationally within the UN Convention on Certain Conventional Weapons is the right way to discuss the issue. Progress was made in Geneva by the group of government experts just last month. It is an important area that we have to get right”.468

344.We were encouraged by the Minister’s willingness to enter into this debate, and consider the need for a change in Government policy in this area. Regardless of the merits or otherwise of an international ban, as Mike Stone, former Chief Digital and Information Officer, Ministry of Defence, emphasised, there is a need for a “very clear lexicon” in this area which does not necessarily apply in most civilian domains.469

345.Without agreed definitions we could easily find ourselves stumbling through a semantic haze into dangerous territory. The Government’s definition of an autonomous system used by the military, as one which “is capable of understanding higher-level intent and direction”, is clearly out of step with the definitions used by most other governments. This position limits both the extent to which the UK can meaningfully participate in international debates on autonomous weapons and its ability to take an active role as a moral and ethical leader on the global stage in this area. Fundamentally, it also hamstrings attempts to arrive at an internationally agreed definition.

346.We recommend that the UK’s definition of autonomous weapons should be realigned to be the same as, or similar to, that used by the rest of the world. To produce this definition the Government should convene a panel of military and AI experts to agree a revised form of words. This should be done within eight months of the publication of this report.


417 Written evidence from Cooley (UK) LLP (AIC0217)

418 Written evidence from Arm (AIC0083)

419 Written evidence from the IEEE European Public Policy Initiative Working Group on ICT (AIC0106)

420 Written evidence from Royal College of Radiologists (AIC0146)

421 Written evidence from Dr Mike Lynch (AIC0005)

422 Written evidence from Kemp Little LLP (AIC0133)

423 Ibid.

424 Ibid.

425 Q 32 (Professor Chris Reed)

426 Ibid.

427 Q 32 (Professor Karen Yeung)

428 Ibid.

429 Q 111 (Paul Clarke)

430 Q 111 (Dr Mark Taylor)

431 Q 111 (Dr Joseph Reger)

432 European Parliament, Report with recommendations to the Commission on Civil Law Rules on Robotics (27 January 2017): http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//TEXT+REPORT+A8-2017-0005+0+DOC+XML+V0//EN [accessed 12 January 2018]

434 Further, as defined by the Oxford Legal Dictionary (7th edition, 2014), legal personality is “principally an acknowledgement that an entity is capable of exercising certain rights and being subject to certain duties on its own account under a particular system of law. In municipal systems, the individual human being is the archetypal “person” of the law, but certain entities, such as limited companies or public corporations, are granted a personality distinct from the individuals who create them. Further, they can enter into legal transactions in their own name and on their own account.”

435 Written evidence from Weightmans LLP (AIC0080)

436 Written evidence from Dr Aysegul Bugra, Dr Matthew Channon, Dr Ozlem Gurses, Dr Antonios Kouroutakis and Dr Valentina Rita Scotti (AIC0051)

437 Written evidence from Dr Sarah Morley and Dr David Lawrence (AIC0036)

438 Ibid.

439 Ibid.

440 Q 31 (Professor Karen Yeung)

441 Q 31 (Professor Chris Reed)

442 Cylance, ‘Black Hat attendees see AI as double-edged sword’ (1 August 2017): https://www.cylance.com/en_us/blog/black-hat-attendees-see-ai-as-double-edged-sword.html [accessed 23 January 2018]

443 Written evidence from Future of Humanity Institute (AIC0103)

444 Written evidence from Dr Julian Estevez (AIC0021)

445 Written evidence from the Reverend Dr Lyndon Drake (AIC0108)

446 Written evidence from NCC Group plc (AIC0240) and Q 147 (Professor Chris Hankin)

447 Written evidence from NCC Group plc (AIC0240)

448 Q 147 (Professor Chris Hankin)

449 Q 146 (Dr Mark Briers)

450 Q 146 (Professor Chris Hankin). ‘Bug bounties’ are monetary rewards paid by software companies to individuals who find and disclose vulnerabilities to them.

451 Written evidence from 10x Future Technology (AIC0024)

452 Written evidence from NCC Group plc (AIC0240)

453 Ibid.

454 Ibid.

455 Cabinet Office, Interim cyber security science & technology strategy: Future-proofing cyber security (December 2017), pp 8–9: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/663181/Embargoed_National_Cyber_Science_and_Technology_Strategy_FINALpdf.pdf [accessed 30 January 2018]

456 Q 154 (Mike Stone, Professor Noel Sharkey, Major Kitty McKendrick, Dr Alvin Wilby)

457 Written evidence from the Reverend Dr Lyndon Drake (AIC0108)

458 Q 154 (Dr Alvin Wilby)

459 Q 157 (Professor Noel Sharkey)

460 Q 155 (Professor Noel Sharkey)

461 ‘Support grows for new international law on killer robots’, Campaign to stop killer robots (17 November 2017): https://www.stopkillerrobots.org/2017/11/gge/ [accessed 1 February 2018]

462 Ministry of Defence, Unmanned Aircraft Systems (September 2017), p 14: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/673940/doctrine_uk_uas_jdp_0_30_2.pdf [accessed 18 January 2018]

464 Q 155 (Professor Noel Sharkey)

466 Q 156 (Professor Noel Sharkey)

467 Q 157 (Major Kitty McKendrick)

468 Q 199 (Matt Hancock MP)

469 Q 155 (Mike Stone)




© Parliamentary copyright 2018