AI in the UK: ready, willing and able?

Chapter 2: Engaging with artificial intelligence

39. The representation of artificial intelligence in popular culture is light-years away from the often more complex and mundane reality. Based on such representations, the non-specialist would be forgiven for picturing AI as a humanoid robot (with or without murderous intentions), or at the very least a highly intelligent, disembodied voice able to assist seamlessly with a range of tasks. As discussed in the previous chapter, this is not a true reflection of its present capability, and grappling with the pervasive yet often opaque nature of artificial intelligence is becoming increasingly necessary for an informed society. This chapter focuses on the public’s understanding of, and engagement with, AI and its implications, and how these can be improved.

General understanding, engagement and public narratives

40. Public perceptions of a subject as varied and amorphous as artificial intelligence will always be difficult to pinpoint with any precision, and are likely to change rapidly with every new innovation, scandal or accident which emerges into the public consciousness. AI is now such a wide-ranging subject that perceptions are increasingly dependent on who is using the technology, and for what purposes.38 The Royal Society told us that their recent assessment of public attitudes towards machine learning in particular found that:

“ … participants took a broadly pragmatic approach, assessing the technology on the basis of: the perceived intention of those using the technology; who the beneficiaries would be; how necessary it was to use machine learning, rather than other approaches; whether there were activities that felt clearly inappropriate; and whether a human is involved in decision-making. Accuracy and the consequences of errors were also key considerations”.39

41. Nevertheless, some of our witnesses suggested to us that while the British public is broadly aware of artificial intelligence, they often have inaccurate impressions of how it works, where it is to be found and its implications for them.40 A survey specifically on machine learning, published on behalf of the Royal Society in April 2017, found that awareness of machine learning applications was relatively high, with 76% of respondents having heard of computers that can recognise speech and answer questions, and 89% being aware of at least one of the eight examples of machine learning used in the survey. However, it also found very limited awareness of how these applications worked, with just 9% of those surveyed having even heard the term ‘machine learning’, and only 3% claiming that they knew a great deal or a fair amount about it.41

42. Awareness and understanding of AI also vary across different segments of society. The Raymond Williams Foundation told us that “the more distant a person is from the subjects of science, technology, engineering and mathematics (especially statistics) … the less likely s/he is to appreciate the changes that are underway”.42 Perhaps more unexpectedly, Dr Ansgar Koene highlighted the fact that even young people were often surprised by the growing extent of AI in automated decision-making processes today.43

43. Many AI researchers and witnesses connected with AI development told us that the public have an unduly negative view of AI and its implications, which in their view had largely been created by Hollywood depictions and sensationalist, inaccurate media reporting.44 Witnesses said that, as well as unduly frightening people, these depictions were concentrating attention on threats which are still remote, such as the possibility of ‘superintelligent’ artificial general intelligence, while distracting attention away from more immediate risks and problems.45

44. Such witnesses wanted a more positive take on AI and its benefits to be conveyed to the public, and feared that developments in AI might be threatened with the kind of public hostility directed towards genetically modified (GM) crops in the 1990s and 2000s.46 In that case, companies and researchers struggled to articulate how the technology would benefit individual consumers, particularly in developed economies, and the perception took hold that large corporations would be the primary beneficiaries. Given that many of the efficiency and productivity benefits of AI adoption are likely to occur ‘behind the scenes’, and will not necessarily take the form of consumer-orientated products, there is a risk that GM-style opposition to AI could also grow. It is therefore up to businesses to ensure that benefits are passed on to consumers, in the form of innovative new AI-powered products, and cheaper and better services which are clearly linked to AI.

45. Journalists covering AI informed us that it was often difficult to cover AI and automation issues in a responsible and balanced manner, given the current level of public interest in the subject.47 Sarah O’Connor, employment correspondent for the Financial Times, told us that “if you ever write an article that has robots or artificial intelligence in the headline, you are guaranteed that it will have twice as many people click on it”, and that at least some journalists were sensationalising the subject in order to drive web traffic and advertising revenues.48

46. However, there were those, both within and without AI development, who felt that in many cases AI developers and companies were at least partly responsible for public misunderstandings and confusion. Professor Kathleen Richardson and Nika Mahnič said that research scientists often inflated the potential of AI to attract prestigious research grants, resulting in “huge EU funded projects [which] are now promoting unfounded mythologies about the capabilities of AI”.49 Professor Sir David Spiegelhalter, President of the Royal Statistical Society, while noting the propensity for “utter puff” on AI from the media, maintained that ultimate responsibility for clarity lay with AI researchers and practitioners, and asked why they were not “working with the media and ensuring that the right sorts of stories appear”.50 Other witnesses highlighted historical precedents, warning that excessive hyping had damaged the credibility of AI and AI researchers during earlier phases of its development, and that there was a risk these mistakes were being repeated in the present.51

47. There was also a wide variety of views as to what the intended purpose of public engagement in relation to AI should be. As mentioned, many AI researchers were concerned that the public were being presented with overly negative or outlandish depictions of AI, and that this could trigger a public backlash which could make their work more difficult. They told us that public engagement should be about building trust in AI to prevent this from happening.52

48. Other witnesses warned against simplistically attempting to build trust in AI, as at least some applications of AI would not be worthy of trust. For example, in cases where the technology may be used to mislead or deceive users, citizens and consumers would need the skills to decide for themselves whether to trust it.53 Professor David Edgerton, Hans Rausing Professor of the History of Science and Technology, King’s College London, warned against Parliament or the Government attempting to “ensure that people embrace changes that are dictated from above”.54 In his view their responsibility should be to “give people choices over which they can exercise their collective judgment. We should not assume that the stories that we tell about AI will reflect what will come to pass”.55

49. One set of choices which members of the public will need to face is how personal data is used and, in some cases, abused. While we cannot know with certainty what shape AI will take in the future, it is highly likely that data will continue to be important. A number of witnesses believed that AI provided added impetus to the need to better educate the public on the use of their data and the implications for their privacy.56 Professor Peter McOwan, Vice Principal (Public Engagement and Student Enterprise), Queen Mary University of London, told us that AI systems have become better at automatically combining separate datasets, and can piece together much more information about us than we might realise we have provided. He cited as an example ‘pleaserobme.com’, a short-lived demonstration website, which showed how publicly accessible social media data could be combined to automatically highlight home addresses of people who were on holiday.57
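
The kind of automated linkage Professor McOwan described can be sketched in a few lines of code. The following is a minimal, hypothetical illustration in the spirit of ‘pleaserobme.com’: it assumes two invented datasets (a public profile directory and a stream of location-tagged posts), and all names, fields and keywords are made up for the example rather than drawn from any real service.

```python
# Minimal sketch of automated dataset linkage, in the spirit of the
# 'pleaserobme.com' demonstration described above. All data, field
# names and keywords here are invented for illustration.

# Dataset 1: a public profile directory (username -> home address).
profiles = {
    "alice": {"home_address": "12 Elm Street, Anytown"},
    "bob":   {"home_address": "3 Oak Avenue, Othertown"},
}

# Dataset 2: public, location-tagged posts from a social network.
posts = [
    {"user": "alice", "text": "Two weeks in the sun!", "location": "Malaga, Spain"},
    {"user": "bob",   "text": "Back to the office.",   "location": "Othertown"},
]

AWAY_WORDS = {"holiday", "vacation", "sun", "airport", "beach"}

def likely_away(post: dict) -> bool:
    """Crude guess that a post signals its author is away from home."""
    return any(word in post["text"].lower() for word in AWAY_WORDS)

# The join is where the privacy harm appears: neither dataset alone
# reveals an unoccupied home, but the combination does.
for post in posts:
    profile = profiles.get(post["user"])
    if profile and likely_away(post):
        print(f"{post['user']} appears to be away; home: {profile['home_address']}")
```

The point is not the code, which is trivial, but the join: each dataset is fairly innocuous on its own, and the sensitive inference (an address that is probably unoccupied) emerges only when the two are combined automatically and at scale.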

50. The media provides extensive and important coverage of artificial intelligence, which can occasionally be sensationalist. It is not for the Government or other public organisations to intervene directly in how AI is reported on, nor to attempt to promote an entirely positive view among the general public of its possible implications or impact. Instead, the Government must understand the need to build public trust and confidence in how to use artificial intelligence, as well as to explain the risks.

Everyday engagement with AI

51. Beyond a general awareness of AI in the abstract, the average citizen is, and will increasingly be, exposed to AI-enabled products and services in their day-to-day existence, often with little to no knowledge that this is the case. Although an improved general awareness and knowledge of AI and its implications may be desirable, some witnesses argued that there were limits to what the public could reasonably be expected to learn, or needed to know, with regards to an often highly technical subject. As such, they believed there should be a focus on particular aspects, scenarios and implications of AI and associated technology, rather than AI in the abstract. The Information Commissioner’s Office (ICO) stated that there was a “need to be realistic about the public’s ability to understand in detail how the technology works”, and that it would be better to focus on “the consequences of AI, rather than on the way it works”, in a way that empowered individuals to exercise their rights.58

52. Witnesses also made the point that, while consumers often had relatively few AI-specific concerns for now, they were gradually becoming more aware of the algorithmic nature of particular products and services, and the role of data in powering them. Colin Griffiths, of Citizens Advice, explained that while AI was enabling new products and services for consumers, particularly with regards to tailoring them to individual consumers, it was also enabling new things to be done to consumers which might not be in their interests, and which they might not be comfortable with.59 Will Hayter, Project Director of the Competition and Markets Authority, agreed:

“ … the pessimistic scenario is that the technology makes things difficult to navigate and makes the market more opaque, and perhaps consumers lose trust and disengage from markets. The more optimistic scenario is that the technology is able to work for consumers”.60

53. A number of witnesses highlighted the need for transparency when AI was being used with respect to consumers. For example, the Electronic Frontier Foundation told us of the need for transparency regarding the use of AI for dynamic or variable pricing systems, which allow businesses to vary their prices in real time.61 While this is mostly used at present to adjust prices in accordance with market fluctuations, as with online flight booking sites, it is increasingly allowing retailers to adjust prices according to what a specific individual customer is willing or able to pay, without necessarily making it apparent how much other customers are paying for the same thing.
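
The distinction drawn above can be made concrete with a short sketch contrasting the two forms of variable pricing. This is a hypothetical illustration: the scarcity and urgency signals, the willingness-to-pay estimate and every number in it are invented, not taken from any real retailer or booking site.

```python
# Hypothetical contrast between market-driven dynamic pricing and
# personalised pricing. Every signal and number here is invented.

BASE_PRICE = 100.0

def market_dynamic_price(seats_left: int, days_to_departure: int) -> float:
    """Classic dynamic pricing: the same price for everyone at a given
    moment, driven only by market conditions (as with flight bookings)."""
    scarcity = max(0.0, 1.0 - seats_left / 200)
    urgency = max(0.0, 1.0 - days_to_departure / 90)
    return round(BASE_PRICE * (1 + 0.5 * scarcity + 0.5 * urgency), 2)

def personalised_price(estimated_willingness_to_pay: float) -> float:
    """Personalised pricing: the price tracks a model's estimate of what
    this particular customer will pay, so two customers buying the same
    item at the same moment can quietly be shown different prices."""
    price = estimated_willingness_to_pay * 0.95
    return round(min(max(price, BASE_PRICE), BASE_PRICE * 2), 2)

# Everyone sees the same price at this moment: 173.33
print(market_dynamic_price(seats_left=40, days_to_departure=30))

# Personalised prices diverge, invisibly to each customer: 104.5 and 171.0
print(personalised_price(estimated_willingness_to_pay=110.0))
print(personalised_price(estimated_willingness_to_pay=180.0))
```

The transparency concern attaches to the second function: nothing the customer sees reveals that the price was tailored to a model’s estimate of what they, specifically, would pay.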

54. While witnesses acknowledged the limits to consumer education, and noted that additional consumer protections may be necessary, many nevertheless felt that mechanisms for informing consumers about the use of AI should be explored.62 The Market Research Society told us that “consumer facing marks with consumer recognition” could be a “useful tool in building consumer trust across markets and can form a vital part of the framework for regulating the use of AI”, a view shared by several other witnesses.63 Professor Spiegelhalter said there might be certain difficulties in terms of defining what AI meant for these purposes, but was also broadly supportive.64 However, Will Hayter was more sceptical of such an approach, pointing to the relative obscurity of the AdChoices logo (intended to identify online adverts which have been specifically targeted at individuals) as a discouraging precedent.65

55. Other witnesses were supportive of telling consumers when they were dealing with AI, especially with regards to chatbots, which have gradually begun to replace some forms of online customer service previously performed by humans. Future Intelligence highlighted how on some websites and services, AI chatbots inform the user: “I’m just a bot but I’ll try to find the answers for you”.66 Doteveryone suggested that organisations using AI in this way should be required to declare where and how they use it, “similar to declarations of the use of CCTV or telephone recording”, and should be ready to explain AI functions and outcomes in plain terms.67 With respect to the legal sector in particular, journalist and author Joanna Goodman, and academics from the University of Westminster Law School, told us that legal services using AI should:

“ … be explicit in communicating about its technology to the user so that the user understands what kind of AI it is, how it works, and whether, or at what stage in the process a human is involved; consider appropriate levels of transparency in how they use AI to interact with customers; provide clarity on how AI benefits the customer experience; inform customers about the provisions in place to safeguard their data”.68
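
Disclosure of this kind is technically cheap to provide. The fragment below is a schematic sketch, not any real product’s code: a hypothetical customer-service chatbot that declares its automated nature at the start of the interaction and offers a route to a human on request, along the lines Future Intelligence and Doteveryone describe.

```python
# Schematic sketch of a customer-service chatbot that declares itself.
# The disclosure text echoes the example quoted above; the answering
# logic is a placeholder, not a real AI system.

DISCLOSURE = (
    "Hi, I'm just a bot, but I'll try to find the answers for you. "
    "Type 'human' at any point to be put through to a person."
)

def answer(question: str) -> str:
    """Placeholder for the automated answering logic."""
    return f"Here is what I found about {question!r}."

def chat_session() -> None:
    # Declare the use of AI at the point of interaction, before any
    # substantive exchange takes place.
    print(DISCLOSURE)
    while True:
        message = input("> ").strip()
        if not message:
            continue
        if message.lower() == "human":
            # Always keep a route to a human available on request.
            print("Transferring you to a human adviser...")
            break
        print(answer(message))

if __name__ == "__main__":
    chat_session()
```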

56. The need for a more context-specific approach to informing people about AI is clear. While general improvements in public understanding of, and engagement with, AI are welcome, many people will still be unable or unwilling to grapple with the subject in the abstract. Nor, indeed, should they have to. They should be informed, in clear, non-technical language, in a way which allows them to make a reasoned decision regarding their own best interests.

57. Whatever the organisation or company in question, people should be provided with relevant information on how AI is involved in making significant or sensitive decisions about them, at the point of interaction. Without this, it will be very difficult for individuals and consumers to understand why they are presented with different information, offers and choices from their fellow citizens, to know when they are being treated fairly or unfairly, and to challenge decisions.

58. Artificial intelligence is a growing part of many people’s lives and businesses. It is important that members of the public are aware of how and when artificial intelligence is being used to make decisions about them, and what implications this will have for them personally. This clarity, and greater digital understanding, will help the public experience the advantages of AI, as well as opt out of using such products should they have concerns.

59. Industry should take the lead in establishing voluntary mechanisms for informing the public when artificial intelligence is being used for significant or sensitive decisions in relation to consumers. This industry-led approach should learn lessons from the largely ineffective AdChoices scheme. The soon-to-be established AI Council, the proposed industry body for AI, should consider how best to develop and introduce these mechanisms.


38 Written evidence from The Royal Society (AIC0168)

39 Ibid.

40 Written evidence from Raymond Williams Foundation (AIC0122); Professor John Naughton (AIC0144); Baroness Harding of Winscombe (AIC0072); Google (AIC0225); Department of Computer Science University of Liverpool (AIC0192); Dr Toby Walsh (AIC0078) and Transport Systems Catapult (AIC0158)

41 Ipsos MORI Social Research Institute, Public views of Machine Learning: Findings from public research and engagement conducted on behalf of the Royal Society (April 2017): https://royalsociety.org/~/media/policy/projects/machine-learning/publications/public-views-of-machine-learning-ipsos-mori.pdf [accessed 5 February 2018]

42 Written evidence from Raymond Williams Foundation (AIC0122)

43 Written evidence from Dr Ansgar Koene (AIC0208)

44 Written evidence from Dr Toby Walsh (AIC0078); Dr Will Slocombe (AIC0056); Ocado Group plc (AIC0050) and Foundation for Responsible Robotics (AIC0188)

45 Written evidence from Professor John Naughton (AIC0144) and Dr Toby Walsh (AIC0078)

46 Written evidence from Baroness Harding of Winscombe (AIC0072); University College London (AIC0135) and Royal Academy of Engineering (AIC0140)

47 Q 10 (Sarah O’Connor, Rory Cellan-Jones and Andrew Orlowski)

48 Q 10 (Sarah O’Connor)

49 Written evidence from Professor Kathleen Richardson and Ms Nika Mahnič (AIC0200)

50 Q 216 (Professor Sir David Spiegelhalter)

51 Written evidence from Dr Jerry Fishenden (AIC0028)

52 Written evidence from Transport Systems Catapult (AIC0158); Dr Malcolm Fisk (AIC0012) and Innovate UK (AIC0220)

53 Written evidence from Dr Ozlem Ulgen (AIC0112); Q 123 (Dr Julian Huppert) and Q 216 (Professor Sir David Spiegelhalter)

54 Q 214 (Professor David Edgerton)

55 Ibid.

56 Written evidence from Big Brother Watch (AIC0154); Royal Academy of Engineering (AIC0140); Information Commissioner’s Office (AIC0132) and Q 220 (Professor Sir David Spiegelhalter)

57 Q 218 (Professor Peter McOwan)

58 Written evidence from Information Commissioner’s Office (AIC0132)

59 Q 86 (Colin Griffiths)

60 Q 86 (Will Hayter)

61 Written evidence from Electronic Frontier Foundation (AIC0199)

62 Written evidence from Dr Jerry Fishenden (AIC0028); Professor John Preston (AIC0014) and Dr Will Slocombe (AIC0056)

63 Written evidence from Market Research Society (AIC0130); Deloitte (AIC0075) and Future Intelligence (AIC0216)

64 Q 217 (Professor Sir David Spiegelhalter)

65 Q 87 (Will Hayter)

66 Written evidence from Future Intelligence (AIC0216)

67 Written evidence from Doteveryone (AIC0148)

68 Written evidence from Joanna Goodman, Dr Paresh Kathrani, Dr Steven Cranfield, Chrissie Lightfoot and Michael Butterworth (AIC0104)




© Parliamentary copyright 2018