237.The social implications of any new technology can often be overlooked in the excitement to embrace it. In this chapter we focus on the need to prepare future generations to engage and work with artificial intelligence, the potential effect it may have on social and political cohesion, and on inequality in the UK.
238.Artificial intelligence, regardless of the pace of its development, will have an impact on future generations. The education system needs to ensure that it reflects the needs of the future, and prepares children for life with AI and for a labour market whose needs may well be unpredictable. Education in this context is important for two reasons. First, to improve technological understanding, enabling people to navigate an increasingly digital world, and inform the debate around how AI should, and should not, be used. Second, to ensure that the UK can capitalise on its position as a world leader in the development of AI, and grow this potential.
239.Our witnesses told us of the need to improve the data skills, digital understanding and literacy of young people in the UK. Google said: “one of the most important steps we must take so that everyone can benefit from the promise of AI is to ensure that current and future workforces are sufficiently skilled and well-versed in digital skills and technologies”. Baker McKenzie, a multinational law firm, told us that “education and training will be essential to prepare the workforce to use these emerging technologies effectively”. In December 2017, the Federation of Small Businesses reported that 26% of small business owners lacked confidence in their basic digital skills and 22% believed that a lack of basic digital skills among their staff was preventing them from becoming more digital. The Government recognised this too, and said they have “an important role to play in ensuring that our workforce is equipped to respond and is taking actions at all stages of the digital skills pipeline”.
240.Paul Clarke told us “what is not talked about enough is the fact that those skills lie at the end of what is a pipeline of digital literacy that stretches all the way back to primary school”. He said “I use the phrase ‘digital literacy’ as opposed to ‘coding’. I see digital literacy as being a much bigger portfolio. It includes things such as data literacy: how you harness data, how you visualise it, how you model it, how you understand bias”. Dr Mark Taylor, Global Strategy and Research Director for Dyson, agreed with Paul Clarke, and told us “it is extremely difficult to hire AI talent in the UK”.
241.We heard that more emphasis should be placed on computer science in the overall curriculum. The UK Computing Research Committee, an Expert Panel of the British Computer Society, told us “the UK lags behind many other states in terms of the attention paid to the teaching of Computing Science (as opposed to IT-training which focuses on the ability to use particular applications)”. The Committee also warned us that “initiatives to improve computing science education in the UK are poorly coordinated”. Dr Huma Shah and Professor Kevin Warwick, researchers in artificial intelligence, told us they “strongly believe that AI as a subject should be embedded into the school curriculum from primary age”. They explained that this could help increase the diversity of those working in AI.
242.Other witnesses pointed out that, with limited learning time, an increased focus on computer science will necessarily mean a reduction in other subjects. In particular, they expressed concerns that the teaching of arts and humanities subjects, which are closely linked with creative thinking, communication and the understanding of context, may suffer. Andrew Orlowski, Executive Editor of The Register, cautioned against the over-emphasis on computer science and literacy he believed was occurring in UK schools:
“ … my children go to an outstanding primary school in north London where they are taught algorithms every week but they are taught history once or twice a term and art, maybe, once or twice a term. There is an opportunity cost; there is only so much time for educating people. I question the value of teaching them algorithms. That is probably part of a balanced curriculum, but if they do not know culture and history how can they account for the world? … You need those things probably more than you need to know how to use a computer”.
243.Miles Berry, Principal Lecturer, School of Education, University of Roehampton, agreed that “the breadth and balance of the curriculum is absolutely paramount”. Future Advocacy said the Government’s reforms to technical education should “encompass a drive on STEM skills and coding in schools, but must also encourage creativity, adaptability, caring and interpersonal skills which will provide a crucial comparative advantage for humans over machines over a longer timeframe”.
244.Others added nuance to this debate by suggesting the focus should be on digital understanding, rather than skills. Doteveryone said: “The best preparation the general public can have for AI, and indeed any technological change, is to have digital understanding”. They added “where digital skills enable people to use digital technologies to perform tasks, digital understanding enables them to appreciate the wider context of and around those actions”. Graham Brown-Martin, an education and technology researcher, told us that it was “far more important for young people to understand that the digital world is a built environment in exactly the same way the physical world is and that it contains all the biases and other limitations of the physical world” than it is for them to be able to code. Professor Rosemary Luckin, Professor of Learner Centred Design at University College London, said “understanding the limitations of technology is really important, as is being able to demand from technology rather than being demanded of by technology”.
245.We were told of the adverse effect an increasingly digital world was having on children in the UK. Professor Maja Pantic, Professor of Affective and Behavioural Computing, Imperial College London, said that children have reduced attention spans and shallower cognitive capabilities, and experience a loss of identity, as a result of time spent online and using social media. Professor Pantic warned us that the idealised world represented on social media “leads to many illnesses including eating disorders … and serious mental illnesses”. Professor Pantic told us the increasing use of AI would add to this problem. Miles Berry told us the computing curriculum now requires schools to teach children from as early as five years old how to protect themselves in the digital world, for example, by keeping personal information private.
246.After the Royal Society highlighted significant shortcomings in the National Curriculum’s approach to computer education in 2012, the Government introduced a new computing curriculum from September 2014, aimed at addressing these problems and shifting education away from the use of basic software (the focus of ICT) towards coding and software development. Professor Hall told us that it was too early to tell what impact it was having. Miles Berry helped design this new curriculum, and told us that the draft submitted to Ministers went “much further” than the current curriculum does in addressing personal morality in relation to technology. Berry said: “We included as an aim that children should be taught to develop an awareness of the individual and societal opportunities, challenges and risks raised by digital technology”.
247.Professor Luckin said that establishing which ethics should be taught “would be a bit of a minefield” but “it is a conversation that we have to start having, because it is really important”. Graham Brown-Martin noted the absence of ethics in other digital arenas:
“At the moment within social media platforms we are seeing the results of not having ethics, which is potentially very damaging. You are talking about a question for society to answer in the public domain about what our ethics are. Just because we can do something does not mean that we should do it, and I think we are on that cusp”.
248.On our visit to Cambridge, Microsoft Research told us that one of the central issues with computer science education at present was that it tended to be taught only, or primarily, by computer scientists.
249.It is clear to us that there is a need to improve digital understanding and data literacy across society, as these are the foundations upon which knowledge about AI is built. This effort must be undertaken collaboratively by public sector organisations, civil society organisations (such as the Royal Society) and the private sector.
250.The evidence suggests that recent reforms to the computing curriculum are a significant improvement on the ICT curriculum, although it is still too early to say what the final results of this will be. The Government must be careful not to expand computing education at the expense of arts and humanities subjects, which hone the creative, contextual and analytical skills which will likely become more, not less, important in a world shaped by AI.
251.We are, however, concerned to learn that the wider social and ethical implications of computer science, which featured in the computing curriculum as originally proposed, are absent from the curriculum as adopted. We recommend that the wider social and ethical aspects of computer science and artificial intelligence be restored throughout the curriculum, in the form originally proposed.
252.Artificial intelligence, and more broadly, computer science, are fast-moving and complex areas to understand. Microsoft told us “it is vital that teachers continue to be supported in a way that enables them to deliver the new curriculum in the most effective way possible”. Other witnesses expressed similar views.
253.In November 2017, the Royal Society published its report After the reboot: computing education in UK schools. This report found that there were no Teacher Subject Specialism Training courses available for computing; that in England the Government met only 68% of its recruitment target for new entrants to computing teacher training courses from 2012 to 2017; and that teachers felt the Government had changed the subject they teach, without providing them with sufficient support to teach it effectively. This confirms much of what our witnesses told us.
254.In February 2015 the House of Lords Select Committee on Digital Skills, in the summary to its report, Make or Break: The UK’s Digital Future, issued a robust call to arms which emphasised the need for urgency and cohesion in the delivery of the nation’s digital future. Three years later it is informative to revisit the well-considered recommendations of that Committee.
255.The Autumn Budget 2017 announced a number of measures aimed at improving computing education at the primary, secondary and further education stages. These included:
256.These steps are welcome, and look to address many of the immediate concerns of both the Royal Society and our own witnesses. However, the acute shortage of confident, well-trained specialist teachers across the public sector will continue to be an issue until it is resolved.
257.While we welcome the measures announced in the Autumn Budget 2017 to increase the number of computer science teachers in secondary schools, a greater sense of urgency and commitment is needed from the Government if the UK is to meet the challenges presented by AI.
258.The Government must ensure that the National Centre for Computing is rapidly created and adequately resourced, and that there is support for the retraining of teachers with associated skills and subjects such as mathematics. In particular, Ofsted should ensure that schools are making additional time available to teachers to enable them to train in new technology-focused aspects of the curriculum. We also urge the Government to make maximum use across the country of existing lifelong learning facilities for the training and regular retraining of teachers and other AI experts.
259.Supplementary to the Hall-Pesenti Review, the Government should explore ways in which the education sector, at every level, can play a role in translating the benefits of AI into a more productive and equitable economy.
260.AI may have many social and political impacts which extend well beyond people’s lives as workers and consumers. The use of sophisticated data analytics for increasingly targeted political campaigns has attracted considerable attention in recent years, and a number of our witnesses were particularly concerned about the possible use of AI for turbo-charging this approach. The Leverhulme Centre for the Future of Intelligence highlighted the risk that “sophisticated algorithms could be used to tailor messages to large numbers of individuals to a degree impossible for traditional advertisers. Such systems will increasingly blur the lines between offering, persuading and manipulating”. Indeed, even the upbeat idea posed to us by John McNamara, a Master Inventor at IBM, of AI avatars trained on our “tastes, likes, dislikes [and] political views”, which could “scour all available data (from Hansard to the Daily Mail) to provide you with a recommendation on who to vote for and why, based on your world view” raised troubling questions about the appropriate role for AI in mediating our democracy.
261.Witnesses also outlined the impact that AI might have on our wider perception of the world around us. The rise of ‘filter bubbles’—the idea that social media is increasingly feeding us information which aligns with our preconceived notions of the world, and closing us off from information which contradicts that world view—has been a much documented phenomenon. Witnesses said this phenomenon would likely be further exaggerated by AI. The Charities Aid Foundation (CAF) argued that “as a growing proportion of our experience becomes mediated by these AI-driven interfaces, the danger is that they will seek to present us with choices and interaction based on existing preferences and thus will limit our experience even further (perhaps without us even realising it)”. This in turn could lead to “heightened social isolation and decreased community cohesion”. The BBC also expressed concerns that AI could “come to control the information we see and the choices offered to us, and there is real worry over the role AI (and the organisations controlling AI services) will play in shaping the norms and values of society”. The CAF also noted that the rise of ‘non-traditional interfaces’, such as conversational AI assistants, could heighten this effect by only providing one set of information in their responses.
262.AI makes the processing and manipulating of all forms of digital data substantially easier and cheaper. Given that digital data permeates so many aspects of modern life, this presents opportunities, but also unprecedented challenges. AI could increasingly prove to have reality-distorting implications in other domains as well. In recent years researchers have shown how AI can be used to convincingly alter photographs and video footage, turning daylight scenes to night and placing words in the mouths of public figures. When we visited the BBC Blue Room we were shown an AI application which allows artificial copies of any individual’s voice to be replicated with relative ease. Witnesses also said a major challenge posed by AI was its potential use in the creation of fake news. Just as computer-generated imagery has transformed cinema and television in the past 20 years, we are now witnessing the emergence of AI applications which are allowing similar manipulations of everyday still and video footage and audio recordings on an industrial scale, without the need for extensive funding or expertise. This risks creating a world where nothing we see or hear can be taken on trust, and where ‘fake news’ becomes the default rather than the outlier.
263.Our witnesses also expressed concern that if too many political decisions are delegated to machines, the feelings of powerlessness and exclusion felt by some could be further amplified. As Future Intelligence said, “the most challenging point relating to AI and democracy is the lack of choice that is offered to the population at large about the adoption of technology. It is, to say the least, undemocratic”. Dr Andrew Blick, Senior Lecturer in Politics and Contemporary History at King’s College London, suggested to us that AI might have a profound impact on the way that decisions are made by Government Ministers, and on the advice given by and to the civil service. He added that “if artificial intelligence can lead to the more effective delivery of services required by the public, it is desirable from a democratic perspective”, but warned that it could challenge the concept of ministerial responsibility to Parliament.
264.On 23 January 2018, the Government announced that it was establishing a National Security Communications Unit within the Cabinet Office “tasked with combating disinformation by state actors and others”. The role of the unit, and the impact of this approach, remains to be seen.
265.There are many social and political impacts which AI may have, quite aside from people’s lives as workers and consumers. AI makes the processing and manipulating of all forms of digital data substantially easier, and given that digital data permeates so many aspects of modern life, this presents both opportunities and unprecedented challenges. As discussed earlier in our report, there is a rapidly growing need for public understanding of, and engagement with, AI to develop alongside the technology itself. The manipulation of data in particular will be a key area for public understanding and discussion in the coming months and years.
266.We recommend that the Government and Ofcom commission research into the possible impact of AI on conventional and social media outlets, and investigate measures which might counteract the use of AI to mislead or distort public opinion as a matter of urgency.
267.The growing prevalence of AI raises questions about how economic inequality will be addressed in future. Some economists, most notably David Autor, have argued that the polarisation of the job market over the past thirty years, towards low-skilled and high-skilled jobs, and away from medium-skilled jobs, is likely to be reversed, as some low- and medium-skilled jobs are likely to be relatively resistant to automation, while some highly-skilled but relatively routine jobs may be automatable with AI. But most agree that automation is likely to mean that highly-skilled workers, who are typically more adaptable and will have a larger stake in AI, are likely to take a growing proportion of income, while low-skilled workers, who have typically struggled to adapt to technological change and will have at least some work taken away from them by machines, are more likely to struggle.
268.The Charities Aid Foundation echoed the concerns of many when they told us that AI “could exacerbate the situation by concentrating wealth and power in the hands of an even smaller minority of people who own and control the technology and its applications”. Research Councils UK emphasised that “the resources and expertise required for the Big Data approach to AI is likely to concentrate economic power in the hands of a relatively small number of organisations and companies”, while the BBC noted that “while AI is expected to impact both white and blue collar jobs, we are concerned that the most vulnerable in society will suffer the most disruption to their employment due to AI”. Sarah O’Connor said:
“The big question that people in the economics and labour market world are thinking about is: how will those gains be distributed? If indeed AI leads to vast increases in efficiency, using fewer workers, does that mean that all the wealth that is created from that will go to the people who own the AI—the intellectual property—and the data that feeds into it? If so, what does that mean for the people who might be displaced out of jobs? Will there be new jobs to replace those old ones? If there are, will they be of the same quality?”
269.Olly Buston said that, given the nature of jobs most amenable to automation, inequality may develop unevenly across different parts of the country, with most parts of London potentially faring well, and parts of the Midlands and the north of England suffering the most. Sarah O’Connor recently emphasised the need to think about automation-related inequality in terms of places, as people are often far less mobile than economists might like, and without new jobs in smaller towns and more deprived parts of the UK, regional inequality will prove very difficult to tackle.
270.A range of approaches have been suggested, which fall into two broad categories. The first is to focus on re-training, as the Government appears to be doing with its recent announcement of a National Retraining Scheme, as discussed in Chapter 5.
271.The second is to pursue more radical policies for redistributing the gains from AI and automation. Of these, the most discussed is the concept of a ‘universal basic income’ (UBI), whereby everyone would be provided with a standardised monthly income from the Government. This would replace most if not all other forms of welfare payment, and would be paid regardless of whether people were in work or not. We received a range of opinions on this subject. Many of our witnesses expressed interest in UBI, and were supportive of pilot schemes being carried out in various parts of the world, with Scotland being the most recent example. However, a number of reservations were also expressed, with some witnesses believing it was premature to consider such a radical measure, while others argued that work provided people with a sense of meaning and purpose, which could not be addressed with a simple cash payment.
272.Future Advocacy raised the idea of a so-called ‘robot tax’, most prominently suggested by Bill Gates, on companies which adopt automating technologies, in order to support redistributive and retraining initiatives. They noted that it might “provide a solution to the potential problem that reduced employment will lead to reduced income tax and National Insurance revenues”, which together account for almost 60% of total tax revenue. However, they also told us that more work needed to be done to ensure such an idea would foster rather than hinder innovation.
273.The Government’s response to this question has so far been mixed at best. As discussed in Chapter 5, the National Retraining Scheme is a promising initiative, although it will require a considerable political commitment to ensure its success. In terms of the potential for regional inequality, we are pleased that the Government’s Industrial Strategy discussed regional development at length, and plans for local industrial strategies, which will complement the national strategy, are to be applauded. On the other hand, the Government has not explicitly engaged with the possibility of AI and automation-related inequality, either in its Industrial Strategy or its response to our call for evidence. Meanwhile, the early interest taken by the Prime Minister in improving social mobility appears to have been deprioritised, and the recent resignation of all four members of the Social Mobility Commission, citing a lack of progress, does not bode well. More recently, the Prime Minister stated at Davos that, alongside the “opportunities of technology” such as AI, there was a need to “shape this change to ensure it works for everyone”, but it remains to be seen whether these sentiments are followed up with concrete action.
274.The Scottish Government has taken a different tack, and is supporting four areas in Scotland—Glasgow, Fife, Edinburgh and North Ayrshire—in the design of pilot basic income schemes, with £100,000 allocated to the schemes in the draft budget, and additional funds coming from local budgets. The civil service has estimated that a Scotland-wide roll-out would cost around £12.3 billion. Nicola Sturgeon MSP, First Minister of Scotland, said: “It might turn out not to be the answer, it might turn out not to be feasible. But as work and employment changes as rapidly as it is doing, I think it’s really important that we are prepared to be open-minded about the different ways that we can support individuals to participate fully in the new economy”. However, it is expected that it will take between 12 and 18 months to design the schemes, and it seems likely that it will take several years after that before any results are forthcoming.
275.The risk of greater societal and regional inequalities emerging as a consequence of the adoption of AI and advances in automation is very real, and while the Government’s proposed policies on regional development are to be welcomed, we believe more needs to be done in this area. We are not yet convinced that basic income schemes will prove to be the answer, but we watch Scotland’s experiments with interest.
276.Everyone must have access to the opportunities provided by AI. The Government must outline its plans to tackle any potential societal or regional inequality caused by AI, and this must be explicitly addressed as part of the implementation of the Industrial Strategy. The Social Mobility Commission’s annual State of the Nation report should include the potential impact of AI and automation on inequality.
320 Written evidence from Google
321 Written evidence from Baker McKenzie
322 Federation of Small Businesses, Learning the Ropes – Skills and training in small businesses (11 December 2017), p 8 [accessed 25 January 2018]
323 Written evidence from HM Government
324 (Paul Clarke)
326 (Dr Mark Taylor)
327 Written evidence from UK Computing Research Committee
329 Written evidence from Dr Huma Shah and Professor Kevin Warwick
330 (Dr Timothy Lanfear) and written evidence from Dr Jerry Fishenden
331 (Andrew Orlowski)
332 (Miles Berry)
333 Written evidence from Future Advocacy
334 Written evidence from Doteveryone
336 (Graham Brown-Martin)
337 (Professor Rosemary Luckin)
338 Written evidence from Professor Maja Pantic
339 (Miles Berry)
340 Royal Society, Shut down or restart: The way forward for computing in UK schools (January 2012) [accessed 23 January 2018]
341 Royal Society, After the reboot: computing education in UK schools (November 2017), p 17 [accessed 23 January 2018]
342 (Professor Dame Wendy Hall)
343 (Miles Berry)
344 (Professor Rosemary Luckin)
345 (Graham Brown-Martin)
346 Written evidence from Microsoft
347 See written evidence from Google; The Association for UK Interactive Entertainment (Ukie) and Ocado Group plc
348 Royal Society, After the reboot: computing education in UK schools (November 2017) [accessed 31 January 2018]
349 Select Committee on Digital Skills, Make or Break: The UK’s Digital Future (Report of Session 2014–15, HL Paper 111)
350 Written evidence from Leverhulme Centre for the Future of Intelligence
351 Written evidence from Mr John McNamara
352 Written evidence from Charities Aid Foundation
353 Written evidence from BBC
354 Written evidence from Charities Aid Foundation
355 James Vincent, ‘New AI research makes it easier to create fake footage of someone speaking’, The Verge (12 July 2017) [accessed 1 February 2018]; James Vincent, ‘NVIDIA uses AI to make it snow on streets that are always sunny’, The Verge (5 December 2017) [accessed 1 February 2018]
356 See, for examples of this,
357 Written evidence from Accenture UK Limited; Future Intelligence; and (Dr Ing Konstantinos Karachalios)
358 Written evidence from Future Intelligence
359 Written evidence from Dr Andrew Blick
360 ‘Government announces anti-fake news unit’, BBC News (23 January 2018) [accessed 24 January 2018]
361 David Autor, ‘Why are there still so many jobs? The history and future of workplace automation’, The Journal of Economic Perspectives, vol. 29, no. 3 (2015) [accessed 1 March 2018]
362 Written evidence from Charities Aid Foundation
363 Written evidence from Research Councils UK and BBC
364 (Sarah O’Connor)
365 (Olly Buston)
366 Sarah O’Connor, ‘Our robot era demands a different approach to retraining’, Financial Times (23 January 2018) [accessed 24 January 2018]
367 Written evidence from Dr Andrew Pardoe; Deep Learning Partnership; 10x Future Technology; Future Advocacy; IEEE European Public Policy Initiative Working Group on ICT; Mr Thomas Cheney and Amnesty International
368 Written evidence from Sage and (Sarah O’Connor)
369 Written evidence from Dr Paula Boddington; The Knowledge, Skills and Experience Foundation; (Olly Buston) and Charities Aid Foundation
370 Written evidence from Future Advocacy
371 , pp 214–239
372 Jim Pickard, ‘Theresa May’s Social Mobility Commission walks out’, Financial Times (3 December 2017) [accessed 22 January 2018]
373 Philip Sim, ‘Citizen’s income: Could it work in Scotland?’, BBC News (27 December 2017) [accessed 22 January 2018]
374 Libby Brooks, ‘Scotland united in curiosity as councils trial universal basic income’, The Guardian (25 December 2017) [accessed 22 January 2018]
375 Philip Sim, ‘Citizen’s income: Could it work in Scotland?’, BBC News (27 December 2017) [accessed 22 January 2018]