AI in the UK: ready, willing and able?

Appendix 6: Note of Committee visit to Cambridge: Thursday 16 November 2017

On 16 November, the Select Committee on Artificial Intelligence visited Cambridge to see the work being done there on artificial intelligence. The Committee visited Microsoft Research’s Cambridge office, Prowler.io and Healx, two AI-focused start-ups, and the Leverhulme Centre for the Future of Intelligence (CFI), an interdisciplinary research group based at the University of Cambridge.

Six members of the Committee were in attendance,647 as was Dr Mateja Jamnik, Specialist Adviser to the Committee.

Microsoft Research

The Committee began its tour of Cambridge with a visit to Microsoft Research’s Cambridge office, where it met with David Frank, Government Affairs Manager, Professor Christopher Bishop, Technical Fellow and Laboratory Director, and Abigail Sellen, Deputy Laboratory Director and Principal Researcher. Refreshments and lunch were provided to the Committee in the course of this meeting. Professor Bishop started by noting that Microsoft Research, the arm’s-length, international research wing of Microsoft, was celebrating its twentieth anniversary in Cambridge, and pre-dated the arrival of many of the other large US technology companies in the area. Professor Bishop had been at Microsoft since 1997, and had become laboratory director two years previously.

Professor Bishop emphasised that the laboratory’s work covered more than just machine learning, and encompassed a wide variety of academic and professional disciplines, with engineers, professional designers and social scientists among their staff. They had recently opened a wet laboratory on the site, where they were experimenting with programmable biology. Microsoft Research saw themselves as sitting between business and academia, and aimed for a collaborative and sustainable relationship with the latter; in this vein, Professor Bishop noted that they were careful not simply to ‘hoover up’ top computer science academics, as this could be detrimental to the long-term teaching and training of the skilled researchers on whom they themselves depended. When asked whether the hype around AI should be believed, Professor Bishop said that AI was overhyped, but that a transformational moment in software development was nonetheless occurring, comparable to the moment when Moore’s Law for hardware was identified. In developing their own systems, Microsoft did not use their customers’ data: instead they used data generated by staff to inform AI systems such as Clutter in MS Outlook.

Abigail Sellen then gave a presentation focused on ethical aspects of their work. There were three overarching principles to this:

She told us that the key areas for progress at the present moment included addressing the potential power imbalances created by AI, unlocking the black-box aspects of many AI systems by developing intelligible systems, mitigating bias in algorithms, and democratising AI by ensuring that AI tools reached as many people as possible.

In her view, one of the central issues with computer science education was that it tended to be taught only, or primarily, by computer scientists, and did not integrate contributions from other disciplines (she noted that she herself had come from a background in cognitive psychology). This was leading to, for example, a focus on the intelligence, rather than the intelligibility, of AI systems, which was becoming increasingly problematic. Some in the AI research community argued that research should be refocused away from more complex, impenetrable deep learning models towards more intelligible, additive models, and Microsoft Research was exploring this area.
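As an illustration of what is meant by an additive model, the sketch below fits a toy generalised additive model by backfitting one shape function per feature. The synthetic data, tree-based shape functions and fitting loop are assumptions made purely for illustration, and do not represent Microsoft Research's own methods.

```python
# Minimal sketch of an intelligible, additive model: a toy generalised additive
# model fitted by backfitting. All data and modelling choices are illustrative
# assumptions only, not Microsoft Research's actual approach.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 3))                       # three synthetic features
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.1, 500)

# One shallow, univariate "shape function" per feature keeps the model additive.
shape_fns = [DecisionTreeRegressor(max_depth=3) for _ in range(X.shape[1])]
residual = y - y.mean()
for _ in range(20):                                         # backfitting iterations
    for j, fn in enumerate(shape_fns):
        current = fn.predict(X[:, [j]]) if hasattr(fn, "tree_") else 0.0
        partial = residual + current                        # add feature j's effect back in
        fn.fit(X[:, [j]], partial)                          # refit its shape function
        residual = partial - fn.predict(X[:, [j]])          # remove the new estimate

# The prediction is simply the sum of per-feature contributions, each of which
# can be plotted and inspected on its own - the source of the intelligibility.
prediction = y.mean() + sum(fn.predict(X[:, [j]]) for j, fn in enumerate(shape_fns))
```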

Professor Iain Buchan

Professor Iain Buchan, Director of Healthcare Research, presented to the Committee on the democratisation of AI for healthcare, a shift in focus for Microsoft in recent years. Discussing conventional approaches to healthcare and public health, he highlighted that traditional computer modelling often became outdated very quickly, and that new healthcare technologies had tended to focus only on particular areas. Microsoft analysis suggested that many GPs around the world were still reliant on crude health ‘dashboards’, which highlighted what was going wrong with patient health but offered no guidance on how to address these problems.

Microsoft was focusing on a much more systematic, holistic approach to healthcare data, which aimed to use AI, alongside the integration of new sources and forms of patient data, to find new solutions to a range of diseases and health issues, and empower patients to take more control of their own personal wellbeing. Professor Buchan suggested that in practice, this might mean being able to reverse Type 2 diabetes in 5–10% of cases using only improved diet and lifestyle changes, assisted, for example, through a smartphone app. Similarly, Microsoft believed that a data-driven approach to understanding allergies could produce promising results. In order for AI and data to be used to improve healthcare, Professor Buchan told the Committee that trust was essential, and that organising principles between researchers and hospitals were needed.

InnerEye demo

Members of the Committee were shown a demonstration of Microsoft’s ‘InnerEye’ technology, which was being developed to assist oncologists in the analysis of x-ray and MRI scans. At present, these scans often had to be laboriously marked up by hand. For example, before treating cancer with targeted x-rays from a LINAC machine, a scan needed to be marked up to show both the target of the treatment and any organs which needed to be protected. This was time-consuming work normally performed by highly qualified oncologists, and therefore also very expensive. The Committee was shown how the software analysed a test scan in 20 seconds, a process which would normally take anywhere between 90 minutes and, in the most complex cases, two days. The researchers who had developed the software noted that it had the potential to dramatically reduce the cost of analysing scans, allowing far more scans to be taken over the course of a treatment, and more accurately targeted treatment as a result. They also emphasised that the software was not perfect and its output would generally need to be checked and amended by oncologists, reaffirming their principle that this technology should augment, rather than replace, human workers.

In follow-up questions, the researchers revealed that, as the system did not rely on deep learning, comparatively small datasets of fewer than 200 people could be used to train it. The data used generally came from older datasets, which raised fewer privacy issues. While the system was capable of learning from new data (for example, improving its mark-up by learning from the corrections made by human oncologists), Microsoft had not pursued this, as it would have privacy implications and new frameworks would be required.

Microsoft Research and accessibility

The last presentation from Microsoft Research focused on their work using AI to help people with disabilities. Cecily Morrison (a researcher in the Human Experience and Design group) and David Frank emphasised that this work often had wider utility as well, as many features could also be helpful for non-disabled people. They saw their work as fundamentally about finding new ways to provide information about the world, helping to remove the last obstacles to a fully inclusive society.

The Committee was then shown two demonstrations of AI-powered products developed to help blind people. The first was Microsoft’s Seeing AI app, which used machine learning to describe what a smartphone’s camera was pointed at. A wide range of things could be described, from what an item of food was (by scanning its barcode) through to identifying particular people, provided they had already been registered in the app. The other product was Project Torino, a physical computing system which helped teach blind primary school children the basic principles of coding. It used plastic hubs connected via cords which, when linked to a computer, could be used to code musical compositions or tell a story.

Prowler.io

The Committee then visited the offices of Prowler.io, a company founded in January 2016, and met with Vishal Chatrath, co-founder and CEO, and his team. Since its founding, the company had closed its first round of seed funding in August 2016 and grown to over 50 staff of 24 different nationalities, with 24 PhDs among them. The founders explained that they had set up the company because they observed that most AI start-ups were focused on using AI for visual recognition and classification, a problem they believed to be largely solved. They instead set out to develop technology which could reliably make the millions of ‘micro decisions’ found in complex systems operating in dynamic environments, in which there were often high degrees of uncertainty. In particular, they were focusing on transport, financial services and the games industry.

They identified two issues with conventional machine learning approaches:

On the second point, they emphasised their interest in the transparency of AI systems and the traceability of decisions made by them, and observed that this was not only about ethical principles, but also more mundane issues, such as the ability to acquire liability insurance for their products, a crucial consideration for real-world deployment. They were keen to move beyond the machine learning systems used today, and combined three widely used approaches (probabilistic modelling, multi-agent systems and reinforcement learning) to create their own innovative methodology, which they claimed to be the first of its kind. The aim was to build an approach to AI which would be observable, interpretable, and controllable.

The Committee was shown a case study in which Prowler.io had developed software to model the demand for taxis across the city of Porto, Portugal, which they believed could improve efficiency across the entire system by 40%. They explained that, in their view, many previous attempts to model the movements of ride-sharing and private-hire fleets had performed poorly, as the data tended to update too slowly, and many were based on the problematic assumption that more data from more driving would improve the models. They argued that this was not the case, as no model developed in this way would be able to account for very infrequent occurrences. Their approach, which integrated probabilistic modelling with real-world data, could help with this.

This highlighted one of their key objectives, which was to develop systems requiring far less data than those currently dependent on deep learning models. While it was always good to have more data, human beings generally did not require very much data to make a decision, and Prowler.io aimed to replicate that ability. They also noted that data, much like crude oil, needed to be refined before it could be used. In general, too much emphasis was placed in the industry on the data itself, and not enough on the processes by which it was refined and actually understood. As one of their team members put it, “we need big knowledge, not big data”.
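As a rough illustration of the data efficiency described above, the sketch below fits a generic probabilistic model (a Gaussian process) to a handful of points and returns an uncertainty estimate alongside each prediction. The toy demand figures and kernel choice are assumptions for illustration; this is not Prowler.io's actual system or methodology.

```python
# Minimal sketch of probabilistic modelling from very little data: a Gaussian
# process fitted to six observations, returning a prediction plus an uncertainty
# estimate. Toy figures and kernel choice are illustrative assumptions only.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

hours = np.array([[0.0], [6.0], [9.0], [12.0], [18.0], [23.0]])  # time of day
demand = np.array([2.0, 5.0, 14.0, 9.0, 16.0, 4.0])              # e.g. taxi pick-ups

gp = GaussianProcessRegressor(kernel=RBF(length_scale=3.0) + WhiteKernel(1.0))
gp.fit(hours, demand)

query = np.linspace(0, 23, 24).reshape(-1, 1)
mean, std = gp.predict(query, return_std=True)   # prediction and its uncertainty
# Where std is large the model is signalling that it has little evidence, which
# is the sense in which such systems "know what they do not know".
```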

When asked why they had decided to base themselves in Cambridge, they compared it to Silicon Valley, where, in their view, the development of AI was 80% hype and 20% technological development, whereas Cambridge achieved the reverse ratio. They also explained that while the presence of Cambridge University was important, and some of them respected the long tradition of scientific achievement within the city, more prosaically it was the density of large technology companies now based there that made the bigger difference. Each of them was taking a personal risk with the company, but if the company folded, most were confident that they could still find work elsewhere in the city. They emphasised that a liberal visa regime, and a positive, open message on the value of immigration in general, were crucial to attracting skilled labour to the UK. They believed that government funding was less important, as raising private sector investment had not been very challenging. They also noted that Cambridge would need to develop its transport, housing and office infrastructure if it wanted to take full advantage of the AI boom.

Healx

The Committee visited Healx, a three-year-old health-orientated AI start-up with 15 employees and £1.8 million in investment. We met with Michale Bouskila-Chubb (Head of Business Development), Dr Ian Roberts (Head of Biometrics) and Richard Smith (Head of Software Engineering). Their focus was on using AI to combat rare diseases. While very few people worldwide were afflicted with each individual rare disease, collectively there were over 7,000 diseases which fell into this classification, with more than 350 million people suffering from them worldwide. Given that any particular disease had so few sufferers, it was usually considered uneconomical by drugs companies to develop bespoke drugs to cure them. Healx aimed to address this problem by using AI to identify drugs which had already received clinical approval and repurposing them to treat rare diseases. They were a for-profit company, and charged subscription fees to the charities and pharmaceutical companies that they worked with. In some cases, they applied for ‘protection of use’ patents on drugs which they discovered might have new applications, and then sold these licences on to pharmaceutical companies.

Working closely with patient groups and charities, they used a mixture of computational biology and machine learning to understand rare diseases and identify drugs with relevant properties. When identifying drugs, they attempted to feed in data from a wide range of sources, from medical databases to journal articles. In one of their earliest cases, studying a disease which affected around 600 children worldwide, they identified a potentially relevant treatment, and progressed through to early-stage testing.

When the discussion moved on to the challenges they faced, they highlighted four main areas. Data was crucial: they noted that data sharing could often be arduous, and that gaining access to medical data could be difficult for a small company which had not yet established its credibility. While open access publishing was a valuable resource for them, they could still only access around 40% of the relevant literature, with the rest kept behind expensive paywalls by academic publishers.

Like many other companies, they also struggled to recruit people with the necessary skills in machine learning, and when they did, the salaries required could be very high. In terms of funding more generally, they believed that the start-up ecosystem was good at providing investment, but they had not been able to attract any interest from Government agencies.

Their final set of challenges related to communication. They observed that managing expectations around AI could be difficult, as the hype that now surrounded it often led people to believe that AI worked like magic. They found it particularly important to communicate the limitations of AI when dealing with hopeful patient groups who were often desperate for cures. They also faced scepticism from within the pharmacological world, where many scientists were critical of the prospects of AI for drug discovery.

The Leverhulme Centre for the Future of Intelligence

In the final part of the visit, the Committee were hosted by the Leverhulme Centre for the Future of Intelligence (CFI) at the University of Cambridge. A number of academics from within the Centre, alongside a small number of external experts, had been brought together to discuss the implications of AI from an interdisciplinary academic perspective, and to provide an overview of the CFI’s multiple strands of work on this theme.

Proceedings were introduced by Dr Stephen Cave, Executive Director and Senior Research Fellow at the CFI, before Professor Huw Price, Academic Director of the CFI, gave an overview of the Centre’s current work. He explained that the CFI had a number of outposts, including at Imperial College London and the University of California, Berkeley, in the US, and that they were attempting to bring together a diverse community of thinkers to discuss the implications of AI. They faced challenges given the very broad range of issues involved, the need to cover both short-term and long-term questions, and the need to approach these questions in a highly interdisciplinary way which resonated with technologists and policymakers. He further explained that the CFI supported 10 sub-projects in total, had joined the Partnership on AI, and had recently supported a number of international conferences on AI, including two in Japan.

Trust and transparency

The first presentation was given by Professor Zoubin Ghahramani, Dr Adrian Weller and Dr Tameem Adel, who worked on the CFI’s ‘Trust and Transparency’ sub-project. They emphasised the need for tools to be developed to facilitate transparency and interpretability, which fell into three broad categories:

There were many aspects that needed to be considered when developing explainable AI systems. They noted that their work often overlapped with cognitive psychology, as it was often not clear what constituted a good explanation, and this could change depending on the audience. They also emphasised the need to develop trustworthy approaches, rather than simply trust, as there would be some cases where people should exercise scepticism. Equally, there needed to be AI systems which could deal with uncertainty and understand their own limits, alert users when they did not understand an issue, and seek out new information to rectify this.
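One simple way to realise the ‘understand their own limits’ behaviour described above is for a system to abstain and refer a decision to a human when its confidence is low. In the sketch below, the classifier, dataset and 0.8 threshold are purely illustrative assumptions, not the sub-project's own tools.

```python
# Minimal sketch of a model that flags its own uncertainty: predictions below a
# confidence threshold are referred to a human rather than acted upon.
# Model, dataset and threshold are illustrative assumptions only.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000).fit(X, y)

probabilities = clf.predict_proba(X)          # per-class probabilities
confidence = probabilities.max(axis=1)        # how sure the model is per sample
labels = probabilities.argmax(axis=1)

THRESHOLD = 0.8
for conf, label in zip(confidence[:5], labels[:5]):
    if conf < THRESHOLD:
        print(f"Confidence {conf:.2f} too low: refer to a human reviewer")
    else:
        print(f"Predicted class {label} with confidence {conf:.2f}")
```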

AI narratives

The next set of presentations was given by Dr Sarah Dillon, Kanta Dihal and Beth Singler, of the CFI’s AI Narratives sub-project. Dr Dillon began by talking about the importance of fictional stories to policymaking and public debate in the area of AI. The real issues, in her view, were not about malevolent machines, but rather about AI systems which were incompetent, or whose values were not sufficiently aligned with those of society. Cultural values were often unclear, and this could pose challenges when attempting to reflect them in AI. Ultimately, without scrutiny of these issues, AI risked replicating and perpetuating dominant narratives of the past.

Dr Dillon explained how science fiction could act as a ‘repository of thought experiments’ about the future, and Kanta Dihal focused on Isaac Asimov’s famous Three Laws of Robotics. She briefly explained how, though first published in a story in 1942, their paradigm of dominance versus subjugation between humans and intelligent machines had shaped thinking ever since, for example forming the basis of regulations used by the US Navy. She noted that there was a certain perversity to this: Asimov’s robot stories were usually based on the idea that these laws were fundamentally flawed, and explored the ways in which they generated unintended consequences.

Beth Singler finished this section by talking about a series of short films about AI (Pain in the Machine, Good in the Machine, and Ghost in the Machine), which her team filmed to provoke debate about AI and its implications. Each film was released with surveys, which were then used to generate quantitative data about public opinions on the subjects raised.

Bad actors in AI

Dr Seán Ó hÉigeartaigh, Dr Shahar Avin and Dr Martina Kunz, from the Centre for the Study of Existential Risk, a sister organisation to the CFI, gave a brief overview of their work on ‘bad actors’ in relation to artificial intelligence. They focused on the potential misuse to which AI could be put by hackers and other individuals with malicious intent. Their sub-project began when they asked the question: what is different about AI with respect to cybersecurity, and how does it break existing security paradigms?

Among the points they covered, they mentioned the risk that AI could super-charge conventional targeted cyberattacks by allowing hackers to personalise attacks on a much greater scale than before. They also noted that researchers needed to consider the multiple uses to which their research could be put, not simply the optimistic scenarios in which they would like to see it used. Finally, they discussed the dangers of an international arms race, or a new Cold War, developing between nations over the development and use of AI. Although they believed that efforts should be made to shift the international development of AI from a competitive to a collaborative footing, overall they were not optimistic about the possibility of international restrictions.

Kinds of intelligence

The final presentation was given by Dr Marta Halina, Dr Henry Shevlin and Professor José Hernández-Orallo, whose focus was on studying the kinds of intelligence found in the natural world in order to map out potential or desirable directions for artificial intelligence. They observed that the current understanding of intelligence was extremely limited, and that there were no good ways of measuring it in its various forms, nor benchmarks by which to assess the progress of projects such as DeepMind’s AlphaGo.


647 Lord Clement-Jones, Baroness Grender, Lord Hollick, Viscount Ridley, Baroness Rock and Lord St John of Bletso.




© Parliamentary copyright 2018