Photo: Iulian Ursu (cc)
My speech notes, for a talk given to the Westminster Abbey Institute on 31 May 2018
This evening I’d like to present a problem within what I believe to be the most transformative technology of our lives: artificial intelligence. I’ll suggest why I think that problem will involve some colossal rows involving money, guns, and lawyers. And as well as explaining the problem, I’d like your help to find the right way for us to respond professionally and personally, so I look forward to the discussion afterwards.
- Who am I, and why am I here?
-
- I’m a married father of two, living in south-west London, and missing the family during the half-term holiday.
-
- Like you, I’m an alumnus of the Fellows’ programme at the Institute. I’ve always sought an inflated sense of purpose in what I do, which is easy to find in public life. But it is difficult to hang on to that sense of purpose without moral and spiritual reflection, which I think the Institute catalyses.
-
- I’m someone who has slipped between the public and private sectors, technology and policy, institutions and startups. I’ve worked in HM Treasury, and in Africa; in the Department of Energy & Climate Change, and in a small engineering charity; in the Cabinet Office, and in an AI firm – because I really value the perspective that one can get by moving to a different position.
-
- Right now, I work at ASI Data Science. We help people create artificial intelligence to solve business and policy problems. I’d say we’re now the leading specialist AI agency in Europe. My job is to make sure we don’t go bust.
- We have worked with more than 150 organisations, as diverse as Amnesty International, on their marketing; Easyjet, on their staff scheduling; Isaac Physics, to help people learn faster; and Siemens, on predictive maintenance for their trains.
-
- In Government, we’ve used AI in the Home Office to help spot and remove terrorist propaganda on social media platforms; in the Department for Education, to better forecast the number of teachers required in each local authority; in the NHS, to work towards a more effective way to predict the recurrence of cancer; and in local government, to better identify houses in multiple occupation.
-
- We believe that AI is for Everyone. It is a remarkable technology that has the potential to solve hard problems and automate hard work. It isn’t just for the Internet Advertising giants. It isn’t even just for companies and governments. It should benefit workers, and customers, and citizens: everyone.
-
- What is AI?
-
- I think it is helpful to define AI in the context of science and the scientific method:
- Science, as described by Karl Popper, is the process of collecting, analysing, and presenting data from the world, and of testing theories that describe that data.
-
- That second step – analysis and deduction – is the most mysterious: it requires human intelligence, which to Karl Popper was a black box.
-
- Data Science is the scientific method implemented in software. Artificial Intelligence is the set of techniques that replicate the analytical, deductive step performed by human intelligence in the scientific method.
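- To make that concrete, here is a minimal sketch of the loop in Python (an invented illustration, not any real ASI project): collect data, induce a theory, and then test that theory against observations held back from the fitting.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1. Collect data from the world (simulated here as noisy measurements).
x = rng.uniform(0, 10, 200)
y = 3.0 * x + 2.0 + rng.normal(0, 1, 200)

# 2. The deductive step: induce a theory (a straight line) from the first
#    150 observations. In AI, a learning algorithm plays this role.
slope, intercept = np.polyfit(x[:150], y[:150], deg=1)

# 3. Test the theory against the 50 observations it has never seen --
#    Popper's insistence that a theory must survive attempts to falsify it.
held_out_error = np.mean(np.abs((slope * x[150:] + intercept) - y[150:]))
print(f"theory: y = {slope:.2f}x + {intercept:.2f}; held-out error: {held_out_error:.2f}")
```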
-
- So AI isn’t magic, it’s science. And it isn’t just advanced statistics. AI is different because it (see the sketch after this list):
- Performs more complex function approximation – e.g. image recognition
- Learns continuously – e.g. AlphaGo Zero
- Usually aims at automation rather than human ‘insights’ – e.g. trading algorithms
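- To illustrate the first point, a toy sketch of function approximation (my own example; image recognition works on the same principle, but with millions of parameters rather than a handful):

```python
import numpy as np

rng = np.random.default_rng(1)

# Examples drawn from an 'unknown' relationship: here, a noisy sine wave.
x = np.linspace(0, 2 * np.pi, 100)
y = np.sin(x) + rng.normal(0, 0.05, 100)

# Fit a degree-7 polynomial to the examples: a crude but genuine
# function approximator -- no human wrote down the sine rule.
coefficients = np.polyfit(x, y, deg=7)
approximation = np.polyval(coefficients, x)

print(f"max error vs the true curve: {np.max(np.abs(approximation - np.sin(x))):.3f}")
```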
-
- AI started in the 1950s, and has progressed in fits and starts – or ‘winters’ and ‘summers’, as they are called in the field. For simplicity, there are two types of AI to be aware of (contrasted in the sketch after this list):
- Symbolic AI (1950s-1980s): Pre-defined models; Hand-coded rules; Small data sets
- Statistical AI or machine learning (1990s-present): Learning algorithms; Unspecified rules; Large data sets
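- To make the contrast concrete, a toy sketch (invented data, assuming Python with scikit-learn) showing the same task – spotting spam – done both ways:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Symbolic AI: a human hand-codes the rule.
def is_spam_symbolic(text: str) -> bool:
    return "winner" in text.lower() or "free" in text.lower()

# Statistical AI: an algorithm learns the rule from labelled examples.
emails = ["free money winner", "meeting at noon",
          "claim your free prize", "minutes from the board meeting"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

vectorizer = CountVectorizer()
model = MultinomialNB().fit(vectorizer.fit_transform(emails), labels)

print(is_spam_symbolic("Free money!"))                      # hand-coded rule
print(model.predict(vectorizer.transform(["free prize"])))  # learned rule
```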
-
- Symbolic AI failed because it was always constrained to working on very small problems – it couldn’t deal with the complexity of the real world. Statistical AI has advanced because of the accessibility of vast sources of data, and the exponential increase in computing power.
-
- Money, Guns & Lawyers are all intricately bound up in the future of AI:
- Enormous amounts of money are now pouring into the field. Like the historical investment in electrification or the Internet, I don’t expect it to slow down any time soon.
-
- The military is at the forefront of experimentation – autonomous weapons are particularly controversial. Quick show of hands: who would be comfortable with the British military using autonomous weapons?
- We have said we won’t – but missile defence systems are largely autonomous already, as are cyber attacks.
- Russian and Israeli firms (e.g. Kalashnikov and Airobotics) are developing autonomous combat drones using AI, and more will follow.
- South Korean guns on the sentry posts along the border with North Korea are partly autonomous.
-
- And if you thought the conversations provoked by GDPR were tricky – the lawyers are going to have an absolute field day with AI – who is liable for the decisions that AI models make? What counts as discrimination? What rationale has to be provided for decisions made by machines? The list goes on and on.
-
- Why does AI matter as a technology?
- AI is a General Purpose Technology – like electricity, or the shipping container, or machine tools – with so many secondary uses and implications that we can’t even imagine all of them.
-
- Let’s take self-driving cars. They are coming. They are one example of social and economic transformation driven by AI, but one among many. I suspect our children may never need to learn to drive. What are the implications of AI in cars? (h/t to Benedict Evans)
-
- Reduce the c.1 million road deaths per year globally. Something over 90% of all accidents are caused by driver error, and a third of fatal accidents in the USA involve alcohol. There is a huge economic cost to these accidents: property damage, medical and emergency services, legal costs, lost work and congestion. One UK analysis found a cost of £30bn every year.
- Fewer people employed as drivers. There are something over 230,000 taxi and private car drivers in the USA and around 1.5m long-haul truck-drivers.
- Taxis become 75% cheaper, because the wage of the person driving the vehicle accounts for three quarters of the cost.
- Higher road capacity & faster journeys. No lanes, no separation, no stopping distances, and no signals mean profoundly different traffic patterns. Accidents themselves cause as much as a third of congestion.
Slightly more speculatively, we might project:
-
- No more looking for parking spaces & no more need for on-street parking
- Collapsed distinction between cars & buses
- Renaissance of Rural Pubs
- Media Consumption time rises
- Falling crime from car camera footage
-
- Beyond self-driving cars there are hundreds of other applications. You must have seen some of them in the media. The coverage is a mixture of good and bad – “AI will give us better French Fries” says one, “AI Could lead to Nuclear war by 2040” says another.
-
- But AI isn’t attracting such a lot of interest just because of its economic potential – or even because of the potential for apocalypse (we don’t worry about asteroids in the same way) – but because AI is entangled in stories about ourselves – about our identities.
- AI and what it means to be human
-
- A lot of people – from Alan Turing in the 1950s to Stuart Russell today, along with most of Silicon Valley and influential philosophers like Peter Singer and Julian Savulescu – appear to fix on cognition and analytical intelligence as the principal way to define a person, and a person’s value.
-
- It is interesting to me that even the critics of AI – like Yuval Harari, the author of Homo Deus, are often just as reductionist in their comparison between human and machine intelligence.
-
- I think that is an impoverished definition: I think character trumps cognition, but we’ll come back to that.
-
- Defined in this way, personhood sets machine intelligence up to be in direct competition with human intelligence, and therefore with us as people. Stories spread and anxiety grows about how machines will judge us, and how they may replace us.
-
- And in terms of cognition – the machines are getting brighter. It may be helpful to think about intelligence in some qualitative categories rather than just as a general scale. In ascending order of difficulty for machines we have:
- Calculation – this is simple
- Prediction – this can be complex, but is now common
- Recognition – this is new
- Understanding – contextual intuition is easy for us but very hard for machines
-
- Computers are getting better at these at the same time as we’re becoming increasingly aware of our own cognitive limitations. We aren’t often straightforwardly rational. We can be, and usually are, biased, emotional, lazy, or distracted in our thinking – that’s me, anyway; I’m less sure about you.
-
- Our cognitive biases are widely exploited. Quite a few methods of psychological manipulation are now used as reliable business models – advertising in particular.
-
- And it is easy to think that emotional or subjective thinking is less good than the super-rational kind – that we should only ever aspire to the life scientific.
-
- But what if it takes more than intelligence to understand the world? What if intelligence is insufficient? What if, as Martin Heidegger suggested, the world is more than a set of facts?
- AI Ethics: how will AI be used?
-
- Like any technology or tool, AI is a capability that opens up a series of possibilities. It isn’t inherently good or bad, but it will be used in positive and less positive ways. I’d love to explore not only what that might look like, but also how we can understand the current framework and biases that are likely to characterise the ethos of applied AI.
-
- The most common way of thinking about AI ethics is to imagine all the things that could go wrong, why they might go wrong, and start to construct ways of avoiding those failures.
- Autonomous vehicles crashing
- AI Chatbots spitting obscenities into the Internet
- More scarily – Deep Fakes
- Less visibly – algorithms used to triage applications that discriminate by ethnicity
-
- But I think an equally fruitful way to explore AI ethics is to focus on what ‘going right’ looks like:
- Cost efficiency or customer satisfaction?
- Efficient allocation of labour, or full employment?
- And in an era where even the quality of our sleep is now quantified – what should happen to those things that count, but cannot be counted? Like mercy? Or grace?
- Most counter-intuitively, might a purely maximalist approach to optimising outcomes overlook the value within their opposites: the power within weakness, the trap of independence, or the good that can emerge from suffering?
-
- AI as a technology lends itself to an ethic of rationality in which the world really is just a set of facts, and in which a Western, Cartesian, objective and distant representation of the world is thought sufficient to make sense of it.
-
- One reason for the uptake of AI across government is because it is entirely in tune with the gospel of New Public Management:
- It is the philosophy that you can’t manage what you can’t measure. It started in the private sector: Tom Peters – In Search of Excellence (1982).
- Then it came to the public sector: Osborne and Gaebler – Reinventing Government (1992).
- It was (excellently) embodied by Michael Barber, who set up the PM’s Delivery Unit and has recently written How to Run a Government.
- It is a remarkably resilient management philosophy, reflected in a centrist political consensus for the last 25 years. Almost unobjectionable – who could be against efficiency?
- Strengths:
- It broke open the closed shop of the professions – teaching, medicine, law – and recognised that accountability is necessary, even for experts.
- It offers freedom from rules, by specifying outcomes and outputs and encouraging innovation in process.
- It is not partisan but technocratic, so it can be embraced by left and right – you can target equality or performance; as long as you’ve got measurable KPIs, it works.
-
- However AI within a philosophy of managerialism looks rather threatening (unless you’re a manager). ‘Good’ looks like taking people out of the loop, with the goal of greater efficiency.
- And managerialism has recently run aground:
- The bureaucracy of the DWP, dramatised in I, Daniel Blake. The bureaucracy of the Home Office in the Windrush controversy. The bureaucracy of HMRC when it comes to charitable donations and reclaiming tax. And in the NHS – yes, even the NHS: the unsubtle hints that terminating a fetus with Down’s syndrome is probably for the best. Most prosaically: how many times have you been kept on hold? How many times have you been funnelled into a ‘process’ for the convenience of the company or organisation, rather than the citizen?
-
- All five are examples of managerialism deployed in the interests of the organisation rather than the citizen.
-
- Moreover, everywhere we are faced with ‘wicked problems’ that can’t be solved analytically, or with more operational efficiency:
- Debt counselling → relational imperative, not a transactional one.
- Adult Skills Retraining → motivation of students and providers and employers, changing culture as well as services.
- Brexit → not an economic argument, but a political clash of identities
-
- I suspect they can only be solved relationally – not transactionally, because they involve a clash of values rather than an analytical mystery.
- An ethics of AI that just delivers efficiency could do more harm than good:
- In Healthcare → automating diagnostics → what about the patient-doctor relationship?
- In Education → personalised pedagogy through MOOCs → what about the inspiration of human tutorials?
- Out of the box, AI works to get tasks and transactions done; it doesn’t help the formation of character, or the provision of kindness. In the language of David Brooks, AI is about the résumé virtues, not the eulogy virtues.
- Unless we recognise the essence of public services and duties within relationships and character, we will use AI to optimise our way to an impersonal and unresponsive state.
- What can we do?
-
- Well, I’d love to hear your suggestions. But here are three that occur to me:
-
- First – let’s not fall prey to CP Snow’s Two Cultures. In his 1959 lecture, Snow declared: “If the scientists have the future in their bones,…then the traditional culture responds by wishing the future did not exist.” Arguably that division continues today. We can stand against it by learning how to commission, sponsor and assure the use of AI where we work – for example, through an ‘AI for Executives’ style training course.
-
- Second, we should lobby for the use of AI, but also for it to be transparent and accountable – don’t blindly trust the machines, and don’t just rely on the lawyers.
-
- Last, and perhaps most critically, we should use the efficiency savings from AI to invest in relationships, community, and social cohesion – for ourselves, as well as for the public that we serve.
In conclusion
I believe AI is a general purpose technology, much like electricity or computers, with such broad application that in 50 years’ time there will scarcely be a corner of life untouched by its impact.
But there are two possible futures that involve artificial intelligence. Not the apocalyptic vs. the ecstatic visions of commentators such as Elon Musk and Ray Kurzweil that capture the headlines – that is merely a speculative debate about what the technology will be able to do in 20 years’ time. More important for us right now are the two ways in which the technology will be used today.
We stand today between a future that is increasingly quantified, optimised, and managed, and a future that emphasises the relational heart of work and services. These two futures will co-exist, but they will be felt very differently. Because of its power, I think AI will tip the balance, one way or another. And there will be rows. Those rows will involve money, guns and lawyers. The way they are resolved will define our politics, our experience of work, and our social integration for a generation to come.
Questions?
I’m grateful to a number of people for their thoughts and suggestions for this talk – including Adrian Brown, Will Davies, Sally Phillips….