AI Safety: correcting 200,000 years of human error

Text of a speech given at the Slaughter & May Data Privacy Forum

Amid the dizzying possibilities of technology, AI, and data, in recent years we have somehow often wound up worrying about how AI is going to go wrong.

  • Sometimes apocalyptically wrong – some of our greatest scientists and engineers like Stephen Hawking and Elon Musk predict that AI may spell the end for humanity.
  • Sometimes we worry about AI going philosophically wrong – asking who our self-driving car should crash into when it is forced to choose.
  • Sometimes it is going officially wrong – like when the recommendation engine for probation officers is biased against ethnic minorities.
  • And sometimes it is just going prosaically wrong, like suggesting we should buy that Marvin Gaye album over and over again.

But whatever the cause – the popular imagination is pretty worried about the downside of these ‘Weapons of Math Destruction’.

And I think I understand why. AI impinges on our own identities. In an era when work for so many of us has come to define our sense of purpose: we look for love and find work instead. If work is our purpose, then the prospect of technological unemployment is not just an economic threat, but an existential threat. Our worries about AI are really worries about ourselves.

But at the same time, we’ve stopped worrying about ‘software’ going wrong. We just assume that it will. From cyber hacks to Windows crashes, the demonstrated fallibility of software is so obvious that it is hard to see how it can reliably go right.

But more reliable machines are possible. And it is right that we should strive to improve the safety and performance of machines, as well as their treatment of data privacy. We don’t build bridges and then bolt on safety considerations afterwards. In physical engineering, safety considerations are built in from the start. So it should be with software.

As software and AI take a more critical place in our daily lives, code standards will need to rise. We shouldn’t move fast and break things – we should move fast and make things better. The Government has recently set up the Centre for Data Ethics and Innovation to do just this. It aims to ensure that trust becomes the watchword for Britain’s AI innovation.


Human Error

But today I don’t want to focus on the machines getting things wrong. I’d like to look at the less remarked-upon, but far more common, incidence of people getting things wrong.

A slightly less than scientific study, reported in the Daily Mirror, found that the average person will make 773,618 decisions over a lifetime – and will come to regret 143,262 of them. Let’s look at some of those errors:

  • According to the World Health Organisation, more than 1.25 million people die each year as a result of road traffic crashes. Road traffic injuries are the leading cause of death among those aged 15 to 29. Between 20 and 50 million more people suffer non-fatal injuries. I suspect that in 20 years’ time we will look back with astonishment that we were allowed to drive a two-tonne lump of metal around, at speed, on our own.
  • In medicine, a wide variety of research studies suggest that breakdowns in the diagnostic process result in a staggering toll of harm and patient deaths. The average incidence of diagnostic errors in medicine is estimated at 10-15%. To pick just one area – up to 30% of breast cancers are missed in mammography.
  • In manufacturing, 23% of all unplanned downtime is the result of human error; in telecoms, the figure is over 50%.
  • Some years ago, the Mars Climate Orbiter – a $330 million spacecraft – was destroyed because of a failure to convert properly between metric and imperial units of measurement.
  • In HR decisions, bosses sometimes hire, fire, and promote the wrong people.
  • And all of us, I think, can recognise a certain reluctance to change our minds in the face of contrary evidence – I, perhaps we, tend to search for reasons why we were right all along, not why we might be wrong.

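The Mars Climate Orbiter loss is the classic argument for making units explicit in code rather than carrying them in the programmer's head. As a minimal sketch (the class, constant, and readings here are illustrative, not NASA's actual software), encoding the unit in the type turns a silent mismatch into an explicit, named conversion:

```python
from dataclasses import dataclass

LBF_S_TO_N_S = 4.448222  # one pound-force second expressed in newton-seconds


@dataclass(frozen=True)
class Impulse:
    """Thruster impulse, always stored internally in SI units."""
    newton_seconds: float

    @classmethod
    def from_pound_force_seconds(cls, value: float) -> "Impulse":
        # Imperial data enters the system only through an explicit,
        # named conversion - it cannot be mistaken for SI.
        return cls(newton_seconds=value * LBF_S_TO_N_S)


raw_reading = 1.0  # reported in lbf*s by one team, read as N*s by another
wrong = Impulse(newton_seconds=raw_reading)            # silent ~4.45x error
right = Impulse.from_pound_force_seconds(raw_reading)  # explicit conversion
```

In a stricter design the bare constructor would be hidden entirely, so the silent path is impossible to write; dedicated units libraries such as pint take the same idea further.
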
If you’re looking for a systematic treatment of human error – you may be surprised to learn that the most cogent explanation I found was in, of all places, the literature produced by the Health and Safety Executive. It tells us that there are three types of human error:

  • First, errors of execution – slips and lapses – which occur in familiar tasks that we carry out without much conscious attention; we are vulnerable when our attention is diverted. These make up over half of recorded errors.
  • Second, errors of planning – decision-making failures – when we get it wrong but believe we are right.
  • Last, violations – intentional rule-breaking. And we do love stories about the good reasons for breaking bad rules.

We don’t talk about this much, because it is normal – everyday human experience. There are ways of reducing these errors, but it is hard.

The rise of behavioural economics and cognitive psychology has taught us about over 100 ways in which people can be irrational, or suffer from ‘cognitive biases’. For example: “People tend to overestimate the importance of what we know, and under-recognise the effects of what we do not know. We see patterns where there aren’t any. We tend to give weight to more recent events, and confuse chronological order with causation. We cling to certainty even when it is much costlier than uncertainty.”

Those biases are not just identified, but often explained with an evolutionary theory about why they exist. We understand all this. But all of that scientific understanding hasn’t made us much more rational.

Please don’t misunderstand me. I think people are great. Everyone is trying their best. No-one comes to work to do a bad job. And there is real beauty in good character, even if our rationality is built with crooked timber.

But this is a challenge. Untended, it is a problem that is likely to get worse rather than better in the coming years. 

  • The flood of information now produced means we can no longer read and digest all that has been written, even when it is directly relevant and we can access it at the click of a button. Police investigators, intelligence officers, solicitors and criminal barristers have their work cut out.
  • We increasingly live within social networks and cities too large for our brains to traverse. 
  • And the complexity of our supply chains and systems of policy and governance are steadily growing too.

But we now live at the dawn of an age in which help is at hand. AI has come out of the lab, and into our offices, homes, and pockets.


The Possibility of Safe, High-Performing AI

The opportunity to reduce human error arises not because machines are infallible, but because they are more predictable and more transparent.

We can model the behaviour of a machine learning algorithm much more easily than we can for a person. Our minds are more of a black box than any algorithm. 

And we can now see that in many areas machines, or people and machines together, are driving down rates of error.

  • Currently the AI in self-driving cars is about as safe as a human driver. And it won’t be licensed for general use on the roads until it is much safer than it is now.
  • AI is being used alongside people in case-processing in government for passports and visas – significantly reducing overall error rates and the incidence of fraudulent applications.
  • And AI is now powering predictive maintenance for our trains and planes – anticipating component failure better than people can, and helping the mechanics to service vehicles before they break, not afterwards.

Slightly depressingly, the source of AI error is generally human – either in the data or in the code.

  • In the data – there is always the risk that the AI simply learns human errors and repeats them, rather than correcting them. That’s what happened with the probation algorithm I mentioned earlier.
  • And in the code – we need to test for edge cases more seriously. In the Uber autonomous vehicle crash, for example, the car had been programmed to take account of a cyclist or a pedestrian, but not a pedestrian pushing a bike.

Test-Driven Development is a software method that encodes expected behaviour in tests before the code is written – and when something does go wrong, a new test ensures the software never fails in that way again. Data scientists are getting better at applying these techniques to machine learning.
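As a minimal sketch of that idea (the classifier and its features are hypothetical stand-ins, not any real perception stack), every failure observed in the field joins a growing edge-case suite that the model must pass before each release:

```python
# Each entry records an observed failure: the input that fooled the system
# and the label it must produce from now on. The suite only ever grows.
EDGE_CASES = [
    # A pedestrian pushing a bike has wheels, but is still a pedestrian.
    ({"wheels": True, "upright_person": True, "walking_pace": True}, "pedestrian"),
    ({"wheels": True, "upright_person": False, "walking_pace": False}, "cyclist"),
]


def classify(features: dict) -> str:
    """Toy stand-in for a trained model's decision rule."""
    if features["upright_person"] and features["walking_pace"]:
        return "pedestrian"  # even when wheels are present (bike being pushed)
    return "cyclist" if features["wheels"] else "pedestrian"


def run_edge_case_suite() -> bool:
    """Release gate: the model must get every recorded edge case right."""
    return all(classify(x) == expected for x, expected in EDGE_CASES)


print(run_edge_case_suite())  # → True
```

The discipline is the point, not the toy rule: once a failure is captured here, no retrained model that regresses on it can reach production unnoticed.
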


Cognitive Transparency

Not only can computers reduce the error rate to which we are subject; we can also achieve a level of transparency and articulacy about the reasoning of these algorithms that we have never reached with human rationality.

Very often, we just aren’t sure why people think the thoughts they do. How we come to a decision is rather mysterious, despite our ability to rationalise that decision after the fact with a compelling story. 

While we can point to some general causes of errors as the HSE does, individual, specific cases are much harder to pin down.

Because of this predictability and auditability, there is a real opportunity to eliminate most of the sources of human error that have dogged us through history.

It would be a scandal if we used untrained people to perform skilled tasks. There are plenty of laws that forbid it in many fields. But it should also be a scandal when we don’t use trained machines to give the best service to citizens, businesses, and customers. Especially when the cost and availability of expert advice and decision making support excludes the most vulnerable in society.

We don’t use log tables for maths any more. We don’t rely on the Knowledge for navigation around town. Some might call that dumbing down – but I think it is progress. On balance, we should worry less about the robotic future than about the human fallibility of the present. As we introduce AI into our offices and factories, we should strive for those algorithms to be safe – but we should also scrutinise the human alternative more critically than we often do.

Despite such a litany of errors this morning, I hope you will join me in remaining optimistic. Our future is not going to be one of indolence and unemployment. The robots are coming to work with us; they won’t be perfect either, but they will help us to fail in new and more interesting ways.

So, in the words of that ever-optimistic playwright Samuel Beckett, perhaps that should be our mission after all: “Ever tried. Ever failed. No matter. Try again. Fail again. Fail better.”