Text of a speech given at the Slaughter & May Data Privacy Forum
Amid the dizzying possibilities of technology, AI, and data, in recent years we have somehow often found ourselves worrying about how AI is going to go wrong.
- Sometimes apocalyptically wrong – some of our greatest scientists and engineers, like Stephen Hawking and Elon Musk, have warned that AI may spell the end for humanity.
- Sometimes philosophically wrong – worrying about whom our self-driving car should crash into when it has to make the choice.
- Sometimes officially wrong – like when the recommendation engine used by probation officers turns out to be biased against ethnic minorities.
- And sometimes just prosaically wrong – like suggesting we buy that Marvin Gaye album over and over again.
But whatever the cause – the popular imagination is pretty worried about the downside of these ‘Weapons of Math Destruction’.
And I think I understand why. AI impinges on our own identities. In an era when work, for so many of us, has come to define our sense of purpose, we look for love and find work instead. If work is our purpose, then the prospect of technological unemployment is not just an economic threat but an existential one. Our worries about AI are really worries about ourselves.