What does it mean to be human in a world of AI?

Speech – House of Lords, 19 November 2024

Three men walked into a bar and discussed what it means to be human:

  • To be is to do – said Jean-Paul Sartre: our humanity is self-defined by our actions. Who we are and why we are here is for us to determine.
  • No no no, Plato replied. To do is to be. The purpose of life is not in performance, but in the recognition of the eternal truths that underpin the world and our part in it.
  • Frank Sinatra – overhearing their exchange – summarised his own philosophy: Do be do be do.

Thank you, Dami, for the introduction, and to Ali and Kumar for the invitation. It is a pleasure to join you this evening. Tonight – I want to explore what current and near-term developments in AI – Artificial Intelligence – might mean for our humanity. Moreover, I’d challenge us not just on how to think about AI, but on how we might respond to the opportunities – or perhaps the threats – that it presents to us as individuals, as families, and as a society.

There are three ways in which I think AI bears upon our humanity. I’ll deal briefly with each in turn. Do engage in the Q&A if there are areas you’d like to explore in more detail.

  • First – on the world of work. Many people have derived a really significant part of their humanity – their identity – from their work. As with other major technological shifts, AI will have some pretty profound impacts on the jobs that very many of us do.
  • Second – in our relationships. In contrast to the blockbuster technologies of the past – this is new. AI has afforded us the ability to create interactive agents. Personalities, though not persons. This too will be a game-changer.
  • And finally – for our survival. In a world of existential threat – from armed conflicts, to cyber attacks and bio terrorism, AI is likely to play an increasingly important role. But will it save us, or bury us?

So what will AI mean for our humanity in Work, Love, and War? With four minutes for each. First, Work.

WORK

Since the industrial revolution, many people have seen their purpose in life bound up with their work. Our most common surnames identify us with the professions of our forebears: Smith, Fletcher, Miller, Baker. Even my own – Sargeant – comes from my forefathers’ careers in the military.

Even when your surname doesn’t advertise your work, the question “And what do you do?” arises early in our interactions. These descriptions aren’t neutral. When someone says they are a surgeon, or a barrister, or a professor, we make inferences about their status; their accomplishment; their worth.

In 1930, John Maynard Keynes famously promised us a 15-hour working week, fuelled by technology increasing productivity. He was quite right about the impact of technology on productivity – incomes have risen eightfold since Keynes made his prediction. However, we remain a society that works hard. Indeed, today the rich are more likely to be enterprising workaholics than idle inheritors.

So what will the impact of AI be on our work? Well, we have been worried about robots taking our jobs for the last 200 years, yet given the varied nature of tasks within most jobs, so far AI has replaced few of them. However, with the increasing ability of machine learning and generative AI to analyse research, diagnose illness, create art, and even – ahem – offer strategic advice, AI is at the very least going to become a commonplace tool for most of us, and may significantly reduce the need for some jobs. Taxi Drivers, Call Centre Agents, and GPs may well go the way of Farriers, Coalminers, and Typesetters.

Which sounds pretty – threatening. Even those of us who profess to thrive on change, and who want to support organisations to transform themselves for the benefit of their staff and customers, surely still prefer to be the ones doing the changing rather than having the change done to us. Why should we not adopt a posture of grumpy resignation, if not outraged hostility, towards such a significant set of shifts – like modern-day Ned Ludds?

Well – the main reason, I’d suggest – is that we’re in a bit of a spot right now. Economic growth has stagnated for the last couple of decades. Computers are everywhere except in the productivity statistics. We live within a low-growth environment of scarcity, and with an ageing population and a declining birth rate we will struggle to afford the future provision of the services we desire as a society – from the NHS, to transport infrastructure, to social security, to pension obligations. The OBR, the government’s own budget watchdog, projects that government spending will soar over the next 50 years while revenues stay flat, resulting in a national debt that trebles.

Politics in a stagnant economy gets ugly. AI is perhaps one of our best hopes to kick-start growth, increase wages, and reconstitute an affordable and peaceable future. Moving too slowly to apply AI to public services is going to threaten more lives and livelihoods than the risks of moving too fast.

What should we do then? In the face of tech that threatens the status quo, like the New York Times – should we sue, or should we sign? I would suggest – like the Beastie Boys did with music remixing in the 90s – we should experiment. We’ve been doing this at BCG – with our consultants making frequent use of AI models like Claude, Gemini, and ChatGPT to get their work done. It is definitely saving time – though it doesn’t seem to be reducing the hours.

If you haven’t tried generative AI for yourselves – please do. One of the things that defines our humanity is our agency. So in our work and in our leisure, I would encourage us to experiment with AI and to become producers rather than consumers. Authors – rather than just audiences. Let’s move fast and make things.

But more than that – I would question whether any work is a secure platform for human identity. For some of us – the disruption to our working lives that AI may create could even be a blessing in disguise – to remind us that our worth lies not in our productivity, or our professional status – but somewhere more essential, and unchanging. We are after all, human beings before we are human doings.

LOVE

The second area in which AI is likely to have an impact on our humanity – is in our relationships. 

The Printing Press, the Steam Engine, the Jacquard Loom, the Assembly Line, and the Shipping Container. These have all transformed jobs, industries, and even our own conception of ourselves. But while many revolutionary general-purpose technologies throughout history have reshaped the world of work – they have all done so as tools and processes.

AI promises to be a little different. AI gives us the ability to create Agents – not just dumb tools. From personal tutors, to therapists, to chauffeurs, to personal shoppers – in the next decade we may become surrounded by software servants. Downton Abbey is coming back, and this time it could enable Mr Carson and Mrs Hughes to live ‘upstairs’ alongside the Earl of Grantham and the Marchioness of Hexham.

But with AI Agents as servants – what impact might this have on us? Should we be as courteous to machines as we might be with people? When we speak to Alexa, do we say please? Are we practising for a life of command, or a life of service?

The interaction between intelligent AI and people has long been a rich vein of drama. Films and TV series like Westworld, Blade Runner, Kubrick’s 2001, The Matrix, and Ghost in the Shell all explore these themes – but for me, two stand out:

Mickey Mouse in Fantasia was Walt Disney’s beautiful animation of Goethe’s 1797 poem The Sorcerer’s Apprentice. Mickey, as the apprentice, inexpertly enchants a broom to help him fetch buckets of water. As the broom works, accidents begin to happen. Mickey panics: the room is awash with water, and he is without the magic to stop it. He splits the broom with an axe, which only replicates the broom and doubles the pace of work. Mickey soon loses control of his artificial agents, and chaos breaks loose – in a way that Nick Bostrom has described more recently, only with less dramatic music and more paperclips.

The story has been used as a cautionary tale by everyone from Karl Marx to Elon Musk, and is a lesson in the risks of using technological power without wisdom. And everyone is rightly opposed to foolishness – even if no one can quite agree in advance on the distinction between noble ambition and dangerous overreach.

There are certainly examples of introducing agents into an ecosystem that have gone wrong – but there are also plenty of examples where caution in applying technological innovation has caused more harm than good. For example, Golden Rice was developed in the 1990s to combat vitamin A deficiency, but environmental groups and regulatory barriers delayed its deployment for over 20 years, at an estimated cost of 5–10 million children going blind unnecessarily. When it comes to technological innovation, democratic societies tend to have the brakes of a Rolls-Royce and the engine of a Morris Minor.

Personal tutoring is pretty much the most effective pedagogical intervention known to man for increasing the amount that pupils learn. The educational psychologist Benjamin Bloom found that pupils receiving one-to-one tutoring performed on average two standard deviations better than students who received traditional classroom instruction – meaning the average tutored pupil outperformed roughly 98% of the classroom-taught group. The reason we don’t do this is cost: teachers are expensive. AI agents could be a game-changer for personalised learning. The risk is that we don’t adopt them fast enough.

The second film to illustrate the significance of AI Agents for our humanity is Spike Jonze’s Her (2013). We see Joaquin Phoenix’s Theodore falling in love with an AI, and the film tenderly shows how AI can erode the boundaries between human and artificial relationships. Few of us want to be alone forever, and the internet has already given us the ability to foster relationships with people we have never physically met. AI may redefine what it means to be together.

Seen with the glass half-empty – this might open a path to insularity and even tragedy, as witnessed in the death of a young man in Florida last month following interactions with a chatbot from the firm Character.ai. Surely there is a risk that technological mediation can genuinely displace human connection.

More optimistically, in the film the AI teaches Theodore to recover from his divorce, and to love more deeply and more selflessly than he was able to before – better equipping him to love in his human relationships.

However, our relationships with AI Agents – the super-Alexas to come – will differ in at least one major way from our relationships with each other. As people, we connect most deeply when we are ourselves uncovered – when we are transparent to one another. And AI is one of the most opaque technologies humanity has yet created. However gifted AI becomes as a counsellor, coach, or consultant, it will remain a class apart from our model for human relationships, because AI is inscrutable.

WAR

But, finally, whatever qualms we might have about amorous chatbots – we might have rather more about AI when applied to the battlefield. Killer robots are in their infancy, but autonomy is developing fast and being tested right now in Ukraine and elsewhere. To be human in the face of a drone swarm is to be vulnerable.

AI in war (a weapon of math destruction) will be immune to fear or favour. It offers the possibility of a new objectivity in strategic decision making, and will be immune also to the pursuit of triumph or the defence of honour that have motivated humans to war for centuries.

However, there may be hope of a kind. Traditionally, war has targeted military personnel as the principal agents to defeat. Given the significance of technology – and the potential for autonomy – conflict may offer significantly less incentive to target people, and more to target data centres and other critical national infrastructure.

As Eric Schmidt has said, AI in war will illuminate the best and worst expressions of humanity – serving as both the means to wage war, and to end it.

CLOSING

To be human in a world of AI in 2024 is to be sat right on a hinge of history. Given the significance of intelligence for civilisation, the consequence of AI as a technology for humanity could scarcely be grander.

But despite the wonder of being able to hold a conversation with a personality who isn’t human – I believe those consequences will be determined more by the way we use AI, than just the technological capabilities of AI.

This will require wisdom, even more than regulation. As you’ve heard – broadly, I’m hopeful – particularly given the risks and costs of inaction and the counterfactual degradation of our public services.

The way we all will use AI, in Work, in Love, and in War, will in ten years’ time likely become as unconscious as the way we use electricity. But right now – those patterns haven’t been formed. It will take some time. We’ve only just started to remember how to unmute ourselves on Zoom calls. And so now is a good time to discuss these things – and also to practise.

And however much potential AI might have to improve our economy, our education, and our health, the challenges of the human condition are likely to remain less tractable. Our inclination to serve ourselves may even be exacerbated by AI Agents that expect no reciprocity.

Even with the added intelligence from a world full of AI, I suspect we will still better understand what it means to be human through theology, rather than technology.

Thank you.