A sermon preached at St Martin-in-the-Fields on July 16, 2023 by Revd Dr Sam Wells
Reading for address: Romans 8: 1-11
Earlier this year, the CEOs of the leading global tech research organisations announced their recommendation that ‘Mitigating the risk of extinction from artificial intelligence should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.’ This statement marked a parting of the ways.
Up until this year there’d been two conversations about artificial intelligence. The dominant one was one of promise: it was about how artificial intelligence was coming to be used in more and more ways that both took over the roles performed by human beings, resulting in less burdensome work and more leisure, and offered to improve the results of human investigation. The quieter conversation was one of threat: it was about how AI could jeopardise human existence, either by becoming as superior to humans as humans are to gorillas, or by being so set on achieving its goal it stopped at nothing to obliterate humankind. What the tech CEOs are now saying is that the conversation about benign promise has been superseded by the conversation about looming threat. So it’s time for some serious consideration of AI and of how church and world should respond to it. I don’t think Christians have to approach this as powerless observers; instead it can be a moment for reflecting on what really matters about life and faith.
Artificial intelligence, known as AI, refers to the ability of a digital computer or a computer-controlled robot to perform tasks generally associated with intelligent human beings. There are plenty of potential blessings of AI. It can analyse huge amounts of data rapidly, and find patterns the human mind might not perceive – for example in medical diagnosis. It can automate mundane tasks, and even write student essays. It can customise consumer choices and medical treatments. It can enhance education and traffic flow. These are all ways in which hitherto we have seen AI as a tool that can assist us. Inevitably a lot of scrutiny has fallen on what could be the downsides of this new tool. Hacking and cyberattacks could jeopardise the confidentiality of data, while the use of AI for surveillance already evokes significant concern. In an era of fake news, it’s daunting to realise the potential of AI for spreading misinformation: we could quickly be in a world where it’s impossible to tell what’s true, and where only a minority of people even care.
These are the promises and pitfalls of AI that have been in the public domain for several years now, and are already a part of our lives when Amazon tells us ‘Customers who liked headphones also bought buggies,’ or when an iPhone asks us if we’d like to access our bank account by facial recognition. But what these CEOs are saying is that they foresee a time when AI will become not just a tool we can use, but a master that can control us.
We can divide responses to artificial intelligence into three groups. Let’s call them Not on Your Life, Yes, Please, and Yes, But. I want to explain how each of these groups has an implied story.
What we’ve been hearing in the last few weeks is the loud shout of Not on Your Life. This shout anticipates the moment when AI reaches what’s known as the ‘singularity,’ a point where technological growth becomes self-sustaining, uncontrollable and irreversible, almost inevitably dominating human civilisation. Consider a scenario imagined by the Swedish philosopher Nick Bostrom. ‘Suppose we have an AI,’ he says, ‘whose only goal is to make as many paper clips as possible. The AI will realise quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.’ As Stephen Hawking put it in 2014, ‘If a superior alien civilisation sent us a message saying, “We’ll arrive in a few decades,” would we just reply, “OK, call us when you get here–we’ll leave the lights on?” Probably not – but this is more or less what is happening with AI.’
The Not on Your Life story isn’t one of vigilance, in which governments must put careful safeguards in place to avoid a potentially promising development reinforcing oppressive tendencies already present in society. The Not on Your Life story is not one in which some are losers; it’s one in which we’re all losers. It sees the takeover of the human domain by computers and robots as simply a matter of time. This story often goes hand in hand with a sense of determinism – that technology is unstoppable and will hunt humans down whatever we now do.
The story of Yes, Please is one rooted in science fiction. This is an optimistic Silicon Valley story, by which humans can eventually gain mastery over all limitations. It’s a story closely linked to transhumanism – the attempt to extend human lifespan and cognitive capability. Proponents anticipate that people with augmented capabilities will develop into a transcendent species — the ‘posthuman.’ This is a quest for perfection and a conviction that such perfection can be achieved by harnessing advanced technology to human ability – particularly the human brain. It imagines that we can live forever by transferring our brainpower into a device that doesn’t decay. It sees life as surrounded by unnecessary constraints, and AI as a unique opportunity to overcome many of those constraints.
The story told by the third group, Yes, But, isn’t fundamentally opposed to AI. Its story is one of how all new technologies tend to exacerbate the power differentials already at play in the world. Most obviously, those who work in roles that computers or robots could easily displace are already wondering what the future means for them. More subtly, computer analysis reflects the data it’s given: bias against migrants, children, members of indigenous communities and people with disabilities could quickly be exacerbated rather than diminished by AI, and hardwired into employment screening or police records. And it’s certainly possible to tell the AI story as yet another, perhaps the greatest ever, conspiracy of humankind to neglect the non-human creation in its quest for its own enhancement. At the simplest level, humans come to depend so much on AI that we can no longer think critically for ourselves, like a person who lives on fast food losing the ability to cook.
As often, what we’re engaged in as Christians responding to challenges from social innovation is a clash of stories. The three categories I’ve just described identify three different stories. It’s on the level of these competing narratives that we need to embark on a Christian evaluation of AI. So let’s take the three stories and see what’s at stake in each of them.
If we start with the Not on Your Life story, what’s really at stake is the Christian virtue of hope. Hope says human beings will never create a good outcome to their own story, either individually by concocting their own survival in a robot or collectively by creating an idealised utopia. Instead, God brings the future towards us and ushers us into ultimate companionship with the Trinity, one another and the new creation, in spite of and uninhibited by our selfishness and fear. We can’t know if AI will eradicate humankind, but we can trust that it won’t eliminate God. The truth is that humankind will die out eventually, by external catastrophe or internal folly. But hope informs us that what God has in store for us individually and collectively is so much more than what we now have, and that our ultimate future lies in God’s hands, not ours.
Then if we move to the Yes, Please story, what’s at stake is the Christian virtue of faith. The Yes Please story is essentially a remix of an ancient tendency to seek eternal survival through escape – escape from the human body, from the limitations of time and space, from the sheer givenness of the world. ‘Oh for the wings of a dove,’ says the Psalmist, ‘that I could fly away.’ Such verses show us the inclination to escape isn’t a new thing for those obsessed with technological enhancement – it’s been there since religion began. What faith says is that God is with us in and through our created existence: seeking technological escape by transcending our human limitations won’t get us any closer to God or eternity. Christianity fundamentally says we don’t get close to God by escape – we get close to God by God in Christ coming close to us. Transhumanism is an impossible project; and, more importantly, even if it succeeded in preserving parts of us, it would leave behind the parts most worth having.
Which brings us to the third story, the Yes, But story. There’s a lot that’s right about the Yes, But story. Christians affirm what they value about themselves and the world by the way they attend to those most vulnerable in their midst. The Yes, But story’s not wrong. It’s just insufficient. There’s a bigger story, expressed by the Christian virtue of love. The story of love is that human beings were created for relationship – with God, and meanwhile with themselves, one another and the creation. The most important things in life are forming, fostering and restoring relationship. Which means we have a simple test for whether developments in artificial intelligence, or indeed any other product of human invention, are positive or negative: do those developments strengthen, deepen and enrich relationship, or inhibit, evacuate and dismantle it? True, textured and sustainable relationships require dignity, trust, integrity, generosity, sacrifice, kindness, attention, delight – all the words that express the best things in life; all the words that combine to embody what we mean by love. The Yes But approach wants us to legislate safeguards and protocols around AI. That’s wise and necessary. But let’s not for a moment suppose you can legislate love. That’s something we each have to practise, promote and proclaim. Which is why we need church. The church’s response to AI isn’t to dream up some magic new solution. It’s to be renewed in appreciating the significance of its existing core practices: like sharing a weekly meal in which all are equal, all contribute, and all are fed; like caring for the vulnerable in ways that affirm mutuality and dignity; like upholding one another in prayer and practical gestures of solidarity.
Let me finish with two final words. First, what we’ve just perceived is that artificial intelligence is the product of a world that does not employ faith, hope and love as the metrics by which we evaluate everything new that arises. To be a Christian is to believe that, in Jesus and the Spirit, God gives us everything we need, and that faith, hope and love are the ways we respond to the plenteousness that God has already given us. Some technological developments are healthy, some less healthy: the point is, the way we know whether they’re healthy is the extent to which they facilitate faith, hope and love. So AI is testing the church as to whether it truly believes that faith, hope and love are what we’re made for. Paul says in Romans 8, ‘To set the mind on the Spirit is life and peace.’ If AI is a blessing, it will lead to life and peace.
Second and last, the development of AI and the crisis into which we’ve just stumbled, that AI might destroy everything, may yet be a gift to the church in this precise way. Let’s take a moment to ponder what it is that we’re so terrified AI might take away. Is it our control? That was an illusion anyway. Is it our identity? Our identity is a gift of God, not our own achievement or possession. Is it our existence as a species? That lay in God’s hands, not ours, all along. I suggest the thing AI really threatens to take away is our precious ability to relate to one another in profound and meaningful ways, replacing it with life as a concatenation of perpetual and calculated transactions. What Christians need to assert and embody and enrich is the quality of their relationships, with God, one another and the creation. AI is showing us the difference between artificial love and real love.
And here lies a question and an irony. The question is, Do we make those relationships the gravitational centre of our lives, right now – or are we already preoccupied with securing control, identity and survival by other means? The irony is, maybe it’s going to take the threat of AI to make us realise what’s most important in life lest we lose it. Which means the threat of AI may turn out to be showing us what our lives should really have been about all along. And thus, paradoxically, be an extraordinary gift.