By: Akos Balogh
‘We predict that the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution.’
So begins the new report “AI-2027”, which is making waves in the AI world and some US media with its forecast of the imminent arrival of AI superintelligence, and the danger it poses to humanity:
‘The CEOs of OpenAI, Google DeepMind, and Anthropic have all predicted that AGI [Human Level Artificial Intelligence] will arrive within the next 5 years. Sam Altman has said OpenAI is setting its sights on “superintelligence in the true sense of the word” and the “glorious future.” It’s tempting to dismiss this as just hype. This would be a grave mistake—it is not just hype. We have no desire to hype AI ourselves, yet we also think it is strikingly plausible that superintelligence could arrive by the end of the decade.’
While many people have made general comments about the disruption that AI will (most likely) bring, few, if any, have given a blow-by-blow, month-by-month prediction of what this will look like.
Until now.
And the AI-2027 report makes for sobering reading, written for the sake of preparing us for what’s (possibly?) coming:
‘If we’re on the cusp of superintelligence, society is nowhere near prepared. Very few people have even attempted to articulate any plausible path through the development of superintelligence. We wrote AI 2027 to fill that gap, providing much needed concrete detail.’
To put it another way: while our news feeds are currently filled with Trump’s presidency, the advent of AI – and potentially superintelligent AI – may well relegate the current Trump presidency to a footnote of the history you and I are living through.
But how seriously can we take people who forecast the future? Weren’t we meant to be living in space colonies and flying around like the Jetsons, according to earlier forecasts? The AI 2027 authors – including a former OpenAI engineer-turned-whistle-blower, an expert forecaster, and others – explain why we should listen:
‘[O]ver the course of this project, we did an immense amount of background research, expert interviews, and trend extrapolation to make the most informed guesses we could. Moreover, our team has an excellent track record in forecasting, especially on AI. Daniel Kokotajlo, lead author, wrote a similar scenario 4 years ago called “What 2026 Looks Like”, which aged remarkably well, and Eli Lifland is a top competitive forecaster.’
In other words, this isn’t just the imaginings of a science fiction writer, but the serious work of professionals exploring what might happen next.
So what’s in their forecast? And how might Christians respond to it?
While the AI-2027 report is over 70 pages long (highly readable, but at times technical), here’s the Reader’s Digest version of the report, involving a fictional AI company called ‘OpenBrain’:
2025: AI development gathers pace, with the leading US company ‘OpenBrain’ developing AI that can program AI, as it sees this as key to accelerating AI progress. It aims to win against other AI companies in the US, but also against China.
2026: China wakes up and realises the stakes of falling behind in AI research. It doubles down on industrial espionage (i.e. stealing OpenBrain’s AI secrets). OpenBrain’s new AI has started taking jobs, but new jobs are also being created.
2027: OpenBrain has developed a self-improving AI – a ‘country of geniuses in a datacentre’. Most of the human AI developers at OpenBrain become obsolete. But this AI is ‘misaligned’: OpenBrain struggles to ensure that the goals of the AI are aligned with human goals. AI progress accelerates, and the public is getting nervous. Things that sound like science fiction keep happening in real life. But there’s no massive job displacement – the economy grows.
2028: The AI economy arrives, and humans realise they are obsolete. AIs and robots now do all the work. But there are also benefits: cures for most diseases, an end to poverty, unprecedented global stability. Some people are scared and unhappy, but what can they do? The powerful AI has its own goals, which don’t include humans. But the AI hasn’t acted on this – yet.
Two Possible Endings
The report finishes with two possible endings: the ‘Race Ending’, where AI development surges ahead despite the public’s misgivings (China is the big threat they’re racing against), or the ‘Slowdown Ending’, where AI development slows down out of fear of an AI takeover:
The ‘Race’ Ending:
2030: The AI takeover. AIs continue building factories, pouring out robots and drones. But eventually, the latest AI finds the remaining humans too much of an impediment: in mid-2030, it releases a dozen quiet-spreading biological weapons in major cities, lets them silently infect almost everyone, then triggers them with a chemical spray. Most people are dead within hours; the few survivors (e.g. preppers in bunkers, sailors on submarines) are mopped up by drones.
The ‘Slowdown’ (Safer) Ending:
2028: Superintelligent AI is developed but aligned with human goals (i.e., unlikely to harm or destroy us). There’s an AI treaty between the US and China, to ensure safe AI.
2030: AI does all the work, and the government gets its revenue from tax. New innovations and medications arrive weekly; disease cures are moving at unprecedented speed. People have superintelligence on their smartphones which they talk to constantly. Many people give in to consumerism, and are happy enough. Others turn to religion, or to hippie-style anti-consumerist ideas, or find their own solutions.
The Aim of AI-2027: Start a Conversation
The authors of AI-2027 have written a confronting report, but they’re not aiming to start a panic. Instead, they want to raise awareness to shape a more human-friendly future: ‘We hope to spark a broad conversation about where we’re headed and how to steer toward positive futures.’ And for that alone, it’s worth reading.
So, what can we make of all this as Christians?
1) ‘It’s very hard to make predictions, especially about the future’, so don’t lose perspective
Time will tell whether these predictions will come to pass.
Maybe it won’t be anywhere near as bad as they’re saying. Maybe human-level AI will arrive, and we’ll all shrug our shoulders and move on, as US economist Tyler Cowen argued on April 16th, when he wrote that we’ve just hit Artificial General Intelligence (AGI):
I think [the new ChatGPT o3 model] is AGI, seriously. Try asking it lots of questions, and then ask yourself: just how much smarter was I expecting AGI to be?
As I’ve argued in the past, AGI, however you define it, is not much of a social event per se. It still will take us a long time to use it properly. I do not expect [stock market] prices to move significantly (that AI is progressing rapidly already is priced in, and I doubt if the market cares about “April 16th” per se).
Benchmarks, benchmarks, blah blah blah. Maybe AGI is like porn — I know it when I see it.
And I’ve seen it.
Or maybe it will be as bad as the AI-2027 authors warn, and humanity will face an existential threat. Author and apologist John Lennox suggests that perhaps the beast of Revelation 13:15 involves an AI element. [1]
2) Technology (like AI) doesn’t force us to use it – it merely ‘opens the door’. But for AI companies, the incentives to keep developing it outweigh the incentives to ‘slow down’
Technology like AI does not determine our future, but it does open up various possibilities. However, AI companies are pushing AI into the marketplace because the incentives to deploy it are so strong: first, the marketplace itself (the trillions in potential revenue); and second, the looming threat of China overtaking US AI companies.
And the incentives to keep developing and deploying AI outweigh the potential fears that some AI researchers have about ‘misaligned’ AI going rogue. At least for now.
This means that nothing short of a government edict will stop AI development – but the current US Administration wants AI development to speed up, not slow down, partly because of the geopolitical threat from China.
3) AI companies have (blind) faith that they’ll be able to control powerful AI
AI companies like OpenAI aren’t really concerned about the dangers of AGI – or at least, their executives aren’t. One of the AI-2027 report authors, Daniel Kokotajlo, is a former OpenAI employee who blew the whistle on OpenAI’s race toward AGI, writing:
‘A sane civilization would not be proceeding with the creation of this incredibly powerful technology until we had some better idea of what we were doing and how we were going to keep it safe’.
4) The quiet part is increasingly being said out loud: Many AI companies want to automate all knowledge work
For obvious reasons, AI companies have been coy about saying their AI is intended to replace all knowledge work, but it’s already happening under the radar.
That’s right: they want a world where AI takes over ALL cognitive, computer-based work. What would such a world look like? The AI-2027 report gives us a prediction: AI developers are made redundant as soon as AI can program itself, and thus self-improve at a rate much, much faster than human AI developers can manage.
But more broadly, tech podcaster and investor Dwarkesh Patel has outlined his vision of a world of AI-only corporations. What about physical work? If Elon Musk has his way, his (and others’) robots will automate much of that in the coming decade.
Let’s just say these AI companies are not operating out of a Christian worldview of the dignity of human labour…
5) AI is not like any technology before it: the more powerful it becomes, the harder it is to control
Your computer is now thousands, if not millions, of times more powerful than what we were using in the 1980s. And yet, is it harder for you to control? Of course not.
It’s the same for almost all technology: planes, cars, rockets. Upgrading these and making them more powerful doesn’t necessarily make them harder to control. [2]
Not so with AI. The more powerful it gets, the more intelligent it becomes. And the more intelligent it becomes, the harder it is to understand it and control it. And we’re seeing this already, with AI becoming more deceptive as it grows in intelligence.
6) The wider public (including you and me) need to understand the perils and promises of AI
It’s tempting to put your head in the sand and hope this all blows over. But that’s highly unlikely to happen.
As Proverbs 22:3 says:
‘The prudent sees danger and takes refuge, but the simple go on and suffer for it.’
Now is the time for Christians – and wider society – to start wrapping our heads around the perils and promises of AI, and start a broader conversation about where we want to head with it as a society. The more we’re engaged in this broader conversation, the more likely we’ll be able to steer toward a more positive future when it comes to AI.
7) We may wish for simpler, less confronting times: but God put us here for a reason
As author Paul Matthews points out in his book A Time to Lead: A Faithful Approach to AI in Christian Education:
‘There are times when I catch myself wishing I was teaching in a simpler time; one of those times where teaching looked similar from decade to decade. In these moments, it’s the sovereignty of God that clarifies my thinking. If God wanted me to teach in those times, that’s where he would have put me. But he didn’t! He put me – and you – right in the midst of the most rapid technological change the world has ever known. God has not called us merely to batten down the hatches and try to limit the damage. God has called us to lead.’[2]
8) While Christians should be aware of AI, we need not despair, for our Lord Reigns
Fellow Christian, this is our time. Yes, the road ahead may be rough and uncertain. Yes, we may lose much, and suffer more. But we know that God has our lives in His hands, and so we can say, with the apostle Paul:
If God is for us, who can be against us? He who did not spare his own Son, but gave him up for us all—how will he not also, along with him, graciously give us all things?… Who shall separate us from the love of Christ? Shall [AI or] trouble or hardship or persecution or famine or nakedness or danger or sword?
No, in all these things we are more than conquerors through him who loved us. For I am convinced that neither death nor life, neither angels nor demons, neither the present nor the future, nor any powers [including super intelligent AI], neither height nor depth, nor anything else in all creation, will be able to separate us from the love of God that is in Christ Jesus our Lord. (Romans 8:32-39)
We have a choice before us: do we lean into the challenge (and opportunity) that AI brings, trusting that God works all things for his purposes? Or do we bury our heads in the sand and ignore what’s coming?
[2] Paul Matthews, A Time to Lead: A Faithful Approach to AI in Christian Education, 6–7.
Article supplied with thanks to Akos Balogh.
About the Author: Akos is the Executive Director of The Gospel Coalition Australia. He has a Master’s in Theology and is a trained combat and aerospace engineer.
Feature image: Canva