A spectre is haunting humanity: the spectre of superintelligent AI. While governments busy themselves with the mundane work of politics and putting out the fires of the day, the most consequential technological development since the splitting of the atom is accelerating beyond anyone’s ability to control.
Anthropic, one of the world’s leading AI companies, recently announced a new AI system, Claude Mythos. The model can autonomously find and exploit critical security vulnerabilities in every major operating system and web browser that underpins our digital infrastructure, including flaws that survived decades of human review.
Anthropic withheld the model from public release because, in their own words, ‘the fallout for economies, public safety and national security could be severe’. The UK’s AI Security Institute (AISI) confirmed the assessment: Mythos is substantially more capable at cyber offence than any model it has previously tested.
But the government’s response has been tepid. It has simply had the AISI publish a blogpost about Mythos and had the Technology Secretary tell businesses to brush up on cybersecurity and sign up for a cyberattack early-warning service.
The government is missing the forest for the trees. Yes, cyberattacks will become easier. But the real significance of Mythos is that it can do all of this on its own: identifying vulnerabilities, developing exploits, and chaining them together across networks, without human direction. We are entering an era where the AI systems themselves are threats, not just humans. And this is the least capable these systems will ever be. The length of tasks AI systems can complete autonomously is doubling every few months.
Think back to February 2020. Covid case numbers were still low in most countries, and governments and the mainstream media were fixated on the immediate figures: today’s case count, yesterday’s deaths. At the same time, epidemiologists were sounding the alarm. What mattered to them was not the current number of cases, but how fast that number was doubling. A virus doubling every few days looks manageable right up until the moment the health system is overwhelmed. Only a month later, the world was shutting down.
We are now making the same mistake again. The government is watching the immediate problem – cyberattacks getting easier – and ignoring the speed at which AI is advancing.
At the current rate of improvement, many AI experts believe superintelligent AI could arrive within the next two to five years. Many of those same experts, including Nobel laureates and AI company CEOs, warn that AI poses an extinction risk to humanity.
The window to act and prevent catastrophe is still open. Acting today will spare us the need for more drastic measures later. But on AI, the government has lost the nerve to act with conviction.
It has also lost the habit of foresight that once came naturally to British statecraft. In 1924, when the most destructive weapon in existence was the artillery shell, Winston Churchill published an essay asking ‘Shall we all commit suicide?’. He argued that science was on the verge of producing weapons so powerful that the League of Nations, ‘airy and unsubstantial, framed of shining but too often visionary idealism,’ would prove incapable of guarding the world from them. He was writing 20 years before Hiroshima.
Seven years later, in ‘Fifty Years Hence’, Churchill described with startling precision the physics of nuclear fusion and the horsepower a pound of water might yield if its atoms could be induced to combine. ‘There is no question among scientists that this gigantic source of energy exists,’ he wrote. ‘What is lacking is the match to set the bonfire alight.’ The match was found in 1945.
Churchill did what serious statesmen are supposed to do. He looked at the trajectory of scientific progress, took the warnings of scientists seriously, and asked what governments needed to do to prevent catastrophe. Today’s warnings come from the very people building these systems, and they are not talking about a risk decades away.
Britain is not powerless to act, and is in fact better placed than most to lead on addressing the threat from superintelligent AI. It convened the first global AI Safety Summit at Bletchley Park. Over a hundred UK parliamentarians have backed a statement from my organisation, ControlAI, recognising the extinction risk from AI and identifying superintelligent AI as a national and global security threat. The House of Lords held two substantive debates on superintelligent AI in January alone, including on whether to pursue an international moratorium. There is political will for action in Westminster, even if Downing Street has not yet caught up.
The response must match the scale of the threat, and superintelligent AI should be treated as what it is: a national and global security risk of the highest order. That starts with the government saying so, openly, and working with allies on how to confront it. It must end with preventing the development of superintelligent AI at home and building an international coalition to prohibit it globally.
If we don’t, there will be no chance for inquiries, apologies, or promises to do better next time. There won’t even be anyone left to blame.