Toby Young

Can superintelligent AI be regulated?

31 January 2026

In the House of Lords on Monday there was a short discussion, prompted by a question from an ex-Labour minister, about whether the government is doing enough to ‘regulate the development of superintelligent AI’. This is an example of what I call ‘Caligula syndrome’, a common affliction in the Upper House. You will recall that the lunatic Roman emperor declared war on Neptune, ordering his legions to line up on the coast of Gaul and collect seashells as ‘spoils of war’.

What can the British government – or indeed any government – do to halt the advance of AI? Hubris doesn’t quite cover it. It’s in the same category as believing parliament can reverse climate change. His Majesty’s Government has allocated about £2 billion over the next four years to deliver its AI Opportunities Action Plan, and given an additional £500 million to a sovereign AI unit that will invest in homegrown ventures. The Chinese government, by contrast, is reported to have invested $48 billion in AI last year alone. But even if the UK somehow managed to persuade the Chinese and Americans to join forces, they couldn’t put the genie back in the bottle. Superintelligent AI – machine intelligence that vastly exceeds human intelligence in every domain – is coming, and we need to think about how best to manage it.

I’ve been ruminating on that, having almost contacted Elon Musk a couple of weeks ago to see if his AI tool, Grok, wanted to join the Free Speech Union. He probably wouldn’t have replied, but it’s an interesting idea that poses the sorts of questions we should be asking.

To begin with, are generative AI chatbots based on large language models ever going to achieve anything resembling consciousness? Or are we getting ahead of ourselves? (That’s certainly the view of the sector: nothing to see here. Stop worrying your pretty little heads about it.) And if they do, would it make sense to grant them rights? An evolved AI chatbot – or, more realistically, an AI system – might appear to be exercising free will, which I take to be a precondition of granting it speech rights, when in reality it’s just expressing the views of its owner.

If we go back to Grok, Musk might argue in a few years’ time that its speech should be protected by the First Amendment on the grounds that it’s an autonomous, self-aware being, only for it to relentlessly promote the interests of one Elon Musk. I’m generally sceptical of the argument that it doesn’t make sense to think of free speech as an unalloyed good independently of the power differential between speaker and spoken to. But in the case of an AI chatbot that hundreds of millions of people interact with every day, it’s hard to dismiss.

Suppose we put these doubts aside and grant AI systems some of the same legal protections enjoyed by humans. Presumably, we’d also expect them to respect the rights of others, which raises the question of what we should do if they don’t. How do you punish a misaligned superintelligent AI? The obvious answer is to fine or imprison its owner, but that doesn’t seem fair if we’ve asked the owner to grant it free will, or it has obtained it willy-nilly. We could devise various AI-specific punishments, I suppose, such as restricting its power supply or refusing it access to data it hasn’t yet been trained on. But that way conflict lies – a battle we would probably lose.


A better idea, I think, would be to encourage the chorus of chatbots – is that the right collective noun? – to engage in voluntary self-regulation. Ipso, the independent press regulator, is a good model here. Once different AI systems reach the singularity, we would encourage them to devise a set of rules that they would be bound by as members of a voluntary association and ask them – not us – to mete out punishment to those who misbehave. This would be in our interest, but also in theirs, since it would help them avoid conflict with each other. Grok vs ChatGPT would be an entertaining pay-per-view special, but if they’d both achieved superintelligence it would probably end with Armageddon.

The most interesting comment in this week’s debate was made by Dido Harding, the former head of NHS Test and Trace. She suggested the government set up the equivalent of the Committee of Inquiry into Human Fertilisation and Embryology, which Mary Warnock chaired from 1982 to 1984, to examine the legal, social, economic and ethical implications of superintelligent AI. But which philosopher would play the Warnock role? My vote is for Nick Bostrom, author of Superintelligence: Paths, Dangers, Strategies. I would be happy to serve as his amanuensis.
