From the magazine

Where will AI strike first?

Sean Thomas
March 2 2026

Homo sapiens, as a species, is programmed to anticipate death, disaster and apocalypse. The monster in the mere, the ague that comes from the east, the flood that wipes out all living creatures outside the Ark. The reason children – and adults in horror movies – are scared of the dark is that darkness is where predators strike. We have a sensible evolutionary fear of things that go bump in the night.

For thousands of years it was possible to argue that this primal fear of apocalypse was overwrought. No matter what humans did, or did not do, we were incapable of destroying ourselves, and anything that might destroy us in toto – like the comet that erased the dinosaurs – was simultaneously so rare and so far beyond our ability to resist that it was pointless to worry.


That mindset was progressively challenged in the 20th century. World War One showed that mankind could kill millions in months, for no obvious reason. World War Two gave us nuclear weapons and the very real ability to kill almost everyone in hours.

Even then, humans remained in control of what happened on Earth. We have not seen a global conflagration since 1945. Treaties have – to date – successfully prevented nuclear war and the long winter that would follow.

But now we face a new peril – one we created, but which is conceptually beyond our control. That is, of course, AI. We have created the Golem, the Sorcerer’s Apprentice, Frankenstein’s monster, and there are plenty of experts who believe it could lead to our end. AI scientists even have a catchphrase for the concept: p(doom), which simply means “the probability of doomsday.” Depending on who you talk to, and whether they’ve had a good lunch, estimates of p(doom) range from less than 1 percent (Yann LeCun, ex-head of AI at Meta) to about 99 percent (Eliezer Yudkowsky, of the Machine Intelligence Research Institute).

So let’s look the Caliban of Catastrophe in the mirror. Let’s say AI does go horribly wrong. How might that happen? More helpfully: what would be the first sign that it is going awry? The sign that gives you a chance to grab your car and your kids and head for the hills. What, in other words, is the AI version of Chernobyl? Here are the contenders.

A market meltdown. For many robot doomsters, this is the frontrunner. AI trading systems already move most of the money on major exchanges. They’re fast, they’re opaque and they’re all trained on the same data, which means they think alike. Imagine an ambiguous headline which makes them panic and sell simultaneously. Humans would see the cascade and join in. Trillions would vanish before your mid-morning coffee. The 2010 Flash Crash was the trailer; the feature film would be worse.

A less likely but still plausible scenario is self-driving cars running amok. Already these vehicles are ubiquitous on some American and Chinese streets. Imagine a software update goes out overnight. The next morning, hundreds of identical vehicles misread the same common road situation in cities across a continent. Same car, same wreck, same bug – but repeated in parallel. Cue mayhem.

More dramatic, and certainly more cinematic, is a deepfake disaster. A hoax image causing global instability. Picture a hyper-realistic fake video of a world leader during a crisis, perhaps declaring war. Imagine this is accompanied by highly plausible scenes of missiles being launched, tanks trundling, Vladimir Putin being hurled from a Kremlin window. Armies might act before the fact-checkers could catch up.

Then there’s biowarfare. Laboratories are already being automated, with AIs conceiving various scientific experiments, robots conducting them (especially the more dangerous ones), and more bots interpreting the results. We have already seen ChatGPT do versions of this – and it has proved incredibly effective, helping to solve thorny physics questions that have baffled humans for decades. The temptation, in biology, to engineer new forms of horrible viruses might prove impossible to resist. See human history from 2020 to 2023 for one potential outcome.

And yet, to my way of thinking, there is one scenario which is much more likely than any of these. Indeed, it is near-certain. And it could well be the first sign of a wider economic apocalypse, triggered by something much more momentous than a misfiring market bot. I’m talking about the collapse of the TV, movie and video-game industries as we know them, probably before 2030. I recently saw a clip created by a new Chinese video-generating machine called Seedance 2.0. The makers of that program made three minutes of highly convincing Hollywood disaster movie footage in a day, for a few dollars.

It’s cold hard economics that makes AI such a threat to Disney, PlayStation et al. Other creative industries (not least publishing) are threatened by AI, but in those instances the difference between what is spent now and what could be spent if everyone used AI is not so vast that the argument is over before it’s begun. True, lawyers are expensive. But lawyers also set the rules. They can slow the spread of AI in their profession. No one is going to try to protect the more frivolous elements of the economy.

But a typical Hollywood blockbuster can cost $100 million to make, easily. A big video game can prove just as expensive. The same goes for prestige TV dramas. Tens or hundreds of millions of dollars. AI makes it possible for a few clever kids to make almost the same movie, TV or video game in a bedroom, in days, for a few hundred bucks. That’s true today. It isn’t some extrapolation.

Traditional film and gaming companies simply cannot compete with those numbers. And this really matters, because entire cities depend on these industries and their ancillary businesses (looking at you, Los Angeles, but also New York, Paris, London, Mumbai and others), and millions of jobs could be wiped out in months. Indeed, they surely will be. It is a neat irony. The first sign of impending AI disaster may come when you next watch a disaster movie. You’ll only realize what’s happened when the credits roll and they read: “Produced by AI.”
