When the anthropologist Bronislaw Malinowski traveled to Papua New Guinea in the 1910s, he discovered a group of remote and mostly naked tribes, none of whom had encountered literacy before. And without writing, he found, they organized knowledge in a different way from the rest of the world. Rather than label and categorize everything that exists into linear encyclopedic facts, they only bothered to record and give names to local flora and fauna if these were useful in their own lives. Animals that were neither food nor dangerous, for example, were treated as unimportant. That is just a bush, they would say, or, merely a flying animal. After centuries of mass literacy, we’ve forgotten that our brains still work like this, too. And that fact should quell many of my colleagues’ fears about the end of literary culture itself.
The issue should be obvious enough to anyone. Generative artificial intelligence can now, by some technological marvel, summarize, explain, contrast and even write about pretty much any subject in any way you can imagine. I am still utterly amazed at its ability to extract information from books in the British Library that would normally take me weeks of laborious searching, much of it spent scanning through information of no real value to anyone. Fiction, some say, might survive, for the essence of a good story is something unique that reflects the author’s perspective. But what’s the value of nonfiction writers in the world of AI?
I managed to scoop a deal with Penguin the exact year in which, rumor has it, artificial intelligence started annihilating our entire publishing industry. As usual, that’s just my luck. But, contrary to widespread anxiety, I’ve found no strong evidence that AI is having any impact on commercial book sales whatsoever, though the staggering demon-possession from our smartphones might be. And this far into the LLM revolution, that seems unlikely to change anytime soon. In fact, there is every reason to believe that neither authors nor journalists have anything to fear from our AI future, at least when it comes to writing books. (Especially true crime, the sales of which are going through the roof.)
Growing older, more mature and wiser means different things to different people; not least the art of going to bed early, absorbing sunlight (though not too much) and eating copious amounts of milled flax seeds. But what it has always meant – and now means more than ever – is the human ability to discern signal from noise. It’s a skill that is hard to teach, one that comes only with decades of industry experience and can hardly be put into words, because discerning one from the other doesn’t really follow any particular logic or explanation. It’s more like an intuition than a model. And the avalanche of machine-generated text and video has made it much, much harder.
A good example of this is Elon Musk’s new venture, Grokipedia. In an attempt to counter the increasing political bias of Wikipedia – a fact I cannot dispute – he has taken it upon himself to deploy his own AI, Grok, to compile a cosmically ambitious online database of all human knowledge. The problem, as you might see from looking at it, is that it reads like a huge, AI-generated summary of the world. Which, basically, is what it is. And that, while a worthwhile venture, is not that interesting a thing to read, let alone talk about at a dinner party.
For most people on Earth, learning is just another form of entertainment. Not in the sense that it’s trivial, but in the sense that human beings, for whatever strange evolutionary reason, like sitting around and hearing weird stories from weird people. If you look at contemporary hunter-gatherers, for instance, studies show that they spend around a third of their waking life sitting around telling, and listening to, stories. High-status individuals are respected not just for their ability to bring back food, but for their ability to tell tales that inspire, support, or merely entertain everyone else. And what makes storytellers unique and valuable to the tribe is not, as with AI, the ability to regurgitate endless strings of facts, predictions, or summaries. What makes them valuable is knowing which story is worth telling – and how and when to tell it. People like people. People like stories about other people. And people really like people who help tell them what about those people is signal and what is just noise. The ability to turn that into worthwhile entertainment is unlikely to go away anytime soon.
Many people in the media world, quite naturally, believe that machine-generated text renders the individual contribution of handcrafted books – or handcrafted magazine articles – redundant. Luckily for them, that’s wrong. People still need, and love, to be entertained. And they like it when people they respect or admire are doing the entertaining. I’ll still buy an Andrew Roberts book even if I know AI can generate an accurate summary of Churchill’s life because I value his lifetime of judgment and expertise on the matter, especially when I receive it in his voice. And why would I want to mindlessly scroll Grokipedia on my vacation in Spain this year anyway?
After all, there is an enormous amount of time in a life. And the capacity to sift through the now unprecedented heaps of cultural noise – and cut through to what people are feeling but are unable to say – has gone from a valuable skill to an existentially important one. It is this same process that Malinowski observed in our friends in Papua New Guinea so many years ago. They were teaching each other about the world – and they weren’t bothered about irrelevant details, like furry creatures you couldn’t eat. We are much the same. That, they would say, was just noise.
Writers, have no fear. The storyteller still reigns supreme; everything else, as the islanders used to say, is merely a flying animal.
This article was originally published in The Spectator’s December 22, 2025 World edition.