Ioan Marc Jones

Book publishers must fight back against AI


AI writing is God-awful. It appears intelligent at first glance, empty at second. It possesses the insufferable buoyancy of a holiday rep. It offers little by way of humour, nothing by way of originality. But its fatal flaw is the absence of vulnerability. A writer is a person and a person is a mess and, as readers, as people, as messes, we identify with the mess.

Despite its flaws, AI writing is spreading. Literary agents, drowning in submissions, have resorted to disclaimers discouraging AI writing. Literary magazines have done the same. Amazon, meanwhile, has struggled to stem the tide of AI-generated books.

Authors are finding that, soon after their work hits the shelves, sloppy imitations emerge. Fake books do real harm. On a small scale, uninformed readers might avoid real authors in the future. On a larger scale, low-quality books damage consumer confidence and undermine an already-dwindling public appetite for literature.


Most readers of fiction desire the mess of human-generated work. Booker Prize Winner George Saunders recently said he has no interest in machine writing precisely because it lacks human experience. Even the AI advocate Richard Susskind, author of How to Think About AI, rejects the appeal of machine writing in novels:

An indispensable and intrinsic part of [reading literature] is the knowledge that another human is at the other end.

But the distinction between human and AI writing has become increasingly blurred. Last month, a major publisher discontinued a Young Adult novel over the incorporation of AI-generated text. The publisher – and the author, apparently – had no knowledge of AI use until after publication. Readers might desire the mess of the human, but writers have consistently failed to declare the use of AI – and only a few have been caught.

The publishing industry has taken baby steps to prevent the incursion of AI writing. Last year, Faber put ‘Human-written’ stickers on Helm, a ferocious epic that took Sarah Hall decades to write. Hall’s work, like the work of so many writers, had been stolen to train large language models (LLMs) with no consent and no compensation. The organisation Books By People partnered with publishing houses to launch an ‘Organic Literature’ certification at the end of last year. And the Society of Authors (SoA) has recently launched a logo to identify books written by humans. The classicist Mary Beard has backed the SoA scheme. ‘It’s only going to be human-authored books on my desert island,’ she said.

These stamps are valuable in theory, but limited in practice. A failure to agree on appropriate use seems to be the cause of most problems. People argue that AI editorial support is permissible. But how does that actually work? Where are the lines? Can an AI posit 15 alternate endings? Can it rework a few middling paragraphs? Can an LLM suggest a rewrite of an opening line? Such interventions seem to me inhuman, but to others they are permissible.

Let’s take the stringent classification of appropriate use, as defined by Books by People:

Books conceived and written by humans, where structure, voice, and character development originate from the author. If there is limited AI use involved, it is for spelling or formatting, not for generating or rewriting meaningful content.

A human-written book, by that definition, can still benefit from AI editorial support. And what constitutes ‘meaningful content’? What part of writing is non-meaningful? ‘Meaningful’ will mean different things to different people.

Even if blurred lines are not crossed, the ‘human-written’ badge seems to depend solely on trust. Not just one layer of it, but several: the reader trusts the badge; the badge-provider trusts the publisher; the publisher trusts the writer. The ‘human-authored’ logo by the SoA seems to depend on author registration and little else.

The badges take writers at their word. But, as we’ve seen, a lot of writers use AI and few admit to using it. These badges provide a swift route to gaming the system: declare your partly, or entirely, AI-generated book human-written and it will sit on bookshelves, like a little imposter, with a stamp proclaiming its humanity.

We need tougher solutions to give readers a fair choice. One approach, as highlighted by Emily M. Bender and Alex Hanna in The AI Con, involves forcing companies to watermark content at the source, so its generation can be traced. Hidden metadata offers a similar promise. Detection models could then identify AI-generated work, and some AI companies are open to the idea. But the diffusion of LLMs means that universal application, without – or perhaps even with – regulation, seems implausible. Tech-savvy ‘writers’ could always find a way around hidden markers, by paraphrasing or manually retyping AI output, even if it slows down their ‘writing’ process.

Publishing houses need to fight back. Stamps should not declare what is human-created; they should expose the machine-made. Litigation disincentivises lying. Publishers could include clauses in contracts prohibiting the use of AI, with appropriate use defined in absolute terms. These clauses should dictate that AI was not used for generating, editing or rewriting any part of the text, providing consumers with legally backed guarantees.

Such absolutism might seem extreme. Writers, however, are unlikely to push back, considering that 10,000 of them protested against AI in a so-called ‘empty book’ distributed at the London Book Fair in March. And, if writers do not sign such clauses, their books should advertise their AI credentials. Publishing houses should be happy to draw the distinction. It is the human, after all, that provides a competitive advantage. If AI writing improves and saturates the industry, publishers will suffer. Why buy a book when you can write a prompt?

The need to label AI in fiction speaks to broader calls for transparency in society. AI increasingly blurs the distinction between real and false in many areas of our lives. Charities use AI-generated images to garner empathy; deepfakes undermine the democratic process. AI influencers are increasingly influential. AI artworks are flooding the market. And, in literature, that once-sacred realm, AI is proliferating.

Blurred lines call for clarity. They demand new regulations and a new contract with consumers. Labelling AI simply provides a fairer choice. It allows us to consume based on preferences and prejudices. And, in the often chaotic world of fiction, labelling AI gives us the choice to read books written by all-too-human humans, with all that vulnerability and all that mess.
