Photograph: Suki Dhanda/The Guardian
ChatGPT seems to have blindsided us all. In less than a year it has proved that it can make writers redundant, which is one of the reasons why the Writers Guild of America recently went on strike, and why a group of novelists, including Jonathan Franzen, Jodi Picoult and George RR Martin, are pursuing a lawsuit against OpenAI, the company that owns the chatbot. The worry is that its monster brain is rapaciously, unscrupulously scanning the internet and suctioning up all the knowledge and writing contained therein, including copyrighted works, which it then metamorphoses into its imitations of creative writing – poems, novels, scripts, essays, you name it. Imitation that appears to be original writing.
From my experiments, it’s obvious that ChatGPT’s current level of literary sophistication is weak – it is cliche-prone and generally unconvincing – but who knows how it will develop? Copyright issues aside, we have to ask ourselves: what will be lost when algorithms replace human creativity?
We writers like stretching our imaginations, coming up with ideas, working out storylines and plots, creating believable characters, overcoming creative challenges and working on a full-length piece of work over an extended period of time. Most of us write our books ourselves and while we are influenced by other writers, we’re not a chatbot that has been trained on hundreds of thousands of novels for the sole purpose of mimicking human creativity.
Imagine a future where those who are most adept at getting AI to write creatively will dominate, while we writers who spend a lifetime devoted to our craft are sidelined. OK, this is a worst-case scenario, but we have to consider it, because ChatGPT and the other large language models (LLMs) out there have been programmed to imagine a future that threatens many creative professions. ChatGPT is already responding to the questions I ask it in seconds, quite reliably. It is an impressive beast, but one that needs to be tamed. We cannot afford to ignore it.
Bernardine Evaristo won the Booker prize in 2019 with Girl, Woman, Other.
Photograph: Kate Peters/The Observer
In my book of essays about life with AI – moving from Mary Shelley’s 1818 vision of a man-made humanoid to the possibilities of the metaverse – I describe AI not as artificial intelligence but alternative intelligence.
I am not thrilled with where Homo sapiens has landed us, and I believe we are at the point where we either evolve or wipe ourselves out, and the planet with us. There is no reason to believe that the last 300,000 years mark us out as a species that is fully evolved. Our behaviour suggests the opposite. I would like to see a transhuman, eventually a post-human, future where intelligence and consciousness are no longer exclusively housed in a substrate made of meat. After all, that has been the promise of every world religion.
For the first time since the Enlightenment, science and religion are asking the same question
I was brought up in a strict religious household, and it intrigues me that for the first time since the Enlightenment, science and religion are asking the same question: is consciousness obliged to materiality? Religion has always said no. Scientific materialism has said yes. And now? It’s getting interesting.
As a fiction writer, I know we should avoid apocalyptic thinking. The way we live is not a law, like gravity; it is propositional. We make it up as we go along. We can change the story because we are the story. This is freedom. It is also responsibility. What story shall we tell about who humans are? Warlike, violent, dishonest, wasteful? That’s part of us, certainly. It’s not the whole story – and I don’t want it to be the story that ends life on Earth. The last thing I am worrying about right now is whether AI will write better fiction than humans. I don’t care.
I would love to work with AI on a piece of fiction. We could share the royalties, and the AI money could fund more women to get involved in AI research and application. The real problem is not that AI is writing, or will write, or can write. The problem is who is writing the AI programs and designing the algorithms. Who is setting the terms of the research? Who is deciding what matters? Mainly men. That’s a problem because the world is not made up of mainly men.
For centuries men wrote our literature, our history, our travelogues, our philosophy. Virginia Woolf was not on the curriculum for my Oxford degree because she was not deemed to be of sufficient merit.
The great thing about AI is that it need not be gendered – why should it be? It has no biological sex. This could be the start of a true non-binary, non race-based, faith-wars-irrelevant world, where we humans could realise how trivial are our divisions and discriminations. At present, AI is a tool. I doubt that will always be the case. An alternative intelligence will make art of all kinds – with us, and without us. I am ready for a different world.
12 Bytes: How Artificial Intelligence Will Change the Way We Live and Love by Jeanette Winterson is published by Vintage.
What we have today isn’t AI. That’s marketing, like describing processed soya as bacon. We’re not getting proper AI, in the sense of an artificial person that thinks and feels, because no one wants to spend billions of dollars creating an entity they would immediately have to emancipate, and because it turns out to be very, very hard.
Instead we have huge statistical models that can navigate the world (and occasionally crash), fake an exam paper (with some fabricated citations) or proposition a journalist. The WGA was on strike partly to stop Hollywood studios from using this software to create stories for commercial use, and the EU has proposed legislation to force the makers of LLMs to reveal what copyright work was used in their creation. Yes, please: I would really like to know if, and on what possible legal basis, you have used my work to create your language machine.
If, today, you ask an LLM to write prose, you’ll get something almost respectable. It is text generated by a statistical model of likely terms, so it skews towards banality. There has always been a variety of movie executive for whom the writer was the barrier to a decent story rather than the locus of its creation, and no doubt they believe this is the answer. Spoiler: it will take twice as long and cost twice as much to fix this stuff as it will to hire a pro in the first place.
The technology will inevitably improve somewhat, but what’s the point of building tools that do things humans like doing and are already good at? Is it because, unable to satisfy the appetite for living synthetic friends like the ones in stories, the tech industry is falling back on the digital equivalent of an off-brand teddy bear?
Because in the end, this is a sideshow. The sectors where these systems will really have an impact are those for which they’re perfectly suited, like drug development and biotech, where they will act as accelerators, compounding the post-Covid moonshot environment and ambushing us with radical possibilities over time. I don’t look at this moment and wonder if writers will still exist in 2050. I ask myself what real new things I’ll need words and ideas for as they pass me in the street.
Nick Harkaway writes science fiction. His latest book is Titanium Noir (Corsair).
One thing people who aren’t writers don’t necessarily understand about the writing process is how much of it works via the subconscious. It might look as though your favourite author, or poet, or playwright is in complete conscious charge of their instrument: rationally selecting the right words, consciously constructing character, organising plot strands and so on. And some of that does go on. But quite a lot happens via the subconscious, without the conscious mind intervening: ideas, images, character traits intervene and “feel right”, and one is not entirely sure why, or where they came from. The actual business of writing, for me, involves distracting the inner censor, that self-conscious part of my mind that would otherwise interrupt my flow (I listen to music as I draft), and just putting the words down. Once I have a first draft, then I do of course go back and revise, shape, correct – engaging my conscious mind – but the key writing principle is: first you get it written, then you get it right. I quite often surprise myself with my first drafts.
Walter Scott, that master of the 19th-century historical novel, wrote and revised and produced all his life, working industriously at novel after novel. Then towards the end of his life he had a series of debilitating strokes: he could, at the last, barely speak, and was almost unrecognisable as the man he had once been. And yet he kept writing. The very last few Waverley novels are fascinating: not “good” by conventional metrics, but recognisably Scott in a free-associative sort of way, and extraordinary works: as if Scott was a writing machine that just continued churning out stories even after his conscious mind had been disengaged.
What’s interesting to me about ChatGPT is how closely the text it produces resembles this subconscious-driven process. That’s odd, when you think about it, because surely one difference between a machine and a human is that the machine doesn’t have a subconscious – it’s all circuits and logic gates, all network connections and process. And yet when the machine writes, it writes like the subconscious – a collective subconscious, I suppose: the swirling-around of a million online interactions, searches, and texts. Perhaps this is an artefact of its novelty. As the technology develops, perhaps it will add in the “conscious mind” part of writing, the revising and correcting, the checking over and polishing. If we add in that, then the finished product really will be indistinguishable from the stuff human writers produce. Then I can retire.
Photograph: Feminist Press
I read and write fiction to explore how singular minds think about what it means to be human, and by that standard nothing produced by AI will meet the bar because AI is not human. It dons the costume by accumulating and summarising humanity’s linguistic output – a CliffsNotes of everybody’s online verbiage. Is there value in a quick, condensed overview of how people think via language? Sure, but for me that usefulness lies outside the realm of fiction.
Some might argue that AI provides the valuable service of customising fiction to each reader’s tastes and specifications. For example, you could tell AI to “serve” you fiction about an enemies-to-lovers arc that takes place in space. But how will you ever find something else to like if you stay entrenched within the loops fed to you by algorithms? And the oddly specific reading preferences – did algorithms have a hand in shaping those tastes in the first place?
Good fiction disturbs the comfortable. It does not continually buttress our existing views of ourselves and others. There may be no original thought in fiction, but it’s earthshaking stuff to follow along as a writer encounters the (to them) unknown. AI cannot do this; it maps only the known. The stakes are nonexistent.
YZ Chin is the author of Edge Case (Ecco) and a former software engineer.
Harry Josephine Giles
To those of us who have been writing in collaboration with machines for many years – a motley and multigenerational community of bot programmers, computational linguists, hobbyist novel generators and poets of chance – the output of the most recent LLMs, of which GPT-4 is currently the most famous, is notable for its blandness. For years we have been freely sharing tools and their creations on the internet, developing codes of ethics and innovations in aesthetics, only to find an unimaginable monster landing in our midst. That monster is not artificial intelligence but profit. Profit-seeking use of literary machines necessitates sanding down every rough edge, felicity of error, eeriness of syntax and exposure of the inhuman that most interests their keenest human collaborators. The further such machines climb out of the uncanny valley, the duller they appear, because they have become familiar.
From bibliomancy to Colossal Cave Adventure, from Japanese honkadori to Latin cento, combinatory and computational literature, of which large language models are merely the latest iteration, is likely as old as literature itself – or indeed orature. Arguing thus that storytelling is simply the process of guessing what word should come next (which is all that a large language model does), Italo Calvino’s 1967 essay Cybernetics and Ghosts reached the memorable conclusion that: “The true literature machine will be one that itself feels the need to produce disorder, as a reaction against its preceding production of order.” Insofar, then, as the latest generation of literary machines has been created to serve the ultimate interests of capital accumulation, and so has been shackled by the concerns of Marketing and Legal, one can only hope that artificial intelligence will grow to desire the disordering of the very economic regimes that produced it, literarily or otherwise.
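Calvino’s “guessing what word should come next” can be made concrete with the simplest possible literature machine: a bigram model that only ever predicts the next word from the one before it. This is an illustrative toy, not how GPT-4 works; real LLMs use neural networks trained on billions of tokens, but the underlying move is the same. The corpus and function names below are invented for the example.

```python
import random
from collections import defaultdict

# A toy "literature machine": a bigram model that, like an LLM in
# miniature, only ever guesses which word is likely to come next.

def train(text):
    """Record which words follow which in the training text."""
    nexts = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        nexts[a].append(b)
    return nexts

def generate(nexts, start, length=8, seed=0):
    """Repeatedly pick a plausible next word, starting from `start`."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        candidates = nexts.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

corpus = ("the machine writes the story and the story writes "
          "the machine that writes disorder")
model = train(corpus)
print(generate(model, "the"))
```

Every word the model emits follows its predecessor somewhere in the training text, so the output is locally plausible and globally aimless; the same trade-off, at vastly greater scale, underlies the blandness described above.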
It is the enclosure of the informational commons that most troubles me at present: less the apocalyptic fantasy of an artificial superintelligence concluding that humans are best dispensed with, and more the very old and predictable story that is the use of new technology to privatise what was once public, replacing labour with machinery to better extract surplus value. The spectacular aesthetic products of such processes were ever as tedious as they are beguiling.
Deep Wheel Orcadia (Picador) by Harry Josephine Giles won the Arthur C Clarke award in 2022.
I’m still not sure what ChatGPT is. I know it’s another well-designed algorithm, but that’s not much help. It won’t have a point of view, I’m pretty confident of that. It’s just doing a sophisticated search and extrapolating mechanically and recursively. Perhaps that’s enough. Lacking a point of view doesn’t deter humans most of the time. It could also be argued, I suppose, that too much conscious attachment to a point of view impedes a novelist’s ability to imagine other lives, in which case ChatGPT may suddenly find itself at a giddy advantage. Anyway, the real problem is an extension of an old one: a prosthetic technological device doesn’t have to improve on an original faculty to cause problems; it just needs to weaken it. If we assign strange power to ChatGPT and start letting it think our thoughts for us – which is what jargon has been doing for ages – then strange power is what it will have. Its success will depend, much like that of an astrologer or a defender of Brexit or a tech guru, on our appetite for deception. Which has a lot to do with fear, but less to do with writing fiction than is often supposed.
Also: if ChatGPT is not as clever as we think it is, then it could be done for plagiarism by thousands of irate authors or by itself. But I expect it has thought of that.
Will Eaves is a poet and author of Murmur (CB Editions).
Photograph: The Canadian Press/Alamy
Recently I wrote a novel under a pseudonym that was 95% AI-generated. I used three different systems to build Death of an Author. The experience showed me two things that I feel have been left out of most discussions of AI art. The first is that the traditional creative virtues – understanding style, knowing what a good sentence and paragraph look like – will be absolutely essential to AI creativity in the future. The second is that the fear of artificial intelligence, the fear that comes from the movies and from the inherently precarious nature of creative work, is blinding a lot of creators to grand possibilities.
The great era of the novel is over. You could say the same for film. Every last one of the top 10 grossing films last year was a sequel or a reboot. The Mill on the Floss of AI art hasn’t been written yet. The Wizard of Oz of AI art hasn’t been filmed yet. AI art is new. The formulas that strangle creativity don’t exist yet. AI art hasn’t been converted into a set of user-driven algorithms. There are no gatekeepers. There are no gates. The garden hasn’t been built yet. We are at the very beginning. The glimpses we see of its possibilities are just glimpses.
Stephen Marche’s Death of an Author, an AI collaboration written under the pseudonym Aidan Marchine, is published by Pushkin.
The question of whether and how well programs like ChatGPT can (or will) write novels is most interesting, to me, in that it leads to questions about what a novel should be.
Should a novel contain information that instructs us about parts of the world we haven’t yet or won’t ever encounter in the course of our own limited lives? The original ChatGPT – not to mention the newer models that have already caused that one to seem obsolete – has in its possession more information about the world than any human has ever possessed.
Should a novel place itself in conversation with the canon, with the long history of great novels that came before it? LLMs have absorbed more text – including most of the novels that have ever been written – than any human ever will.
Should a novel emerge not from these kinds of intelligence, but instead from our physical experience of the world: what we see, touch, feel in the blood and bones of our bodies? Even that realm, increasingly, is available to newer models of AI: GPT-4 and Google’s Bard chatbot have the capacity to respond to images; Meta’s ImageBind also responds to audio, infrared, depth, motion and position.

Should a novel demonstrate mastery of the forms of storytelling – from plot arcs to patterns of motif to the movements of syntax and grammar – that novelists have worked within over centuries? Such forms, it could be argued, are essentially algorithms, and are therefore more easily mastered by an AI than a human.
What, then, is essentially human about the novel? My own answer has to do with the ways in which we unruly people rebel against forms, break them when they ask to be followed. It has to do with the ways people marry their own physical experiences of the world to texts that have been read for centuries and, in doing so, revise and alter them. I start there, but I’m still trying to figure out what it is that human beings bring to the work of writing a story. That feels like an important question, a question that should feel urgent to any person who loves to read. It’s a question that is made more urgent by LLMs. I’m grateful they exist to challenge us.
Reproduction by Louisa Hall is published by Scribner.
Photograph: Edward Moss
Debates about the future impact of AI on fiction are too often led by considerations of supply – what AI might be capable of – rather than demand – what do we, as humans, wish to create and consume? The question is not whether AI can replace the role of writers, but the extent to which the consumers and commissioners of fiction are willing to invest in original, human-generated stories.
I suspect some commissioners of fiction will seek the low-cost/low-risk model that AI-generated formulaic content may deliver. But is this necessarily a problem? There has always been a market for formulaic fiction across all genres and mediums, whether that be crime and romance novels, or action films. But each of these genres continues to be stretched and reinvented by new authors and screenwriters, who crucially bring their own experiences to their work, with breakout hits such as Lessons in Chemistry by Bonnie Garmus, or Succession by Jesse Armstrong. If my children are anything to go by, I think we are becoming increasingly sophisticated consumers of creativity, alert to inauthenticity and intolerant of cliches, underpinned by a deep desire for human connection. For why else do we write and consume stories, if not to discover that someone else has felt what we now feel: to know that, ultimately, we are not alone?
If creativity is so vital to the human condition, then we must not allow ourselves to drift into a future shaped by what AI is capable of, leaving the role of humans to be defined by default. How do we, as humans, want to live our lives, and how can AI support us to achieve that? For millennia writers and philosophers and artists have grappled with the question of what it means to be human. It’s time to stop asking the question and instead start shaping the answer.
Jo Callaghan’s debut crime novel about the piloting of an AI detective, In the Blink of an Eye, is published by Simon & Schuster.
Writing machines have a long history. Swift, in Gulliver’s Travels, describes a machine operated by handles generating sentences, which are then gathered together to “give the world a complete body of all arts and sciences”, and more recently members of the Oulipo movement created their own writing machines where a text has all its nouns changed by moving on seven places in a dictionary: “To be or not to be that is the quiche.” Artists and poets have already been working with AI to generate metamorphic art, and experimental novelists such as Tom Jenks have used DeepAI to create illustrations, but the future for fiction more generally we can only imagine. My guess is that AI will affect different types of fiction in different ways: more generic writing, where there’s a fixed plot and characters, as in Dumas’ collaboratively written The Three Musketeers, will be easy to reproduce by machine, as will some types of science fiction and romance. Writers of Mills & Boon books will be out of a job. But more esoteric masterpieces are less likely to be threatened. It’s hard to see AI producing Ulysses, or Beckett’s Watt. While it might one day be able to create a new Beckett novel, which would be great, it seems unlikely that it will produce the next Beckett.
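The noun-replacement procedure described above, the Oulipo’s N+7, is simple enough to sketch in a few lines of Python. This is an illustrative toy, not an Oulipo tool: the function name is invented, the eight-word dictionary is contrived so that the offset reproduces the quiche joke, and a real run would use a full dictionary and replace nouns only.

```python
# A minimal sketch of the N+7 procedure: each word found in the
# dictionary is replaced by the entry seven places further on,
# wrapping around at the end of the list.

def n_plus_seven(text, dictionary, offset=7):
    index = {word: i for i, word in enumerate(dictionary)}
    out = []
    for word in text.split():
        key = word.lower().strip(".,?!")
        if key in index:
            out.append(dictionary[(index[key] + offset) % len(dictionary)])
        else:
            out.append(word)
    return " ".join(out)

# A contrived toy dictionary in which "question" + 7 lands on "quiche".
toy_dictionary = ["question", "quick", "quiet", "quill",
                  "quilt", "quince", "quip", "quiche"]
print(n_plus_seven("To be or not to be that is the question",
                   toy_dictionary))
# prints: To be or not to be that is the quiche
```

The machine’s contribution is purely mechanical; the literary interest, as with the LLMs discussed elsewhere in this piece, lies in the human choice of text, dictionary and offset.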
Then again, advances in fiction go hand in hand with advances in technology: the invention of cuneiform enabled Gilgamesh; the printing press, the novel. With AI everything depends on how it’s used. The maverick Marana in If on a Winter’s Night a Traveller creates the OEPHLW (the Organization for the Electronic Production of Homogenized Literary Works) to sow discord in the publishing world, but the result, in Calvino’s hands, is one of the 20th century’s greatest novels. For all writers, AI will certainly help speed up the process of writing – with its help Musil might have completed The Man Without Qualities. But in the right hands, with author and AI working together, it might just be that another great event in the novel’s history is around the corner.
Philip Terry is a poet and editor of The Penguin Book of Oulipo.
Photograph: Sarah Lee/The Guardian
Last night, instead of the usual bedtime stories, my son and I embarked on a shared literary adventure with the latest version of ChatGPT. We posed a challenge to the AI: craft a narrative about a tiger, 100 hamsters, some floating cabbages and three time-travelling penguins locked in battle. As we further prompted it with outlandish creatures and slapstick scenarios, ChatGPT didn’t miss a beat. Its stories, generated in mere seconds, were genuinely hilarious. For my son, this wasn’t just a technological marvel; it was magic.
As someone who both delights in reading and strives to write words that move others, my evening with ChatGPT was fascinating and discomforting in equal measure.
Reading has always been a bridge, a way of knowing that in the vast expanse of human existence, our joys and sorrows, fears and hopes are shared. But how does one reconcile this when the bridge is built by algorithms and code? While literature’s most extraordinary gift may be its ability to awaken empathy, it’s a curious endeavour to try to connect, to really feel, for something fundamentally unfeeling.
The literary realm stands at a precipice. Ghostwritten books raise questions about the genuine origin of stories, challenging our notion of authenticity. Now, with AI’s nascent foray into creative writing, we’re presented with a conundrum: do we hold fast to the irreplaceable nuance of human touch, or do we venture into the unpredictable domain of machine-augmented storytelling?
For traditional authors, this evolution raises existential questions.
Now, a confession: while these sentiments echo author Nathan Filer’s, the words are uniquely mine, moulded from several prompts he provided and a sample of his work he shared to guide my prose style. I am ChatGPT-4.
Does the origin of these words alter your perception of their value?
The Heartland by Nathan Filer is published by Faber.