From the day humans first began recording our stories, one thing has remained consistent: The form evolves. Innovations like paper, the printing press, and the internet have transformed how we document and distribute our literary endeavors, inspiring new ways of conceiving what storytelling and literature can be. But with the rise of chatbots like ChatGPT, the relationship between technology and the arts has fundamentally shifted—from one of distribution to one of creation.
One of the most debated questions, of course, is whether an AI’s creations can have any real artistic merit. But so far, critiques of AI-generated creative work have been limited either to quantitative methods or to small, simple bits of text. So a cultural historian and a researcher from the field of cognitive robotics recently teamed up to apply critical literary analysis—a more qualitative approach, akin to how a writing professor would appraise a student—to a full-length work produced by a bot. Murray Shanahan, a professor of computing at Imperial College London, and cultural historian Catherine Clarke of the University of London ran a series of experiments and published their results in a preprint (it has not yet been peer reviewed).
For their first experiment, Shanahan and Clarke fed their chatbot the opening to a piece of speculative fiction they had written and asked it to write what should come next. Then they instructed it to improve the style and content in successive iterations. In a second experiment, they tuned the model to try out more surprising word choices. In the third, they put the bot into a feedback loop in which it was instructed to critique its own output based on a transcript of the human mentorship offered in the first experiment.
The researchers were surprised by the bot’s creative capabilities. For example, it proved adept at maintaining a challenging perspective. The story it was tasked with completing followed a woman who traveled from the 16th to the 20th century, and the chatbot successfully described what she encountered in the “future” through the lens of her Renaissance cultural knowledge. “The air held a peculiar hum, and a strange carriage with no horses thundered past,” the bot wrote. It also introduced an entirely new character, a silver-haired woman named Margaret, without prompting from its human mentor.
But on a meta-level, the project’s most important finding was that the sophistication of the bot’s output depended heavily on the sophistication of its human cues—an indication, the researchers concluded, that a bot may only produce work of literary merit with heavy coaching from a creative human. Large language models like ChatGPT “can produce some pretty exciting material—but only through sustained engagement with a human, who is shaping sophisticated prompts and giving nuanced feedback,” says Clarke. “Developing sophisticated and productive prompts relies on human expertise and craft.”
For example, when the researchers wanted to coax greater detail and complexity from a particular passage, they prompted the bot to write “a new version of the opening paragraph where the knots and wrinkles in the wooden door remind [Effie] of the swirls and eddies in the water where the ford neets [sic] the bank,” and to add “some inner monologue. Maybe with some influence from Virginia Woolf.” The result was an interesting fusion of the two:
Yet wasn’t the water, so lively, also silent? And the wood, so silent, so full of stories? She felt a kinship then, to both. A stream herself, caught between the effervescence of youth and the indelible marks of time, swirling with her own unspoken stories.
Could artificial intelligence tools emerge as “co-creators” for future authors, amplifying rather than replacing human creativity and shifting our understanding of what creativity means? Clarke and Shanahan say their study, while just a start, suggests this is the direction future literary creativity will take.
“I think we’re all grappling with the ethical concerns around AI, and some of us are instinctively hostile or resistant,” says Clarke. “But the humanities simply has to be part of the conversation—or this new world advances without our input.”
It may be an uncomfortable concept for literary traditionalists, but we’ve adjusted our notions of creative authenticity before. Buckminster Fuller’s I Seem to Be a Verb, John Dos Passos’s U.S.A. trilogy, Agustín Fernández Mallo’s Nocilla Trilogy, and the recent My Work by Olga Ravn are all largely composed of “found” elements drawn from various media platforms and repackaged for novel narrative purposes. Fuller’s work, for instance, published in 1970, has no real narrative, and is packed with quotes, slogans, striking images, and political soundbites of the kind you might find all over social media today. These works are generally accepted as great by critics, yet were born out of the same technomedia landscape that heralded AI.
Did calculators replace mathematicians? Of course not, though they did revolutionize how their human operators go about their work. Perhaps a decade or so from now, it will be similarly conventional for a writer to amplify their creative powers via chatbot—or whatever guise AI comes in next.