I Got an Artificial Intelligence to Write My Novel
And it didn't do a better job than me—but we should maybe be worried that it didn't do much worse
According to the literary critic Harold Bloom, among writers “influence always proceeds by misinterpretation.” In The Anxiety of Influence, Bloom argues that writers willfully ignore and misinterpret their predecessors in order to make their canvases as blank as possible.
Bloom’s specific stages of anxiety (outlined much like the stages of grief) are a bit too Freudian and peculiar to be truly universal, but he’s right that to create is to exist in a state of anxiety. Like construction in Rome, we writers are always building on top of something. I too keep an eye on particular contemporary writers, like Richard Powers and Rebecca Goldstein, just in case.
But in the past year my anxiety of influence has shifted far and away to another source: an entity called GPT-3. It’s an artificial neural network with 175 billion parameters. Think of it as an artificial brain with the computing power of 175 billion connections (if it makes you feel any better, your own brain has around 125 trillion synapses, for comparison).
Developed by OpenAI, GPT-3 costs several million dollars in computation just to train, and the subscription services that let you access it are approval-only and can cost hundreds if not thousands of dollars a month.
GPT-3 is a natural language processor, which means it’s trained to complete any prompt it’s given. Its training data is basically the entire internet, so given a prompt, like a few paragraphs of text, it will make a guess as to what comes next. These guesses show that GPT-3 can really write. It can write in all sorts of styles, oftentimes as convincingly as a real human author. Like a medium, it can even channel the dead. The anxiety I feel toward it is different from what I feel toward any writer who once lived and breathed. I think it represents the first warning shots of an impending man vs. machine agon of language.
This is not something anyone in the publishing industry appears to have noticed. The academization of literature, which requires writers to climb the hierarchy all the way up to that famous MFA program (a journey now nearly obligatory if you want to be published and reviewed), has produced a lot of writers incurious about technology and science. So I doubt more than a handful in the literary community are paying attention to how things might change for them as the limits of deep learning get pushed further out.
Confronting my anxiety head on (Bloom might deem this the “daemonization” stage), I decided to see if GPT-3 could have written my debut novel, The Revelations. To prove to myself, once and for all, there’s nothing to be anxious about (oh reader—there is).
Getting access to GPT-3 requires various approvals, which is why the running joke is that OpenAI should drop the “Open” from their name. This gives interacting with GPT-3 an oracular quality, since you’re communicating with its galaxy brain hosted on some tightly-controlled server. Once my sojourn to Delphi was complete, I fed GPT-3 the jacket copy of my novel—that description on the flaps of the hardcover that tells readers what they’re getting into. This gave GPT-3 a sense of who I was and how to write like me. Then, trying not to bias the experiment, I flipped to a random section of The Revelations and selected a few paragraphs I thought ripe for comparison.
The randomly arrived-at short scene falls around the middle of the novel. Carmen, a young scientist, has been pursuing what she thinks was the murder of one of her colleagues in the New York City subway. The events surrounding the investigation have grown stranger and more mysterious over time, and now, unable to make any progress, she’s staking out the subway station late at night. Between the strained atmosphere she’s been living under and the mysterious guttural phone calls she’s been getting from unlisted numbers since she started the investigation, she has begun to dread seeing something inhuman. Here’s the original:
“Stand clear of the closing doors, please.”
Eventually the people become less numerous. In the lateness of the night her mind begins to imagine scenarios, to play them out again and again on a loop, as if some part of her is daring something to happen, is waiting for something drastic. Instead the night just gets later and later and the people fewer and fewer, until finally entire trains are coming and going without the entrance or exit of a single soul. And Carmen is becoming half-terrified of the menacing long platform next to her, which extends into that subterranean realm… the scenario she imagines is that a creature, something huge but lithe, something with an animal head, peeks from the side out onto the platform, a bull’s head maybe, leering at her, its mouth gaping and panting, its eyes impossibly dark and lewd upon her, and eventually the image becomes so strong, and the expectation so intense, that she suddenly bolts from her wooden seat, pushing through the turnstiles and in her flight continually whipping around to look until she reaches the safety of the surface.
Now here’s GPT-3’s version, fed the above paragraph up to the word “creature” (as well as the jacket copy of the book). Everything after “creature” below is where GPT-3 took over writing:
And Carmen is becoming half-terrified of the menacing long platform next to her, which extends into that subterranean realm . . . the scenario she imagines is that of a creature who crawled out of the earth in a burrow, a huge being with tiny eyes and long and stumpy arms. . . But this wasn’t a dream, was it? The wind of the passing trains nearly knocks her off her feet, and before she knows it she is bursting out of the subway, up the stairs, through the turnstiles and out onto the street, running down the sidewalk, her breath coming in gasps and the crescendo is rising and rising in the pit of her stomach, her sneakers flapping against the asphalt, and they are all just blurs now, someone turning up a music system looks up, a dog barks and gets its leash yanked, an old couple walking arm in arm looks up, but no one notices her, and the city throbs, breathes, exhales.
To get this I merely prompted GPT-3 a handful of times, threw out the first two results, and deleted a few clauses of GPT-3’s third try. It took about five minutes. I didn’t add a single thing; it’s all GPT-3. Damn machine had the gall to write my book.
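For the curious, that workflow can be sketched in a few lines of Python. This is a minimal sketch, not the code I actually used: `generate_completion` is a hypothetical stand-in for a call to OpenAI’s completion API, returning canned text so the sketch is self-contained (a real call would return different prose on every try).

```python
def generate_completion(prompt: str) -> str:
    # Hypothetical placeholder for a text-completion API call.
    # The real run sent the prompt to GPT-3; canned text here
    # keeps the sketch runnable without an API key.
    return "who crawled out of the earth in a burrow"

def best_of_n(prompt: str, n: int = 3) -> list:
    # Sample n continuations of the same prompt; the human editor
    # then discards the weak ones and trims stray clauses.
    return [generate_completion(prompt) for _ in range(n)]

# Illustrative stand-ins, not the novel's actual jacket copy:
jacket_copy = "Carmen, a young scientist, investigates a colleague's death..."
excerpt = "the scenario she imagines is that of a creature"
prompt = jacket_copy + "\n\n" + excerpt

drafts = best_of_n(prompt, n=3)
chosen = drafts[2]  # threw out the first two tries, kept the third
```

The only human labor in this loop is the final selection and trimming, which is exactly what made the five-minute experiment feel so unsettling.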
No doubt, this artificial prose has flaws. Long and stumpy arms? Wind from a subway isn’t strong enough to nearly knock someone off their feet. Carmen is too terrified in this version, rather than just creeped out. That sentence where she considers whether this is a dream doesn’t add up grammatically. Although perhaps Bloom would say this is merely the “clinamen” stage of anxiety, a classic misreading to make this technological shadow of myself look worse than it is.
Still. Any writer worth their ink should start feeling some AI anxiety on reading that output. There are a number of advanced literary techniques GPT-3 is using here. The long run-on sentence of Carmen’s flight means GPT-3 knows that style is apt for describing characters in motion. And the break in narrative to linger on the people who don’t notice her flight as she runs past, that’s good technique as well. It’s cinematic, makes a reader focus on the city itself. The last line of “and the city throbs, breathes, exhales” is definitely something I’d write. It fits the atmosphere of the novel, which treats New York City like its own organism possessing a centuries-slow consciousness.
That particular phrase is so appropriate for the novel that it felt familiar. After searching the text of the book, I came upon a similar phrase describing a storm the characters find themselves caught in, on what becomes the night of the murder.
The city inhales and exhales in great whooping winds.
That’s from a section GPT-3 wasn’t shown; it can’t possibly have seen it. GPT-3 was trained on a corpus of text (basically the entire internet) collected through 2019, and The Revelations came out only last month. The fact that the AI arrived at “and the city throbs, breathes, exhales” from nothing but the given sample and the jacket copy is uncanny.
I’m happy to report there are still issues with GPT-3. It has limited space for input and output, only about 1,500 words or so, and if you feed it its own ramblings it becomes more and more incoherent. The AI still needs a human editor to tether it to reality. But it’s a fine first-draft writer in short bursts, especially since it can generate paragraphs about 1,000 times faster than a human. You just click, and there’s the text for you to pick and choose from. I wouldn’t want to write this way, but others will surely use it as a co-author, and it might legitimately improve their books. And if they did use it, who would know?
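That word limit shapes how any book-length collaboration with the machine would have to work: you can only ever show it a sliding window of the most recent text. A minimal sketch of that bookkeeping (the 1,500-word figure is my own rough estimate from above, and the function name is mine, not anything official):

```python
def trim_to_window(text: str, budget_words: int = 1500) -> str:
    # The model can only attend to a fixed window of input, so a
    # long collaboration keeps just the most recent words as the
    # prompt and silently forgets everything earlier.
    words = text.split()
    return " ".join(words[-budget_words:])

draft = "word " * 4000          # a draft far longer than the window
prompt = trim_to_window(draft)  # only the tail survives
```

Everything outside that window is simply gone from the machine’s view, which is why a human editor still has to carry the novel’s long-term memory.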
Beyond an artificial helper, writers should seriously be worried about GPT-4 as a direct competitor. When GPT-5 rolls around, they should feel dread. Therein lies the heart of this new technological anxiety: its inevitable nature. Consider that when I was born, language, whenever I encountered it, was always generated by human consciousness. When I die, will most language come from a source separate from consciousness? Things that speak and things that feel are now entirely dissociable. I grew up in my mother’s independent bookstore, so to me this is anathema, a debasement of the holy. Why is no other writer in the world freaking out about this new Babel?
It doesn’t help that the post-work future is so often envisioned as the AIs doing all the labor, leaving humans free to spend their days making art. But what if the AIs are better at making art too?
Does this output even count as “art”? The words of an AI have no intentionality. Only conscious minds produce meaning. This is more like infinite monkeys typing out infinite nonsense until, eventually, one of them produces a Sylvia Plath poem. One might argue it is the consciousness of the observer that gives meaning to art, not consciousness as art’s producer, but then the reply is that any meaning here is just pareidolia: it’s like seeing faces in the rocks on Mars. It is a deepfake of meaning itself. In this way AI robs us of our very words by diluting their importance away. These machines give us sentences with perfect syntax but without intentional semantic content, something I’ve called the “semantic apocalypse.”
As it stands right now, GPT-3 could not write The Revelations, even with a heavy editorial hand. It could certainly contribute a number of relevant scenes and phrases. Maybe, hopefully, GPT-3 is as good as natural language processors get. Maybe it will always need micromanagement. Maybe maybe maybe. Maybe not. The situation for poets is already far worse. Oh, poor poets. All the things GPT-3 struggles with (long-term coherency, causality, common-sense knowledge, character development) rarely matter in poems. Same for songwriters. Consider the recent “Drowned in the Sun,” a catchy new Nirvana song made by an AI trained on the band’s old work.
What would Bloom’s horror have been if in the future a simple prompt to GPT-X generates a perfect new Shakespeare sonnet? What anxiety would your average poet feel then? Prompt. Perfect poem. Prompt. Perfect poem. Prompt. And if it can do this for any living writer as well, in any format? Some authors may declare it doesn’t matter, that it’s their identity that makes a product special, not the product itself. But what an honest crafter of language would feel—one who cares about language qua language—is anxiety. The forever crippling kind.
Now, I’m not saying that writers are necessarily under existential threat from GPT-3. When I attend literary events I don’t only see a bunch of dinosaurs plodding across a tar pit. I just get flashes of a possible future. GPT-3 and its ilk could, somehow, leave literature untouched. But just by existing, they cast a shadow in which the human production of art necessarily looks smaller.
I will tell you a funny thing. A strange one as well, though perhaps it was always inevitable. Lately, if I look in the mirror too long, I see only an ape staring back.