MFA vs. CPU: Another MFA Article Misses the Bigger Picture
Every now and then, the literary world likes to take a break from debating whether ebooks are taking over or whether the novel is dead to discuss an even more pressing matter: are MFA programs bad?
Of all the literary debates, the MFA question might be the dullest, because the stakes are so low. Some writers like to take a few classes for a couple years, others don’t. There’s an important debate about funding — especially in this baby boomer-ravaged economy — but otherwise who really cares if an author has taken a few writing workshops? Not many editors, reviewers, or readers do. But that fact is actually what’s interesting about the MFA debate: it tends to completely ignore the groups that actually determine what gets published in favor of an MFA-centric theory of the literary universe where all other players orbit around the MFA, propelled by its workshopped gravity.
This weekend, The Atlantic jumped into the MFA debate with “How Has the MFA Changed the Contemporary Novel?” If you are intrigued by the title, don’t be. The article doesn’t examine how the rise of MFA programs has changed contemporary fiction. There’s not even any discussion of fiction before the rise of MFA programs. Instead, authors Richard Jean So and Andrew Piper — “two professors of language and literature who regularly use computation to test common assumptions about culture” — set out to investigate a question that I truly believe no one has ever asked: are published novels by writers with MFAs stylistically similar to published novels by authors without MFAs that are reviewed by the New York Times?
So and Piper use “a variety of tools from the field of computational text analysis” (talk about vague) to compare some novels from MFA grads (story writers, poets, and non-fiction writers are ignored) to New York Times-reviewed novels by authors like Donna Tartt and Akhil Sharma. Their computer can’t detect much difference in vocabulary or syntax between the two sets of novels. The authors don’t investigate why stylistically similar books are being shopped by agents and published by editors. Instead, they assume what is published is representative of what is written, and conclude that MFA programs don’t affect writers.
The central question itself is a little bizarre. Who argues that MFA grads write differently from their mainstream literary fiction peers? Most aspiring novelists go to MFAs precisely to be able to write the kind of work that gets published by big houses and reviewed in major papers — i.e., mainstream literary fiction. So and Piper might have found very different results if they compared the works of MFA grads to, say, small press horror novels or self-published romance ebooks.
Because of this sloppy methodology, So and Piper fail to rebut either the pro- or anti-MFA crowds, despite claiming to rebut both. The argument for MFAs is essentially that studying the craft and taking dedicated time to work with engaged peers will help your writing get better. (The more cynical might say that even if it doesn’t help your writing, it can get you important connections.) Does the MFA help people get better? In my experience, yes, but So and Piper make no attempt to analyze whether writers improve or change during an MFA. They don’t compare authors’ work before and after MFA programs, nor do they check whether writers’ publication rates or job prospects improve after getting an MFA.
So and Piper also make the faulty assumption that the influence of MFA writing can be measured by MFA degrees. A case in point: one of the three examples The Atlantic gives for a non-MFA writer they analyzed is Akhil Sharma. Sharma studied under writers like Joyce Carol Oates and Paul Auster in undergrad, then was awarded a prestigious Stegner creative writing fellowship, and has taught in the MFA program at Rutgers. It is only a technicality that Sharma doesn’t count as an MFA author (the Stegner is an MFA-style creative writing program at Stanford that is largely awarded to people who already hold MFAs). The authors don’t make their data public, but there’s little doubt that their “non-MFA” data set is filled with writers who similarly either studied creative writing in undergrad or teach in MFA programs.
The Atlantic piece is part of a rise in “data journalism” invading the arts. Computer analysis of artistic works can be interesting, but the majority of the time it seems to show the biases and assumptions of the authors rather than anything about the work itself. Everyone knows how Nate Silver revolutionized baseball analytics and election forecasting with his data-driven approach, but when Silver launched his FiveThirtyEight website and attempted to extend “data journalism” into the arts, the results were pretty silly. I still remember when the site launched, it featured an analysis of Shakespeare’s Romeo and Juliet that declared “More than 400 years after Shakespeare wrote it, we can now say that ‘Romeo and Juliet’ has the wrong name.” The author “discovered this by writing a computer program to count how many lines each pair of characters in ‘Romeo and Juliet’ spoke to each other” and was shocked to find that Romeo and Juliet don’t speak to each other as much as they speak to other characters. Of course, anyone who studied that play in middle school knows that the entire point of the play is that Romeo and Juliet are “star-crossed lovers” whose relationship is thwarted by outside forces. We don’t need data to tell us the main characters are kept apart from each other; that’s literally what the entire plot revolves around.
So and Piper don’t get into detail about how their data analysis works, but what they do say raises far more questions than it answers. For example, So and Piper claim to analyze the “themes” of MFA and non-MFA novels, but spend only two sentences describing this:
To test whether this was the case, we used a method called topic modeling that examines themes instead of individual words. And while MFA novels do tend to slightly favor certain themes like “family” or “home,” overall there’s no predictable way these topics appear with any regularity in novels written by creative writing graduates more than other people who write novels.
Telling us a book is about “home” or “family” isn’t really delving into its themes in any meaningful way. Would So and Piper’s program tell us the “themes” of The Metamorphosis and Moby-Dick are “insects” and “the ocean”? Later, they claim to calculate the number of “strong female characters” in the novels without any explanation of how their algorithm decides which female characters are strong and which are flat and cliché. (I assume the authors are being loose with language and by “strong” they just mean “has a lot of lines,” but who knows.)
The only interesting parts of the essay are when So and Piper say their program doesn’t detect much difference between the voices of writers of color and white writers, and when they note that women characters are underrepresented in all books. But this part only highlights again how weak their argument is — and the arguments of so many similar MFA articles — because they completely ignore the book-producing elephant in the room: the publishing industry.
While So and Piper smarmily note that MFA programs claim to be “challenging ‘patriarchy’ and ‘heteronormativity’” while producing sexist work, they seem to naively believe that MFA programs determine what gets published. They don’t. Writers with or without degrees don’t either. Writers of color frequently talk about how editors ask them to make their voices “less ethnic” or change their books to fit what “the market” wants. Groups such as VIDA have long highlighted gender disparity in publishing and in reviewing. While there is certainly sexism and racism in the MFA world, what and who gets published and covered is far more determined by editors, agents, marketing directors, reviewers, publicists, and even readers than MFA professors.
This is ultimately the problem with the entire MFA debate. It ignores all the outside pressures, signals, influences, and factors that determine what gets published. MFAs can be useful to writers, especially when they are well funded, but ultimately, the MFA is only two to three years out of a writer’s life. Those years don’t outweigh decades of signaling from the publishing industry, major newspapers, and magazines about what type of fiction is popular and publishable. And they don’t outweigh years of one’s personal reading habits and taste either. Writers tend to leave the MFA program with their tastes and style intact and their writing a little more honed. Hopefully they have a polished manuscript freshly printed in their hands. But when they leave the warm confines of the MFA program, they face the cold world of agents, editors, and readers who couldn’t care less what workshop comments or professor feedback they got.