Canadian writer Stephen Marche presented the results of his experience with “algorithm-guided” writing in a short story published in Wired (December 2017). The algorithm was developed by the research team of Adam Hammond and Julian Brooke, who use big data to illuminate linguistic issues. We know automated processes have been writing newspaper stories for some time, though so far only basic business and sports stories, using a program developed by another Hammond, Kris. But pure creative work, Lit-ra-ture?
In a nutshell, Marche collected 50 science fiction short stories he admires and gave them to the researchers. Their software analyzed the stories for style and structure, then reported to Marche what they had in common.
Could this advice help him write a better story?
The analysts first presented Marche with style guidelines to bring the new story he was writing into closer sync with his 50 favorites. Examples of such general guidelines are:
- There have to be four speaking characters
- 26% of the text has to be dialog
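The actual Hammond-Brooke software is not public, but guidelines like these boil down to simple corpus statistics. A minimal sketch of two of them, using naive quote-matching and name-spotting in place of real speaker attribution (the story text and character names below are invented for illustration):

```python
import re

def style_stats(text, characters):
    """Toy approximation of two guideline metrics:
    the share of text inside double quotes (dialog), and which
    of the named characters get at least one mention near dialog.
    A real system would use proper speaker attribution."""
    dialog = re.findall(r'"([^"]*)"', text)
    dialog_chars = sum(len(d) for d in dialog)
    dialog_share = dialog_chars / len(text) if text else 0.0
    # Crude stand-in for "speaking character": the name appears at all.
    speaking = [c for c in characters if c in text]
    return dialog_share, speaking

story = '"Run," said Ada. Ben nodded. "Now," Ada added.'
share, who = style_stats(story, ["Ada", "Ben", "Cy"])
```

Against a corpus of 50 stories, averaging `dialog_share` is how one would arrive at a target like “26% of the text has to be dialog.”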
From there, the analysts developed 14 very specific rules to govern the new story’s content. The usefulness of the rules, though, depended totally on the 50 stories he selected. One rule encouraged greater use of adverbs and even set a quota for the number of adverbs needed in every 100 words of text. That rule probably reflects the fact that several of the 50 stories date from decades ago, when adverbs were less frowned upon by editorial tastemakers. Choosing only contemporary stories would probably eliminate that prescription.
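An adverb quota per 100 words is easy to approximate, if crudely. The sketch below uses the classic “ends in -ly” heuristic, which overcounts words like “family” and misses adverbs like “fast”; the real tool presumably used a proper part-of-speech tagger:

```python
def adverbs_per_100(text):
    """Rough adverb rate: words ending in -ly per 100 words.
    The -ly test is a known-imperfect heuristic, not the
    researchers' actual method."""
    words = text.split()
    if not words:
        return 0.0
    ly = sum(1 for w in words
             if w.lower().rstrip('.,!?;:').endswith('ly'))
    return 100 * ly / len(words)
```

Averaging this over the 50 favorites would yield the quota; running it on a draft tells the writer whether to add or cut.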
Similarly, another rule limited the amount of dialog that could come from female characters, another artifact of an earlier era, one hopes. This, even though the late Ursula K. Le Guin’s story “Vaster than Empires and More Slow” was included, and stories written by women divide dialog almost equally between male and female characters. Those by men (at least the ones he chose) clearly do not: Marche’s female characters were limited to 16.1% of the dialog.
What did the algorithm “think” of his story?
Marche wrote a draft of his story, submitted it to his electronic critique group of one, and began to revise. As he worked, the software flagged areas, even individual words, in red or purple where Marche violated the rules, turning them green when he fixed the problem. (Sounds soul-crushing, doesn’t it?) Marche says, “My number of literary words was apparently too high, so I had to go through the story replacing words like scarlet with words like red.”
I particularly admire Rule Number Six: “Include a pivotal scene in which a group of people escape from a building at night at high speed in a high tech vehicle made of metal and glass.” Could authors reverse-engineer these rules to help them avoid cliché situations and themes? Would it be possible to violate all of them, consistently? Bring new meaning to the phrase “purple prose”?
Submitted to two real-life editors, Marche’s story was panned as full of unnecessary detail (those adverbs again) and implausible dialog—I guess because the women didn’t speak—and pegged as “pedestrian” and “not writerly.”
Marche’s human editor was more upbeat: “The fact that it’s really not that bad is kind of remarkable.” You can read the results here and decide for yourself. But the fact that the software could be helpful at all has me watching my back!