Generative artificial intelligence taking over what we thought were uniquely human activities offers of-the-minute plot lines for mystery and crime writers. We’re accustomed to robot reporters covering high school sports and company earnings reports, but ChatGPT and its kin producing the Great American Novel-To-Be? What about AI creating art, video, and audio that mimics specific human voices? Whole new realms of possible crimes open up. A recent Washington Post article calls this an era of “wild hopes and desperate fears.” If the genie isn’t already out of the bottle, it’s certainly punched through the top.
“The capacity for a technology to be used both for good and ill is not unique to generative AI,” the Post article says. Other types of AI tools have downsides too. One that immediately raises skeptical questions is the idea of deploying AI in policing, and a recent Guardian article by Jo Callaghan opens by exploring them. While it makes sense to continue the long-standing practice of sending a robot to check out suspicious packages, San Francisco’s board of supervisors went further, planning to arm robots with lethal explosives, until pushback caused them to step back, perhaps only temporarily.
Public confidence in the police has declined sharply in recent years, not just in the United States, but in England and Wales too, Callaghan reports. Meanwhile, it’s a job that “requires hundreds of judgments to be made each day, often under conditions of extreme pressure and uncertainty.” These decisions are informed by a lot of factors unrelated to the situation confronting the officer: past experience, recent trauma, temperament, attitudes and prejudices absorbed from the rest of society. Could AI, presumably relieved of all those extraneous factors, do better? Operate more fairly and efficiently? Maybe, maybe not.
“Narrow AI,” Callaghan explains, can perform specific tasks, like identifying the bomb in that abandoned backpack; “general-purpose AI” makes more complicated judgments and decisions, even the kinds public safety personnel must make. The deep learning that enables general-purpose AI comes from feeding the system huge amounts of data: having been fed millions of photographs of human faces, facial recognition AI can pick out suspects. We see this and other examples of AI creeping into novels and TV cop shows, where GPS data are used not only to develop “heat maps” of where crimes are likely, but also to predict a specific suspect’s likely location or where to look for a missing person. You can see why some authors prefer to set their stories before 1970. The technology is a lot to keep up with.
Callaghan concludes, “Instead of debating what AI will or will not be able to do in the future, we should be asking what we want from our criminal justice system, and how AI could help us to achieve it.” These are questions crime writers wrestle with too.
Good writing deserves good readers. My quarterly newsletter contains tips for reading, writing, and viewing. Sign up here and receive three prize-winning short stories!
A thought-provoking article. We are venturing into the future, and we are seeing it as it develops. It should be an interesting ride.
Whenever I think about AI, I think back to the movie 2001: A Space Odyssey and the exchange between the astronaut and HAL, the thinking computer. “Open the bay door, HAL,” followed by, “I can’t do that, Dave.” Back then it was pure sci-fi amusement, but now I’m waiting for the next step, where someone asks Alexa, or whatever her name is, to do something and she replies, “I don’t think I’ll do that.” It certainly is the stuff that makes for interesting dialogue and discussion.
Yes, it does! I had to stay focused while writing that post, because it was easy to wander into the swamp of “predictive policing,” and that’s a post (or five or six) for another day.