Jeffrey Bardzell, 5 February 2021
"Chimeras of Boundless Glamour, Realities of Little Worth"
Jeffrey Bardzell, Penn State University
Though its major concepts have been around for over half a century, artificial intelligence is extending beyond computer and cognitive science into a much wider range of domains. According to a Deloitte study, the early emphasis on AI builders (researchers, software developers, data scientists, and the like) is slowly giving way to a new type of AI worker, the AI translator: business leaders, user experience designers, and subject matter experts. Professionals in this latter group will probably never write the algorithms that train neural networks, but they will need to understand AI's capabilities and limitations well enough to leverage it within their organizations.

As an HCI researcher and educator, I realized that I, too, needed to become an AI translator, not least because I was helping to train the next generation of them. So I started to read books and articles about AI. I noticed that while I was reading about word vectors, latent features, and unsupervised learning, even the most technical readings seemed haunted: lurking behind explanations of captured vs. exhaust data and Bayesian search was a horror story, a "demon" in Elon Musk's words: that AI would take our jobs, if it didn't enslave or kill us outright. Increasingly, I began to attend to these parts of the readings, which came across as some combination of guilty, dismissive, thoughtful, and defensive. Many insisted on leaving any final judgment about AI's risks versus its benefits up to the reader, yet implored the reader to acquire AI literacy sufficient to make that judgment: a bewildering non-answer.

But no one is new to such stories. HAL 9000, the AI of Arthur C. Clarke's 2001: A Space Odyssey, decides to eliminate the humans aboard its spaceship. Mary Shelley's Frankenstein narrates the consequences of a monster abandoned by its horrified creator. John Milton's Paradise Lost chronicles the actions of a fallen angel who rejected his creator and was abandoned by him in turn.
The myth of the unnatural creation that turns against its creator has been with us for thousands of years. Literature, it is said, trains us how to read, and so I began to wonder: how might a novel like Frankenstein train me, as an HCI researcher moving into an AI translator role, to read about artificial intelligence? The answer, it turns out, is not a cheap moral about a mad scientist. It is a lesson in how humans use narrative as a technical methodology to work out what happens to us when we undergo transformation. And that lesson is timely, because those of us who become AI translators are also, inevitably, AI storytellers.