Electricity and Alchemy: (Un)Explainable AI and (Un)Explainable Literature

Research output: Chapter in Book/Report/Conference proceeding › Chapter in a book

Abstract

As deep neural networks (DNNs), recurrent neural networks (RNNs), and transformers increasingly read, write, translate, and reconstruct literary and other forms of creative output, users and consumers need to be able to understand the strengths and weaknesses of these sophisticated machine learning processes and outputs. If 'Explainable AI' (XAI) fails to offer explanations that are readily interpretable to humans, it risks amplifying existing sociotechnical and cultural imaginaries that already encode hyperbolic fears concerning AI. This chapter asks whether fresh light might be cast upon XAI, and upon attempts to explain the creative operations of 'black box' AI models, by examining analogous attempts to explain the opaque processes behind creative human outputs. Taking as its case studies Beckett's novel Watt, Montfort's Megawatt, and Shelley's Frankenstein, it advocates for both more humans and more humanities in the loop in order to generate new insights into AI (un)explainability and XAI as a mode of literary 'companionship'.
Original language: English
Title of host publication: The Routledge Handbook of AI and Literature
Publisher: Routledge
Chapter: 22
Number of pages: 11
ISBN (Electronic): 9781003255789
ISBN (Print): 9781032186948
DOIs
Publication status: Published - 30 Dec 2024

