Explaining Machine Learning Practice: Findings from an Engaged Science and Technology Studies Project

Marisela Gutierrez Lopez*, Susan Halford

*Corresponding author for this work

Research output: Contribution to journal › Article (Academic Journal) › peer-review

Abstract

The widespread use of machine learning (ML) models for decision-making raises critical concerns about transparency and accountability – to which an increasingly popular solution is ‘Explainable AI’ (XAI). Here, the objects of explanation are technically complex models that are difficult, or even impossible, to explain. In contrast, this paper makes a call to de-centre models as the object of explanation and look towards the networks of ‘machine learning practice’ that bring models into being and use. We explore this term through an ethnographic study, conducted in collaboration with a large financial services company. Drawing on recent STS research, we ask: what would explanation look like from a position that recognises the emergent and relational nature of machine learning practice, and how might this contribute to greater accountability and responsibility for ML in use? Inspired by the engaged programme in STS, we explore if and how approaching explanation through ML practice can be mobilised to intervene in how explanations are done in organisations. Our empirical analysis shows an ‘ecology’ of multiple, situated and intra-acting explanations for machine learning practice across a range of human and non-human actors in the company. We argue that while XAI is inevitably partial and limited, its value lies in establishing explanations as an imperative in contexts where ML is implicated in decision-making. Overall, our research suggests a need to widen and deepen the search for explanations, and to explore the opportunities for provisional, relational and collective interrogations of what can (and cannot) be explained about ML practice.
Original language: English
Journal: Information, Communication & Society
Early online date: 9 Sept 2024
DOIs
Publication status: E-pub ahead of print - 9 Sept 2024

Bibliographical note

Publisher Copyright:
© 2024 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group.

Research Groups and Themes

  • ESRC Centre for Sociodigital Futures

Keywords

  • Explainable AI
  • Machine Learning
  • Science and Technology Studies
  • Engaged Research
