Abstract
With the recent rapid developments in artificial intelligence (AI), social scientists and computational scientists have approached overlapping questions about ethics, responsibility, and fairness. Joined-up efforts between these disciplines have nonetheless been scarce due to, among other factors, unfavourable institutional arrangements, unclear publication avenues, and sometimes incompatible normative, epistemological and methodological commitments. In this paper, we offer collaborative ethnography as one concrete methodology to address some of these challenges. We report on an interdisciplinary collaboration between science and technology studies scholars and data scientists developing an AI system to detect online misinformation. The study combined description, interpretation, and (self-)critique throughout the design and development of the AI system. We draw three methodological lessons to move from critique to action for interdisciplinary teams pursuing responsible AI innovation: (1) Collective self-critique as a tool to resist techno-centrism and relativism, (2) Moving from strategic vagueness to co-production, and (3) Using co-authorship as a method.
| Original language | English |
| --- | --- |
| Article number | 2331655 |
| Number of pages | 21 |
| Journal | Journal of Responsible Innovation |
| Volume | 11 |
| Issue number | 1 |
| Early online date | 23 Apr 2024 |
| DOIs | |
| Publication status | Published - 31 Dec 2024 |
Bibliographical note
Publisher Copyright: © 2024 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group.