Abstract
The role of Artificial Intelligence (AI) in clinical decision-making raises issues of trust. One issue concerns the conditions for trusting the AI, which tend to be based on validation. However, little attention has been given to how validation is formed, how comparisons come to be accepted, and how AI algorithms come to be trusted in decision-making. Drawing on interviews with collaborating researchers developing three AI technologies for the early diagnosis of pulmonary hypertension (PH), we show how validation of the AI is jointly produced, so that trust in the algorithm is built up through the negotiation of criteria and terms of comparison during interactions. These processes build up interpretability and interrogation, and co-constitute trust in the technology. As they do so, it becomes difficult to sustain a strict distinction between artificial and human/social intelligence.
| Original language | English |
| --- | --- |
| Pages (from-to) | 58-77 |
| Number of pages | 20 |
| Journal | Science & Technology Studies |
| Volume | 35 |
| Issue number | 4 |
| DOIs | |
| Publication status | Published - 22 Mar 2022 |
Bibliographical note
Funding Information: We would like to thank the Wellcome Trust for the Seed Award that funded this research [Grant number: WT/213606, awarded to Dr. Annamaria Carusi]. We would also like to thank our collaborators who took part in this research, without whom it would not have been possible. Furthermore, we would like to thank the reviewers for their insightful comments on the manuscript, which helped us to strengthen our argumentation. Finally, we would like to thank all the participants who attended the AI in the Clinic Network Event (26/03/2021), whose helpful comments enriched our thinking for this article.
Publisher Copyright:
© 2022 Finnish Society for Science and Technology Studies. All rights reserved.
Research Groups and Themes
- Health and Wellbeing
- Algorithms and Complexity