Abstract
Much has been written about the need for trustworthy artificial intelligence (AI), but the underlying meaning of trust and trustworthiness can vary or be used in confusing ways. It is not always clear whether individuals are speaking of a technology’s trustworthiness, a developer’s trustworthiness, or simply of gaining the trust of users by any means. In sociotechnical circles, trustworthiness is often used as a proxy for ‘the good’, illustrating the moral heights to which technologies and developers ought to aspire, sometimes with a multitude of diverse requirements and at other times with no specification at all. In philosophical circles, there is doubt that the concept of trust should be applied at all to technologies rather than their human creators. Nevertheless, people continue to intuitively reason about trust in technologies in their everyday language. This qualitative study employed an empirical ethics methodology, using a series of interviews to address how developers and users define and construct requirements for trust throughout development and use. We found that different accounts of trust (rational, affective, credentialist, norms-based, relational) served as the basis for individual granting of trust in technologies and operators. Ultimately, the most significant requirement for user trust and assessment of trustworthiness was the accountability of AI developers for the outputs of AI systems, hinging on the identification of accountable moral agents and on perceived alignment between users’ and developers’ interests.
| Original language | English |
|---|---|
| Number of pages | 22 |
| Journal | AI and Society |
| Early online date | 23 Apr 2024 |
| Publication status | E-pub ahead of print - 23 Apr 2024 |
Bibliographical note
Publisher Copyright: © The Author(s) 2024.
Keywords
- artificial intelligence
- trustworthy AI
- public perceptions of AI
- AI ethics
- algorithmic accountability
- AI governance
Adaptable Robots, Ethics, and Trust: A Qualitative and Philosophical Exploration of the Individual Experience of Trustworthy AI

Projects
- 1 Finished
UKRI Trustworthy Autonomous Systems Node In Functionality
Windsor, S. P. (Principal Investigator), Ives, J. C. S. (Co-Investigator), Downer, J. R. (Co-Investigator), Rossiter, J. M. (Co-Investigator), Eder, K. I. (Co-Investigator) & Hauert, S. (Co-Investigator)
1/11/20 → 30/04/24
Project: Research, Parent