Abstract
With the introduction of artificial intelligence (AI) to healthcare, there is also a need for professional guidance to support its use. New (2022) reports from the NHS AI Lab and Health Education England (HEE) focus on healthcare workers' understanding of, and confidence in, AI clinical decision support systems (AI-CDDSs), and are concerned with developing trust in, and the trustworthiness of, these systems. While they offer guidance to aid developers and purchasers of such systems, they offer little specific guidance for the clinical users who will be required to use them in patient care.
This paper argues that clinical, professional, and reputational safety will be risked if this deficit of professional guidance for clinical users of AI-CDDSs is not redressed. We argue it is not enough to develop training for clinical users without first establishing professional guidance regarding the rights and expectations of clinical users.
We conclude with a call to action for clinical regulators: to unite and draft guidance for users of AI-CDDSs that helps manage clinical, professional, and reputational risks. We further suggest that this exercise offers an opportunity to address fundamental issues in the use of AI-CDDSs regarding, for example, the fair burden of responsibility for outcomes.
| Field | Value |
| --- | --- |
| Original language | English |
| Article number | jme-2022-108831 |
| Journal | Journal of Medical Ethics |
| Early online date | 22 Aug 2023 |
| DOIs | |
| Publication status | E-pub ahead of print - 22 Aug 2023 |
Bibliographical note
Funding Information: All authors are part-funded via the UKRI's Trustworthy Autonomous Systems Node in Functionality under grant number EP/V026518/1. HS is additionally supported by the Elizabeth Blackwell Institute, University of Bristol, via the Wellcome Trust Institutional Strategic Support Fund. JI is in part supported by the NIHR Biomedical Research Centre at University Hospitals Bristol and Weston NHS Foundation Trust and the University of Bristol.
Publisher Copyright:
© Author(s) (or their employer(s)) 2023. Re-use permitted under CC BY. Published by BMJ.
Projects
UKRI Trustworthy Autonomous Systems Node in Functionality (Finished)
Windsor, S. P. (Principal Investigator), Ives, J. C. S. (Co-Investigator), Downer, J. R. (Co-Investigator), Rossiter, J. M. (Co-Investigator), Eder, K. I. (Co-Investigator) & Hauert, S. (Co-Investigator)
1/11/20 → 30/04/24
Project: Research, Parent