Accidental injustice: Healthcare AI legal responsibility must be prospectively planned prior to its adoption

Kit Fotheringham*, Helen Smith

*Corresponding author for this work

Research output: Contribution to journal › Review article (Academic Journal) › peer-review

Abstract

This article contributes to the ongoing debate about legal liability and responsibility for patient harm in scenarios where artificial intelligence (AI) is used in healthcare. We note that, due to the structure of negligence liability in England and Wales, clinicians are likely to be held solely negligent for patient harms arising from software defects, even though AI algorithms will share the decision-making space with clinicians. Drawing on previous research, we argue that the traditional model of negligence liability for clinical malpractice cannot be relied upon to deliver justice for clinicians and patients. There is a pressing need for law reform to consider the use of risk pooling, alongside detailed professional guidance on the use of AI in healthcare settings.
Original language: English
Journal: Future Healthcare Journal
Volume: 11
Issue number: 3
DOIs
Publication status: Published - 19 Sept 2024

Research Groups and Themes

  • Centre for Global Law and Innovation

Keywords

  • artificial intelligence (AI)
  • Healthcare
  • Negligence
  • Regulation
