Artificial intelligence in clinical decision-making: Rethinking liability

Helen Smith*, Kit Fotheringham

*Corresponding author for this work

Research output: Contribution to journal › Article (Academic Journal) › peer-review

13 Citations (Scopus)
215 Downloads (Pure)


This article theorises, within the context of the law of England and Wales, the potential outcomes of negligence claims against clinicians and software development companies (SDCs) brought by patients injured through the use of an AI system (AIS) under human clinical supervision. At present, a clinician is likely to shoulder liability via a negligence claim for allowing defects in an AIS's outputs to reach patients. We question whether this is 'fair, just and reasonable' to clinical users: we argue that a duty of care to patients ought to be recognised on the part of SDCs as well as clinicians. As an alternative to negligence claims, we propose 'risk pooling', which utilises insurance. On this approach, a fairer construct of shared responsibility for AIS use could be created between the clinician and the SDC, allowing a rapid mechanism of compensation to injured patients via insurance.
Original language: English
Pages (from-to): 1
Number of pages: 24
Journal: Medical Law International
Publication status: Published - 26 Aug 2020

Structured keywords

  • LAW Centre for Global Law and Innovation


  • artificial intelligence (AI)
  • duty of care
  • negligence
  • risk allocation
  • tort law
  • risk pooling

