Artificial intelligence use in clinical decision-making: allocating ethical and legal responsibility

Student thesis: Doctoral Thesis, Doctor of Philosophy (PhD)


Advances in computer science have resulted in the development of artificially intelligent systems (AISs) designed for deployment in healthcare environments. There is a potential risk of patient harm if an AIS dispenses an output that is inappropriate for a patient and a clinician’s decision-making is influenced by that output. Because of this potential risk, the ethical and legal consequences of AIS use must be considered and planned for prior to AIS deployment.
My literature review found neither case law nor legislation in the law of England and Wales specific to negligence in the use of AISs in clinical decision-making. This gap informs two research questions:
• How, according to current law in England and Wales, will legal liability be allocated between clinicians and software developing companies (SDCs) when AISs are used in clinical decision-making?
• How can ethical responsibility for the consequences of the use of AISs in clinical decision-making be determined and allocated?
My legal analysis finds that clinicians risk shouldering the burden of a negligence claim despite the SDC’s action of supplying the AIS. Using ethical theory, I determine that it is unfair for clinical users to shoulder responsibility alone, as an SDC is also causally responsible for harms resulting from the use of its AIS’s outputs.
To achieve a fair balance of responsibility between the clinician and the SDC when AISs are used in clinical decision-making, I propose a shared model of responsibility informed by contractarian theories.
To exemplify this approach, I present the concept of risk pooling. This solution: 1) addresses the problem of clinicians being used as moral and legal ‘crumple zones’; 2) offers SDCs the opportunity to proactively accept responsibility for the effects of their AISs on a clinician’s decision-making; and 3) makes provision for patients who may be harmed as a result of AIS use.
Date of Award: 27 Sept 2022
Original language: English
Awarding Institution
  • The University of Bristol
Supervisors: Jonathan C S Ives (Supervisor), Andrew J Charlesworth (Supervisor) & Giles M Birchley (Supervisor)


Keywords
  • Artificial intelligence
  • Ethics of AI
  • Clinical
  • Responsibility
  • Negligence
  • Accountability
  • Ethics
  • Bioethics
  • Healthcare
