Artificial Intelligence to Inform Clinical Decision Making: A Practical Solution to An Ethical And Legal Challenge.

Research output: Contribution to conference › Conference Paper › peer-review

Abstract

This position paper discusses the challenges of allocating legal and ethical responsibility to stakeholders when artificially intelligent systems (AISs) are used in clinical decision making and offers one possible solution. Clinicians have been identified as at risk of being subject to the tort of negligence if a patient is harmed as a result of their use of an AIS in clinical decision making. An ethical model of prospective and retrospective personal moral responsibility is suggested to avoid clinicians being treated as a ‘moral crumple zone’. The adoption of risk pooling could support a shared model of responsibility that promotes both prospective and retrospective personal moral responsibility whilst avoiding the need for negligence claims.
Original language: English
Publication status: Published - 2021
Event: IJCAI 2020 AI for Social Good workshop
Duration: 7 Jan 2021 – 8 Jan 2021
https://crcs.seas.harvard.edu/event/ai-social-good-workshop-0

Conference

Conference: IJCAI 2020 AI for Social Good workshop
Abbreviated title: AI4SG
Period: 7/01/21 – 8/01/21
Internet address: https://crcs.seas.harvard.edu/event/ai-social-good-workshop-0

Keywords

  • Artificial Intelligence
  • Responsibility
  • Risk Pooling
  • Negligence
  • Clinical
  • Healthcare
  • Medicine
