Abstract
This position paper discusses the challenges of allocating legal and ethical responsibility among stakeholders when artificially intelligent systems (AISs) are used in clinical decision making, and offers one possible solution. Clinicians have been identified as at risk of being subject to the tort of negligence if a patient is harmed as a result of their use of an AIS in clinical decision making. An ethical model of prospective and retrospective personal moral responsibility is suggested to avoid clinicians being treated as a ‘moral crumple zone’. The adoption of risk pooling could support a shared model of responsibility that promotes both prospective and retrospective personal moral responsibility whilst avoiding the need for negligence claims.
Original language | English
---|---
Publication status | Published - 2021
Event | IJCAI 2020 AI for Social Good workshop, 7 Jan 2021 → 8 Jan 2021 (https://crcs.seas.harvard.edu/event/ai-social-good-workshop-0)
Conference
Conference | IJCAI 2020 AI for Social Good workshop
---|---
Abbreviated title | AI4SG
Period | 7/01/21 → 8/01/21
Internet address | https://crcs.seas.harvard.edu/event/ai-social-good-workshop-0
Keywords
- Artificial Intelligence
- Responsibility
- Risk Pooling
- Negligence
- Clinical
- Healthcare
- Medicine