This position paper discusses the challenges of allocating legal and ethical responsibility to stakeholders when artificially intelligent systems (AISs) are used in clinical decision making, and offers one possible solution. Clinicians have been identified as being at risk of liability in the tort of negligence if a patient is harmed as a result of their use of an AIS in clinical decision making. An ethical model of prospective and retrospective personal moral responsibility is proposed to avoid clinicians being treated as a ‘moral crumple zone’. The adoption of risk pooling could support a shared model of responsibility that promotes both prospective and retrospective personal moral responsibility whilst avoiding the need for negligence claims.
Publication status: Published - 2021
Event: IJCAI 2020 AI for Social Good workshop
Duration: 7 Jan 2021 → 8 Jan 2021
- Artificial Intelligence
- Risk Pooling