Abstract
Suppose a driverless car encounters a scenario where (i) harm to at least one person is unavoidable and (ii) a choice about how to distribute harms between different persons is required. How should the driverless car be programmed to behave in this situation? I call this the moral design problem. Santoni de Sio (Ethical Theory Moral Pract 20:411–429, 2017) defends a legal-philosophical approach to this problem, which aims to bring us to a consensus on the moral design problem despite our disagreements about which moral principles provide the correct account of justified harm. He then articulates an answer to the moral design problem based on the legal doctrine of necessity. In this paper, I argue that Santoni de Sio’s answer to the moral design problem does not achieve the aim of the legal-philosophical approach. This is because his answer relies on moral principles that utilitarians, at least, have reason to reject. I then articulate an alternative reading of the doctrine of necessity and construct a partial answer to the moral design problem based on this reading. I argue that utilitarians, contractualists and deontologists can agree on this partial answer, even if they disagree about which moral principles offer the correct account of justified harm.
| Original language | English |
| --- | --- |
| Pages (from-to) | 413–427 |
| Number of pages | 15 |
| Journal | Ethical Theory and Moral Practice |
| Volume | 21 |
| Issue number | 2 |
| Early online date | 25 Apr 2018 |
| DOIs | |
| Publication status | Published - Apr 2018 |
Keywords
- robot ethics
- autonomous vehicles
- artificial intelligence
- ethics
- ethics of technology
- ethics of harm
- criminal law
- legal necessity
- philosophy