Abstract
Responsible AI must be able to make or support decisions that consider human values and that can be justified by human morals. Accommodating values and morals in responsible decision making is supported by adopting a perspective of macro ethics, which views ethics through a holistic lens that incorporates social context. Normative ethical principles inferred from philosophy can be used to reason methodically about ethics and to make ethical judgements in specific contexts. Operationalising normative ethical principles thus promotes responsible reasoning under the perspective of macro ethics. We survey the AI and computer science literature and develop a taxonomy of 21 normative ethical principles that can be operationalised in AI. We describe how each principle has previously been operationalised, highlighting key themes that AI practitioners seeking to implement ethical principles should be aware of. We envision that this taxonomy will facilitate the development of methodologies for incorporating normative ethical principles into the reasoning capacities of responsible AI systems.
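To make the idea of "operationalising" a normative ethical principle concrete, here is a minimal illustrative sketch, not taken from the surveyed paper: it encodes one principle from the classical literature, act utilitarianism, as a decision rule that scores candidate actions by their total welfare across stakeholders. The `Action` class, the stakeholder names, and the welfare scores are all hypothetical placeholders for this example.

```python
# Illustrative sketch only (not the paper's method): operationalising one
# normative ethical principle, act utilitarianism, as a decision rule.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    # Hypothetical per-stakeholder welfare effects of taking this action.
    welfare_effects: dict[str, float]

def utilitarian_choice(actions: list[Action]) -> Action:
    """Pick the action maximising total welfare summed over all stakeholders."""
    return max(actions, key=lambda a: sum(a.welfare_effects.values()))

if __name__ == "__main__":
    # Hypothetical scenario: decide whether to share medical data.
    options = [
        Action("share data", {"patient": -0.2, "researchers": 0.9}),
        Action("withhold data", {"patient": 0.4, "researchers": -0.1}),
    ]
    print(utilitarian_choice(options).name)  # prints: share data
```

A different principle from the taxonomy (e.g. a deontological rule) would replace the maximisation step with a constraint check, which is precisely why a taxonomy of principles is useful to practitioners choosing a reasoning mechanism.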
| Original language | English |
| --- | --- |
| Article number | 289 |
| Pages (from-to) | 1-37 |
| Number of pages | 37 |
| Journal | ACM Computing Surveys |
| Volume | 56 |
| Issue number | 11 |
| Early online date | 31 May 2024 |
| DOIs | |
| Publication status | E-pub ahead of print - 31 May 2024 |
Bibliographical note
Publisher Copyright: © 2024 Copyright held by the owner/author(s)
Research Groups and Themes
- Intelligent Systems Laboratory