Macro Ethics Principles for Responsible AI Systems: Taxonomy and Directions

Jessica Woodgate*, Nirav Ajmeri*

*Corresponding author for this work

Research output: Contribution to journal › Article (Academic Journal) › peer-review

1 Citation (Scopus)
39 Downloads (Pure)

Abstract

Responsible AI must be able to make or support decisions that consider human values and can be justified by human morals. Accommodating values and morals in responsible decision making is supported by adopting a perspective of macro ethics, which views ethics through a holistic lens incorporating social context. Normative ethical principles inferred from philosophy can be used to reason methodically about ethics and make ethical judgements in specific contexts. Operationalising normative ethical principles thus promotes responsible reasoning under the perspective of macro ethics. We survey the AI and computer science literature and develop a taxonomy of 21 normative ethical principles that can be operationalised in AI. We describe how each principle has previously been operationalised, highlighting key themes of which AI practitioners seeking to implement ethical principles should be aware. We envision that this taxonomy will facilitate the development of methodologies to incorporate normative ethical principles in the reasoning capacities of responsible AI systems.
Original language: English
Article number: 289
Pages (from-to): 1-37
Number of pages: 37
Journal: ACM Computing Surveys
Volume: 56
Issue number: 11
Early online date: 31 May 2024
DOIs
Publication status: E-pub ahead of print - 31 May 2024

Bibliographical note

Publisher Copyright:
© 2024 Copyright held by the owner/author(s)

Research Groups and Themes

  • Intelligent Systems Laboratory
