Engineering Responsible and Explainable Models in Human-Agent Collectives

Dhaminda B Abeywickrama*, Sarvapali Ramchurn

*Corresponding author for this work

Research output: Contribution to journal › Article (Academic Journal) › peer-review

Abstract

In human-agent collectives, humans and agents need to work collaboratively and agree on collective decisions. However, ensuring that agents make decisions responsibly is a complex task, especially when they encounter dilemmas in which no available choice is unambiguously preferable to the others. Methodologies that allow such systems to be certified are therefore urgently needed. In this paper, we propose a novel engineering methodology based on formal model checking as a step toward providing evidence for the certification of responsible and explainable decision making within human-agent collectives. Our approach, built on the MCMAS model checker, verifies decision-making behavior against logical formulae specified to guarantee safety and controllability and to address ethical concerns. We propose the use of counterexample traces and simulation results to provide the AI engineer with a judgment and an explanation of why actions are refused or allowed. To demonstrate the practical feasibility of our approach, we evaluate it on the real-world problem of human-UAV (unmanned aerial vehicle) teaming in dynamic and uncertain environments.
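
To make the approach sketched above concrete, the following is a minimal, hypothetical model in ISPL, the input language of the MCMAS model checker. Everything here (the single UAV agent, the environment hazard variable, the landed proposition, and both formulae) is invented for illustration and is not drawn from the paper itself.

```ispl
-- Hypothetical ISPL sketch: one environment raising/clearing a hazard,
-- and one UAV that may fly or land. Names and formulae are illustrative.

Agent Environment
  Obsvars:
    hazard : boolean;          -- e.g. bad weather or an intruder aircraft
  end Obsvars
  Actions = { raise, clear };
  Protocol:
    Other : { raise, clear };  -- the hazard may appear or disappear at any step
  end Protocol
  Evolution:
    hazard=true if Action=raise;
    hazard=false if Action=clear;
  end Evolution
end Agent

Agent UAV
  Vars:
    flying : boolean;
  end Vars
  Actions = { fly, land };
  Protocol:
    flying=true : { fly, land };
    flying=false : { land };   -- once landed, the UAV stays down
  end Protocol
  Evolution:
    flying=true if Action=fly;
    flying=false if Action=land;
  end Evolution
end Agent

Evaluation
  landed if UAV.flying=false;
  hazard if Environment.hazard=true;
end Evaluation

InitStates
  Environment.hazard=false and UAV.flying=true;
end InitStates

Groups
  uav = { UAV };
end Groups

Formulae
  -- controllability (ATL): the UAV coalition has a strategy to land
  <uav> F landed;
  -- safety (CTL): every hazard is inevitably followed by a landing
  AG (hazard -> AF landed);
end Formulae
```

Running MCMAS on such a file (for example, mcmas model.ispl, with model.ispl a placeholder filename) reports whether each formula holds. In this toy model the UAV may choose to keep flying after a hazard appears, so the CTL safety formula fails and MCMAS can produce a counterexample trace; traces of this kind are the raw material the abstract proposes turning into judgments and explanations for the AI engineer.
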
Original language: English
Article number: 2282834
Number of pages: 56
Journal: Applied Artificial Intelligence
Volume: 38
Issue number: 1
DOIs
Publication status: Published - 5 Dec 2023

Bibliographical note

Funding Information:
The authors would like to thank Prof. Alessio Lomuscio, Imperial College London, for informal discussions on the MCMAS tool. The work presented in this paper was supported by the AXA Research Fund, and the UK Engineering and Physical Sciences Research Council (EPSRC)-funded Smart Cities Platform under the grant [EP/P010164/1]. D.A. is also supported by the UKRI Trustworthy Autonomous Systems Node in Functionality under Grant EP/V026518/1. For the purpose of open access, the author(s) has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising from this submission.

Publisher Copyright:
© 2023 The Author(s). Published with license by Taylor & Francis Group, LLC.
