On Specifying for Trustworthiness

Dhaminda B Abeywickrama*, Amel Bennaceur, Greg Chance, Yiannis Demiris, Anastasia Kordoni, Mark Levine, Luke Moffat, Luc Moreau, Mohammad Reza Mousavi, Bashar Nuseibeh, Subramanian Ramamoorthy, Jan Oliver Ringert, James S Wilson, Shane P Windsor, Kerstin I Eder

*Corresponding author for this work

Research output: Contribution to journal, Article (Academic Journal), peer-reviewed


Abstract

As autonomous systems (AS) increasingly become part of our daily lives, ensuring their trustworthiness is crucial. In order to demonstrate the trustworthiness of an AS, we first need to specify what is required for an AS to be considered trustworthy. This roadmap paper presents key challenges for specifying for trustworthiness in AS, as identified during the “Specifying for Trustworthiness” workshop held as part of the UK Research and Innovation (UKRI) Trustworthy Autonomous Systems (TAS) programme. We look across a range of AS domains, considering the resilience, trust, functionality, verifiability, security, and governance and regulation of AS, and identify some of the key specification challenges in these domains. We then highlight the intellectual challenges in specifying for trustworthiness in AS that cut across domains and are exacerbated by the inherent uncertainty of the environments in which AS need to operate.
Original language: English
Pages (from-to): 98-109
Number of pages: 12
Journal: Communications of the ACM
Volume: 67
Issue number: 1
DOIs
Publication status: Published - 1 Jan 2024

Bibliographical note

12 pages, 2 figures

