Machine Learning Explanations as Boundary Objects: How AI Researchers Explain and Non-Experts Perceive Machine Learning

Amid Ayobi, Katarzyna Stawarz, Dmitri Katz, Paul Marshall, Taku Yamagata, Raul Santos-Rodriguez, Peter A Flach

Research output: Chapter in Book/Report/Conference proceeding › Conference Contribution (Conference Proceeding)


Abstract

Understanding artificial intelligence (AI) and machine learning (ML) approaches is becoming increasingly important for people with a wide range of professional backgrounds. However, it is unclear how ML concepts can be effectively explained as part of human-centred and multidisciplinary design processes. We provide a qualitative account of how AI researchers explained, and non-experts perceived, ML concepts during a co-design project that aimed to inform the design of ML applications for diabetes self-care. We identify benefits and challenges of explaining ML concepts with analogical narratives, information visualisations, and publicly available videos. Co-design participants not only reported gaining an improved understanding of ML concepts but also highlighted challenges of understanding ML explanations, including misalignments between scientific models and their lived self-care experiences and individual information needs. We frame our findings through the lens of Star and Griesemer's concept of boundary objects to discuss how the presentation of user-centred ML explanations could strike a balance between being plastic enough to accommodate people's individual information needs and robust enough to support design objectives.
Original language: English
Title of host publication: Joint Proceedings of the ACM IUI 2021 Workshops
Publisher: CEUR Workshop Proceedings
Volume: 2903
Publication status: Published - 17 Apr 2021
Event: Workshop on Transparency and Explanations in Smart Systems (TEXSS)
Duration: 13 Apr 2021 - 13 Apr 2021
https://explainablesystems.comp.nus.edu.sg/2021/

Publication series

Name: CEUR Workshop Proceedings
ISSN (Print): 1613-0073

Workshop

Workshop: Workshop on Transparency and Explanations in Smart Systems (TEXSS)
Period: 13/04/21 - 13/04/21
Internet address: https://explainablesystems.comp.nus.edu.sg/2021/

Research Groups and Themes

  • Bristol Interaction Group
