Machine Learning Explanations as Boundary Objects: How AI Researchers Explain and Non-Experts Perceive Machine Learning

Research output: Chapter in Book/Report/Conference proceeding › Conference Contribution (Conference Proceeding)


Abstract

Understanding artificial intelligence (AI) and machine learning (ML) approaches is becoming increasingly important for people with a wide range of professional backgrounds. However, it is unclear how ML concepts can be effectively explained as part of human-centred and multidisciplinary design processes. We provide a qualitative account of how AI researchers explained, and non-experts perceived, ML concepts as part of a co-design project that aimed to inform the design of ML applications for diabetes self-care. We identify benefits and challenges of explaining ML concepts with analogical narratives, information visualisations, and publicly available videos. Co-design participants not only reported gaining an improved understanding of ML concepts but also highlighted challenges of understanding ML explanations, including misalignments between scientific models and their lived self-care experiences and individual information needs. We frame our findings through the lens of Star and Griesemer's concept of boundary objects to discuss how the presentation of user-centred ML explanations could strike a balance between being plastic and robust enough to support design objectives and people's individual information needs.
Original language: English
Title of host publication: Joint Proceedings of the ACM IUI 2021 Workshops
Publisher: CEUR Workshop Proceedings
Volume: 2903
ISSN: 1613-0073
Publication status: Published - 17 Apr 2021
Event: Workshop on Transparency and Explanations in Smart Systems (TEXSS)
Duration: 13 Apr 2021 - 13 Apr 2021
Internet address: https://explainablesystems.comp.nus.edu.sg/2021/

