Understanding artificial intelligence (AI) and machine learning (ML) approaches is becoming increasingly important for people with a wide range of professional backgrounds. However, it is unclear how ML concepts can be effectively explained as part of human-centred and multidisciplinary design processes. We provide a qualitative account of how AI researchers explained, and non-experts perceived, ML concepts as part of a co-design project that aimed to inform the design of ML applications for diabetes self-care. We identify benefits and challenges of explaining ML concepts with analogical narratives, information visualisations, and publicly available videos. Co-design participants not only reported gaining an improved understanding of ML concepts but also highlighted challenges of understanding ML explanations, including misalignments between scientific models and their lived self-care experiences and individual information needs. We frame our findings through the lens of Star and Griesemer's concept of boundary objects to discuss how the presentation of user-centred ML explanations could strike a balance between being plastic and robust enough to support design objectives and people's individual information needs.
Title of host publication: Joint Proceedings of the ACM IUI 2021 Workshops
Publisher: CEUR Workshop Proceedings
Publication status: Published - 17 Apr 2021
Event: Workshop on Transparency and Explanations in Smart Systems (TEXSS)
Period: 13 Apr 2021 → 13 Apr 2021
Title: Machine Learning Explanations as Boundary Objects: How AI Researchers Explain and Non-Experts Perceive Machine Learning
Project (finished): InnovateUK ML4D: Machine Learning for Enhanced Diabetes Self-Care
O'Kane, A. A., Marshall, P., Santos-Rodriguez, R. & Flach, P. A.
1/11/18 → 30/04/20