Understanding artificial intelligence (AI) and machine learning (ML) approaches is becoming increasingly important for people with a wide range of professional backgrounds. However, it is unclear how ML concepts can be effectively explained as part of human-centred and multidisciplinary design processes. We provide a qualitative account of how AI researchers explained, and non-experts perceived, ML concepts as part of a co-design project that aimed to inform the design of ML applications for diabetes self-care. We identify benefits and challenges of explaining ML concepts with analogical narratives, information visualisations, and publicly available videos. Co-design participants not only reported gaining an improved understanding of ML concepts but also highlighted challenges of understanding ML explanations, including misalignments between scientific models and their lived self-care experiences and individual information needs. We frame our findings through the lens of Star and Griesemer's concept of boundary objects to discuss how the presentation of user-centred ML explanations could strike a balance between being plastic and robust enough to support both design objectives and people's individual information needs.
Title of host publication: Joint Proceedings of the ACM IUI 2021 Workshops
Publisher: CEUR Workshop Proceedings
Publication status: Published - 17 Apr 2021
Event: Workshop on Transparency and Explanations in Smart Systems (TEXSS), 13 Apr 2021