A systematic review of explainable artificial intelligence methods for speech-based cognitive decline detection

Ravi Shankar*, Ziyu Goh, Fiona Devi, Qian Xu

*Corresponding author for this work

Research output: Contribution to journal › Article (Academic Journal) › peer-review

Abstract

Artificial intelligence models analyzing speech show remarkable promise for identifying cognitive decline, achieving performance comparable to clinical assessments. However, their “black box” nature poses significant barriers to clinical adoption, as healthcare professionals require transparent decision-making processes. This challenge is compounded by regulatory requirements, including GDPR mandates for explainability and medical device regulations emphasizing AI transparency. Following PRISMA guidelines, we systematically reviewed explainable AI (XAI) techniques for speech-based detection of Alzheimer’s disease and mild cognitive impairment across six databases through May 2025. From 2077 records, 13 studies met the inclusion criteria, employing XAI methods including SHAP, LIME, attention mechanisms, and novel approaches across machine learning architectures. Models achieved AUC values of 0.76-0.94, consistently identifying acoustic markers (pause patterns, speech rate) and linguistic features (vocabulary diversity, pronoun usage). While XAI techniques demonstrate promise for clinical interpretability, significant gaps remain in stakeholder engagement, real-world validation, and standardized evaluation frameworks.
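To illustrate the kind of attribution the reviewed SHAP-based studies report, the sketch below computes exact Shapley values for a toy linear risk model over hypothetical speech features (`pause_rate`, `speech_rate`, `type_token_ratio` are illustrative names, not features taken from any reviewed study). It is a minimal pure-Python sketch of the Shapley formulation underlying SHAP, not the pipeline of any included paper:

```python
from itertools import combinations
from math import factorial

# Hypothetical speech features with illustrative values for one speaker
# and a population baseline (assumed, for demonstration only).
features = ["pause_rate", "speech_rate", "type_token_ratio"]
x = {"pause_rate": 0.42, "speech_rate": 2.1, "type_token_ratio": 0.55}
baseline = {"pause_rate": 0.20, "speech_rate": 3.0, "type_token_ratio": 0.70}

def model(v):
    # Toy linear risk score: more pausing raises risk; faster, more
    # lexically varied speech lowers it. Coefficients are invented.
    return 2.0 * v["pause_rate"] - 0.5 * v["speech_rate"] - 1.5 * v["type_token_ratio"]

def shapley(feature):
    # Exact Shapley value: average marginal contribution of `feature`
    # over all coalitions of the other features, with absent features
    # replaced by their baseline values.
    others = [f for f in features if f != feature]
    n = len(features)
    total = 0.0
    for k in range(len(others) + 1):
        for coalition in combinations(others, k):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            with_f = {f: x[f] if (f in coalition or f == feature) else baseline[f]
                      for f in features}
            without_f = {f: x[f] if f in coalition else baseline[f]
                         for f in features}
            total += weight * (model(with_f) - model(without_f))
    return total

phi = {f: shapley(f) for f in features}
# Efficiency property: attributions sum to the gap between this
# speaker's score and the baseline score.
assert abs(sum(phi.values()) - (model(x) - model(baseline))) < 1e-9
print(phi)
```

For a linear model each Shapley value reduces to the coefficient times the feature's deviation from baseline, which is why per-feature attributions such as "elevated pause rate pushed the prediction toward decline" are directly readable off the output.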
Original language: English
Article number: 724
Number of pages: 14
Journal: npj Digital Medicine
Volume: 8
Issue number: 1
DOIs
Publication status: Published - 26 Nov 2025

Bibliographical note

Publisher Copyright:
© The Author(s) 2025.
