Abstract
Artificial intelligence models analyzing speech show remarkable promise for identifying cognitive decline, achieving performance comparable to clinical assessments. However, their “black box” nature poses significant barriers to clinical adoption, as healthcare professionals require transparent decision-making processes. This challenge is compounded by regulatory requirements, including GDPR mandates for explainability and medical device regulations emphasizing AI transparency. Following PRISMA guidelines, we systematically reviewed explainable AI (XAI) techniques for speech-based detection of Alzheimer’s disease and mild cognitive impairment across six databases through May 2025. From 2077 records, 13 studies met the inclusion criteria, employing XAI methods including SHAP, LIME, attention mechanisms, and novel approaches across machine learning architectures. Models achieved AUC values of 0.76-0.94, consistently identifying acoustic markers (pause patterns, speech rate) and linguistic features (vocabulary diversity, pronoun usage). While XAI techniques demonstrate promise for clinical interpretability, significant gaps remain in stakeholder engagement, real-world validation, and standardized evaluation frameworks.
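SHAP, the most common attribution method in the reviewed studies, assigns each input feature a Shapley value: its average marginal contribution to the model's output across all feature coalitions. The minimal sketch below computes exact Shapley values for a hypothetical linear risk scorer over illustrative speech features (the feature names, weights, and baseline values are invented for illustration and are not drawn from the reviewed studies; real pipelines would use the `shap` library against a trained model).

```python
from itertools import combinations
from math import factorial

# Hypothetical linear risk scorer over three speech features.
# Weights and baseline (reference) values are purely illustrative.
WEIGHTS = {"pause_rate": 2.0, "speech_rate": -1.5, "vocab_diversity": -1.0}
BASELINE = {"pause_rate": 0.2, "speech_rate": 3.0, "vocab_diversity": 0.7}

def score(x):
    """Linear model output for a feature dict x."""
    return sum(WEIGHTS[f] * x[f] for f in WEIGHTS)

def shapley_values(x):
    """Exact Shapley attributions for score() at point x.

    Features absent from a coalition are replaced by their
    baseline value, the standard 'interventional' convention.
    """
    feats = list(x)
    n = len(feats)
    phi = {}
    for f in feats:
        others = [g for g in feats if g != f]
        total = 0.0
        for k in range(n):
            for coalition in combinations(others, k):
                # Weight of this coalition in the Shapley average
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_f = {g: x[g] if (g in coalition or g == f) else BASELINE[g]
                          for g in feats}
                without_f = {g: x[g] if g in coalition else BASELINE[g]
                             for g in feats}
                total += w * (score(with_f) - score(without_f))
        phi[f] = total
    return phi

patient = {"pause_rate": 0.5, "speech_rate": 2.0, "vocab_diversity": 0.4}
phi = shapley_values(patient)
# Efficiency property: attributions sum to score(patient) - score(BASELINE)
```

For a linear model each attribution reduces to weight times the feature's deviation from baseline, which is why such models are considered intrinsically interpretable; SHAP's value lies in extending the same additive decomposition to black-box architectures.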
| Field | Value |
|---|---|
| Original language | English |
| Article number | 724 |
| Number of pages | 14 |
| Journal | npj Digital Medicine |
| Volume | 8 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - 26 Nov 2025 |
Bibliographical note
Publisher Copyright: © The Author(s) 2025.