Abstract
Explainable Artificial Intelligence (XAI) techniques can provide explanations of how AI systems or models make decisions, or of what factors AI considers when making decisions. Online social networks suffer from misinformation, which is known to have negative effects. In this paper, we propose to utilize XAI techniques to study which factors lead to misinformation spreading, by explaining a trained graph neural network that predicts misinformation spread. However, this is difficult to achieve with existing XAI methods for homogeneous social networks, since the spread of misinformation is often associated with heterogeneous social networks, which contain different types of nodes and relationships. This paper presents MisInfoExplainer, an XAI pipeline for explaining the factors contributing to misinformation spread in heterogeneous social networks. Firstly, a prediction module is proposed for predicting misinformation spread by leveraging GraphSAGE with heterogeneous graph convolution. Secondly, we propose an explanation module that uses gradient-based and perturbation-based methods to identify what makes misinformation spread by explaining the trained prediction module. Experimentally, we demonstrate the superiority of MisInfoExplainer in predicting misinformation spread, and we also reveal the key factors that make misinformation spread by generating a global explanation for the prediction module. Finally, we conclude that the perturbation-based approach is superior to the gradient-based approach, both in terms of qualitative analysis and quantitative measurements.
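The abstract describes the two modules only at a high level. The sketch below is a minimal, hypothetical illustration (not the authors' code) of how such a pipeline could look in PyTorch Geometric: the node types (user, tweet), relation names, feature widths, and the toy graph are all assumptions, and the gradient- and perturbation-based attributions are generic stand-ins for the paper's explanation module.

```python
import torch
import torch.nn.functional as F
from torch_geometric.data import HeteroData
from torch_geometric.nn import HeteroConv, SAGEConv


class HeteroSAGEPredictor(torch.nn.Module):
    """Toy 'will this tweet's misinformation spread?' classifier (hypothetical)."""

    def __init__(self, hidden_channels: int = 64, num_layers: int = 2):
        super().__init__()
        self.convs = torch.nn.ModuleList()
        for _ in range(num_layers):
            # One SAGEConv per relation type; (-1, -1) lazily infers input
            # sizes, so each node type may carry features of a different width.
            self.convs.append(HeteroConv({
                ('user', 'posts', 'tweet'): SAGEConv((-1, -1), hidden_channels),
                ('tweet', 'posted_by', 'user'): SAGEConv((-1, -1), hidden_channels),
                ('user', 'follows', 'user'): SAGEConv((-1, -1), hidden_channels),
            }, aggr='sum'))
        self.classifier = torch.nn.Linear(hidden_channels, 2)  # spread / no spread

    def forward(self, x_dict, edge_index_dict):
        for conv in self.convs:
            x_dict = {k: F.relu(v) for k, v in conv(x_dict, edge_index_dict).items()}
        return self.classifier(x_dict['tweet'])  # one logit pair per tweet node


# Toy heterogeneous graph: 4 users, 3 tweets (purely illustrative data).
data = HeteroData()
data['user'].x = torch.randn(4, 8)
data['tweet'].x = torch.randn(3, 16)
data['user', 'posts', 'tweet'].edge_index = torch.tensor([[0, 1, 2], [0, 1, 2]])
data['tweet', 'posted_by', 'user'].edge_index = torch.tensor([[0, 1, 2], [0, 1, 2]])
data['user', 'follows', 'user'].edge_index = torch.tensor([[0, 1, 3], [1, 2, 0]])

model = HeteroSAGEPredictor()

# Gradient-based attribution (saliency): how sensitive is the "will spread"
# logit to each input feature of the tweet nodes?
x_dict = {k: v.clone().requires_grad_(True) for k, v in data.x_dict.items()}
logits = model(x_dict, data.edge_index_dict)
logits[:, 1].sum().backward()
print('gradient importance:', x_dict['tweet'].grad.abs().mean(dim=0))

# Perturbation-based attribution: zero out each tweet feature in turn and
# measure the average drop in the predicted spread probability.
with torch.no_grad():
    base = model(data.x_dict, data.edge_index_dict).softmax(dim=-1)[:, 1]
    for f in range(data['tweet'].x.size(1)):
        x_pert = {k: v.clone() for k, v in data.x_dict.items()}
        x_pert['tweet'][:, f] = 0.0
        pert = model(x_pert, data.edge_index_dict).softmax(dim=-1)[:, 1]
        print(f'feature {f}: importance {(base - pert).mean().item():+.4f}')
```

A `HeteroConv` with a relation-specific `SAGEConv` per edge type is one common way to realise "GraphSAGE with heterogeneous graph convolution"; `torch_geometric.nn.to_hetero` applied to a homogeneous GraphSAGE model is an alternative. Whether the paper uses either construction is not stated in this record.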
Original language | English |
---|---|
Title of host publication | Explainable Artificial Intelligence |
Subtitle of host publication | First World Conference, xAI 2023, Lisbon, Portugal, July 26–28, 2023, Proceedings, Part II |
Editors | Luca Longo |
Publisher | Springer |
Pages | 321-337 |
Number of pages | 17 |
ISBN (Electronic) | 978-3-031-44067-0 |
ISBN (Print) | 978-3-031-44066-3 |
DOIs | |
Publication status | Published - 21 Oct 2023 |
Event | The 1st World Conference on Explainable AI, Lisbon, Portugal. Duration: 26 Jul 2023 → 28 Jul 2023. https://xaiworldconference.com/2023/ |
Publication series
Name | Communications in Computer and Information Science |
---|---|
Volume | 1902 CCIS |
ISSN (Print) | 1865-0929 |
ISSN (Electronic) | 1865-0937 |
Conference
Conference | The 1st World Conference on Explainable AI |
---|---|
Abbreviated title | XAI |
Country/Territory | Portugal |
City | Lisbon |
Period | 26/07/23 → 28/07/23 |
Internet address | https://xaiworldconference.com/2023/ |
Bibliographical note
Publisher Copyright: © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
Keywords
- XAI
- Graph neural networks
- Misinformation spread
Projects
- 1 Finished
- 8463 EP/T026707/1 CHAI: Cyber Hygiene in AI enabled domestic life
  1/12/20 → 28/02/24
  Project: Research