What Will Make Misinformation Spread: An XAI Perspective

Hongbo Bo*, Yiwen Wu, Zinuo You, Ryan McConville, Jun Hong, Weiru Liu

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference Contribution (Conference Proceeding)

1 Citation (Scopus)
110 Downloads (Pure)

Abstract

Explainable Artificial Intelligence (XAI) techniques can explain how AI systems or models make decisions, or which factors they consider when making those decisions. Misinformation on online social networks is known to have harmful effects. In this paper, we propose using XAI techniques to study which factors drive misinformation spread, by explaining a trained graph neural network that predicts such spread. This is difficult to achieve with existing XAI methods designed for homogeneous social networks, since misinformation typically spreads over heterogeneous social networks that contain multiple types of nodes and relationships. This paper presents MisInfoExplainer, an XAI pipeline for explaining the factors that contribute to misinformation spread in heterogeneous social networks. First, we propose a prediction module that predicts misinformation spread by leveraging GraphSAGE with heterogeneous graph convolution. Second, we propose an explanation module that applies gradient-based and perturbation-based methods to the trained prediction module to identify what makes misinformation spread. Experimentally, we demonstrate the superiority of MisInfoExplainer in predicting misinformation spread, and we reveal the key factors behind that spread by generating a global explanation for the prediction module. Finally, we conclude that the perturbation-based approach is superior to the gradient-based approach, in terms of both qualitative analysis and quantitative measurements.
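
The abstract describes the prediction module as GraphSAGE message passing applied per relation type via heterogeneous graph convolution. Below is a minimal sketch of how such a predictor could be wired up in PyTorch Geometric; the node and relation types ('user', 'tweet', 'posts', 'follows') and the binary spread label are illustrative assumptions, not the paper's actual schema.

```python
# Hedged sketch of a heterogeneous GraphSAGE predictor (PyTorch Geometric).
# Node/edge type names below are assumed for illustration only.
import torch
import torch.nn.functional as F
from torch_geometric.nn import HeteroConv, SAGEConv, Linear

class MisinfoSpreadPredictor(torch.nn.Module):
    def __init__(self, hidden_dim: int, num_classes: int = 2):
        super().__init__()
        # One SAGEConv per relation type; HeteroConv aggregates the
        # per-relation messages arriving at each destination node type.
        relations = [
            ('user', 'posts', 'tweet'),
            ('user', 'follows', 'user'),
            ('tweet', 'rev_posts', 'user'),
        ]
        self.conv1 = HeteroConv(
            {rel: SAGEConv((-1, -1), hidden_dim) for rel in relations}, aggr='sum')
        self.conv2 = HeteroConv(
            {rel: SAGEConv((-1, -1), hidden_dim) for rel in relations}, aggr='sum')
        # Classify content nodes: will this item spread widely or not?
        self.head = Linear(hidden_dim, num_classes)

    def forward(self, x_dict, edge_index_dict):
        x_dict = {k: F.relu(v) for k, v in self.conv1(x_dict, edge_index_dict).items()}
        x_dict = self.conv2(x_dict, edge_index_dict)
        return self.head(x_dict['tweet'])  # logits per 'tweet' node
```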
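
For the explanation module, the abstract contrasts gradient-based and perturbation-based methods applied to the trained predictor. The sketch below illustrates both styles as global feature-importance scores; the function names, the permutation-style perturbation, and the accuracy-drop metric are assumptions for illustration, not the paper's exact procedure.

```python
# Hedged sketch of the two explanation styles named in the abstract.
# `model`, `x_dict`, `edge_index_dict`, and `labels` are assumed inputs.
import torch

def gradient_importance(model, x_dict, edge_index_dict, target_type='tweet'):
    """Gradient-based global importance: mean |d output / d feature|."""
    x = x_dict[target_type].clone().requires_grad_(True)
    out = model({**x_dict, target_type: x}, edge_index_dict)
    out.sum().backward()                 # gradients for all nodes at once
    return x.grad.abs().mean(dim=0)      # one score per input feature

@torch.no_grad()
def perturbation_importance(model, x_dict, edge_index_dict, labels,
                            target_type='tweet'):
    """Perturbation-based global importance: accuracy drop per feature."""
    def accuracy(xd):
        pred = model(xd, edge_index_dict).argmax(dim=-1)
        return (pred == labels).float().mean().item()

    base = accuracy(x_dict)
    scores = []
    for f in range(x_dict[target_type].size(1)):
        x = x_dict[target_type].clone()
        # Permute one feature column across nodes to break its signal.
        x[:, f] = x[:, f][torch.randperm(x.size(0))]
        scores.append(base - accuracy({**x_dict, target_type: x}))
    return torch.tensor(scores)
```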

Original language: English
Title of host publication: Explainable Artificial Intelligence
Subtitle of host publication: First World Conference, xAI 2023, Lisbon, Portugal, July 26–28, 2023, Proceedings, Part II
Editors: Luca Longo
Publisher: Springer
Pages: 321-337
Number of pages: 17
ISBN (Electronic): 978-3-031-44067-0
ISBN (Print): 978-3-031-44066-3
DOIs
Publication status: Published - 21 Oct 2023
Event: The 1st World Conference on Explainable AI - Lisbon, Portugal
Duration: 26 Jul 2023 – 28 Jul 2023
https://xaiworldconference.com/2023/

Publication series

Name: Communications in Computer and Information Science
Volume: 1902 CCIS
ISSN (Print): 1865-0929
ISSN (Electronic): 1865-0937

Conference

Conference: The 1st World Conference on Explainable AI
Abbreviated title: XAI
Country/Territory: Portugal
City: Lisbon
Period: 26/07/23 – 28/07/23
Internet address: https://xaiworldconference.com/2023/

Bibliographical note

Publisher Copyright:
© 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.

Keywords

  • XAI
  • Graph neural networks
  • Misinformation spread
