Uncertainty in GNN Learning Evaluations: The Importance of a Consistent Benchmark for Community Detection.

Research output: Chapter in Book/Report/Conference proceeding › Conference Contribution (Conference Proceeding)

Abstract

Graph Neural Networks (GNNs) have improved unsupervised community detection of clustered nodes due to their ability to encode the dual dimensionality of the connectivity and feature information spaces of graphs. Identifying the latent communities has many practical applications, from social networks to genomics. Current benchmarks of real-world performance are confusing due to the variety of decisions influencing the evaluation of GNNs at this task. To address this, we propose a framework to establish a common evaluation protocol. We motivate and justify it by demonstrating the differences with and without the protocol. The W Randomness Coefficient is a metric proposed for assessing the consistency of algorithm rankings, quantifying the reliability of results under the presence of randomness. We find that when the same evaluation criteria are followed, performance may differ significantly from previously reported results at this task, but a more complete evaluation and comparison of methods is possible.
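The exact definition of the W Randomness Coefficient is given in the paper; as a rough illustration only, the sketch below assumes it builds on Kendall's coefficient of concordance W, computed over algorithm rankings obtained across random seeds and averaged over evaluation metrics. The function names, the aggregation as 1 − W, and the toy data are assumptions for illustration, not the paper's reference implementation.

```python
import numpy as np

def kendalls_w(rankings: np.ndarray) -> float:
    """Kendall's coefficient of concordance W for an (m_seeds, n_algorithms)
    array of rank values (1 = best, no tie correction). W = 1 means every
    seed produced the same ranking; W near 0 means no agreement."""
    m, n = rankings.shape
    rank_sums = rankings.sum(axis=0)                   # column rank sums R_j
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()    # squared deviations from mean
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

def w_randomness(rankings_per_metric: list[np.ndarray]) -> float:
    """Hypothetical aggregation: average disagreement (1 - W) over metrics,
    so higher values indicate rankings more sensitive to the random seed."""
    return float(np.mean([1.0 - kendalls_w(r) for r in rankings_per_metric]))

# Toy example: 5 random seeds ranking 4 GNN clustering methods on one metric.
rng = np.random.default_rng(0)
scores = rng.random((5, 4))                            # e.g. NMI per seed/method
ranks = scores.argsort(axis=1).argsort(axis=1) + 1     # convert scores to ranks
print(w_randomness([ranks]))
```

Under this reading, a low coefficient indicates that the benchmark's ranking of methods is stable across seeds, while a high value signals that reported orderings are driven largely by randomness in the evaluation.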
Original language: English
Title of host publication: The International Conference on Complex Networks and Their Applications
Publisher: Springer
Publication status: Accepted/In press - 4 Oct 2023

Keywords

  • graph clustering
  • graph
  • graph attention
  • graph auto-encoder network
  • graph comparison
  • graph convolutional network
  • graph embedding
  • graph databases
  • benchmark
  • performance evaluation
  • uncertainty
  • random coefficients
