Do Graph Neural Network States Contain Graph Properties?

Published: 29 Aug 2025, Last Modified: 29 Aug 2025 · NeSy 2025 - Phase 2 Poster · CC BY 4.0
Keywords: Mechanistic Interpretability, XAI, Explainable AI, linear probing, Graph Neural Networks
TL;DR: We develop a probing pipeline for Graph Neural Networks that uses graph-theoretic properties as candidate features for studying what the latent space of graph models encodes.
Abstract: Deep neural networks (DNNs) achieve state-of-the-art performance on many tasks, but this often requires increasingly large model sizes, which in turn lead to more complex internal representations. Explainability (XAI) techniques have made remarkable progress in the interpretability of ML models. However, the non-Euclidean nature of Graph Neural Networks (GNNs) makes it difficult to reuse existing XAI methods. While other works have focused on instance-based explanation methods for GNNs, very few have investigated model-based methods and, to our knowledge, none have probed GNN embeddings for structural graph properties. In this paper, we present a model-agnostic explainability pipeline for GNNs employing diagnostic classifiers. We propose graph-theoretic properties as the features of choice for studying the emergence of representations in GNNs. This pipeline aims to probe and interpret the learned representations in GNNs across various architectures and datasets, refining our understanding of and trust in these models.
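The core idea of a diagnostic classifier can be sketched in a few lines: fit a simple linear model that predicts a graph-theoretic property from frozen embeddings, and check how decodable the property is on held-out graphs. The sketch below uses synthetic stand-ins for the pooled GNN embeddings and the target property (all names and values are illustrative, not taken from the paper's code).

```python
# Minimal sketch of a diagnostic (linear) probe, assuming graph-level
# embeddings have already been extracted from a trained GNN.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for pooled GNN embeddings: 200 graphs, 32-dim vectors.
Z = rng.normal(size=(200, 32))

# Stand-in target: a structural property (e.g. diameter) assumed to be
# linearly encoded in the embedding, plus noise. In a real pipeline this
# would be computed from the input graphs themselves.
w_true = rng.normal(size=32)
y = Z @ w_true + 0.1 * rng.normal(size=200)

# Split into train/test and fit the probe by ordinary least squares.
Z_tr, Z_te, y_tr, y_te = Z[:150], Z[150:], y[:150], y[150:]
w, *_ = np.linalg.lstsq(Z_tr, y_tr, rcond=None)

# Held-out R^2: high values suggest the property is linearly
# decodable from the latent space.
resid = y_te - Z_te @ w
r2 = 1.0 - resid.var() / y_te.var()
print(f"probe R^2 = {r2:.3f}")
```

A weak probe score does not by itself prove the property is absent from the representation, only that it is not linearly decodable; categorical properties would use a linear classifier and accuracy instead of R^2.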
Track: Main Track
Paper Type: Long Paper
Resubmission: No
Software: https://github.com/TomPelletreauDuris/Probing-GNN-representations
Publication Agreement: pdf
Submission Number: 83