Abstract: We consider explainability in equivariant graph neural networks (GNNs) for 3D geometric graphs. While many XAI methods have been developed for analyzing graph neural networks, they predominantly target 2D graph structures. The complex nature of 3D data and the sophisticated architectures of equivariant GNNs present unique challenges. Existing XAI techniques either struggle to adapt to equivariant GNNs or fail to handle positional data effectively and to assess the significance of geometric features adequately.
To address these challenges, we introduce EquiGX, a novel method that uses the Deep Taylor decomposition framework to extend layer-wise relevance propagation with rules tailored to spherical equivariant GNNs. Our approach decomposes prediction scores and back-propagates the relevance scores through each layer to the input space. Our decomposition rules provide a detailed account of each layer's contribution to the network's predictions, thereby enhancing our understanding of how geometric and positional data influence the model's outputs.
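As a concrete illustration of the relevance back-propagation described above, the sketch below implements the generic LRP epsilon-rule for a plain linear/ReLU stack. This is a minimal sketch of layer-wise relevance propagation in general, not the EquiGX decomposition rules for spherical equivariant layers; the weights `Ws`, biases `bs`, and input `x` are hypothetical placeholders.

```python
import numpy as np

def lrp_epsilon(Ws, bs, x, eps=1e-6):
    """Back-propagate relevance from output scores to input features
    using the generic LRP epsilon-rule (illustrative sketch only)."""
    # Forward pass, caching each layer's input activations.
    activations = [x]
    for W, b in zip(Ws, bs):
        activations.append(np.maximum(W @ activations[-1] + b, 0.0))

    # Initialize relevance with the network's output scores.
    R = activations[-1]

    # Backward pass: redistribute relevance layer by layer.
    for W, b, a in zip(reversed(Ws), reversed(bs), reversed(activations[:-1])):
        z = W @ a + b             # pre-activations of this layer
        z = z + eps * np.sign(z)  # epsilon stabilizer avoids division by zero
        s = R / z                 # relevance per unit of pre-activation
        R = a * (W.T @ s)         # redistribute relevance onto the layer's inputs
    return R                      # per-input-feature relevance scores

# Hypothetical usage: a two-layer network mapping 5 -> 8 -> 3 features.
rng = np.random.default_rng(0)
Ws = [rng.normal(size=(8, 5)), rng.normal(size=(3, 8))]
bs = [np.zeros(8), np.zeros(3)]
print(lrp_epsilon(Ws, bs, rng.normal(size=5)))
```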
Through experiments on both synthetic and real-world datasets, we demonstrate that our method identifies critical geometric structures and outperforms competing baselines. These results indicate that our method provides significantly enhanced explanations for equivariant GNNs. Our code has been released as part of the AIRS library (https://github.com/divelab/AIRS/).
Lay Summary: Understanding how machine learning models make decisions is crucial, especially when they analyze complex 3D structures like molecules or physical systems. While there are many tools to explain decisions made by models working on simpler, 2D data, these tools often fall short when applied to advanced models that work with 3D information. In this work, we focus on a special type of model called equivariant graph neural networks, which are designed to handle 3D geometric data in a way that respects its spatial structure. We develop a new explanation method called EquiGX that helps us see how these models arrive at their predictions. EquiGX works by breaking down the model’s output into contributions from each layer, tracing this back to the input data, and showing which 3D features matter most.
We tested our method on both artificial and real-world datasets and found that it gives clearer and more accurate explanations than existing approaches. This work helps open the “black box” of 3D deep learning models, making them more transparent and trustworthy.
Primary Area: Social Aspects->Accountability, Transparency, and Interpretability
Keywords: equivariant GNNs, XAI
Submission Number: 12944