Abstract: Graph Neural Networks (GNNs) have become a powerful tool for modeling molecular data. To improve their reliability and interpretability, various graph explanation methods have been proposed to identify the key molecular substructures that drive model predictions. Many graph explainers introduce soft masks to enable gradient-based optimization and then discretize the optimized masks to obtain explanatory subgraphs. While these methods perform well for 2D GNNs, there is growing demand for explanation techniques suited to 3D GNNs, which often surpass 2D GNNs in performance. However, existing explainers struggle with 3D GNNs because cutoff-based 3D graph construction yields denser graphs, with the number of edges growing quadratically in the number of atoms. Motivated by this, we identify key sources of explanation error and derive an upper bound that decomposes it into two components: (i) the optimized soft-mask loss and (ii) the discrepancy introduced when the soft mask is discretized to form the explanatory subgraph. Our theoretical analysis shows that the second component is governed by the soft-to-discrete mask gap and is amplified by graph density, making it particularly challenging for dense 3D graphs. To bridge this gap, we adopt an energy-based formulation that assigns two energy values to each atom, corresponding to its important and unimportant states; the explanation model becomes more confident as the distinction between these two states grows sharper. By optimizing the energy values to separate the two states, we minimize both components of the bound and identify a stable subgraph with high explanation fidelity. Experiments with various 3D backbone models on widely used datasets validate our method's effectiveness in providing accurate and reliable explanations for 3D molecular graphs.
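To make the abstract's core idea concrete, below is a minimal sketch of an energy-based per-atom mask of the kind described: each atom carries two energies (important / unimportant), a Boltzmann-style soft mask supports gradient-based optimization, and discretization yields the explanatory subgraph. The tensor names, temperature, and 0.5 threshold are illustrative assumptions, not the authors' released implementation.

```python
import torch

# Hypothetical setup: energies[i, 0] is atom i's "important" energy,
# energies[i, 1] its "unimportant" energy (both learnable).
num_atoms = 8
energies = torch.randn(num_atoms, 2, requires_grad=True)

def soft_mask(energies: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    # Boltzmann-style probability of the "important" state: lower energy
    # means higher probability. A wider gap between an atom's two energies
    # pushes its mask value toward 0 or 1, shrinking the soft-to-discrete gap
    # that the abstract's second error component depends on.
    return torch.softmax(-energies / temperature, dim=-1)[:, 0]

m_soft = soft_mask(energies)                 # values in (0, 1), differentiable
m_hard = (m_soft > 0.5).float()              # discretized explanatory-subgraph mask
gap = (m_soft - m_hard).abs().mean()         # illustrative proxy for the discretization discrepancy
```

Under this reading, optimizing the energies to separate the two states simultaneously reduces the soft-mask loss and the discretization gap, which is the mechanism the abstract attributes to the bound's two components.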
Submission Type: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Shinichi_Nakajima2
Submission Number: 6870