Interpretable Graph Information Bottleneck via Disentanglement Learning for Graph-Level Representation Learning
Abstract: Graph-level representation learning requires models to capture complex structural patterns while maintaining interpretability for downstream applications. However, existing methods often produce entangled representations that mix different types of graph information, limiting both performance and explainability. We propose an Interpretable Graph Information Bottleneck via Disentanglement Learning (IGIB-DL) approach that learns disentangled graph representations through principled information compression. Our framework integrates a Graph Information Bottleneck mechanism to extract task-relevant features while filtering out redundant information, combined with a Disentanglement Learning component that separates structural, semantic, and contextual factors in the learned representations. To enhance interpretability, we introduce a Factor-wise Attribution mechanism that identifies which disentangled components contribute most to the final predictions. Furthermore, a Variational Information Constraint ensures that different factors remain independent while collectively preserving essential graph characteristics. Comprehensive experiments on seven graph classification benchmarks show that our IGIB-DL method not only achieves competitive performance but also provides superior interpretability compared to existing graph representation learning approaches.
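The abstract does not give the paper's exact formulation, but the Graph Information Bottleneck mechanism it describes is conventionally trained with a variational objective: a task loss on the compressed graph representation plus a KL penalty that discourages encoding information beyond a simple prior. The sketch below is a minimal, hypothetical illustration of such an objective, assuming a Gaussian posterior over the graph embedding z and a standard-normal prior; the function and variable names (`kl_gaussian`, `vib_objective`, `beta`) are our own, not from the paper.

```python
import numpy as np

def kl_gaussian(mu, logvar):
    """Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ), per sample."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)

def vib_objective(mu, logvar, logits, labels, beta=1e-3):
    """Illustrative information-bottleneck loss for graph classification.

    mu, logvar : (batch, dim) Gaussian posterior parameters of graph embeddings
    logits     : (batch, classes) task predictions from the sampled embedding
    labels     : (batch,) ground-truth class indices
    beta       : weight of the compression (KL) term
    """
    # Softmax cross-entropy: the task-relevance term of the bottleneck
    shifted = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=-1, keepdims=True)
    ce = -np.log(probs[np.arange(len(labels)), labels] + 1e-12)
    # KL term: compresses the representation toward the prior,
    # filtering out task-irrelevant graph information
    return float(np.mean(ce + beta * kl_gaussian(mu, logvar)))
```

In practice the factor-wise disentanglement the abstract describes would partition z into structural, semantic, and contextual blocks and add an independence penalty between them; that component is not sketched here.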