Track: long paper (up to 4 pages)
Keywords: Graph Neural Networks, Graph Pooling
Abstract: Hierarchical Graph Neural Networks (GNNs) integrate pooling layers that progressively coarsen graphs to generate graph representations. These GNNs are provably more expressive than traditional GNNs that rely solely on message passing. Prior work has shown that hierarchical architectures nonetheless fail to deliver empirical performance gains, but those findings are based on small datasets where structure-unaware baselines often perform well, limiting their generalizability. In this work, we comprehensively investigate the role of graph structure in pooling-based GNNs. Our analysis includes: (1) reproducing previous studies on larger, more diverse datasets, (2) assessing the robustness of different architectures to structural perturbations applied at varying network depths, and (3) comparing against structure-agnostic baselines. Our results confirm previous findings and demonstrate that they hold across the newly tested datasets, even when graph structure is meaningful for the task. Interestingly, we observe that hierarchical GNNs recover performance under structural perturbations more effectively than their flat counterparts. These findings highlight both the potential and the limitations of pooling-based GNNs, motivating the need for more structure-sensitive benchmarks and evaluation frameworks.
Anonymization: This submission has been anonymized for double-blind review by removing identifying information such as names, affiliations, and URLs.
Submission Number: 25