GraphFractalNet: A Fractal-Inspired Sparse Transformer for Ultra-Scalable Graph Representation Learning
Keywords: Graph Neural Networks (GNNs), Graph Transformers, Spectral Graph Methods, Graph Representation Learning
Abstract: Graph neural networks (GNNs) and Transformer-based architectures have achieved strong performance in graph representation learning, yet they often struggle with scalability, over-smoothing, and limited expressiveness on complex topological patterns. We propose GraphFractalNet, a novel framework that integrates spectral embeddings, dynamic graph rewiring, and a fractal attention mechanism to capture both global and hierarchically self-similar structure in graphs. Truncated spectral bases provide topology-aware node embeddings, while the rewiring module adaptively optimizes edge connectivity to improve information flow and sparsity. The fractal attention layer further constrains attention to recursively clustered subgraphs, achieving near-linear $O(N \log \log N)$ complexity per layer while retaining expressive power beyond $k$-Weisfeiler–Lehman tests. Theoretically, we establish generalization bounds via a spectral Rademacher complexity analysis and prove that GraphFractalNet is strictly more expressive than standard message-passing GNNs. Empirically, GraphFractalNet delivers state-of-the-art results on both molecular property prediction and large-scale node classification, consistently improving accuracy and scalability. Ablation studies confirm that spectral embeddings, dynamic rewiring, and fractal attention each contribute to the model's effectiveness and efficiency. Overall, GraphFractalNet is a principled and scalable architecture that integrates spectral techniques with Transformer-inspired designs for graph learning.
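For intuition, below is a minimal sketch (plain NumPy/SciPy, not the authors' code) of two of the ingredients described in the abstract: topology-aware node embeddings from a truncated spectral basis of the normalized graph Laplacian, and self-attention restricted to recursively clustered subgraphs. The function names, the two-way Fiedler-vector split, the untrained attention projections, and the random-graph usage example are all illustrative assumptions; the paper's dynamic rewiring module and its exact fractal clustering scheme are not reproduced here.

```python
import numpy as np
from scipy.sparse import csgraph


def truncated_spectral_embedding(adj: np.ndarray, k: int) -> np.ndarray:
    """k smallest nontrivial eigenvectors of the normalized Laplacian,
    used as topology-aware node features (a Laplacian positional encoding)."""
    lap = csgraph.laplacian(adj, normed=True)
    vals, vecs = np.linalg.eigh(lap)        # ascending eigenvalues; dense solver for clarity
    return vecs[:, 1:k + 1]                 # drop the trivial constant eigenvector


def recursive_spectral_clusters(adj: np.ndarray, max_size: int) -> np.ndarray:
    """Recursively bisect the graph along its Fiedler vector until every
    cluster has at most max_size nodes; returns one cluster id per node."""
    labels = np.zeros(adj.shape[0], dtype=int)

    def split(idx: np.ndarray, next_id: int) -> int:
        if len(idx) <= max_size:
            labels[idx] = next_id
            return next_id + 1
        fiedler = truncated_spectral_embedding(adj[np.ix_(idx, idx)], k=1)[:, 0]
        left = idx[fiedler <= np.median(fiedler)]
        right = idx[fiedler > np.median(fiedler)]
        if len(left) == 0 or len(right) == 0:   # degenerate split: stop recursing
            labels[idx] = next_id
            return next_id + 1
        next_id = split(left, next_id)
        return split(right, next_id)

    split(np.arange(adj.shape[0]), 0)
    return labels


def cluster_restricted_attention(x: np.ndarray, clusters: np.ndarray) -> np.ndarray:
    """Single-head self-attention computed independently inside each cluster,
    so cost depends on cluster size rather than on the whole graph."""
    out = np.zeros_like(x)
    scale = np.sqrt(x.shape[1])
    for c in np.unique(clusters):
        idx = np.where(clusters == c)[0]
        q = k = v = x[idx]                      # untrained projections, for brevity
        scores = q @ k.T / scale
        attn = np.exp(scores - scores.max(axis=1, keepdims=True))
        attn /= attn.sum(axis=1, keepdims=True)
        out[idx] = attn @ v
    return out


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = (rng.random((64, 64)) < 0.1).astype(float)
    A = np.triu(A, 1)
    A = A + A.T                                  # random symmetric adjacency, no self-loops
    pe = truncated_spectral_embedding(A, k=8)    # (64, 8) spectral node features
    cl = recursive_spectral_clusters(A, max_size=16)
    h = cluster_restricted_attention(pe, cl)
    print(pe.shape, cl.max() + 1, h.shape)
```

Restricting attention to clusters of bounded size is what keeps the per-layer cost well below the quadratic cost of full attention; how the clusters are built, sized, and recombined across layers is where the proposed fractal scheme would differ from this toy bisection.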
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 18808