Keywords: Graph Sparsification, Self-Supervised Learning, Constrained Optimization
Abstract: Graph sparsification has emerged as a promising approach to improving efficiency and removing redundant or noisy edges in large-scale graphs. However, existing methods often rely on task-specific labels, limiting their applicability in label-scarce scenarios, and they rarely address the residual noise that remains after sparsification. To address these issues, we jointly consider sparsity and robustness. In this work, we present GRAPHSPA, a self-supervised graph sparsification framework that constructs compact yet informative subgraphs without requiring labels while explicitly mitigating residual noise. We formulate sparsification as a constrained optimization problem in which flatness is incorporated into the objective. Specifically, we solve this problem with an augmented Lagrangian scheme that progressively enforces the target sparsity. We also train the encoder to be robust to perturbations, guiding optimization toward flatter regions of the loss landscape, reducing sensitivity to residual noise, and improving generalization. We theoretically show that this framework guarantees stable convergence while addressing both sparsity and robustness. Extensive experiments on benchmark datasets show that GRAPHSPA consistently outperforms baselines across various sparsity ratios and preserves cluster structure in t-SNE visualizations. Notably, it performs strongly and consistently on both large-scale and heterophilic datasets, validating its applicability to real-world scenarios. These results highlight GRAPHSPA as a principled and reliable framework for graph sparsification without labels and under residual noise.
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 17772
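A minimal sketch in PyTorch of the two mechanisms the abstract describes: an augmented Lagrangian update that drives a soft edge mask toward a target sparsity, combined with a SAM-style perturbation step so the update is taken at a nearby worst-case point (flatness). GRAPHSPA's actual encoder, self-supervised loss, and hyperparameters are not given here, so the loss stand-in ssl_loss, the edge-logit parameterization, and all constants below are illustrative assumptions, not the authors' implementation.

import torch

torch.manual_seed(0)

num_edges = 1000
target_sparsity = 0.5   # assumed: fraction of edges to keep
epsilon = 0.05          # assumed: perturbation radius for the flatness step
rho = 1.0               # penalty coefficient, grown over training
lmbda = torch.zeros(()) # Lagrange multiplier for the sparsity constraint

# Soft edge-keep probabilities are parameterized by per-edge logits.
edge_logits = torch.randn(num_edges, requires_grad=True)
opt = torch.optim.Adam([edge_logits], lr=1e-2)

def ssl_loss(mask):
    # Stand-in for the (unpublished) self-supervised objective; a toy
    # quadratic so the sketch runs end-to-end.
    return ((mask - 0.7) ** 2).mean()

for step in range(200):
    mask = torch.sigmoid(edge_logits)
    g = mask.mean() - target_sparsity          # constraint residual, driven to 0

    # SAM-style flatness: ascend toward higher loss within radius epsilon.
    loss = ssl_loss(mask)
    grad = torch.autograd.grad(loss, edge_logits)[0]
    with torch.no_grad():
        e = epsilon * grad / (grad.norm() + 1e-12)
        edge_logits.add_(e)                    # move to the perturbed point

    # Augmented Lagrangian objective evaluated at the perturbed point.
    mask_p = torch.sigmoid(edge_logits)
    g_p = mask_p.mean() - target_sparsity
    al = ssl_loss(mask_p) + lmbda * g_p + 0.5 * rho * g_p ** 2

    opt.zero_grad()
    al.backward()
    with torch.no_grad():
        edge_logits.sub_(e)                    # undo perturbation before the update
    opt.step()

    # Dual ascent on the multiplier; growing rho progressively enforces
    # the target sparsity rather than imposing it from the start.
    with torch.no_grad():
        lmbda += rho * g.detach()
        if step % 50 == 49:
            rho *= 2.0

print(f"kept fraction ~ {torch.sigmoid(edge_logits).mean().item():.3f} "
      f"(target {target_sparsity})")

The dual-ascent update on lmbda together with the geometrically increasing rho is the standard augmented Lagrangian recipe the abstract alludes to: early iterations can trade constraint satisfaction for the self-supervised objective, while the constraint is tightened over time until the target sparsity is met.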