Multi-view Graph Condensation via Tensor Decomposition

Published: 23 Sept 2025, Last Modified: 27 Oct 2025 · NPGML Poster · CC BY 4.0
Keywords: Graph Condensation, Tensor Decomposition, Graph Neural Networks, Matrix tri-factorization
TL;DR: A graph condensation method that leverages tensor decomposition
Abstract: Training Graph Neural Networks (GNNs) on large-scale graphs presents significant computational challenges due to the resources required for their storage and processing. Graph Condensation has emerged as a promising solution to reduce these demands by learning a compact graph that preserves the essential information of the original one while maintaining the GNN's performance. Despite their efficacy, current condensation approaches frequently rely on computationally intensive bi-level optimization. Moreover, they fail to maintain a mapping between synthetic and original nodes, limiting the interpretability of the model's decisions. Meanwhile, a wide range of decomposition techniques have been applied to learn linear or multi-linear functions from graphs, offering a more transparent and less resource-intensive alternative; their applicability to graph condensation, however, remains unexplored. This paper addresses this gap and proposes Multi-view Graph Condensation via Tensor Decomposition (GCTD), a novel method that investigates the extent to which such techniques can synthesize a smaller graph while achieving comparable downstream-task performance. Experiments on six datasets show that GCTD effectively reduces graph size while preserving GNN performance. Our code is available at https://anonymous.4open.science/r/gctd-345A.
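As a rough illustration of the tri-factorization idea behind this line of work (this is a hedged sketch, not the paper's GCTD algorithm), the snippet below condenses an n-node adjacency matrix A into a k-node core S with an explicit mapping factor P, so that A ≈ P S Pᵀ. The P factor retains a soft assignment between synthetic and original nodes; here the factorization is realized via a simple truncated eigendecomposition, and all function and variable names are illustrative assumptions.

```python
import numpy as np

def tri_factor_condense(A, k):
    """Illustrative tri-factorization A ~= P @ S @ P.T (not the paper's method).

    A : (n, n) symmetric adjacency matrix
    k : number of synthetic nodes (k << n)
    Returns P (n, k), the original-to-synthetic node mapping,
    and S (k, k), the condensed "adjacency" core.
    """
    # Symmetric eigendecomposition; keep the k largest-magnitude components.
    w, U = np.linalg.eigh(A)
    idx = np.argsort(-np.abs(w))[:k]
    P = U[:, idx]            # columns map original nodes onto synthetic nodes
    S = np.diag(w[idx])      # condensed core acting as the synthetic graph
    return P, S

# Toy graph: two 3-node cliques joined by a single bridge edge.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

P, S = tri_factor_condense(A, k=2)
# Reconstruction error is bounded by the discarded spectrum, so the
# 2-node condensation still captures the dominant community structure.
err = np.linalg.norm(P @ S @ P.T - A)
```

Because P is kept explicitly, each synthetic node can be traced back to the original nodes that load on it, which is the interpretability property the abstract contrasts against prior bi-level-optimization approaches.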
Submission Number: 66