Structured Graph Reduction for Efficient GNN

24 Sept 2023 (modified: 11 Feb 2024) — Submitted to ICLR 2024
Primary Area: learning on graphs and other geometries & topologies
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: structured graph coarsening, graph neural network, node classification, convex optimization
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: Scalability remains a prominent challenge for Graph Neural Networks (GNNs) when dealing with large-scale graph data. Graph coarsening is a technique that reduces a large graph to a smaller, tractable graph. A good-quality coarsened graph with specific structural properties is needed to achieve strong performance on downstream applications. However, existing coarsening methods cannot produce coarsened graphs with desirable properties such as sparsity, scale-free characteristics, bipartite structure, or multi-component structure. This work introduces a unified optimization framework for learning coarsened graphs with such desirable structures and properties. The proposed problems are solved efficiently by leveraging block majorization-minimization together with $\log$-determinant, structured, and spectral regularization. Extensive experiments on real benchmark datasets demonstrate the proposed framework's efficacy in preserving structure in the coarsened graphs. Empirically, when no prior knowledge of the graph's structure is available, constructing a multi-component coarsened graph consistently outperforms state-of-the-art methods.
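To make the coarsening idea in the abstract concrete, below is a minimal sketch (not the paper's learned optimization framework) showing how a loading matrix C maps an n-node graph Laplacian to a k-supernode coarsened Laplacian, together with a generic log-determinant term of the kind often used to encourage a connected, well-conditioned coarsened graph. The function names, the hard-assignment construction of C, and the pseudo-determinant surrogate are illustrative assumptions; the paper's actual variables and regularizers may differ.

```python
import numpy as np

def coarsen_laplacian(L, assignment, k):
    """Coarsen an n-node graph Laplacian L into a k-supernode Laplacian.

    assignment[i] = j maps original node i to supernode j. This hard
    assignment builds a loading matrix C (n x k); the paper instead
    learns C (and structure) via a convex optimization framework.
    """
    n = L.shape[0]
    C = np.zeros((n, k))
    C[np.arange(n), assignment] = 1.0   # hard-assignment loading matrix
    Lc = C.T @ L @ C                    # coarsened Laplacian (k x k)
    return C, Lc

def logdet_term(Lc, eps=1e-8):
    """Pseudo log-determinant of a Laplacian: sum of log of nonzero
    eigenvalues. A common surrogate encouraging connectivity; used here
    only as an illustration of a log-determinant regularizer."""
    eigvals = np.linalg.eigvalsh(Lc)
    nz = eigvals[eigvals > eps]         # drop the trivial zero eigenvalue(s)
    return np.sum(np.log(nz))

# Toy example: a 4-cycle coarsened into 2 supernodes.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
C, Lc = coarsen_laplacian(L, assignment=np.array([0, 0, 1, 1]), k=2)
print(Lc)               # [[ 2. -2.] [-2.  2.]] -- a valid 2-node Laplacian
print(logdet_term(Lc))  # log(4), the nonzero-eigenvalue log-det value
```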
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
Supplementary Material: zip
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 9000