Keywords: graph coarsening, graph machine learning
Abstract: Graph coarsening aims to diminish the size of a graph to lighten its memory footprint, and has numerous applications in graph signal processing and machine learning. It is usually defined using a reduction matrix and a lifting matrix, which, respectively, allow one to project a graph signal from the original graph onto the coarsened one and to lift it back. This results in a loss of information measured by the so-called Restricted Spectral Approximation (RSA). Most coarsening frameworks impose a fixed relationship between the reduction and lifting matrices, generally as pseudo-inverses of each other, and seek to define a coarsening that minimizes the RSA.
In this paper, we remark that the roles of these two matrices are not entirely symmetric: indeed, putting constraints on the *lifting matrix alone* ensures the existence of important objects such as the coarsened graph's adjacency matrix or Laplacian.
In light of this, we introduce a more general notion of reduction matrix that is *not* necessarily the pseudo-inverse of the lifting matrix.
We establish a taxonomy of "admissible" families of reduction matrices, discuss the different properties that they must satisfy, and examine whether or not they admit a closed-form description. We show that, for a *fixed* coarsening represented by a fixed lifting matrix, the RSA can be *further* reduced simply by modifying the reduction matrix. We explore different examples, including some based on a constrained optimization of the RSA. Since this criterion has also been linked to the performance of Graph Neural Networks, we also illustrate the impact of these choices on different node classification tasks on coarsened graphs.
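A minimal numerical sketch (not the authors' code) of the setup the abstract describes, assuming the common formulation: a lifting matrix Q (n x r) maps coarse signals back to the original graph, a reduction matrix R (r x n) maps original signals to the coarse graph, and an RSA-style error measures how well Q @ R preserves smooth signals in the Laplacian seminorm. All names (Q, R, partition, rsa_like_error) and the degree-weighted alternative reduction are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small connected graph (ring plus random edges) and its combinatorial Laplacian.
n = 12
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = 1.0                      # ring keeps the graph connected
A = np.maximum(A, np.triu((rng.random((n, n)) < 0.2).astype(float), 1))
A = np.maximum(A, A.T)
L = np.diag(A.sum(axis=1)) - A

# A coarsening given by a partition of the nodes into r groups,
# encoded by a lifting matrix Q whose columns indicate the groups.
r = 4
partition = rng.integers(0, r, size=n)
partition[:r] = np.arange(r)                     # ensure every group is nonempty
Q = np.zeros((n, r))
Q[np.arange(n), partition] = 1.0

# Conventional choice: the reduction matrix is the pseudo-inverse of Q.
R_pinv = np.linalg.pinv(Q)

# An alternative reduction matrix that is *not* pinv(Q): a degree-weighted
# average over each group (purely illustrative).
deg = A.sum(axis=1)
R_deg = (Q * deg[:, None]).T
R_deg = R_deg / R_deg.sum(axis=1, keepdims=True)

def rsa_like_error(L, Q, R, k=3):
    """Proxy for the RSA constant: relative error, in the Laplacian seminorm,
    of x -> Q R x on the k smoothest nontrivial Laplacian eigenvectors
    (a lower bound on the supremum over their span)."""
    _, V = np.linalg.eigh(L)
    U = V[:, 1:k + 1]                            # skip the constant eigenvector
    Pi = Q @ R
    worst = 0.0
    for i in range(U.shape[1]):
        x = U[:, i]
        err = x - Pi @ x
        worst = max(worst, np.sqrt(err @ L @ err) / np.sqrt(x @ L @ x))
    return worst

# Same lifting matrix (same coarsening), two different reduction matrices.
print("pseudo-inverse reduction :", rsa_like_error(L, Q, R_pinv))
print("degree-weighted reduction:", rsa_like_error(L, Q, R_deg))
```

The point of the sketch is only structural: the coarsening (Q) is held fixed while the reduction matrix varies, so any change in the printed error comes from the choice of R alone, which is the degree of freedom the abstract proposes to exploit.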
Supplementary Material: zip
Primary Area: Theory (e.g., control theory, learning theory, algorithmic game theory)
Submission Number: 21031