Abstract: Spectral graph neural networks (GNNs) have shown great advantages in various graph-related fields. The core of spectral GNNs is the graph convolution operator based on graph spectral theory. Polynomials offer an efficient way to design learnable graph convolution operators. However, existing polynomial-based spectral GNNs face three limitations: i) large approximation error, ii) non-orthogonality, and iii) poor interpretability. To address these limitations, this paper proposes a novel spectral GNN called GLN via generalized Laguerre approximation. Through theoretical analysis, we show the exponentially decaying approximation error and flexible orthogonality of the generalized Laguerre polynomials. We also show the strong representation ability of GLN: many existing graph filters and GNNs are special cases of GLN. Experiments demonstrate the superior performance of GLN on various benchmark datasets.
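To make the polynomial-filter idea concrete, below is a minimal sketch of a spectral filter built from generalized Laguerre polynomials evaluated at a symmetric normalized graph Laplacian via their three-term recurrence. This is an illustrative assumption about how such a filter could be realized, not the paper's official GLN implementation; the names `laguerre_filter`, `theta`, and `alpha` are hypothetical.

```python
import numpy as np

def laguerre_filter(L_norm, X, theta, alpha=0.0):
    """Apply g(L) X = sum_k theta[k] * L_k^{(alpha)}(L) X, where L_k^{(alpha)}
    are generalized Laguerre polynomials computed with the standard
    three-term recurrence. Illustrative sketch only."""
    # L_0^{(alpha)}(L) X = X
    T_prev = X
    out = theta[0] * T_prev
    if len(theta) == 1:
        return out
    # L_1^{(alpha)}(L) X = (1 + alpha) X - L X
    T_curr = (1.0 + alpha) * X - L_norm @ X
    out = out + theta[1] * T_curr
    # (k+1) L_{k+1}^{(a)}(x) = (2k+1+a-x) L_k^{(a)}(x) - (k+a) L_{k-1}^{(a)}(x)
    for k in range(1, len(theta) - 1):
        T_next = ((2 * k + 1 + alpha) * T_curr - L_norm @ T_curr
                  - (k + alpha) * T_prev) / (k + 1)
        out = out + theta[k + 1] * T_next
        T_prev, T_curr = T_curr, T_next
    return out

# Toy usage: 4-node path graph, random node features, K = 3 filter coefficients.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
L_norm = np.eye(4) - D_inv_sqrt @ A @ D_inv_sqrt  # symmetric normalized Laplacian
X = np.random.randn(4, 2)
theta = np.array([0.5, 0.3, 0.2])  # would be learnable in a GNN layer
Z = laguerre_filter(L_norm, X, theta, alpha=1.0)
```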