Keywords: Supervised Node Classification, Spectral Embedding, Social Graphs, Neural Models
Abstract: Network embedding methods compute geometric representations of graphs that render various prediction problems amenable to machine learning techniques. Spectral network embeddings are based on the computation of eigenvectors of a normalized graph Laplacian. When coupled with standard classifiers, spectral embeddings yield strong baseline performance in node classification tasks. Remarkably, it has recently been shown that these `base' classifications, followed by a simple `Correct and Smooth' procedure, reach state-of-the-art performance on widely used benchmarks. All these recent works employ classifiers that are agnostic to the nature of the underlying embedding. We present simple neural models that leverage fundamental geometric properties of spectral embeddings and obtain significantly improved classification accuracy over commonly used standard classifiers. Our results are based on a specific variant of spectral clustering that is not well known but is presently the only variant known to have analyzable theoretical properties. We provide a \texttt{PyTorch} implementation of our classifier along with code for the fast computation of spectral embeddings.
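To make the embedding step concrete, here is a minimal sketch of a standard spectral embedding: the bottom eigenvectors of the symmetric normalized Laplacian $L = I - D^{-1/2} A D^{-1/2}$, computed with SciPy. This is an illustrative baseline only; the paper uses a specific, less common variant of spectral clustering, and the function name `spectral_embedding` and the toy graph are assumptions for this sketch, not the authors' code.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def spectral_embedding(adj, k=8):
    """Embed each node as its coordinates in the k eigenvectors of the
    symmetric normalized Laplacian with smallest eigenvalues.
    (Illustrative sketch; not the paper's specific variant.)"""
    adj = sp.csr_matrix(adj, dtype=float)
    deg = np.asarray(adj.sum(axis=1)).ravel()
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))  # guard isolated nodes
    D = sp.diags(d_inv_sqrt)
    lap = sp.eye(adj.shape[0]) - D @ adj @ D  # L = I - D^{-1/2} A D^{-1/2}
    # smallest-algebraic eigenpairs of the (PSD) normalized Laplacian
    vals, vecs = eigsh(lap, k=k, which='SA')
    return vecs  # rows are node embeddings, columns are eigenvectors

# Toy example: two triangles joined by a single edge
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
rows = [u for u, v in edges] + [v for u, v in edges]
cols = [v for u, v in edges] + [u for u, v in edges]
A = sp.csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(6, 6))
emb = spectral_embedding(A, k=2)  # one row of 2 coordinates per node
```

The resulting rows can then be fed to any downstream classifier; the paper's point is that classifiers aware of the geometry of these coordinates outperform embedding-agnostic ones.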
One-sentence Summary: Scalable neural model leveraging geometric properties of spectral embeddings for improved classification performance.