Graph Auto-Encoder via Neighborhood Wasserstein Reconstruction

ICLR 2022 Poster. 29 Sept 2021 (edited 18 Feb 2022)
  • Keywords: graph representation learning, unsupervised learning, autoencoder, wasserstein distance
  • Abstract: Graph neural networks (GNNs) have drawn significant research attention recently, mostly under the setting of semi-supervised learning. When task-agnostic representations are preferred or supervision is simply unavailable, the auto-encoder framework comes in handy with a natural graph reconstruction objective for unsupervised GNN training. However, existing graph auto-encoders are designed to reconstruct the direct links, so GNNs trained in this way are only optimized towards proximity-oriented graph mining tasks, and will fall short when the topological structures matter. In this work, we revisit the graph encoding process of GNNs, which essentially learns to encode the neighborhood information of each node into an embedding vector, and propose a novel graph decoder to reconstruct the entire neighborhood information regarding both proximity and structure via Neighborhood Wasserstein Reconstruction (NWR). Specifically, from the GNN embedding of each node, NWR jointly predicts its node degree and neighbor feature distribution, where the distribution prediction adopts an optimal-transport loss based on the Wasserstein distance. Extensive experiments on both synthetic and real-world network datasets show that the unsupervised node representations learned with NWR are much more advantageous in structure-oriented graph mining tasks, while also achieving competitive performance in proximity-oriented ones.
  • One-sentence Summary: We study unsupervised graph representation learning and propose a novel decoder based on neighborhood reconstruction with Wasserstein distance to facilitate the GNN encoding of entire neighborhood information beyond direct links.
  • Supplementary Material: zip
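The abstract describes NWR as jointly predicting each node's degree and its neighbor feature distribution under a Wasserstein-distance loss. The paper's actual decoder and loss are in the PDF; as a rough illustration only, the snippet below sketches one common way to compute an empirical 2-Wasserstein loss between a predicted and an observed set of neighbor features (uniform weights, equal set sizes, exact assignment via `scipy`) combined with a squared-error degree term. The function names and the exact-assignment choice are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def wasserstein2_empirical(pred: np.ndarray, true: np.ndarray) -> float:
    """Squared 2-Wasserstein distance between two equal-size empirical
    distributions (uniform weights) of neighbor feature vectors.

    Computed exactly via optimal assignment; the paper uses an
    optimal-transport loss, but this exact solver is an illustrative
    stand-in, not the authors' method.
    """
    # Pairwise squared Euclidean transport costs, shape (n, n).
    cost = ((pred[:, None, :] - true[None, :, :]) ** 2).sum(axis=-1)
    rows, cols = linear_sum_assignment(cost)  # optimal matching
    return float(cost[rows, cols].mean())


def nwr_style_loss(pred_degree: float, true_degree: float,
                   pred_neighbors: np.ndarray,
                   true_neighbors: np.ndarray) -> float:
    """Hypothetical combined objective: degree regression plus
    neighbor-distribution reconstruction."""
    degree_loss = (pred_degree - true_degree) ** 2
    ot_loss = wasserstein2_empirical(pred_neighbors, true_neighbors)
    return degree_loss + ot_loss
```

Because the empirical Wasserstein term matches point sets rather than comparing them element-wise, it is invariant to the ordering of neighbors, which is why it suits unordered neighborhood reconstruction.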