GRAND++: Graph Neural Diffusion with a Source Term


Sep 29, 2021 (edited Nov 15, 2021) · ICLR 2022 Conference Blind Submission · Readers: Everyone
  • Keywords: graph deep learning, low-labeling rates, diffusion on graphs, random walk
  • Abstract: We propose GRAph Neural Diffusion with a source term (GRAND++) for graph deep learning with a limited number of labeled nodes, i.e., at a low labeling rate. GRAND++ is a class of continuous-depth graph deep learning architectures whose theoretical underpinning is the diffusion process on graphs with a source term. The source term guarantees two interesting theoretical properties of GRAND++: (i) the representation of graph nodes, under the dynamics of GRAND++, will not converge to a constant vector over all nodes even as time goes to infinity, which mitigates the over-smoothing issue of graph neural networks and enables graph learning with very deep architectures; and (ii) GRAND++ can provide accurate classification even when the model is trained with a very limited number of labeled training samples. We experimentally verify the above two advantages on various graph deep learning benchmark tasks, showing a significant improvement over many existing graph neural networks.
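The over-smoothing claim in the abstract can be seen in a toy numerical sketch (this is an illustration of the underlying idea, not the authors' implementation; the graph, the Euler discretization, and the choice of placing the source on a single "labeled" node are all assumptions). Plain graph diffusion dx/dt = -Lx drives all node features to a constant vector, while adding a fixed source term dx/dt = -Lx + s keeps the node representations separated for arbitrarily long time:

```python
import numpy as np

# Toy 4-node path graph; random-walk Laplacian L = I - D^{-1} A.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.eye(4) - np.diag(1.0 / A.sum(axis=1)) @ A

x0 = np.array([1.0, 0.0, 0.0, 0.0])      # initial scalar node features
source = np.array([1.0, 0.0, 0.0, 0.0])  # assumed: source on one "labeled" node

# Forward-Euler integration of both dynamics (step size chosen for stability).
dt, steps = 0.1, 5000
x_plain, x_src = x0.copy(), x0.copy()
for _ in range(steps):
    x_plain = x_plain - dt * (L @ x_plain)             # dx/dt = -L x
    x_src = x_src - dt * (L @ x_src) + dt * source     # dx/dt = -L x + s

# Spread (max - min) across nodes: zero means the features collapsed
# to a constant vector, i.e., over-smoothing.
print("plain diffusion spread:", np.ptp(x_plain))  # → ~0 (collapses)
print("with source term spread:", np.ptp(x_src))   # stays bounded away from 0
```

The mean of `x_src` grows linearly in time (the source keeps injecting mass), but the differences between nodes settle to a nonzero steady profile, which is the property the abstract attributes to the source term.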
  • One-sentence Summary: We propose GRAND++ for deep graph learning with limited labeled training data