What graph neural networks cannot learn: depth vs width

Anonymous

Sep 25, 2019 · ICLR 2020 Conference Blind Submission
  • Abstract: This paper theoretically studies the capacity limits of graph neural networks (GNN) falling within the message-passing framework. Two main results are presented. First, GNN are shown to be Turing universal under sufficient conditions on their depth, width, node identification, and layer expressiveness. Second, it is discovered that GNN can lose a significant portion of their power when their depth and width are restricted. The proposed impossibility statements stem from a new technique that enables the repurposing of seminal results from theoretical computer science and leads to lower bounds for an array of decision, optimization, and estimation problems involving graphs. Strikingly, several of these problems are deemed impossible unless the product of a GNN's depth and width exceeds (a function of) the graph size; this dependence remains significant even for tasks that appear simple or when approximation is allowed.
  • Keywords: graph neural networks, capacity, impossibility results, lower bounds
  • TL;DR: Several graph problems are impossible unless the product of a graph neural network's depth and width exceeds (a function of) the graph size; an informal reading of this bound is sketched below.
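
As an informal illustration only (the precise functions, constants, and problem classes are given in the paper, not restated here), the TL;DR can be read as a lower bound of the form

$$ d \cdot w \;=\; \Omega\!\big(f(n)\big), $$

where $d$ is the GNN's depth (number of message-passing rounds), $w$ its width (the dimension of node states and messages), $n$ the number of nodes in the input graph, and $f$ a problem-dependent increasing function of $n$. Per the abstract, this dependence persists even for tasks that appear simple and when only approximate solutions are required.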
