Universality of Neural Networks on Sets and Graphs

Published: 02 May 2023, Last Modified: 02 May 2023
Blogposts @ ICLR 2023 Conditional
Readers: Everyone
Keywords: graph neural networks, graph representation learning, deep sets, transformers, expressive power, graph isomorphism, weisfeiler-lehman, higher-order gnns
Abstract: Universal function approximation is one of the central tenets of theoretical deep learning research: the question of whether a specific neural network architecture can, in theory, approximate any function of interest. The ICLR paper "How Powerful are Graph Neural Networks?" shows that mathematically analysing the constraints of an architecture as a universal function approximator, and alleviating those constraints, can lead to more principled architecture choices, performance improvements, and long-term impact on the field. In the fields of learning on sets and learning on graphs in particular, universal function approximation is a well-studied property. The two fields are closely linked, because the need for permutation invariance in both cases leads to similar building blocks. Nevertheless, the two fields have evolved in parallel, often lacking awareness of developments in the other. This post aims to bring these two fields closer together, particularly from the perspective of universal function approximation.
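The shared building block the abstract alludes to is a permutation-invariant aggregation, as used in Deep Sets and in GNN message passing. A minimal sketch (the particular `phi` and `rho` below are illustrative placeholders, not functions from any of the papers): summing the per-element embeddings before applying the readout makes the output independent of input order.

```python
import numpy as np

def deep_sets(xs, phi, rho):
    # Permutation-invariant model: rho(sum_i phi(x_i)).
    # Summation is commutative, so any reordering of xs gives the same output.
    return rho(np.sum([phi(x) for x in xs], axis=0))

# Toy element-wise embedding and readout (illustrative choices only).
phi = lambda x: np.array([x, x ** 2])
rho = lambda h: float(h[0] + h[1])

a = deep_sets([1.0, 2.0, 3.0], phi, rho)
b = deep_sets([3.0, 1.0, 2.0], phi, rho)  # same multiset, permuted order
# a == b, since the sum aggregation ignores input order
```

The same pattern appears in GNN layers, where each node sums (or otherwise symmetrically pools) messages from its neighbours; the expressiveness question studied in "How Powerful are Graph Neural Networks?" is precisely which multiset functions such aggregations can represent.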
Blogpost Url: https://iclr-blogposts.github.io/2023/blog/2023/sets-and-graphs/
ICLR Papers: https://openreview.net/forum?id=ryGs6iA5Km, https://openreview.net/forum?id=SJU4ayYgl, https://openreview.net/forum?id=B1l2bp4YwS, https://openreview.net/forum?id=BJluy2RcFm, https://openreview.net/forum?id=wIzUeM3TAU, https://openreview.net/forum?id=r9hNv76KoT3
ID Of The Authors Of The ICLR Paper: ~Keyulu_Xu1, ~Thomas_Kipf2, ~Andreas_Loukas1, ~Ryan_L_Murphy1, ~Floris_Geerts1, ~Bohang_Zhang1
Conflict Of Interest: No
