Joint Diffusion Processes as an Inductive Bias in Sheaf Neural Networks

Published: 17 Jun 2024, Last Modified: 10 Jul 2024 · ICML 2024 Workshop GRaM · CC BY 4.0
Track: Proceedings
Keywords: Graph Neural Networks, Sheaves, Sheaf Neural Networks, ODEs, Geometry, Topology, Opinion Dynamics, Oversmoothing, Heterophily, Synthetic Data
TL;DR: We incorporate ODEs from opinion dynamics into Sheaf Neural Networks as a way of addressing heterophily and oversmoothing, as well as propose a new method for synthetic data generation to better evaluate the capabilities of sheaf-based models.
Abstract: Sheaf Neural Networks (SNNs) naturally extend Graph Neural Networks (GNNs) by endowing the graph with a cellular sheaf, which equips nodes and edges with vector spaces and defines linear maps between them. While this attached geometric structure has proven useful for analyzing heterophily and oversmoothing, the methods used so far to compute the sheaf do not always guarantee good performance in such settings. In this work, drawing inspiration from opinion dynamics, we propose two novel sheaf learning approaches that (i) provide a more intuitive understanding of the involved structure maps, (ii) introduce a useful inductive bias for heterophily and oversmoothing, and (iii) infer the sheaf in a way that does not scale with the number of features, thus using fewer learnable parameters than existing methods. In our evaluation, we show the limitations of the real-world benchmarks used so far on SNNs, and design a new synthetic task --leveraging the symmetries of $n$-dimensional ellipsoids-- that enables us to better assess the strengths and weaknesses of sheaf-based models. Our extensive experimentation on these novel datasets reveals valuable insights into the scenarios and contexts where SNNs in general --and our proposed approaches in particular-- can be beneficial.
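To make the abstract's core construction concrete, here is a minimal sketch (not the paper's method) of a cellular sheaf on a toy graph: each node and edge carries a vector-space stalk, restriction maps are the linear maps between them, and the resulting sheaf Laplacian $L_\mathcal{F} = \delta^\top \delta$ drives the diffusion ODE $\dot{X} = -L_\mathcal{F} X$ that underlies sheaf diffusion models. The graph, stalk dimension, and random restriction maps are all hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2                      # stalk dimension (hypothetical choice)
n = 3                      # nodes of a toy path graph 0 - 1 - 2
edges = [(0, 1), (1, 2)]

# Restriction maps F_{v <| e}: one linear map per incident (node, edge) pair
restr = {(v, e): rng.standard_normal((d, d)) for e in edges for v in e}

# Assemble the sheaf Laplacian L_F = delta^T delta as an (n*d x n*d) block matrix:
# the coboundary delta maps node stalks to edge stalks via (delta x)_e = F_u x_u - F_v x_v
L = np.zeros((n * d, n * d))
for e in edges:
    u, v = e
    Fu, Fv = restr[(u, e)], restr[(v, e)]
    L[u*d:(u+1)*d, u*d:(u+1)*d] += Fu.T @ Fu
    L[v*d:(v+1)*d, v*d:(v+1)*d] += Fv.T @ Fv
    L[u*d:(u+1)*d, v*d:(v+1)*d] -= Fu.T @ Fv
    L[v*d:(v+1)*d, u*d:(u+1)*d] -= Fv.T @ Fu

# Explicit-Euler discretisation of the sheaf diffusion ODE dX/dt = -L_F X
X0 = rng.standard_normal((n * d, 4))          # 4 feature channels, stacked node stalks
alpha = 0.9 / np.linalg.eigvalsh(L).max()     # step size below the stability limit
X = X0.copy()
for _ in range(50):
    X = X - alpha * (L @ X)
```

By construction $L_\mathcal{F}$ is symmetric positive semi-definite, so the diffusion monotonically decreases the sheaf Dirichlet energy $\mathrm{tr}(X^\top L_\mathcal{F} X)$; the choice of restriction maps controls which signals lie in its kernel, which is exactly the lever the sheaf-learning approaches in the paper exploit.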
Submission Number: 50