Sheaf Attention Networks

26 Sept 2022 (modified: 09 Nov 2022) · NeurReps 2022 Oral
Keywords: Graph Neural Networks, Graph Attention Networks, Sheaf Neural Networks
TL;DR: We propose SheafAN, a generalization of GAT based on cellular sheaves
Abstract: Attention has become a central inductive bias for deep learning models irrespective of domain. However, increasing theoretical and empirical evidence suggests that Graph Attention Networks (GATs) suffer from the same pathological issues affecting many other Graph Neural Networks (GNNs). First, the features computed by GATs tend to become progressively smoother as more layers are stacked, and second, the model performs poorly on heterophilic graphs. Sheaf Neural Networks (SNNs), a new class of models inspired by algebraic topology and geometry, have shown much promise in tackling these two issues. Building upon the recent success of SNNs and the wide adoption of attention-based architectures, we propose Sheaf Attention Networks (SheafANs). By making use of a novel and more expressive attention mechanism equipped with geometric inductive biases, we show that this construction generalizes popular attention-based GNN models to cellular sheaves. We demonstrate that these models help tackle the oversmoothing and heterophily problems and show that, in practice, SheafANs consistently outperform GAT on synthetic and real-world benchmarks.
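To make the construction concrete, below is a minimal PyTorch sketch of one way a sheaf attention layer could look, under the following assumptions (none of which are taken from the paper itself): each node carries a d-dimensional stalk, a small learned network predicts a d x d restriction map per edge endpoint, and GAT-style additive attention weights the transported messages. The class name SheafAttentionLayer, the map-predicting network, and all hyperparameters are illustrative, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F


class SheafAttentionLayer(nn.Module):
    """Illustrative sheaf attention layer (a sketch, not the paper's model).

    Each node carries a d-dimensional stalk. A learned d x d restriction map
    per (edge, endpoint) transports neighbour features into the receiving
    node's stalk; GAT-style attention then weights the transported messages.
    """

    def __init__(self, in_dim: int, out_dim: int, stalk_dim: int):
        super().__init__()
        assert out_dim % stalk_dim == 0, "out_dim must split into stalks"
        self.d = stalk_dim
        self.lin = nn.Linear(in_dim, out_dim)
        # Predicts a d x d restriction map from the two endpoint features.
        self.map_net = nn.Linear(2 * in_dim, stalk_dim * stalk_dim)
        # Additive attention over [receiver features ; transported message].
        self.att = nn.Linear(2 * out_dim, 1)

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        # x: (N, in_dim); edge_index: (2, E), directed src -> dst.
        src, dst = edge_index
        n, e = x.size(0), src.size(0)
        h = self.lin(x)  # (N, out_dim)

        # Restriction maps for each endpoint of every edge.
        f_src = self.map_net(torch.cat([x[src], x[dst]], -1)).view(e, self.d, self.d)
        f_dst = self.map_net(torch.cat([x[dst], x[src]], -1)).view(e, self.d, self.d)

        # Transport: F_dst^T F_src applied blockwise to each stalk of h[src].
        transport = (f_dst.transpose(1, 2) @ f_src).unsqueeze(1)  # (E, 1, d, d)
        stalks = h[src].view(e, -1, self.d, 1)                    # (E, k, d, 1)
        msg = (transport @ stalks).view(e, -1)                    # (E, out_dim)

        # GAT-style scores, softmax-normalised over each node's in-edges.
        score = F.leaky_relu(self.att(torch.cat([h[dst], msg], -1))).squeeze(-1)
        score = score - score.max()  # numerical stability
        w = score.exp()
        denom = torch.zeros(n, device=x.device).index_add_(0, dst, w)
        alpha = w / denom[dst].clamp_min(1e-16)

        # Attention-weighted aggregation of transported messages.
        return torch.zeros_like(h).index_add_(0, dst, alpha.unsqueeze(-1) * msg)


# Tiny usage example on a 3-node directed cycle.
layer = SheafAttentionLayer(in_dim=8, out_dim=6, stalk_dim=2)
x = torch.randn(3, 8)
edge_index = torch.tensor([[0, 1, 2], [1, 2, 0]])  # 0->1, 1->2, 2->0
print(layer(x, edge_index).shape)  # torch.Size([3, 6])

The key difference from plain GAT in this sketch is that each neighbour's features are transported through the composed restriction maps before attention is applied, rather than aggregated in a single shared feature space; with stalk_dim = 1 and identity maps, the layer reduces to ordinary attention-weighted aggregation.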