Grothendieck Graph Neural Networks Framework: An Algebraic Platform for Crafting Topology-Aware GNNs
Keywords: Geometric deep learning, Categorical deep learning, Algebraic deep learning, Graph neural networks
TL;DR: By algebraically extending graph neighborhoods to covers, this paper introduces the GGNN framework, enabling the systematic design of expressive GNNs.
Abstract: Graph Neural Networks (GNNs) typically rely on neighborhoods as the foundation of message passing. While simple and effective, neighborhood-based aggregation limits expressivity, typically to no more than the Weisfeiler–Lehman (WL) test. We propose the Grothendieck Graph Neural Networks (GGNN) framework, an algebraic platform that generalizes neighborhoods into covers, offering flexible alternatives for defining message-passing strategies. GGNN translates covers into matrices, much as adjacency matrices encode neighborhoods, enabling both theoretical analysis and practical implementation.
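A minimal illustrative sketch of the "cover as matrix" idea, under the assumption that a cover is a family of node subsets and that its matrix form is the binary incidence matrix between cover elements and nodes; the function name `cover_to_matrix` is hypothetical and this is not necessarily the paper's exact construction. With the closed-neighborhood cover, the encoding reduces to the familiar adjacency-plus-self-loops matrix.

```python
import numpy as np

def cover_to_matrix(num_nodes, cover):
    """Encode a cover (a family of node subsets) as a binary incidence matrix.

    Rows index cover elements and columns index nodes. If the cover consists of
    the closed neighborhoods {v} ∪ N(v) of each node, the result equals A + I,
    the adjacency-plus-self-loops matrix used in standard message passing.
    """
    M = np.zeros((len(cover), num_nodes), dtype=np.float32)
    for i, subset in enumerate(cover):
        M[i, list(subset)] = 1.0
    return M

# Closed-neighborhood cover of a 4-cycle: element v = {v} ∪ N(v).
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
nbrs = {v: set() for v in range(4)}
for u, v in edges:
    nbrs[u].add(v)
    nbrs[v].add(u)
cover = [{v} | nbrs[v] for v in range(4)]
print(cover_to_matrix(4, cover))  # rows coincide with A + I for this cover
```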
Within this framework, we introduce the cover of sieves, inspired by category theory, which captures rich topological features. Based on this cover, we design Sieve Neural Networks (SNN), which produce the matrix form of the cover of sieves, generalizing the adjacency matrix. Experiments show that SNN achieves zero failures on graph-isomorphism benchmarks (SRG, CSL, BREC) and improves performance under a topology-aware evaluation based on a label-propagation probe. These results demonstrate GGNN's ability to serve as a principled foundation for designing topology-aware GNNs.
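To indicate how a cover matrix can generalize the adjacency matrix inside a message-passing layer, here is a hedged sketch: `cover_propagate` is a hypothetical name, the pool-then-scatter scheme and normalization are assumptions for illustration, and the actual SNN layer may differ. When the cover matrix is the closed-neighborhood incidence matrix (A + I), this recovers an adjacency-style propagation up to normalization.

```python
import numpy as np

def cover_propagate(M, H, W):
    """Hedged sketch of cover-based propagation (not the paper's exact layer).

    M : (k, n) binary cover-incidence matrix (k cover elements, n nodes)
    H : (n, d) node features
    W : (d, d_out) learnable weights

    Features are pooled within each cover element (M @ H) and then scattered
    back to the member nodes (M.T @ ...).
    """
    pooled = M @ H                           # aggregate over each cover element
    scattered = M.T @ pooled                 # redistribute to member nodes
    norm = np.clip(M.sum(axis=0), 1, None)   # cover elements containing each node
    return np.maximum((scattered / norm[:, None]) @ W, 0.0)  # ReLU nonlinearity

# Closed-neighborhood cover matrix of a 4-cycle (equals A + I).
M = np.array([[1, 1, 0, 1],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [1, 0, 1, 1]], dtype=np.float32)
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8)).astype(np.float32)
W = rng.normal(size=(8, 16)).astype(np.float32)
print(cover_propagate(M, H, W).shape)        # (4, 16)
```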
Supplementary Material: zip
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 19705