Keywords: Graph Learning Architectures, Linearization, State-Space Models
Abstract: Designing effective graph learning architectures is central to making learning on structured and relational data feasible and scalable.
Crucially, such designs must incorporate sufficient inductive bias to capture and leverage the graph topology. At the same time, they must balance this objective with efficient use of modern hardware and with remaining effectively trainable, even at scale.
Currently proposed architectures and paradigms range from message-passing neural networks operating on the graph topology to graph-informed transformers and virtual compute structures. In particular, the latter techniques often translate useful concepts and insights from graph theory to improve stability and mixing time, or to alleviate bottlenecks.
In this work, we highlight a linearization technique from the recently proposed Graph State-Space Model as a powerful, general tool for designing or improving graph learning architectures.
At its core, the technique simplifies the computation and reduces its sequential depth, improving execution speed while largely preserving trainability. Furthermore, it is versatile enough to serve as a drop-in module across existing architectures.
We showcase this flexibility by adapting Cayley Graph Propagation, yielding a simple, deeper and faster architecture.
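The abstract describes the linearization only at a high level; as an illustrative aid, the following is a minimal sketch of what a linearized, state-space-style propagation layer could look like, assuming a simple polynomial-filter parameterization. The module name, the per-hop mixing weights, and the normalized propagation matrix `a_hat` are hypothetical choices made for this sketch and are not taken from the submission.

```python
# Illustrative sketch only (not the submission's actual implementation):
# a linearized propagation layer in the spirit described in the abstract,
# assuming the polynomial-filter form  H = sum_k theta_k * A_hat^k X.
# Because no nonlinearity sits between propagation steps, the hop terms
# A_hat^k X can be precomputed or evaluated in parallel, reducing the
# sequential depth of the layer to a single learned combination.
import torch
import torch.nn as nn


class LinearizedPropagation(nn.Module):
    def __init__(self, num_hops: int, in_dim: int, out_dim: int):
        super().__init__()
        self.num_hops = num_hops
        # One scalar mixing weight per hop (hypothetical parameterization).
        self.hop_weights = nn.Parameter(torch.ones(num_hops + 1) / (num_hops + 1))
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, a_hat: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # a_hat: normalized propagation matrix, shape [N, N]
        # x:     node features, shape [N, in_dim]
        h = x
        mixed = self.hop_weights[0] * h
        for k in range(1, self.num_hops + 1):
            h = a_hat @ h                        # purely linear propagation step
            mixed = mixed + self.hop_weights[k] * h
        return self.proj(mixed)                  # single learned readout


# Usage: drop the module into an existing pipeline in place of a stack of
# message-passing layers; a_hat would typically be D^{-1/2}(A + I)D^{-1/2}.
```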
Submission Number: 46