Topologic Attention Networks: Attending to Direct and Indirect Neighbors through Gaussian Belief Propagation
Abstract: Graph neural networks (GNNs) typically rely on localized message passing and therefore require increasing depth to capture long-range dependencies. In this work, we introduce Graph Linear Transformations, which realize direct and indirect feature mixing on graphs through a single, well-defined linear operator derived from the graph structure. By interpreting graphs as walk-summable Gaussian graphical models, we compute these transformations via Gaussian Belief Propagation, enabling each node to aggregate information from both direct and indirect neighbors without explicitly enumerating multi-hop paths. We show that different constructions of the underlying precision matrix induce distinct and interpretable propagation biases, ranging from selective edge-level interactions to uniform structural smoothing, and that Graph Linear Transformations achieve competitive or superior performance compared to both local message-passing GNNs and dynamic neighborhood aggregation models on homophilic and heterophilic benchmark datasets.
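As a rough illustration of the mechanism the abstract describes, the sketch below runs generic Gaussian Belief Propagation to compute the marginal means mu = J^{-1} h of a Gaussian graphical model, which is the kind of single linear operator that mixes features from direct and indirect neighbors. The precision construction J = I + alpha * L (graph Laplacian), the function name `gaussian_bp`, and the toy graph are assumptions for illustration only, not the paper's specific construction.

```python
import numpy as np

def gaussian_bp(J, h, n_iters=50):
    """Generic Gaussian Belief Propagation (sketch).

    Iteratively approximates the marginal means mu = J^{-1} h of a Gaussian
    model with (walk-summable) precision matrix J and potential vector h,
    passing messages only along edges where J[i, j] != 0.
    """
    n = J.shape[0]
    P = np.zeros((n, n))   # precision messages P[i, j]: from node i to node j
    mh = np.zeros((n, n))  # potential messages h[i, j]: from node i to node j
    # Neighbor sets read off the sparsity pattern of the off-diagonal of J.
    neighbors = [np.flatnonzero((J[i] != 0) & (np.arange(n) != i)) for i in range(n)]

    for _ in range(n_iters):
        P_new, mh_new = np.zeros_like(P), np.zeros_like(mh)
        for i in range(n):
            for j in neighbors[i]:
                # Fuse node i's local potential with messages from all neighbors except j.
                others = [k for k in neighbors[i] if k != j]
                J_tilde = J[i, i] + sum(P[k, i] for k in others)
                h_tilde = h[i] + sum(mh[k, i] for k in others)
                P_new[i, j] = -J[i, j] ** 2 / J_tilde
                mh_new[i, j] = -J[i, j] * h_tilde / J_tilde
        P, mh = P_new, mh_new

    # Marginal means: each node combines its own potential with all incoming messages.
    return np.array([
        (h[i] + sum(mh[k, i] for k in neighbors[i]))
        / (J[i, i] + sum(P[k, i] for k in neighbors[i]))
        for i in range(n)
    ])

# Toy 4-node path graph with a hypothetical precision J = I + alpha * L;
# this is one diagonally dominant (hence walk-summable) choice, not the paper's.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
alpha = 0.3
L = np.diag(A.sum(1)) - A
J = np.eye(4) + alpha * L
x = np.array([1.0, 0.0, 0.0, 0.0])  # one feature channel
mixed = gaussian_bp(J, x)           # approx. np.linalg.solve(J, x): multi-hop feature mixing
```

Because J^{-1} is generally dense even when J is sparse, each output entry depends on every node reachable in the graph, which is how a single application of the operator aggregates information from indirect neighbors without enumerating multi-hop paths.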