Keywords: GNN, lattice gauge theory, orbit space, invariance
Abstract: We introduce a gauge-theoretic framework for graph neural networks on arbitrary graphs with local frames. Each directed edge carries a link variable
$U_{ij} \in O(d)$ that parallel-transports node features, and an invariant head aggregates a finite, explicit dictionary of gauge invariants
(open strings and Wilson loops). Using the First Fundamental Theorem for $O(d)$ on mixed tensor spaces, we prove that this dictionary generates all
$O(d)$-invariant polynomials, yielding a universal approximation result on compact sets for continuous invariant targets. We further formulate learning directly on the orbit space
$X/(S_n \times O(d))$ and establish a nonuniform learnability guarantee via bounded-Lipschitz slices.
We realize the theory in a lightweight message-passing architecture. On a synthetic gauge diagnostic,
the model attains near-perfect generalization while passing local gauge probes and maintaining numerical $S_n \times O(d)$ invariance.
On the QM9 dataset, an augmented variant that includes atomic numbers and interatomic distances improves regression accuracy.
These results show that a finite gauge-invariant dictionary, implemented with standard message passing, is both theoretically expressive and practically effective for symmetry-aware learning on graphs.
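To make the two invariant families concrete, below is a minimal NumPy sketch, not the submission's implementation: it builds a toy 3-node graph with random $O(d)$ link variables and checks that open-string contractions $h_i^\top U_{ij} h_j$ and Wilson-loop traces are unchanged under a local gauge transformation $h_i \mapsto g_i h_i$, $U_{ij} \mapsto g_i U_{ij} g_j^\top$. All names and the toy setup are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the submission's code): open-string and
# Wilson-loop invariants on a toy 3-node graph with O(d) link variables,
# checked against a random local gauge transformation.
import numpy as np

rng = np.random.default_rng(0)
d, n = 3, 3

def random_orthogonal(d):
    # QR of a Gaussian matrix yields an element of O(d).
    q, _ = np.linalg.qr(rng.normal(size=(d, d)))
    return q

h = rng.normal(size=(n, d))  # node features h_i in R^d
U = {(i, j): random_orthogonal(d) for i in range(n) for j in range(n) if i != j}
for i in range(n):           # enforce the consistency condition U_ji = U_ij^T
    for j in range(i + 1, n):
        U[(j, i)] = U[(i, j)].T

def open_string(h, U, i, j):
    # Gauge-invariant pairing of features at nodes i and j along edge (i, j).
    return h[i] @ U[(i, j)] @ h[j]

def wilson_loop(U, cycle):
    # Trace of the ordered product of link variables around a closed cycle.
    P = np.eye(d)
    for a, b in zip(cycle, cycle[1:] + cycle[:1]):
        P = P @ U[(a, b)]
    return np.trace(P)

# Local gauge transformation: h_i -> g_i h_i, U_ij -> g_i U_ij g_j^T.
g = [random_orthogonal(d) for _ in range(n)]
h2 = np.stack([g[i] @ h[i] for i in range(n)])
U2 = {(i, j): g[i] @ M @ g[j].T for (i, j), M in U.items()}

assert np.isclose(open_string(h, U, 0, 1), open_string(h2, U2, 0, 1))
assert np.isclose(wilson_loop(U, [0, 1, 2]), wilson_loop(U2, [0, 1, 2]))
print("open-string and Wilson-loop invariants pass the gauge check")
```

The open string is invariant because the transported pairing contracts each $g_i$ with its transpose, $(g_i h_i)^\top (g_i U_{ij} g_j^\top)(g_j h_j) = h_i^\top U_{ij} h_j$; the Wilson loop conjugates the holonomy by $g_{i_1}$, leaving its trace unchanged.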
Supplementary Material: zip
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 10223