Track: long paper (up to 8 pages)
Keywords: geometric deep learning, gauge symmetry, gauge invariance, holonomy regression
Abstract: Gauge ambiguity is a pervasive obstacle when learning from group-valued transport data: edge measurements depend on arbitrary local coordinate choices, while the quantities we care about are gauge-invariant. Inspired by lattice gauge theory, where meaningful observables are built from loop holonomies rather than individual edge variables, we study an $\mathrm{SO}(3)$ learning problem on a discrete torus with random vertex-wise gauges. We adopt a categorical viewpoint in which a connection is a functor from the fundamental groupoid and gauge transformations act as natural isomorphisms. This leads to Categorical Trace Loop Networks (CTLN): an architecture that learns from loop- and face-based gauge invariants obtained by functorial holonomy composition and trace/angle scalarization. On gauge-randomized torus holonomy regression, CTLN achieves a test MAE of 0.1747, while a standard message-passing network and a spectral connection-Laplacian baseline both remain near 0.81 MAE. These results show that in gauge-dominated regimes, learning on categorical invariants that capture global topology and higher-order consistency is highly effective.
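The key invariance the abstract relies on can be checked numerically: the trace of a loop holonomy is unchanged under vertex-wise gauge transformations, because the holonomy only transforms by conjugation at the base point. The sketch below is illustrative only (the loop size, sampling scheme, and variable names are our assumptions, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_so3():
    # Sample a rotation matrix via QR decomposition, fixing the determinant to +1.
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    q *= np.sign(np.diag(r))
    if np.linalg.det(q) < 0:
        q[:, 0] *= -1
    return q

def holonomy(es):
    # Ordered product of edge transports around the loop, based at vertex 0.
    h = np.eye(3)
    for e in es:
        h = e @ h
    return h

# Edge transports around a 4-edge loop v0 -> v1 -> v2 -> v3 -> v0.
edges = [random_so3() for _ in range(4)]

# Random vertex-wise gauge: an edge (u -> v) transforms as g_v @ E @ g_u^T.
gauges = [random_so3() for _ in range(4)]
gauged = [gauges[(i + 1) % 4] @ edges[i] @ gauges[i].T for i in range(4)]

# The gauged holonomy equals g_{v0} H g_{v0}^T, so its trace is gauge-invariant.
tr = np.trace(holonomy(edges))
tr_gauged = np.trace(holonomy(gauged))
print(np.isclose(tr, tr_gauged))  # True
```

The same invariance holds for the rotation angle, since it is a function of the trace; this is the scalarization step that makes the learned features independent of the random gauges.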
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Submission Number: 122