Keywords: Loss landscapes, Algebraic Topology, Neural Network Symmetries.
Abstract: Although neural network parameter spaces are contractible, training exhibits global phenomena that elude purely Euclidean accounts. We show that these effects arise up to symmetry: after factoring out ubiquitous reparameterizations, the low-loss regions of the quotient landscape acquire nontrivial homology. For semi-algebraic losses and common symmetry groups, we prove that quotient sublevel sets $S_c/G$ have finite Betti numbers and that $\beta_k(S_c/G)>0$ yields a topological certificate of barriers. We also operationalize these insights with symmetry-aware trajectories (permutation alignment, scale normalization, Stiefel-consistent updates) that remove spurious obstacles and expose genuine connectivity in the quotient space. Experiments on Stiefel-constrained autoencoders and residual networks support the theory: homology summaries of sublevel sets predict the presence or absence of interpolation barriers, and quotient-aware paths recover robust mode connectivity. Taken together, our results provide a principled and testable account of why weight matching works and of when loss barriers are intrinsic rather than artifacts of parameter redundancy.
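To make the "permutation alignment" step concrete, here is a minimal sketch (not the paper's implementation) of weight matching for a single hidden layer: hidden units of network B are permuted to best match network A by solving a linear assignment problem. All names, shapes, and the similarity cost are illustrative assumptions; the only library call used is `scipy.optimize.linear_sum_assignment`.

```python
# Hypothetical sketch of permutation alignment (weight matching) for one
# hidden layer. Assumes two networks with input weights W of shape
# (hidden, in) and output weights V of shape (out, hidden).
import numpy as np
from scipy.optimize import linear_sum_assignment

def align_hidden_units(W_a, V_a, W_b, V_b):
    """Permute hidden units of network B to best match network A.

    Permutation symmetry: (P @ W_b, V_b @ P.T) computes the same function
    as (W_b, V_b) for any permutation matrix P, so we may search over P
    without changing the loss.
    """
    # Similarity between unit i of A and unit j of B, combining each
    # hidden unit's incoming and outgoing weights (one illustrative choice).
    cost = W_a @ W_b.T + V_a.T @ V_b  # shape (hidden, hidden)
    _, perm = linear_sum_assignment(cost, maximize=True)
    # perm[i] = hidden unit of B matched to hidden unit i of A.
    return W_b[perm], V_b[:, perm]

# Usage: after alignment, interpolation between the two networks is
# evaluated in the quotient of the permutation symmetry, so a barrier that
# remains is not an artifact of unit relabeling.
rng = np.random.default_rng(0)
W_a, W_b = rng.normal(size=(64, 32)), rng.normal(size=(64, 32))
V_a, V_b = rng.normal(size=(10, 64)), rng.normal(size=(10, 64))
W_b_aligned, V_b_aligned = align_hidden_units(W_a, V_a, W_b, V_b)
```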
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 499