Symmetry in Neural Network Parameter Spaces

TMLR Paper 5131 Authors

16 Jun 2025 (modified: 19 Jun 2025) · Under review for TMLR · CC BY 4.0
Abstract: Modern deep learning models are highly overparameterized, resulting in large sets of parameter configurations that yield the same outputs. A significant portion of this redundancy is explained by symmetries in the parameter space: transformations that leave the network function unchanged. These symmetries shape the loss landscape and constrain learning dynamics, offering a new lens on optimization, generalization, and model complexity that complements existing deep learning theory. This survey provides an overview of parameter space symmetry. We summarize the existing literature, uncover connections between symmetry and learning theory, and identify gaps and opportunities in this emerging field.
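The abstract's central object, a transformation of the parameters that leaves the network function unchanged, can be made concrete with one standard example: permuting the hidden units of a two-layer ReLU network. The NumPy sketch below is illustrative and not drawn from the paper; the layer sizes and the `forward` helper are hypothetical.

```python
# A minimal sketch (assumptions: a two-layer ReLU MLP; sizes are arbitrary)
# of one well-known parameter space symmetry: permuting hidden units.
import numpy as np

rng = np.random.default_rng(0)

# Two-layer network: f(x) = W2 @ relu(W1 @ x + b1) + b2
d_in, d_hidden, d_out = 4, 8, 3
W1 = rng.standard_normal((d_hidden, d_in))
b1 = rng.standard_normal(d_hidden)
W2 = rng.standard_normal((d_out, d_hidden))
b2 = rng.standard_normal(d_out)

def forward(W1, b1, W2, b2, x):
    # ReLU MLP forward pass.
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

# Permute the hidden layer: rows of W1 and entries of b1 are permuted,
# and the columns of W2 are permuted the same way.
perm = rng.permutation(d_hidden)
W1p, b1p, W2p = W1[perm], b1[perm], W2[:, perm]

x = rng.standard_normal(d_in)

# Two different points in parameter space, one network function.
assert np.allclose(forward(W1, b1, W2, b2, x),
                   forward(W1p, b1p, W2p, b2, x))
```

Because the loss depends on the parameters only through the network function, it is constant along the orbit of any such transformation, which is one concrete sense in which these symmetries shape the loss landscape.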
Submission Length: Long submission (more than 12 pages of main content)
Assigned Action Editor: ~Jeffrey_Pennington1
Submission Number: 5131