Neural Tangent Kernel Perspective on Parameter-Space Symmetries

ICLR 2026 Conference Submission 19094 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Symmetries, Parameter-Space Symmetries, Neural Teleportation, Neural Tangent Kernel, Wide Neural Networks, Linearization, NTK
Abstract: Parameter-space symmetries are transformations that modify model parameters without altering the model's outputs. These transformations can be leveraged to accelerate optimization and enhance generalization; remarkably, applying a single transformation either before or during training often suffices to realize these benefits. Although this approach is empirically effective, its underlying mechanisms remain poorly understood. In this paper, we offer an explanation within the Neural Tangent Kernel (NTK) framework by analyzing how such transformations affect the kernel's properties. In particular, we show that maximizing the alignment between the loss gradient and the data kernel is equivalent to maximizing the alignment between the NTK and the data. Since kernel alignment is known to correlate with optimization rate in the NTK limit, this insight explains how optimizing the loss gradient leads to faster training. To establish the validity of this approach, we prove that parameter-space symmetries preserve the NTK limit.
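To make the objects in the abstract concrete, here is a minimal sketch (not the authors' code) of two of the quantities involved: a standard parameter-space symmetry of a two-layer ReLU network (per-unit positive rescaling, one common example of a "teleportation"), and the empirical NTK together with its alignment to the target kernel yyᵀ. The network architecture, the rescaling symmetry, and the alignment formula are illustrative assumptions, not taken from the submission.

```python
# Minimal sketch, assuming a two-layer ReLU network and the standard
# kernel-target alignment A(K, yy^T) = <K, yy^T>_F / (||K||_F ||yy^T||_F).
import jax
import jax.numpy as jnp

def init_params(key, d_in=3, d_hidden=8):
    k1, k2 = jax.random.split(key)
    return {
        "W1": jax.random.normal(k1, (d_hidden, d_in)) / jnp.sqrt(d_in),
        "W2": jax.random.normal(k2, (1, d_hidden)) / jnp.sqrt(d_hidden),
    }

def forward(params, x):
    # Two-layer ReLU network with a scalar output.
    h = jax.nn.relu(params["W1"] @ x)
    return (params["W2"] @ h)[0]

def rescale_symmetry(params, scales):
    # ReLU positive-homogeneity: scaling hidden unit i's incoming weights by
    # s_i > 0 and its outgoing weights by 1/s_i leaves the function unchanged.
    return {
        "W1": params["W1"] * scales[:, None],
        "W2": params["W2"] / scales[None, :],
    }

def empirical_ntk(params, xs):
    # Theta(x, x') = <grad_theta f(x), grad_theta f(x')>.
    grads = jax.vmap(lambda x: jax.grad(forward)(params, x))(xs)
    flat = jnp.concatenate(
        [g.reshape(xs.shape[0], -1) for g in jax.tree_util.tree_leaves(grads)],
        axis=1,
    )
    return flat @ flat.T

def kernel_alignment(K, y):
    # Alignment between the kernel K and the target kernel yy^T.
    return (y @ K @ y) / (jnp.linalg.norm(K) * jnp.dot(y, y))

params = init_params(jax.random.PRNGKey(0))
xs = jax.random.normal(jax.random.PRNGKey(1), (5, 3))
y = jax.random.normal(jax.random.PRNGKey(2), (5,))

# A random positive rescaling is a parameter-space symmetry: outputs match,
# but the empirical NTK (and its alignment with the targets) generally changes.
scales = jnp.exp(jax.random.normal(jax.random.PRNGKey(3), (8,)))
teleported = rescale_symmetry(params, scales)

print(jax.vmap(lambda x: forward(params, x))(xs))       # original outputs
print(jax.vmap(lambda x: forward(teleported, x))(xs))   # identical outputs
print(kernel_alignment(empirical_ntk(params, xs), y))      # alignment before
print(kernel_alignment(empirical_ntk(teleported, xs), y))  # alignment after
```

This illustrates the setting only: the transformation leaves the function (and hence the loss) fixed while moving the parameters, so it can change gradient-related quantities such as the empirical NTK and its alignment with the data, which is the lever the abstract connects to optimization speed.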
Primary Area: learning theory
Submission Number: 19094