Abstract: While conservation laws in gradient flow training dynamics are well understood for (mostly shallow) ReLU and linear networks, they remain largely unexplored for more practical architectures. To address this, we first show that basic building blocks such as shallow ReLU (or linear) networks, with or without convolution, admit easily expressed conservation laws, and no conservation laws beyond the known ones. For a single attention layer, we also completely describe all conservation laws, and we show that residual blocks have the same conservation laws as the corresponding block without a skip connection. We then introduce the notion of conservation laws that depend only on *a subset* of parameters (corresponding, e.g., to a pair of consecutive layers, to a residual block, or to an attention layer). We demonstrate that the characterization of such laws can be reduced to the analysis of the corresponding building block in isolation. Finally, we examine how these newly discovered conservation principles, initially established in the continuous gradient flow regime, persist under discrete optimization dynamics, particularly in the context of Stochastic Gradient Descent (SGD).
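For context, here is a minimal numerical sketch (not taken from the submission) of one of the classical conservation laws the abstract refers to: for a two-layer linear network f(x) = W2 W1 x trained by gradient flow on a squared loss, the "balancedness" matrix W1 W1^T - W2^T W2 stays constant along the trajectory. The dimensions, step size, and variable names below are illustrative assumptions, and gradient descent with a very small step is only used as an approximation of the continuous-time flow.

```python
# Minimal sketch (illustrative assumptions, not from the paper): check numerically
# that W1 @ W1.T - W2.T @ W2 is (approximately) conserved when a two-layer linear
# network is trained by gradient flow, approximated here by small-step gradient descent.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid, d_out, n = 5, 4, 3, 50        # hypothetical sizes for the illustration
X = rng.standard_normal((d_in, n))
Y = rng.standard_normal((d_out, n))
W1 = rng.standard_normal((d_hid, d_in)) * 0.3
W2 = rng.standard_normal((d_out, d_hid)) * 0.3

def grads(W1, W2):
    R = W2 @ W1 @ X - Y                    # residual, shape (d_out, n)
    g1 = W2.T @ R @ X.T / n                # dL/dW1 for L = 0.5/n * ||R||_F^2
    g2 = R @ X.T @ W1.T / n                # dL/dW2
    return g1, g2

Q0 = W1 @ W1.T - W2.T @ W2                 # conserved quantity at initialization
eta = 1e-3                                 # tiny step to mimic the continuous flow
for _ in range(20_000):
    g1, g2 = grads(W1, W2)
    W1 -= eta * g1
    W2 -= eta * g2

drift = np.linalg.norm(W1 @ W1.T - W2.T @ W2 - Q0)
print(f"drift of the conserved quantity: {drift:.2e}")   # close to 0
```

The reported drift should be several orders of magnitude smaller than the entries of the matrix itself, and it shrinks further as the step size decreases, consistent with the law being exact only in the continuous-time limit.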
Lay Summary: Modern artificial intelligence (AI) systems are powerful, but it's hard to understand exactly how they learn. While we know some basic mathematical rules satisfied by simpler models, the "conservation laws"—which explain how certain properties remain constant during learning—haven't been fully explored in more advanced models.
We find that conservation laws also apply to recent AI systems. These laws describe how certain characteristics of a system are preserved throughout learning, and our results show why it is enough to analyze specific parts of the AI system, such as individual blocks or layers.
This discovery gives us a better understanding of how modern AI models work internally. Even when these systems are trained with common methods, the conservation laws still hold. This could make AI training more stable and reliable, helping to improve performance in a more predictable way.
Link To Code: https://github.com/sibyllema/Conservation-laws-for-ResNets-and-Transformers
Primary Area: Deep Learning->Theory
Keywords: Conservation laws, gradient flow, linear and ReLU neural networks, convolutional ResNet, Transformer, SGD
Submission Number: 6956