Implicit Bias of Gradient Descent for Non-Homogeneous Deep Networks

Published: 01 May 2025 · Last Modified: 23 Jul 2025 · ICML 2025 poster · CC BY 4.0
Abstract: We establish the asymptotic implicit bias of gradient descent (GD) for generic non-homogeneous deep networks under exponential loss. Specifically, we characterize three key properties of GD iterates starting from a sufficiently small empirical risk, where the threshold is determined by a measure of the network's non-homogeneity. First, we show that a normalized margin induced by the GD iterates increases nearly monotonically. Second, we prove that while the norm of the GD iterates diverges to infinity, the iterates themselves converge in direction. Finally, we establish that this directional limit satisfies the Karush–Kuhn–Tucker (KKT) conditions of a margin maximization problem. Prior works on implicit bias have focused exclusively on homogeneous networks; in contrast, our results apply to a broad class of non-homogeneous networks satisfying a mild near-homogeneity condition. In particular, our results apply to networks with residual connections and non-homogeneous activation functions, thereby resolving an open problem posed by Ji & Telgarsky (2020).
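Context for the margin-maximization problem mentioned above: the sketch below records the standard formulation from the implicit-bias literature for L-homogeneous networks, included only as illustrative background; the paper's precise non-homogeneous statement may differ. Here f(θ; x_i) denotes the network output on training example x_i with label y_i ∈ {±1}, n is the number of training points, and all symbols are our own notation rather than the paper's.

% Normalized margin (homogeneous case of degree L):
\bar{\gamma}(\theta) \;=\; \frac{\min_{i \in [n]} y_i f(\theta; x_i)}{\|\theta\|_2^{L}}

% Margin-maximization problem whose KKT points characterize the directional limit:
\min_{\theta} \; \tfrac{1}{2}\|\theta\|_2^2
\quad \text{subject to} \quad y_i f(\theta; x_i) \ge 1, \qquad i = 1, \dots, n.

% KKT conditions: there exist multipliers \lambda_1, \dots, \lambda_n \ge 0 such that
\theta \;=\; \sum_{i=1}^{n} \lambda_i \, y_i \nabla_\theta f(\theta; x_i),
\qquad \lambda_i \bigl( y_i f(\theta; x_i) - 1 \bigr) = 0 \ \ \text{for all } i.

In words, the result says the direction of the GD iterates converges to a stationary point of this constrained problem, i.e., to a (first-order) maximum-margin direction.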
Lay Summary: When we train deep neural networks with gradient descent, the method implicitly nudges the model toward a particular kind of solution, even when many solutions could fit the data. We show this “steering effect” holds not just for the idealized, perfectly uniform (homogeneous) networks studied before, but also for more realistic architectures that include skip-connections and varied activation functions. As training continues, the model’s confidence gap (margin) on the right answers steadily grows; the weights themselves get larger and larger, yet the direction they point settles into a single orientation. That final direction is exactly the one that maximizes the margin according to standard optimality rules, helping explain why over-parameterized networks often end up with simple, well-generalizing solutions. Compared to previous results, our analysis covers far more practical neural networks, including those with residual connections and a wide range of activation functions.
Primary Area: Deep Learning->Theory
Keywords: Implicit bias, Non-homogeneous model, Deep neural networks, Gradient descent
Submission Number: 12836