Early learning of the optimal constant solution in neural networks and humans

Published: 14 May 2025 · Last Modified: 13 Jul 2025 · CCN 2025 Proceedings · Poster · CC BY 4.0
Abstract: Deep neural networks learn increasingly complex functions over the course of training. Here, we show both empirically and theoretically that learning of the target function is preceded by an early phase in which networks learn the optimal constant solution (OCS) – that is, initial model responses mirror the distribution of target labels while entirely ignoring the information provided in the input. Using a hierarchical category learning task, we derive exact solutions for the learning dynamics of deep linear networks trained with bias terms. Even when initialized to zero, this simple architectural feature induces substantial changes in early dynamics. We identify hallmarks of this early OCS phase and illustrate how these signatures appear in deep linear networks as well as in larger, nonlinear convolutional neural networks solving a hierarchical learning task based on MNIST and CIFAR10. We then train human learners over the course of three days on a structurally equivalent learning task and identify qualitative signatures of the early OCS phase in terms of true negative rates. Surprisingly, human learners show the same early reliance on the OCS. Finally, we show that learning of the OCS can emerge even in the absence of bias terms, driven equivalently by generic correlations in the input data. Overall, our work suggests that the OCS is a common phenomenon in biological and artificial supervised, error-corrective learning, and points to possible factors underlying its prevalence.
Submission Number: 8
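For intuition, the following is a minimal NumPy sketch of the phenomenon described in the abstract, not the paper's actual tasks or architecture: a two-layer linear network with bias terms, trained by gradient descent on a toy one-hot labeling task, first drives its predictions toward the mean of the target labels (the OCS) for every input, and only later fits the item-specific targets. The toy data, learning rate, and names such as `ocs` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative only): 8 items with one-hot inputs and binary target
# features, loosely mimicking a hierarchical labeling task.
n, d_in, d_out = 8, 8, 5
X = np.eye(d_in)
Y = rng.integers(0, 2, size=(n, d_out)).astype(float)
ocs = Y.mean(axis=0)          # optimal constant solution = mean target vector

# Two-layer linear network with bias terms; small weights, zero-initialized biases.
h = 16
W1 = 0.01 * rng.standard_normal((d_in, h)); b1 = np.zeros(h)
W2 = 0.01 * rng.standard_normal((h, d_out)); b2 = np.zeros(d_out)

lr = 0.1
for step in range(1, 2001):
    H = X @ W1 + b1                       # hidden activations
    pred = H @ W2 + b2                    # network outputs
    err = pred - Y                        # derivative of 0.5 * squared error
    gW2 = H.T @ err / n
    gb2 = err.mean(axis=0)
    gH = err @ W2.T
    gW1 = X.T @ gH / n
    gb1 = gH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
    if step in (50, 2000):
        # Early on, predictions for every item sit near the OCS (the input is
        # effectively ignored); late in training they track the item-specific targets.
        print(f"step {step:4d}: mean|pred - OCS| = {np.abs(pred - ocs).mean():.3f}, "
              f"mean|pred - target| = {np.abs(pred - Y).mean():.3f}")
```

In this sketch, the small weight initialization means the bias path dominates the earliest gradient steps, so the output bias converges toward the label mean well before the weight path escapes its plateau; this is the intuition for the early OCS phase, not a reproduction of the paper's exact analysis.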