Feature Dynamics as Implicit Data Augmentation: A Depth-Decomposed View on Deep Neural Network Generalization
Keywords: Generalization, deep learning, feature dynamics, implicit bias, robustness
TL;DR: This paper shows that shallow feature evolution acts as implicit structured augmentation, where temporal consistency and SGD-induced anisotropic noise jointly explain deep neural network generalization.
Abstract: Why do deep networks generalize well? In contrast to classical generalization theory, we approach this fundamental question by looking not only at inputs and outputs but also at the evolution of internal features. Our study uncovers a phenomenon of temporal consistency: predictions remain stable when shallow features from earlier checkpoints are combined with deeper features from later ones. This stability is not a trivial convergence artifact; rather, it acts as a form of implicit, structured augmentation that supports generalization. We show that temporal consistency extends to unseen and corrupted data but collapses when semantic structure is destroyed (e.g., under random labels). Statistical tests further reveal that SGD injects anisotropic noise aligned with a few principal directions, reinforcing its role as a source of structured variability. Together, these findings suggest a conceptual perspective that links feature dynamics to generalization, pointing toward future work on practical surrogates for measuring temporal feature evolution.
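The temporal-consistency probe described in the abstract can be illustrated with a minimal sketch. Everything below (the toy two-layer network, the perturbation scale, the agreement metric) is our own illustration of the idea, not the authors' implementation: a hybrid model that pairs shallow weights from an earlier checkpoint with deep weights from a later checkpoint is compared, prediction by prediction, against the fully-late model.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def forward(x, w_shallow, w_deep):
    # Two-layer toy network: shallow feature map followed by a linear head.
    return relu(x @ w_shallow) @ w_deep  # logits

# Toy "checkpoints": the late shallow weights are a small perturbation of
# the early ones, mimicking slow drift of shallow features during training.
d_in, d_hid, n_cls = 8, 16, 3
w_shallow_early = rng.normal(size=(d_in, d_hid))
w_shallow_late = w_shallow_early + 0.01 * rng.normal(size=(d_in, d_hid))
w_deep_late = rng.normal(size=(d_hid, n_cls))

x = rng.normal(size=(256, d_in))
preds_late = forward(x, w_shallow_late, w_deep_late).argmax(axis=1)
# Hybrid model: EARLY shallow features feeding the LATE deep head.
preds_hybrid = forward(x, w_shallow_early, w_deep_late).argmax(axis=1)

# Temporal consistency = fraction of inputs where the two models agree.
consistency = (preds_late == preds_hybrid).mean()
print(f"prediction agreement: {consistency:.2f}")
```

In this toy setting, small shallow-feature drift leaves the argmax predictions almost entirely unchanged, which is the kind of stability the paper measures on real networks across training checkpoints.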
Primary Area: learning theory
Submission Number: 6383