Keywords: Neural Networks, Optimization, Structure Discovery, Compressibility, Derandomization, Multiple Index Model, Johnson-Lindenstrauss, MAXCUT
TL;DR: We extend theoretical insights into neural networks by proving a key derandomization lemma that explains structure discovery and applies to other problems such as MAXCUT approximation and Johnson-Lindenstrauss embeddings.
Abstract: Understanding the dynamics of feature learning in neural networks (NNs) remains a significant challenge.
The work of Mousavi-Hosseini et al. (2023) analyzes a multiple index teacher-student setting and shows that a two-layer student attains a low-rank structure in its first-layer weights when trained with stochastic gradient descent (SGD) and a strong regularizer.
This structural property is known to reduce the sample complexity of generalization.
Indeed, in a second step, the same authors establish algorithm-specific learning guarantees under additional assumptions.
In this paper, we focus exclusively on the structure discovery aspect and study it under weaker assumptions; more specifically, we allow (a) NNs of arbitrary size and depth, (b) with all parameters trainable, (c) under any smooth loss function, (d) tiny regularization, and (e) trained by any method that attains a second-order stationary point (SOSP), e.g., perturbed gradient descent (PGD). At the core of our approach is a key $\textit{derandomization}$ lemma, which states that, under mild conditions, optimizing the function $\mathbb{E}_{x}\left[g_{\theta}(Wx + b)\right]$ converges to a point where $W = 0$. The fundamental nature of this lemma directly explains structure discovery and has immediate applications in other domains, including an end-to-end approximation algorithm for MAXCUT and the computation of Johnson-Lindenstrauss embeddings.
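To make the lemma concrete, here is a minimal numerical sketch (an illustration, not the paper's construction): it minimizes a Monte Carlo estimate of $\mathbb{E}_{x}\left[g_{\theta}(Wx + b)\right]$ with a tiny L2 penalty on $W$ via perturbed gradient descent and tracks whether $\|W\|_F$ shrinks toward zero. The toy outer function $g$ (log-cosh), the dimensions, and all hyperparameters are assumptions made for the demo.

```python
# Minimal numerical sketch of the derandomization lemma (illustration only, not the
# authors' code): minimize a Monte Carlo estimate of E_x[g(Wx + b)] plus a tiny L2
# penalty on W via perturbed gradient descent, and check whether ||W||_F shrinks.
# The toy outer function g (log-cosh), dimensions, and hyperparameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out = 8, 4              # input and pre-activation dimensions
lam = 1e-4                      # tiny regularization strength on W
lr, steps, batch = 0.05, 3000, 512

W = rng.normal(scale=0.5, size=(d_out, d_in))
b = rng.normal(scale=0.5, size=d_out)

def g_prime(z):
    # derivative of the smooth toy choice g(z) = sum_j log cosh(z_j)
    return np.tanh(z)

for t in range(steps):
    x = rng.normal(size=(batch, d_in))       # x ~ N(0, I): zero-mean inputs
    z = x @ W.T + b                          # pre-activations Wx + b
    gz = g_prime(z)                          # shape (batch, d_out)
    grad_W = gz.T @ x / batch + lam * W      # Monte Carlo gradient w.r.t. W
    grad_b = gz.mean(axis=0)                 # Monte Carlo gradient w.r.t. b
    W -= lr * grad_W
    b -= lr * grad_b
    if t % 500 == 0:                         # occasional isotropic noise, PGD-style
        W += rng.normal(scale=1e-3, size=W.shape)

print("final ||W||_F =", np.linalg.norm(W))  # expected to end up near zero
```

In this convex toy instance the global minimizer does have $W = 0$ (by Jensen's inequality, since $\log\cosh$ is convex and $x$ is zero-mean), so the printed norm should be close to zero; the lemma itself concerns SOSPs of generally non-convex objectives.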
Supplementary Material: zip
Primary Area: learning theory
Submission Number: 7257