Keywords: Generalization, Rademacher complexity, Compositionality, Feature Learning
TL;DR: We prove a generalization bound showing that DNNs can efficiently learn compositions of $F_1$/Sobolev functions, allowing us to quantify the gains from feature learning and symmetry learning.
Abstract: We show that deep neural networks (DNNs) can efficiently learn any
composition of functions with bounded $F_{1}$-norm, which allows
DNNs to break the curse of dimensionality in ways that shallow networks
cannot. More specifically, we derive a generalization bound that combines
a covering-number argument for compositionality with the $F_{1}$-norm
(or the related Barron norm) for large-width adaptivity. We show that
the global minimizer of the regularized loss of a DNN can fit, for example,
the composition of two functions $f^{*}=h\circ g$ from a small number
of observations, assuming $g$ is smooth/regular and reduces the dimensionality
(e.g. $g$ could be the modulo map of the symmetries of $f^{*}$),
so that $h$ can be learned despite its low regularity. The measure
of regularity we consider is the Sobolev norm with different levels
of differentiability, which is well adapted to the $F_{1}$-norm.
We compute scaling laws empirically, and observe phase transitions
depending on whether $g$ or $h$ is harder to learn, as predicted
by our theory.
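As a rough illustration of the setting described above (not the paper's code), the sketch below fits a compositional target $f^{*}=h\circ g$, where a smooth $g$ reduces the $d$-dimensional input to a low-dimensional feature and a less regular $h$ acts on that feature, using a weight-decay-regularized deep ReLU network, and records test error as the training-set size grows. The specific choices of $g$, $h$, architecture, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch, assuming a synthetic compositional target f*(x) = h(g(x)):
# measure how test error scales with the number of training samples when a
# deep ReLU network is trained with weight decay (one choice of regularizer).
import torch
import torch.nn as nn

torch.manual_seed(0)
d = 10  # ambient input dimension


def g(x):
    # smooth, dimension-reducing inner map: d inputs -> 2 features
    return torch.stack([x.sum(dim=1), (x ** 2).sum(dim=1)], dim=1)


def h(z):
    # less regular outer map acting only on the 2-dimensional features
    return torch.sin(3.0 * z[:, 0]) + torch.abs(z[:, 1])


def f_star(x):
    return h(g(x))


def make_data(n):
    x = torch.rand(n, d) * 2 - 1
    return x, f_star(x).unsqueeze(1)


def train_net(n, epochs=2000):
    x, y = make_data(n)
    net = nn.Sequential(  # depth lets the network represent h composed with g
        nn.Linear(d, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, 1),
    )
    opt = torch.optim.Adam(net.parameters(), lr=1e-3, weight_decay=1e-4)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(net(x), y).backward()
        opt.step()
    x_te, y_te = make_data(5000)
    with torch.no_grad():
        return loss_fn(net(x_te), y_te).item()


# Empirical scaling law: test error versus number of training samples.
for n in [100, 300, 1000, 3000]:
    print(n, train_net(n))
```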
Primary Area: Learning theory
Submission Number: 19443