Keywords: Deep learning theory, feature learning, sample complexity, scaling laws
TL;DR: We introduce a heuristic framework for scaling analysis that adopts variational methods from statistical field theory. It yields predictions of feature learning emergence in deep NNs, going beyond the state-of-the-art to hitherto intractable regimes.
Abstract: Two pressing topics in the theory of deep learning are the interpretation of feature learning mechanisms and the determination of the implicit bias of networks in the rich regime. Current theories of rich feature learning effects revolve around networks with one or two trainable layers or deep linear networks. Furthermore, even in such restricted settings, predictions often take the form of high-dimensional non-linear equations that require computationally intensive numerical solutions. Given the many details that go into defining a deep learning problem, this analytical complexity is a significant and often unavoidable challenge. Here, we propose a powerful heuristic route for predicting the data and width scales at which various patterns of feature learning emerge. This form of scale analysis is considerably simpler than such exact theories and reproduces the scaling exponents of various known results. In addition, we make novel predictions for complex toy architectures, such as three-layer non-linear networks, thus extending the scope of first-principles theories of deep learning.
Supplementary Material: pdf
Primary Area: learning theory
Submission Number: 18495