The Eigenlearning Framework: A Conservation Law Perspective on Kernel Ridge Regression and Wide Neural Networks

Published: 15 Jun 2023, Last Modified: 15 Jun 2023. Accepted by TMLR.
Abstract: We derive simple closed-form estimates for the test risk and other generalization metrics of kernel ridge regression (KRR). Relative to prior work, our derivations are greatly simplified and our final expressions are more readily interpreted. In particular, we show that KRR can be interpreted as an explicit competition among kernel eigenmodes for a fixed supply of a quantity we term "learnability." These improvements are enabled by a sharp conservation law which limits the ability of KRR to learn any orthonormal basis of functions. Test risk and other objects of interest are expressed transparently in terms of our conserved quantity evaluated in the kernel eigenbasis. We use our improved framework to: i) provide a theoretical explanation for the "deep bootstrap" of Nakkiran et al. (2020), ii) generalize a previous result regarding the hardness of the classic parity problem, iii) fashion a theoretical tool for the study of adversarial robustness, and iv) draw a tight analogy between KRR and a well-studied system in statistical physics.
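As a concrete illustration of the conservation law described in the abstract, here is a minimal numerical sketch (not code from the linked repository; the synthetic kernel and all names are illustrative assumptions). It builds a kernel with a known orthonormal eigenbasis, trains ridgeless KRR on each eigenfunction in turn, and checks that the per-mode learnabilities sum to the training-set size n:

```python
import numpy as np

rng = np.random.default_rng(0)

# Finite input domain of M points with uniform measure; n training points.
M, n = 200, 20

# Synthetic PSD kernel: random orthonormal eigenbasis, power-law eigenvalues.
Phi, _ = np.linalg.qr(rng.standard_normal((M, M)))  # columns = eigenfunctions
eigvals = 1.0 / np.arange(1, M + 1) ** 2
K = (Phi * eigvals) @ Phi.T                         # K = sum_i lam_i phi_i phi_i^T

# One random training set; ridgeless KRR (zero ridge saturates the budget).
idx = rng.choice(M, size=n, replace=False)
K_tt_inv = np.linalg.inv(K[np.ix_(idx, idx)])

# Learnability of eigenfunction f: L(f) = <f, f_hat> / <f, f>, where f_hat is
# the KRR predictor trained on f's values at the training points.
# Here <f, f> = 1 because the QR columns are orthonormal.
L = np.array([
    Phi[:, i] @ (K[:, idx] @ (K_tt_inv @ Phi[idx, i]))
    for i in range(M)
])

print(f"sum of learnabilities: {L.sum():.6f}  (number of samples n = {n})")
```

At zero ridge the printed sum equals n exactly for every draw of the training set; replacing the inverse with (K_tt + delta*I)^{-1} for a ridge delta > 0 makes the sum strictly smaller, consistent with the conservation law limiting the total learnability budget across any orthonormal basis.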
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Minor changes and deanonymization for the camera-ready revision.
Code: https://github.com/james-simon/eigenlearning
Assigned Action Editor: ~Andriy_Mnih1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 876