Empirical Limitations of the NTK for Understanding Scaling Laws in Deep Learning

Published: 10 Aug 2023, Last Modified: 10 Aug 2023. Accepted by TMLR.
Abstract: The "Neural Tangent Kernel" (NTK) (Jacot et al., 2018) and its empirical variants have been proposed as proxies that capture certain behaviors of real neural networks. In this work, we study NTKs through the lens of scaling laws and demonstrate that they fall short of explaining important aspects of neural network generalization. In particular, we demonstrate realistic settings in which finite-width neural networks have significantly better data scaling exponents than their corresponding empirical and infinite NTKs at initialization. This reveals a more fundamental difference between real networks and NTKs, beyond just a few percentage points of test accuracy. Further, we show that even if the empirical NTK is allowed to be pre-trained on a constant number of samples, the kernel scaling does not catch up to the neural network scaling. Finally, we show that the empirical NTK continues to evolve throughout most of training, in contrast with prior work, which suggests that it stabilizes after a few epochs of training. Altogether, our work establishes concrete limitations of the NTK approach for understanding scaling laws of real networks on natural datasets.
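The "empirical NTK" referenced in the abstract is the kernel obtained by taking inner products of per-example parameter gradients of a finite network. The sketch below (not the authors' code) illustrates this construction in JAX for an assumed toy MLP; the layer sizes, function names, and input shapes are illustrative choices only.

```python
# Minimal sketch of the empirical NTK: for a network f(params, x), the kernel
# entry for inputs x and x' is the inner product of parameter Jacobians,
#   Theta(x, x') = J(x) J(x')^T.
# The tiny tanh MLP below is an illustrative assumption, not the paper's model.
import jax
import jax.numpy as jnp

def init_params(key, sizes=(8, 16, 1)):
    """Random weights and biases for a small fully connected network."""
    params = []
    for d_in, d_out in zip(sizes[:-1], sizes[1:]):
        key, wkey = jax.random.split(key)
        params.append((jax.random.normal(wkey, (d_in, d_out)) / jnp.sqrt(d_in),
                       jnp.zeros(d_out)))
    return params

def forward(params, x):
    """Scalar-output MLP with tanh hidden layers."""
    for w, b in params[:-1]:
        x = jnp.tanh(x @ w + b)
    w, b = params[-1]
    return (x @ w + b).squeeze(-1)

def empirical_ntk(params, x1, x2):
    """Gram matrix of per-example parameter gradients: J(x1) J(x2)^T."""
    def flat_grad(x):
        grads = jax.grad(lambda p: forward(p, x[None, :]).sum())(params)
        return jnp.concatenate([g.ravel() for g in jax.tree_util.tree_leaves(grads)])
    j1 = jax.vmap(flat_grad)(x1)   # (n1, num_params)
    j2 = jax.vmap(flat_grad)(x2)   # (n2, num_params)
    return j1 @ j2.T               # (n1, n2) kernel matrix

key = jax.random.PRNGKey(0)
params = init_params(key)
x = jax.random.normal(key, (5, 8))
print(empirical_ntk(params, x, x).shape)  # (5, 5)
```

In a scaling-law study of the kind the abstract describes, kernel regression with such a matrix (or training the network itself) would be repeated at increasing sample sizes n, and the test losses fit to a power law L(n) ≈ a · n^(-α) so that the exponents α of the network and its NTK can be compared.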
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Jinwoo_Shin1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 981