Tight conditions for when the NTK approximation is valid

Published: 29 Nov 2023 · Last Modified: 29 Nov 2023 · Accepted by TMLR
Abstract: We study when the neural tangent kernel (NTK) approximation is valid for training a model with the square loss. In the lazy training setting of Chizat et al. (2019), we show that rescaling the model by a factor of $\alpha = O(T)$ suffices for the NTK approximation to be valid until training time $T$. Our bound is tight and improves on the previous bound of Chizat et al. (2019), which required a larger rescaling factor of $\alpha = O(T^2)$.
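For readers unfamiliar with the setting, the display below sketches the lazy-training dynamics and the NTK linearization being compared. This paraphrase is ours, not part of the abstract, and the notation ($f$, $w_0$, $y$) is illustrative: $f$ is the unrescaled model, $w_0$ its initialization, and $y$ the targets of the square loss. In the lazy-training setting of Chizat et al. (2019), the rescaled model $\alpha f(w)$ is trained by gradient flow on the square loss with the standard $1/\alpha^2$ time rescaling,
$$
\dot w(t) \;=\; -\frac{1}{\alpha}\,\nabla f(w(t))^{\top}\big(\alpha f(w(t)) - y\big), \qquad w(0) = w_0,
$$
and the NTK approximation replaces $f$ by its linearization at initialization,
$$
f_{\mathrm{lin}}(w) \;=\; f(w_0) + \nabla f(w_0)\,(w - w_0),
$$
evolved under the same dynamics. The approximation is "valid until training time $T$" when the outputs of the two dynamics stay uniformly close on $[0, T]$; the abstract's claim is that taking $\alpha = O(T)$ suffices for this, improving on the $\alpha = O(T^2)$ requirement of the earlier bound.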
Submission Length: Regular submission (no more than 12 pages of main content)
Supplementary Material: pdf
Assigned Action Editor: ~Murat_A_Erdogdu1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 1179