A note on regularised NTK dynamics with an application to PAC-Bayesian training

Published: 15 Apr 2024, Last Modified: 15 Apr 2024. Accepted by TMLR.
Abstract: We establish explicit dynamics for neural networks whose training objective includes a regularising term constraining the parameters to remain close to their initial values. This keeps the network in a lazy training regime, where the dynamics can be linearised around the initialisation. The standard neural tangent kernel (NTK) governs the evolution during training in the infinite-width limit, although the regularisation yields an additional term in the differential equation describing the dynamics. This setting provides an appropriate framework for studying the evolution of wide networks trained to optimise generalisation objectives such as PAC-Bayes bounds, and hence contributes to a deeper theoretical understanding of such networks.
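For concreteness, here is a minimal sketch of how such an additional term arises, assuming an \(\ell_2\) regulariser of strength \(\lambda\) pulling the parameters towards the initialisation \(\theta_0\) (the precise regulariser and constants used in the paper may differ). Gradient flow on \(\hat{L}(f_\theta) + \tfrac{\lambda}{2}\|\theta - \theta_0\|^2\) gives

\[ \dot{\theta}_t = -\nabla_\theta \hat{L}(f_{\theta_t}) - \lambda(\theta_t - \theta_0), \]

and linearising \(f_\theta \approx f_{\theta_0} + J_0(\theta - \theta_0)\) with \(J_0 = \nabla_\theta f_{\theta_0}\) yields

\[ \dot{f}_t = -\Theta_0 \,\nabla_f \hat{L}(f_t) - \lambda\,(f_t - f_{\theta_0}), \]

where \(\Theta_0 = J_0 J_0^\top\) is the empirical NTK at initialisation. The extra \(-\lambda(f_t - f_{\theta_0})\) term is absent from the standard, unregularised NTK dynamics.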
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Final version deanonymised
Assigned Action Editor: ~George_Papamakarios1
Submission Number: 1972