Faking Interpolation Until You Make It

Published: 24 Sept 2022, Last Modified: 28 Feb 2023
Accepted by TMLR
Abstract: Deep over-parameterized neural networks exhibit the interpolation property on many data sets: they can achieve approximately zero loss on all training samples simultaneously. This property has been exploited to develop optimisation algorithms for this setting, which use the fact that the optimal loss value is known to employ a variant of the Polyak step size computed on each stochastic batch of data. We introduce a novel extension of this idea to tasks where the interpolation property does not hold. Since we no longer have access to the optimal loss values a priori, we instead estimate them for each sample online. To realise this, we introduce a simple but highly effective heuristic for approximating the optimal value based on previous loss evaluations. We provide rigorous experiments on a range of problems, and our empirical analysis demonstrates the effectiveness of our approach, which outperforms other single-hyperparameter optimisation methods.
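The abstract describes a Polyak-style step size driven by an online, per-sample estimate of the optimal loss, but does not spell out the exact estimation heuristic. The sketch below is therefore only a minimal illustration of the general idea on a toy least-squares problem, not the authors' implementation: the maximal-step-size clipping follows the ALI-G style of update, and the per-sample estimation rule (decaying the running minimum loss seen for each sample) is a hypothetical stand-in for the paper's heuristic.

```python
# Minimal sketch (assumptions noted above): stochastic Polyak-style steps with
# per-sample estimates of the optimal loss, on a noisy least-squares problem
# where exact interpolation is not achievable.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.5 * rng.normal(size=n)   # label noise -> interpolation fails

w = np.zeros(d)
max_lr = 0.1                                 # maximal step size (clipping)
f_star = np.zeros(n)                         # per-sample optimal-loss estimates
min_loss = np.full(n, np.inf)                # running minimum loss per sample
decay = 0.9                                  # hypothetical shrink factor

for step in range(2000):
    i = rng.integers(n)                      # draw a "batch" of size one
    resid = X[i] @ w - y[i]
    loss = 0.5 * resid ** 2
    grad = resid * X[i]

    # Polyak-style step size using the estimated optimum for this sample,
    # clipped by the maximal learning rate.
    gap = max(loss - f_star[i], 0.0)
    lr = min(max_lr, gap / (grad @ grad + 1e-12))
    w -= lr * grad

    # Hypothetical online update of the optimal-loss estimate: track the
    # smallest loss observed for this sample and shrink it slightly.
    min_loss[i] = min(min_loss[i], loss)
    f_star[i] = decay * min_loss[i]

print("final mean loss:", np.mean(0.5 * (X @ w - y) ** 2))
```

If the true optima were known (the interpolation regime), one would simply fix `f_star` to those values and recover the standard stochastic Polyak step; the point of the paper is to replace that oracle with an online estimate when interpolation does not hold.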
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Camera-ready version. Corrected typos and fixed broken references.
Code: https://github.com/Alasdair-P/alig_plus
Assigned Action Editor: ~Laurent_Dinh1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 273