On the Interpretability of Regularisation for Neural Networks Through Model Gradient Similarity

Published: 31 Oct 2022, Last Modified: 15 Jan 2023 · NeurIPS 2022 Accept
Keywords: neural network, regularization, gradient descent, gradient similarity, noisy labels, generalization
TL;DR: A new framework for monitoring and regularising neural network training, providing insights into the mechanism of regularisation for a wide range of methods.
Abstract: Most complex machine learning and modelling techniques are prone to overfitting and may subsequently generalise poorly to future data. Artificial neural networks are no different in this regard and, despite having a level of implicit regularisation when trained with gradient descent, often require the aid of explicit regularisers. We introduce a new framework, Model Gradient Similarity (MGS), that (1) serves as a metric of regularisation, which can be used to monitor neural network training, (2) adds insight into how explicit regularisers, although derived from widely different principles, operate through the same underlying mechanism of increasing MGS, and (3) provides the basis for a new regularisation scheme which exhibits excellent performance, especially in challenging settings such as high levels of label noise or limited sample sizes.
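The abstract does not spell out how MGS is computed; one natural reading of "model gradient similarity" is a similarity score over per-sample gradients of the loss. The sketch below is a minimal illustration under that assumption, using a toy linear model with a squared loss so the per-sample gradients have a closed form; the function names (`per_sample_gradients`, `model_gradient_similarity`) and the choice of mean pairwise cosine similarity are hypothetical stand-ins, not the paper's definition.

```python
import numpy as np

def per_sample_gradients(w, X, y):
    # For loss 0.5*(x·w - y)^2 the gradient w.r.t. w is (x·w - y) * x.
    residuals = X @ w - y              # shape (n,)
    return residuals[:, None] * X      # shape (n, d): one gradient row per sample

def model_gradient_similarity(grads, eps=1e-12):
    # Hypothetical MGS proxy: mean pairwise cosine similarity between
    # per-sample gradient vectors (off-diagonal entries only).
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    unit = grads / np.maximum(norms, eps)
    sim = unit @ unit.T                # (n, n) cosine-similarity matrix
    n = sim.shape[0]
    return sim[~np.eye(n, dtype=bool)].mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))
w = rng.normal(size=3)
y_clean = X @ np.array([1.0, -2.0, 0.5])
y_noisy = y_clean + rng.normal(scale=5.0, size=8)  # heavy label noise

mgs_clean = model_gradient_similarity(per_sample_gradients(w, X, y_clean))
mgs_noisy = model_gradient_similarity(per_sample_gradients(w, X, y_noisy))
```

Under this reading, a monitoring tool would track such a score over the course of training: well-aligned per-sample gradients (high similarity) suggest the samples pull the parameters in a consistent direction, whereas noisy labels tend to scatter the gradients and lower the score.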
Supplementary Material: zip