Gradient Regularization Improves Accuracy of Discriminative Models

12 Feb 2018 (modified: 05 May 2023) · ICLR 2018 Workshop Submission
Abstract: Regularizing the gradient norm of a neural network's output with respect to its inputs is a powerful technique that has been rediscovered several times. This paper presents evidence that gradient regularization can consistently improve classification accuracy on vision tasks with modern deep neural networks, especially when the amount of training data is small. We introduce our regularizers as members of a broader class of Jacobian-based regularizers. We demonstrate empirically, on real and synthetic data, that the learning process keeps gradients controlled beyond the training points and yields solutions that generalize well.
Keywords: deep learning, supervised learning, classification, regularization, gradient penalty
TL;DR: Penalizing the gradients of a neural network with respect to the inputs improves classification accuracy, especially for small datasets.
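To make the idea concrete, here is a minimal sketch of one common variant of such a regularizer: a "double backpropagation" penalty on the squared norm of the loss gradient with respect to the inputs. The abstract describes a broader class of Jacobian-based regularizers, so this is only an illustrative instance, not the paper's exact formulation; the helper name `gradient_regularized_loss` and the penalty weight `lam` are assumptions for this sketch.

```python
import torch
import torch.nn.functional as F

def gradient_regularized_loss(model, x, y, lam=0.01):
    # Hypothetical helper: cross-entropy plus a gradient-norm penalty.
    # `lam` is an assumed regularization weight, to be tuned per task.
    x = x.clone().requires_grad_(True)
    logits = model(x)
    ce = F.cross_entropy(logits, y)
    # Gradient of the loss w.r.t. the inputs; create_graph=True keeps
    # this gradient in the autograd graph so the penalty itself can be
    # differentiated when training the model parameters.
    grads, = torch.autograd.grad(ce, x, create_graph=True)
    # Squared L2 norm per example (sum over all non-batch dimensions),
    # averaged over the batch.
    penalty = grads.pow(2).sum(dim=tuple(range(1, grads.dim()))).mean()
    return ce + lam * penalty
```

In a training loop, one would simply call `gradient_regularized_loss(model, x, y).backward()` in place of the plain cross-entropy loss; the extra backward pass through the input gradient roughly doubles the cost of each step.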