Robust Learning with Jacobian Regularization

25 Sept 2019 (modified: 22 Oct 2023) · ICLR 2020 Conference Blind Submission
Keywords: Supervised Representation Learning, Few-Shot Learning, Regularization, Adversarial Defense, Deep Learning
TL;DR: We analyze and develop a computationally efficient implementation of Jacobian regularization that increases the classification margins of neural networks.
Abstract: Design of reliable systems must guarantee stability against input perturbations. In machine learning, such a guarantee entails preventing overfitting and ensuring robustness of models against corruption of input data. In order to maximize stability, we analyze and develop a computationally efficient implementation of Jacobian regularization that increases the classification margins of neural networks. The stabilizing effect of the Jacobian regularizer leads to significant improvements in robustness, as measured against both random and adversarial input perturbations, without severely degrading generalization performance on clean data.
Code: https://www.dropbox.com/s/3t0l5vujtk1yzwq/jacobian.py?dl=0
Community Implementations: [3 code implementations](https://www.catalyzex.com/paper/arxiv:1908.02729/code)
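
For orientation before opening the linked jacobian.py, below is a minimal PyTorch sketch of the core idea: estimating the squared Frobenius norm of the input-output Jacobian with random projections, so that each estimate costs a single backward pass rather than one pass per class. The function name `jacobian_regularizer`, the projection count `n_proj`, the toy model, and the regularization weight are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def jacobian_regularizer(x, logits, n_proj=1):
    """Estimate ||J||_F^2, the squared Frobenius norm of the model's
    input-output Jacobian, averaged over the batch.

    Uses random projections: for v uniform on the unit sphere in R^C,
    C * E_v[||v^T J||^2] = ||J||_F^2, and each v^T J needs only one
    backward pass instead of the C passes an exact Jacobian would take.
    """
    B, C = logits.shape
    JF2 = logits.new_zeros(())
    for _ in range(n_proj):
        v = torch.randn(B, C, device=logits.device)
        v = v / v.norm(dim=1, keepdim=True)  # unit projection vectors
        (Jv,) = torch.autograd.grad(
            logits, x, grad_outputs=v,
            retain_graph=True, create_graph=True,  # keep graph for the main loss
        )
        JF2 = JF2 + C * Jv.pow(2).sum() / (n_proj * B)
    return JF2

# Illustrative usage on a toy model; the weight 0.01 is a placeholder,
# not a recommended value.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 10))
x = torch.randn(32, 1, 28, 28).requires_grad_(True)  # inputs must track gradients
y = torch.randint(0, 10, (32,))
logits = model(x)
loss = F.cross_entropy(logits, y) + 0.5 * 0.01 * jacobian_regularizer(x, logits)
loss.backward()
```

Penalizing the Jacobian norm flattens the model's response to small input perturbations around each training point, which is what drives the increased classification margins described in the abstract.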