Distribution Mismatch Correction for Improved Robustness in Deep Neural Networks

Published: 02 Dec 2021, Last Modified: 05 May 2023 · NeurIPS 2021 Workshop DistShift Poster
Keywords: robustness, distribution shift, deep neural networks, unsupervised domain adaptation, image corruption
TL;DR: This paper uses the 1-D Wasserstein distance to reduce the mismatch between the test-time activation distribution and a non-parametric target distribution derived from the training set, thereby improving the classification robustness of CNNs.
Abstract: Deep neural networks rely heavily on normalization methods to improve their performance and learning behavior. Although normalization methods spurred the development of increasingly deep and efficient architectures, they also increased the vulnerability to noise and input corruptions. In most applications, however, noise is ubiquitous and diverse; this can often lead to complete failure of machine learning systems, as they cannot cope with mismatches between the input distributions at training and test time. The most common normalization method, batch normalization, reduces the distribution shift during training but is agnostic to changes in the input distribution at test time. Sample-based normalization methods can correct linear transformations of the activation distribution but cannot mitigate changes in the distribution shape; this makes the network vulnerable to distribution changes that cannot be captured by the normalization parameters. We propose an unsupervised, non-parametric distribution correction method that adapts the activation distribution of each layer. This reduces the mismatch between the training- and test-time distributions by minimizing the 1-D Wasserstein distance. In our experiments, we empirically show that the proposed method effectively reduces the impact of intense image corruptions and thus improves classification performance without the need for retraining or fine-tuning the model. An extended version of this paper can be found at \url{https://arxiv.org/abs/2110.01955}.
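The core operation the abstract describes, mapping a 1-D activation distribution onto a non-parametric training-set target by minimizing the 1-D Wasserstein distance, corresponds to quantile (monotone optimal-transport) matching. The sketch below is only illustrative and is not the authors' implementation; the function name match_activation_distribution, the use of NumPy, and the per-channel application are assumptions.

```python
import numpy as np

def match_activation_distribution(test_acts, train_acts):
    """Map 1-D test-time activations onto the empirical training-set
    activation distribution via quantile matching, the monotone transport
    map that minimizes the 1-D Wasserstein distance to the target."""
    test_acts = np.asarray(test_acts, dtype=float)
    # Rank of each test activation within the test batch
    order = np.argsort(test_acts)
    ranks = np.empty_like(order)
    ranks[order] = np.arange(test_acts.size)
    # Empirical quantile level of each test activation
    levels = (ranks + 0.5) / test_acts.size
    # Replace each activation by the corresponding training-set quantile
    return np.quantile(train_acts, levels)

# Illustrative per-channel use on a (batch, channels) activation matrix;
# `acts` and `train_acts_per_channel` (reference activations collected on
# the training set) are hypothetical names.
# corrected = np.stack(
#     [match_activation_distribution(acts[:, c], train_acts_per_channel[c])
#      for c in range(acts.shape[1])], axis=1)
```

In a CNN this correction would be applied layer by layer at test time, using activation statistics recorded on the clean training set as the non-parametric target, so no retraining or fine-tuning of the model weights is required.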