Keywords: Machine Learning, Latent Space Perturbations, Semantic Perturbations, Adversarial Perturbations, Normalizing Flows, ICML
Abstract: Several methods from two separate lines of work, namely data augmentation (DA) and adversarial training, rely on perturbations performed in latent space. Often, these methods are either non-interpretable due to their non-invertibility or notoriously difficult to train due to their numerous hyperparameters. We exploit the exactly invertible encoder-decoder structure of normalizing flows to perform perturbations in the latent space. We demonstrate that these on-manifold perturbations match the performance of advanced DA techniques (reaching $96.6\%$ test accuracy on CIFAR-10 with ResNet-18) and outperform existing methods, particularly in low-data regimes, where they yield a $10$--$25\%$ relative improvement in test accuracy over classical training. We find that our latent adversarial perturbations, which adapt to the classifier throughout its training, are the most effective.
TL;DR: Invertibility of normalizing flows can be exploited to define latent space perturbations that yield helpful data augmentations for classifier training, particularly in low-data regimes.
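As a rough illustration of the idea described above, here is a minimal PyTorch sketch of latent-space augmentation through an invertible flow. The `flow.forward`/`flow.inverse` interface, the `eps` step size, and the gradient-based adversarial variant are illustrative assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def latent_perturb(flow, classifier, x, y, eps=0.1, adversarial=False):
    """Sketch: perturb a sample in the latent space of a normalizing flow.

    Assumes `flow.forward` maps data -> latent and `flow.inverse` maps
    latent -> data; real flow libraries name these methods differently.
    """
    z = flow.forward(x)  # encode to latent space
    if adversarial:
        # Adversarial direction: gradient of the classification loss
        # w.r.t. the latent code, decoded back through the flow.
        z = z.detach().requires_grad_(True)
        logits = classifier(flow.inverse(z))
        loss = F.cross_entropy(logits, y)
        grad, = torch.autograd.grad(loss, z)
        delta = eps * grad / (grad.norm() + 1e-12)
    else:
        # Random on-manifold perturbation in latent space.
        delta = eps * torch.randn_like(z)
    x_aug = flow.inverse(z + delta)  # decode back to data space
    return x_aug.detach()
```

In a training loop, `x_aug` would simply replace or supplement the original batch fed to the classifier, with `adversarial=True` making the perturbations adaptive to the classifier as it trains.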
Community Implementations: [8 code implementations](https://www.catalyzex.com/paper/semantic-perturbations-with-normalizing-flows/code)