Improving VAEs' Robustness to Adversarial Attack

Published: 12 Jan 2021, Last Modified: 05 May 2023 · ICLR 2021 Poster
Keywords: deep generative models, variational autoencoders, robustness, adversarial attack
Abstract: Variational autoencoders (VAEs) have recently been shown to be vulnerable to adversarial attacks, wherein they are fooled into reconstructing a chosen target image. However, how to defend against such attacks remains an open problem. We make significant advances in addressing this issue by introducing methods for producing adversarially robust VAEs. Namely, we first demonstrate that methods proposed to obtain disentangled latent representations produce VAEs that are more robust to these attacks. However, this robustness comes at the cost of reducing the quality of the reconstructions. We ameliorate this by applying disentangling methods to hierarchical VAEs. The resulting models produce high-fidelity autoencoders that are also adversarially robust. We confirm their capabilities on several different datasets and with current state-of-the-art VAE adversarial attacks, and also show that they increase the robustness of downstream tasks to attack.
One-sentence Summary: We show that regularisation methods first developed to obtain 'disentangled' VAEs increase the robustness of VAEs to adversarial attack; leveraging this insight we propose an even-more-robust hierarchical VAE.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Supplementary Material: zip
Data: [CelebA](https://paperswithcode.com/dataset/celeba), [Chairs](https://paperswithcode.com/dataset/chairs)
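
The abstract attributes the robustness gains to regularisers originally proposed for learning disentangled latent representations. As an illustration only (not the authors' code or their exact objective), below is a minimal sketch of a β-VAE-style objective, one representative of that family of regularisers; the weight `beta`, its example value, and the Bernoulli reconstruction likelihood are assumptions made for the example. Setting `beta = 1` recovers the standard VAE objective, while `beta > 1` is the disentangling regime referred to above.

```python
# Illustrative sketch only, not the authors' implementation.
import torch
import torch.nn.functional as F

def beta_elbo_loss(x, x_recon, mu, logvar, beta=4.0):
    """Negative beta-weighted ELBO with a diagonal Gaussian posterior q(z|x).

    beta = 1 recovers the standard VAE objective; beta > 1 up-weights the
    KL term in the style of disentangling regularisers (4.0 is an arbitrary
    example value, not a setting taken from the paper).
    """
    # Reconstruction term: Bernoulli likelihood assumed for illustration
    # (inputs and reconstructions expected to lie in [0, 1]).
    recon = F.binary_cross_entropy(x_recon, x, reduction="sum") / x.size(0)
    # Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ), averaged over the batch.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.size(0)
    return recon + beta * kl
```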