Are Generative Classifiers More Robust to Adversarial Attacks?

Yingzhen Li

Feb 12, 2018 (modified: Jun 04, 2018) ICLR 2018 Workshop Submission
  • Abstract: There is rising interest in studying the robustness of deep neural network classifiers against adversaries, with both advanced attack and defence techniques being actively developed. However, most recent work focuses on discriminative classifiers, which only model the conditional distribution of the labels given the inputs. In this abstract we propose a deep Bayes classifier that improves the classical naive Bayes with deep generative models, and verify its robustness against a number of existing attacks. Our initial results on MNIST suggest that deep Bayes classifiers might be more robust than deep discriminative classifiers. (The underlying Bayes decision rule is sketched after this list.)
  • Keywords: generative models, adversarial attacks, defences
  • TL;DR: We show initial evidence that generative classifiers (using conditional DGMs) might be more robust to recent attacks than DNN-based discriminative classifiers.
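Not part of the submission itself, the following is a minimal sketch of the decision rule a deep Bayes (generative) classifier uses: classify via p(y|x) ∝ p(x|y)p(y). It assumes per-class log-likelihood estimates log p(x|y) are already available, e.g. from importance-weighted bounds under a conditional deep generative model; the function name and the dummy likelihood numbers below are illustrative only.

```python
import numpy as np

def bayes_classifier_log_posterior(log_px_given_y, log_prior):
    """Log posterior log p(y|x) via Bayes rule, given per-class
    log-likelihood estimates log p(x|y) and a log prior log p(y)."""
    log_joint = log_px_given_y + log_prior                 # log p(x, y)
    log_evidence = np.logaddexp.reduce(log_joint, axis=-1,
                                       keepdims=True)      # log p(x)
    return log_joint - log_evidence                        # log p(y|x)

# Illustrative usage: 10 MNIST classes, uniform prior,
# dummy per-class likelihood estimates for one input x.
rng = np.random.default_rng(0)
log_px_given_y = rng.normal(-100.0, 5.0, size=(1, 10))    # stand-in for DGM bounds
log_prior = np.log(np.full(10, 0.1))
log_posterior = bayes_classifier_log_posterior(log_px_given_y, log_prior)
prediction = log_posterior.argmax(axis=-1)                # predicted class label
```

Working entirely in log space (via `logaddexp`) avoids numerical underflow, since image-level log-likelihoods are typically large negative numbers.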
