Are Generative Classifiers More Robust to Adversarial Attacks?

12 Feb 2018 (modified: 05 May 2023) | ICLR 2018 Workshop Submission
Abstract: There is rising interest in studying the robustness of deep neural network classifiers against adversaries, with both advanced attack and defence techniques being actively developed. However, most recent work focuses on discriminative classifiers, which model only the conditional distribution of the labels given the inputs. In this abstract we propose the deep Bayes classifier, which improves the classical naive Bayes classifier with deep generative models, and verify its robustness against a number of existing attacks. Our initial results on MNIST suggest that deep Bayes classifiers might be more robust than deep discriminative classifiers.
Keywords: generative models, adversarial attacks, defences
TL;DR: We show initial evidence that generative classifiers (using conditional DGMs) might be more robust to recent attacks than DNN-based discriminative classifiers.
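For intuition, here is a minimal sketch of the decision rule a generative (deep Bayes) classifier applies: predict argmax_y [log p(x|y) + log p(y)]. In the paper's setting the per-class log-likelihood would come from a conditional deep generative model (e.g. a conditional VAE bound); in this sketch a diagonal Gaussian stands in for that model so the example is self-contained, and all names below are illustrative rather than the authors' code.

```python
# Minimal sketch of generative ("deep Bayes") classification.
# Assumption: a per-class log-likelihood estimator log p(x|y) is available;
# a diagonal Gaussian density is used here as a stand-in for the
# conditional deep generative model described in the abstract.
import numpy as np

def gaussian_log_prob(x, mean, var):
    """log N(x; mean, diag(var)) for a single input vector x."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def predict(x, class_params, log_prior):
    """Bayes rule: argmax_y  log p(x|y) + log p(y)."""
    scores = np.array([
        gaussian_log_prob(x, mean, var) + lp
        for (mean, var), lp in zip(class_params, log_prior)
    ])
    return int(np.argmax(scores))

# Toy usage: two classes with different means, uniform class prior.
rng = np.random.default_rng(0)
class_params = [(np.zeros(4), np.ones(4)),        # class 0: mean 0
                (np.full(4, 3.0), np.ones(4))]    # class 1: mean 3
log_prior = np.log([0.5, 0.5])
x = rng.normal(loc=3.0, size=4)                   # drawn near class 1
print(predict(x, class_params, log_prior))        # expected: 1
```

The design point the sketch makes concrete is that the classifier scores an input by how well each class-conditional model explains it, rather than by a learned decision boundary; the hypothesis tested in the abstract is that this explanation-based scoring degrades more gracefully under adversarial perturbations.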