Abstract: Existing supervised-learning detectors may suffer degraded performance when detecting unseen adversarial examples (AEs), because they can overfit to their training samples. We found that (1) a CNN classifier is modestly robust against AEs generated from other CNNs, and (2) this adversarial robustness is rarely affected by unseen instances. We therefore construct an attack-agnostic detector based on an adversarially robust surrogate CNN to detect unknown AEs. Specifically, for a protected CNN classifier, we design a surrogate CNN classifier and flag an image as an AE when the two classifiers predict different labels for it. To detect transferable AEs while maintaining a low false-positive rate, the surrogate model is distilled from the protected model, with the aim of enhancing adversarial robustness (i.e., suppressing the transferability of AEs) while mimicking the protected model's outputs on clean images. To defend against potential ensemble attacks targeting our detector, we propose a new adversarial training scheme that enhances the detector's security. Experimental results of generalization-ability tests on CIFAR-10 and ImageNet-20 show that our method detects unseen AEs effectively and performs much better than state-of-the-art methods.
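The core detection rule described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the logit arrays and the `is_adversarial` helper are hypothetical stand-ins for the outputs of the protected CNN and its distilled, adversarially robust surrogate.

```python
import numpy as np

def is_adversarial(protected_logits, surrogate_logits):
    """Flag an input as an AE when the two classifiers disagree on the label."""
    return int(np.argmax(protected_logits) != np.argmax(surrogate_logits))

# Toy example: on a clean image the distilled surrogate mimics the protected
# model, so the predicted labels agree; a transferable AE fools the protected
# model but not the robust surrogate, so the labels disagree.
clean_protected = np.array([0.1, 0.7, 0.2])  # protected model: class 1
clean_surrogate = np.array([0.2, 0.6, 0.2])  # surrogate:       class 1 -> clean
ae_protected    = np.array([0.8, 0.1, 0.1])  # AE flips protected to class 0
ae_surrogate    = np.array([0.1, 0.7, 0.2])  # surrogate still:  class 1 -> AE

print(is_adversarial(clean_protected, clean_surrogate))  # 0 (agreement)
print(is_adversarial(ae_protected, ae_surrogate))        # 1 (disagreement)
```

The distillation objective matters here: if the surrogate did not closely mimic the protected model on clean inputs, benign disagreements would inflate the false-positive rate.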