Abstract: Adversarial examples have attracted significant attention in machine learning. The reproduced paper proposes that adversarial examples can be directly attributed to the presence of non-robust features: features that are highly predictive yet brittle. This challenge aims to reproduce the experiments reported in the paper and to modify model components in order to understand the proposed model's robustness. We first generated a new robust dataset including adversarial examples, and then reproduced a Residual Neural Network (ResNet) classifier baseline on the CIFAR-10 dataset. In addition, we varied several hyperparameters, such as the learning rate, the magnitude of the adversarial perturbation, and the normalization approach, to explore how these components affect classification accuracy. We also designed, implemented, and evaluated Visual Geometry Group (VGG), DenseNet, and InceptionV3 classifiers as extensions of the reproduced paper. The DenseNet classifier achieved the best accuracy in our experiments, 90.49%, so we recommend DenseNet for future CIFAR-10 classification.
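The epsilon-bounded adversarial perturbations mentioned above can be sketched with the classic fast gradient sign method (FGSM). This is a minimal illustrative example on a toy logistic-regression model, not the ResNet pipeline used in the reproduction; all function names and parameters here are our own assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, x, y):
    # Binary cross-entropy for a single example with weights w.
    p = sigmoid(np.dot(w, x))
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm(w, x, y, eps):
    # Gradient of the loss with respect to the INPUT x is (p - y) * w.
    grad_x = (sigmoid(np.dot(w, x)) - y) * w
    # Step in the loss-increasing direction, bounded by eps in the L-inf norm.
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=10)   # toy "model"
x = rng.normal(size=10)   # clean input
y = 1.0

x_adv = fgsm(w, x, y, eps=0.1)
print(loss(w, x, y) <= loss(w, x_adv, y))  # → True: the attack raises the loss
```

Varying `eps` here corresponds to the "magnitude of the adversarial perturbation" hyperparameter explored in the ablation: larger `eps` permits a larger worst-case loss increase per example.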
Track: Ablation
NeurIPS Paper Id: https://openreview.net/forum?id=BygcXErlIr