Reproducibility Report: Rethinking Softmax Cross-Entropy Loss for Adversarial Robustness

Jan 31, 2021 (edited Apr 01, 2021) · ML Reproducibility Challenge 2020 Blind Submission
  • Keywords: Adversarial Robustness, Max-Mahalanobis, Adversarial Attacks
  • Abstract: Reproducibility Summary
  Scope of Reproducibility: (Pang, 19) presented the Max-Mahalanobis center (MMC) loss and argued that models trained with it are adversarially more robust than those trained with softmax cross-entropy (SCE) loss. The authors argue that SCE loss conveys inappropriate supervisory signals to the model, leading to sparse sample density in the feature space. In this reproducibility challenge, we verify the claims that training with MMC loss produces adversarially robust models while achieving accuracy comparable to models trained with SCE loss.
  Methodology: We used the code from the repository provided by (Pang, 19) to implement our experiments and test their hypotheses. All experiments were run on an Nvidia GeForce RTX 2080 Ti, taking around 500 GPU hours in total. We used adaptive attacks to test the main claims of the paper. We also reimplemented the MMC loss and the optimal center generation algorithm in the PyTorch framework, which can help PyTorch practitioners facilitate further research.
  Results: We reproduced all the experiments of (Pang, 19) and found no significant differences from their results; all values were within 2% of those reported in the paper. We could also validate the hypotheses stated by the authors. We believe the paper gives a very good idea of what objectives other than SCE loss could look like.
  What was easy: Replicating the original results was easy because the code was publicly available, and implementing the MMC loss was also straightforward.
  What was hard: The paper is quite theoretical, and some parts of it were difficult to understand. Additionally, running adaptive attacks was tedious because the loss function in the cleverhans library had to be changed for every experiment that was run. In some places we also had to consult the documentation of a library to understand what was actually happening in the code.
  Communication with original authors: Some of our doubts regarding the theory and implementation details were clarified by the original authors via email and in the issues of their GitHub repository.
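Since the report describes reimplementing the MMC loss and the optimal center generation algorithm, the construction from Pang et al. can be sketched as follows. This is a minimal dependency-free sketch, not the authors' code: it builds L class centers on a hypersphere of radius C in d-dimensional feature space whose pairwise inner products are all -C²/(L-1) (a regular simplex), and the loss is the mean squared distance between a feature vector and its class center. Function names are ours.

```python
import math

def generate_opt_means(C, d, L):
    """Max-Mahalanobis center generation: L vectors in R^d of norm C
    forming a regular simplex (pairwise cosine similarity -1/(L-1)).
    Requires L <= d + 1."""
    assert L <= d + 1
    means = [[0.0] * d for _ in range(L)]
    means[0][0] = 1.0  # first center on the first axis
    for i in range(1, L):
        # Solve for coordinates so that <mu_i, mu_j> = -1/(L-1) for j < i.
        for j in range(i):
            dot = sum(means[i][k] * means[j][k] for k in range(d))
            means[i][j] = -(1.0 / (L - 1) + dot) / means[j][j]
        # Remaining coordinate makes mu_i a unit vector.
        norm_sq = sum(x * x for x in means[i])
        means[i][i] = math.sqrt(max(0.0, 1.0 - norm_sq))
    # Scale all unit centers to radius C.
    return [[C * x for x in row] for row in means]

def mmc_loss(features, labels, means):
    """MMC loss: mean squared L2 distance between each feature
    vector and the fixed center of its class."""
    total = 0.0
    for z, y in zip(features, labels):
        total += sum((zi - mi) ** 2 for zi, mi in zip(z, means[y]))
    return total / len(features)
```

For example, `generate_opt_means(1.0, 3, 3)` yields three unit vectors with pairwise inner products of -0.5, and `mmc_loss` is minimized when each feature lands exactly on its class center. In the actual training pipeline these operations would be done with PyTorch tensors so the loss is differentiable with respect to the features.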