Reproducibility study for "Explaining in Style: Training a GAN to explain a classifier in StyleSpace"

Published: 11 Apr 2022, Last Modified: 05 May 2023 · RC2021
Keywords: StyleGAN2, Classifier, Explainability, Counterfactual
TL;DR: This paper attempts to reproduce StylEx, a model for generating counterfactual explanations for a classifier.
Abstract:

Scope of Reproducibility
This work aims to reproduce Lang et al.'s StylEx [9], which proposes a novel approach to explaining how a classifier makes its decisions. The authors claim that StylEx creates post-hoc counterfactual explanations whose principal attributes correspond to properties that are intuitive to humans. The paper boasts a wide range of real-world applicability. However, StylEx proves difficult to reproduce due to its time complexity and gaps in the information provided. This paper tries to fill these gaps by: i) re-implementing StylEx in a different framework, and ii) creating a low-resource training benchmark.

Methodology
We use the authors' Python notebook to confirm their AttFind algorithm. However, to test the authors' claims, we reverse-engineer their architecture and completely re-implement their training algorithm. Due to the computational cost of training, we use their pre-trained weights to test our reconstruction. To expedite training, a smaller-resolution dataset is used. Training took 9 hours for 50,000 iterations on a Google Colab Nvidia K80 GPU. The hyperparameters are listed in the proceedings.

Results
We reproduce the StylEx model in a different framework and test the AttFind algorithm, verifying the original paper's results for the perceived-age classifier. However, we could not reproduce the results for the other classifiers used, due to time limitations in training and the absence of their pre-trained models. In addition, we verify the paper's claim of providing human-interpretable explanations by reproducing the two user studies outlined in the original paper.

What was easy
The notebook supplied by the authors loads their pre-trained models and reproduces part of the results in the paper. Furthermore, their algorithm for discovering classifier-related attributes, AttFind, is well outlined in their paper, making the notebook easy to follow. Lastly, the authors were responsive to our inquiries.

What was difficult
A major difficulty is that the authors provide only a single pre-trained model, so verifying most of the main claims requires training code. Moreover, the paper leaves out information about design choices and the experimental setup. In addition, the authors do not provide an implementation of the model's architecture or training. Finally, the resource requirements limit the practical audience.

Communication with original authors
We had modest communication with the original author, Oran Lang. Our discussion was limited to inquiries about design choices not mentioned in the paper. They were able to clarify the encoder architecture and some of their experimental setup. However, their training code could not be made available due to internal dependencies.
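Since AttFind is central to the reproduced results, a greatly simplified sketch of the idea may help orient readers. The snippet below is not the authors' implementation: `generator`, `classifier`, the style dimensionality, and the perturbation size are all hypothetical stand-ins so the sketch runs end to end, and the real AttFind, as described in the paper, operates on StyleGAN2's StyleSpace coordinates and selects attributes iteratively over images the classifier is uncertain about. The sketch collapses that iterative selection into a single ranking of coordinates by mean absolute effect on the classifier output.

```python
# Simplified AttFind-style search (a sketch, not the authors' code).
# Assumptions: `generator` maps a style vector to an "image" and
# `classifier` returns a class probability; both are toy stand-ins here.
import numpy as np

rng = np.random.default_rng(0)

STYLE_DIM = 32
W = rng.normal(size=STYLE_DIM)  # fake "classifier direction" for the toy model


def generator(style):
    # Identity stand-in for the StyleGAN2 generator.
    return style


def classifier(image):
    # Toy logistic classifier standing in for the trained classifier.
    return 1.0 / (1.0 + np.exp(-image @ W))


def attfind(styles, shift=2.0, top_k=5):
    """Rank style coordinates by how much perturbing each one changes
    the classifier's output, averaged over a batch of style vectors."""
    effects = np.zeros(STYLE_DIM)
    for s in styles:
        base = classifier(generator(s))
        for i in range(STYLE_DIM):
            s_shift = s.copy()
            s_shift[i] += shift  # perturb one StyleSpace coordinate
            effects[i] += abs(classifier(generator(s_shift)) - base)
    effects /= len(styles)
    return np.argsort(effects)[::-1][:top_k]  # indices of top attributes


styles = rng.normal(size=(16, STYLE_DIM))
print("top attribute coordinates:", attfind(styles))
```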
Paper Url: https://openaccess.thecvf.com/content/ICCV2021/papers/Lang_Explaining_in_Style_Training_a_GAN_To_Explain_a_Classifier_ICCV_2021_paper.pdf
Paper Venue: ICCV 2021