[Re] Generative causal explainers of black-box classifiers

31 Jan 2021 (modified: 05 May 2023) · ML Reproducibility Challenge 2020 Blind Submission · Readers: Everyone
Keywords: reproducibility
Abstract: Explainability is an important aspect of black-box classifiers that is often absent. Classifiers built for tasks such as object recognition and decision making often lack transparency, which allows vulnerabilities to be overlooked \cite{explain}. Without insight into the reasons behind a decision made by a neural model, potential security risks or classification mistakes can be missed \cite{explain}. Multiple solutions have been proposed to address this problem. One example is the method designed by O'Shaughnessy et al. \cite{generative-paper}. The authors design a learning framework that leverages a generative model and information-theoretic measures of causal influence. The objective function encourages both the generative model to faithfully represent the data distribution and the latent factors to have a large causal influence on the classifier output. In this study, the reproducibility of the method developed by O'Shaughnessy et al. is tested. Several of the original claims are challenged to assess the validity of the method, and the method is extended to test its generalizability. It was found that the claims are not as strong as the authors suggested and that the method does not generalize as easily as expected. However, for the task described in the original study, the method is fully reproducible, and thus a valid contribution to machine learning research.
Paper Url: https://openreview.net/forum?id=tdG6Fa3Y6hq
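
The objective described in the abstract can be sketched schematically as follows. This is a hedged reconstruction from the summary above, not the exact formulation of the report: the generator $g$ over latent factors $(\alpha, \beta)$, the black-box classifier $f$, the causal-influence term $\mathcal{C}$, the data-fidelity term $\mathcal{D}$, and the trade-off weight $\lambda$ are assumed notation.

\[
\max_{g} \;\; \mathcal{C}\big(\alpha \to f(g(\alpha, \beta))\big) \;+\; \lambda \, \mathcal{D}\big(g, p_{\mathrm{data}}\big)
\]

Here $\mathcal{C}$ measures the causal influence of the latent factors $\alpha$ on the classifier output (an information-theoretic quantity, per the abstract), while $\mathcal{D}$ rewards the generative model for faithfully representing the data distribution, with $\lambda$ balancing the two terms.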