[Re] On the Reproducibility of Post-Hoc Concept Bottleneck Models

Published: 08 May 2024, Last Modified: 08 May 2024
Accepted by TMLR
Abstract: To obtain state-of-the-art performance, many deep artificial intelligence models sacrifice human explainability in their decision-making. One proposed solution for achieving top performance while retaining explainability is the Post-Hoc Concept Bottleneck Model (PCBM) (Yuksekgonul et al., 2023), which can convert the embeddings of any deep neural network into a set of human-interpretable concept weights. In this work, we reproduce and expand upon the findings of Yuksekgonul et al. (2023). Our results show that while most of the authors' claims and results hold, some could not be sufficiently replicated. Specifically, the claims relating to PCBM performance preservation and its non-requirement of labeled concept datasets were generally reproduced, whereas the claim concerning its model-editing capabilities was not. Beyond these results, our contributions to their work include evidence that PCBMs may work for audio classification problems, verification of the interpretability of their methods, and updates to their code for missing implementations. The code for our implementations can be found at https://github.com/dgcnz/FACT.
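To make the PCBM idea summarized above concrete, the sketch below is a minimal, hedged illustration (not the authors' implementation, which is in the linked repository): frozen backbone embeddings are projected onto a bank of concept activation vectors (CAVs), and a sparse linear head is fit over the resulting concept scores. All names, shapes, and the random stand-in data are hypothetical.

```python
# Minimal PCBM-style sketch: project embeddings onto a concept bank,
# then fit a sparse, interpretable linear head. Requires scikit-learn >= 1.1.
import numpy as np
from sklearn.linear_model import SGDClassifier

def project_to_concepts(embeddings: np.ndarray, cavs: np.ndarray) -> np.ndarray:
    """Project backbone embeddings onto concept activation vectors (CAVs);
    each column of the result is one concept score per input."""
    # Normalize CAVs so each score is a scaled signed distance to the
    # concept hyperplane.
    cavs = cavs / np.linalg.norm(cavs, axis=1, keepdims=True)
    return embeddings @ cavs.T

# Hypothetical shapes: n inputs with d-dim embeddings, k concepts, 10 classes.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 512))   # stand-in for frozen backbone features
cavs = rng.normal(size=(170, 512))          # stand-in concept bank
labels = rng.integers(0, 10, size=1000)     # stand-in class labels

concept_scores = project_to_concepts(embeddings, cavs)

# Sparse linear head over concept scores: the elastic-net penalty drives
# most concept weights to zero, which is what keeps the model readable.
head = SGDClassifier(loss="log_loss", penalty="elasticnet",
                     alpha=1e-3, l1_ratio=0.99, max_iter=1000)
head.fit(concept_scores, labels)

# head.coef_[c] holds per-concept weights for class c; the largest-magnitude
# entries name the concepts that class's decision relies on.
```

In this reading, interpretability comes from the bottleneck itself: every class prediction is a sparse weighted sum of named concept scores, so inspecting (and editing) the weights in `head.coef_` is inspecting the model's decision rule.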
Submission Length: Regular submission (no more than 12 pages of main content)
Code: https://github.com/dgcnz/FACT
Assigned Action Editor: ~Sanghyuk_Chun1
Submission Number: 2212