Reproducibility study of “LICO: Explainable Models with Language-Image Consistency"

Published: 02 Jun 2024, Last Modified: 02 Jun 2024. Accepted by TMLR.
Abstract: The growing reproducibility crisis in machine learning has brought forward a need for careful examination of research findings. This paper investigates the claims made by Lei et al. (2023) regarding their proposed method, LICO, for enhancing post-hoc interpretability techniques and improving image classification performance. LICO leverages natural language supervision from a vision-language model to enrich feature representations and guide the learning process. We conduct a comprehensive reproducibility study, employing (Wide) ResNet architectures and established post-hoc interpretability methods such as Grad-CAM and RISE. We were mostly unable to reproduce the authors' results. In particular, we did not find that LICO consistently improved classification performance or quantitative and qualitative measures of interpretability. Our findings thus highlight the importance of rigorous evaluation and transparent reporting in interpretability research.
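The abstract names Grad-CAM as one of the post-hoc interpretability methods used in the study. As a point of reference, the following is a minimal, generic sketch of the Grad-CAM idea (weighting a convolutional feature map by the gradient of a class score); it is not the authors' code, and the tiny model here is a hypothetical stand-in for the (Wide) ResNets used in the paper.

```python
# Generic Grad-CAM sketch (illustrative only; TinyCNN is a hypothetical
# stand-in for the (Wide) ResNet backbones used in the reproduction).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCNN(nn.Module):
    """Toy classifier with a convolutional feature extractor."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, x):
        fmap = self.features(x)          # (B, 16, H, W) feature maps
        pooled = fmap.mean(dim=(2, 3))   # global average pooling
        return self.classifier(pooled), fmap

def grad_cam(model, x, target_class):
    """Weight each feature map channel by the mean gradient of the
    target class score with respect to it, then ReLU and normalize."""
    logits, fmap = model(x)
    fmap.retain_grad()                   # keep gradients on a non-leaf tensor
    logits[0, target_class].backward()
    weights = fmap.grad.mean(dim=(2, 3), keepdim=True)  # channel importances
    cam = F.relu((weights * fmap).sum(dim=1))           # (B, H, W) saliency
    return (cam / (cam.max() + 1e-8)).detach()          # scale to [0, 1]

model = TinyCNN()
x = torch.randn(1, 3, 32, 32)
cam = grad_cam(model, x, target_class=3)
print(cam.shape)  # saliency map with the spatial size of the feature maps
```

In practice the saliency map is upsampled to the input resolution and overlaid on the image; the reproduction evaluates such maps with quantitative faithfulness metrics rather than visual inspection alone.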
Certifications: Reproducibility Certification
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Changes in the camera-ready version: added author names, and fixed some final grammatical and spelling mistakes.
Code: https://github.com/robertdvdk/lico-fact
Assigned Action Editor: ~Pascal_Poupart2
Submission Number: 2248