Abstract: This report aims to verify the findings and expand upon the evaluation and training methods from the paper LICO: Explainable Models with Language-Image COnsistency. The main claims from the original paper are that LICO (i) enhances interpretability by producing more explainable saliency maps in conjunction with a post-hoc explainability method and (ii) improves image classification performance without computational overhead during inference. We have reproduced the key experiments conducted by Lei et al.; however, the obtained results do not support the original claims. Additionally, we identify a limitation in the paper’s evaluation method, which favors non-robust models, and propose robust experimental setups for more comprehensive quantitative analysis. Furthermore, we undertake additional studies on LICO’s training methodology to enhance its interpretability. Our code is available at https://github.com/konradszewczyk/lico-reproduction.
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: We have made the minor revisions requested by the Action Editor. In particular:
- we have softened the wording in the abstract, avoiding phrases such as "vulnerability of experimental setup" when referring to the original work.
- we have reduced the accusatory tone in the parts of Section 3.4 "Experimental Setup and Code" concerning the available codebase.
- we have rephrased parts of Section 5 "Discussion" and Section 5.3 "Communication with authors".
Code: https://github.com/konradszewczyk/lico-reproduction
Assigned Action Editor: ~Krzysztof_Jerzy_Geras1
Submission Number: 2247