Reproducibility Study of "Language-Image COnsistency"

TMLR Paper2247 Authors

16 Feb 2024 (modified: 07 Apr 2024) · Under review for TMLR
Abstract: This report aims to verify the findings and expand upon the evaluation and training methods from the paper LICO: Explainable Models with Language-Image COnsistency. The main claims are that LICO (i) enhances interpretability by producing more explainable saliency maps in conjunction with a post-hoc explainability method and (ii) improves image classification performance without computational overhead during inference. We have reproduced the key experiments conducted by Lei et al.; however, the obtained results do not support the original claims. Additionally, we identify a vulnerability in the paper’s main evaluation method that favors non-robust models, and propose robust experimental setups for quantitative analysis. Furthermore, we undertake additional studies on LICO’s training methodology to enhance its interpretability. Our code is available at https://anonymous.4open.science/r/lico-reproduction-7FEB.
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Krzysztof_Jerzy_Geras1
Submission Number: 2247