MI-Grad-CAM: Letting Your Model Reveal What’s Most Informative

ICLR 2026 Conference Submission 25318 Authors

20 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Mutual Information
Abstract: As machine vision takes on a growing role in critical applications such as healthcare, precise and interpretable decision-making is crucial. Class Activation Mapping (CAM) is widely used for visual explanations in computer vision, but improving its interpretability remains an open research area. In this work, we introduce MI-Grad-CAM, a novel post-hoc visual explanation method that provides clearer insight into how CNNs reach their predictions by prioritizing causality over mere correlation. MI-Grad-CAM generates class-specific visualizations by weighting each feature map with the normalized mutual information between it and the input image, combined with the gradient of the predicted class score with respect to that feature map. This approach strengthens the causal link between explanations and model predictions, which we verify through counterfactual analysis. We also propose the Harmonized Confidence Index (HCI), a new evaluation metric for measuring explanation effectiveness. Our method performs robustly in both qualitative and quantitative evaluations, achieving competitive or superior results relative to state-of-the-art methods, particularly in explanation faithfulness and model reliability.
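The abstract does not give the exact formulation, but the weighting scheme it describes can be illustrated concretely. The following minimal numpy sketch shows one plausible reading: Grad-CAM's gradient-pooled channel weights are modulated by the normalized mutual information (NMI) between each feature map and a downsampled input. The function names, the histogram-based NMI estimator, and the product combination rule are assumptions for illustration, not the paper's definitions.

```python
import numpy as np

def normalized_mutual_information(x, y, bins=32):
    """Histogram-based NMI between two signals: 2*I(X;Y)/(H(X)+H(Y)), in [0, 1].

    This estimator is an assumption; the paper may use a different one.
    """
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    # Entropies, with the convention 0 * log(0) = 0.
    hxy = -np.sum(pxy[pxy > 0] * np.log(pxy[pxy > 0]))
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    mi = hx + hy - hxy
    return 2.0 * mi / (hx + hy + 1e-12)

def mi_grad_cam(image_gray, feature_maps, gradients):
    """Hypothetical MI-Grad-CAM sketch.

    image_gray:   (H, W) grayscale input image
    feature_maps: (K, h, w) activations of the target conv layer
    gradients:    (K, h, w) d(class score)/d(feature maps)
    """
    K, h, w = feature_maps.shape
    H, W = image_gray.shape
    # Downsample the input to feature-map resolution by block averaging
    # (assumes H, W are multiples of h, w; a real implementation would interpolate).
    img_small = image_gray.reshape(h, H // h, w, W // w).mean(axis=(1, 3))

    # Standard Grad-CAM weights: global-average-pooled gradients per channel.
    grad_w = gradients.mean(axis=(1, 2))
    # MI weights: NMI between the input and each feature map.
    mi_w = np.array([normalized_mutual_information(img_small, feature_maps[k])
                     for k in range(K)])

    # Combine the two weightings (a simple product here; the paper's exact rule may differ),
    # then form the usual ReLU-rectified weighted sum of feature maps.
    weights = grad_w * mi_w
    cam = np.maximum((weights[:, None, None] * feature_maps).sum(axis=0), 0.0)
    if cam.max() > 0:
        cam /= cam.max()
    return cam  # upsample to (H, W) for overlay in practice
```

In a PyTorch workflow, `feature_maps` and `gradients` would typically be captured with forward/backward hooks on the last convolutional layer, as in standard Grad-CAM implementations.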
Primary Area: interpretability and explainable AI
Submission Number: 25318