Evaluation and analysis of visual methods for CNN explainability: a novel approach and experimental study

Published: 01 Jan 2025, Last Modified: 03 Oct 2025. Neural Comput. Appl. 2025. License: CC BY-SA 4.0
Abstract: The rapid development and widespread adoption of deep neural networks (DNNs) are attributed to their remarkable performance and practical efficacy across numerous applications. Nevertheless, the sophistication of the inference process often comes at the expense of explainability, which is an increasingly critical concern. Explainable AI endeavors to shed light on the inner workings of neural networks, thereby enhancing trust in their predictive capabilities and facilitating more reliable and informed decision-making. Visual explanations of DNN behavior contribute to understanding how these models function. However, selecting the right visualization method for a particular task and evaluating its performance remain challenging. This paper offers a comprehensive analysis of post hoc methods for generating visual explanations, concentrating on class activation mapping (CAM) techniques. It introduces two novel approaches: high-resolution relevance propagation CAM (HiResRP-CAM) and EigenLayer-CAM. These methods are evaluated both qualitatively and quantitatively across various experimental setups within the context of convolutional neural networks (CNNs) for image classification tasks. The evaluation encompasses diverse network architectures, datasets, and metrics, including several proposed enhancements to those metrics. The paper discusses the experimental results, including instances of model failure due to adversarial noise, and explores factors that deepen understanding of model behavior, the characteristics of explainability methods, and current strategies for assessing explanation quality. In this way, we provide a perspective on the challenges and limitations inherent in interpreting CNNs and on the aspects relevant to choosing a specific method.
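To make the family of CAM techniques discussed in the abstract concrete, the following is a minimal sketch of a standard Grad-CAM-style saliency map for a CNN classifier. It is illustrative only: it does not implement the paper's HiResRP-CAM or EigenLayer-CAM methods (whose details are not given here), and the choice of model (`resnet18`) and target layer (`layer4[-1]`) are assumptions for the example.

```python
# Illustrative Grad-CAM-style sketch (NOT the paper's HiResRP-CAM or
# EigenLayer-CAM). Model and target layer are assumed for demonstration.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
target_layer = model.layer4[-1]  # last conv block (assumed choice)

activations, gradients = {}, {}

def fwd_hook(_, __, output):
    activations["value"] = output.detach()

def bwd_hook(_, grad_input, grad_output):
    gradients["value"] = grad_output[0].detach()

target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

def grad_cam(image, class_idx=None):
    """Return an [H, W] saliency map in [0, 1] for one preprocessed image tensor."""
    logits = model(image.unsqueeze(0))
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()

    # Global-average-pool the gradients to get per-channel weights, then
    # take a weighted sum of the activation maps and apply ReLU.
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear",
                        align_corners=False).squeeze()
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

# Usage (assumed 224x224 normalized input tensor):
# heatmap = grad_cam(preprocessed_image)  # overlay on the image to inspect
```

The map is upsampled to the input resolution for visualization; higher-resolution or layer-aggregating variants such as those studied in the paper modify how relevance is propagated and combined across layers.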