A Biologically Inspired Filter Significance Assessment Method for Model Explanation

Published: 2025 · Last Modified: 28 Feb 2026 · License: CC BY-SA 4.0
Abstract: The interpretability of deep learning models remains a significant challenge, particularly in convolutional neural networks (CNNs), where understanding the contributions of individual filters is crucial for explainability. In this work, we propose a biologically inspired filter significance assessment method based on Steady-State Visually Evoked Potentials (SSVEPs), a well-established neuroscience principle. Our approach leverages frequency tagging techniques to quantify the importance of convolutional filters by analyzing their frequency-locked responses to periodic contrast modulations in input images. By integrating SSVEP-based filter selection into Class Activation Mapping (CAM) frameworks such as Grad-CAM, Grad-CAM++, EigenCAM, and LayerCAM, we enhance model interpretability while reducing attribution noise. Experimental evaluations on ImageNet using VGG-16, ResNet-50, and ResNeXt-50 demonstrate that SSVEP-enhanced CAM methods improve spatial focus in visual explanations, yielding higher energy concentration while maintaining competitive localization accuracy. These findings suggest that our biologically inspired approach offers a robust mechanism for identifying key filters in CNNs, paving the way for more interpretable and transparent deep learning models.
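To make the frequency-tagging idea concrete, the sketch below illustrates the general principle (not the authors' exact implementation): the contrast of an input image is modulated sinusoidally at a known tagging frequency, a filter's response is recorded over time, and the response's modulation depth at that frequency serves as an importance score. The image, kernels, score definition, and all parameter names here are illustrative assumptions; a contrast-sensitive (zero-sum) filter should lock to the tagging frequency far more strongly than a brightness-averaging one.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(img, kernel):
    """Naive 'valid' 2-D cross-correlation, adequate for small toy inputs."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def ssvep_filter_score(img, kernel, f_tag=4, n_steps=64, depth=0.5):
    """Frequency-tag the input's contrast and measure how strongly the
    filter's mean absolute response locks to the tagging frequency.

    f_tag is in cycles per sequence of n_steps frames.  The score is the
    relative modulation depth of the response at f_tag (illustrative
    metric, not necessarily the one used in the paper)."""
    mean = img.mean()
    responses = np.empty(n_steps)
    for t in range(n_steps):
        # Periodic contrast modulation around the image's mean luminance.
        c = 1.0 + depth * np.sin(2 * np.pi * f_tag * t / n_steps)
        frame = mean + c * (img - mean)
        responses[t] = np.abs(conv2d_valid(frame, kernel)).mean()
    # Amplitude of the response at the tagging frequency, via the DFT:
    # for x[t] = A*sin(2*pi*k*t/N), |rfft(x)[k]| = A*N/2.
    amp = 2.0 * np.abs(np.fft.rfft(responses)[f_tag]) / n_steps
    return amp / responses.mean()

img = rng.random((16, 16))
edge_kernel = np.array([[1.0, 0.0, -1.0]] * 3)  # zero-sum: contrast-sensitive
dc_kernel = np.ones((3, 3)) / 9.0               # averaging: brightness-sensitive

print(ssvep_filter_score(img, edge_kernel))  # ~0.5: fully frequency-locked
print(ssvep_filter_score(img, dc_kernel))    # near 0: barely modulated
```

The zero-sum edge filter's response scales linearly with contrast, so its modulation depth recovers the injected depth (0.5), while the averaging filter mostly tracks mean brightness, which the modulation leaves unchanged; thresholding or ranking such scores is one plausible way to select filters before feeding them to a CAM method.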