Abstract: Over the past decade, deep learning has shown remarkable performance in a variety of AI fields. However, the complex and massive parameter sets of deep models often label them as 'black box' models, raising concerns about their interpretability, and this issue reduces the reliability and usability of deep learning in the real world. In response, there is a growing emphasis on developing 'interpretable' or 'explainable' AI models. In this paper, we propose a novel methodology for refining the clarity of saliency maps derived from Layer-wise Relevance Propagation (LRP) by eliminating noise through a segmentation technique based on the Gaussian Mixture Model (GMM). To demonstrate the effectiveness of the proposed method, we conduct an experimental evaluation comparing the results of our method with those of the original LRP.
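The core idea of GMM-based denoising of a saliency map can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: it assumes a two-component scikit-learn `GaussianMixture` fit on the per-pixel relevance values, where the component with the higher mean is treated as genuine relevance and all other pixels are zeroed out. The function name `denoise_saliency` and the component count are illustrative choices.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def denoise_saliency(saliency, n_components=2, seed=0):
    """Fit a GMM to the relevance values of a saliency map and keep only
    pixels assigned to the highest-mean component (assumed to be signal).

    This is a sketch of GMM-based segmentation, not the paper's exact method.
    """
    vals = saliency.reshape(-1, 1)  # GMM expects a 2-D (n_samples, n_features) array
    gmm = GaussianMixture(n_components=n_components, random_state=seed).fit(vals)
    labels = gmm.predict(vals).reshape(saliency.shape)
    signal = int(np.argmax(gmm.means_.ravel()))  # component with largest mean relevance
    return np.where(labels == signal, saliency, 0.0)

# Toy example: low-relevance noisy background plus one highly relevant patch.
rng = np.random.default_rng(0)
smap = rng.normal(0.05, 0.02, size=(8, 8))
smap[2:5, 2:5] += 1.0
cleaned = denoise_saliency(smap)
```

In this toy run, the background pixels fall into the low-mean component and are suppressed, while the bright patch survives, which is the qualitative effect the paper aims for on LRP saliency maps.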