FAM: Visual Explanations for the Feature Representations from Deep Convolutional Networks

CVPR 2022
Abstract: In recent years, increasing attention has been drawn to the internal mechanisms of representation models. Traditional methods are inapplicable for fully explaining feature representations, especially when the images do not fit into any category. In this case, neither employing an existing class nor measuring the similarity with other images can provide a complete and reliable visual explanation. To handle this task, we propose a novel visual explanation paradigm called Feature Activation Mapping (FAM) in this paper. Under this paradigm, Grad-FAM and Score-FAM are designed for visualizing feature representations. Unlike previous approaches, FAM locates the regions of images that contribute most to the feature vector itself. Extensive experiments and evaluations, both subjective and objective, show that Score-FAM provides the most promising interpretable visual explanations for feature representations in Person Re-Identification. Furthermore, FAM can also be employed to analyze other vision tasks, such as self-supervised representation learning.
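The abstract does not spell out the algorithm, but the name Grad-FAM suggests a Grad-CAM-style recipe whose scalar target is the feature vector itself rather than a class score. The PyTorch sketch below illustrates one plausible reading of that idea: it weights the last convolutional activation maps by the gradients of the squared L2 norm of the embedding. The backbone choice (ResNet-50), the hooked layer, and the squared-norm target are all assumptions for illustration, not the paper's published formulation.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Hypothetical Grad-FAM sketch (assumed Grad-CAM-style recipe; the paper's
# exact formulation is not given in the abstract). The scalar target is the
# squared L2 norm of the feature embedding instead of a class score.

cnn = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
backbone = torch.nn.Sequential(*list(cnn.children())[:-1])  # drop the fc head

acts, grads = {}, {}
cnn.layer4.register_forward_hook(lambda m, i, o: acts.update(v=o))
cnn.layer4.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

def grad_fam(image):
    """image: (1, 3, H, W) normalized tensor -> (1, 1, H, W) saliency map."""
    feat = backbone(image).flatten(1)       # (1, 2048) feature vector
    (feat ** 2).sum().backward()            # attribute the vector to itself

    A = acts["v"]                           # (1, C, h, w) activation maps
    w = grads["v"].mean(dim=(2, 3), keepdim=True)   # per-channel weights
    cam = F.relu((w * A).sum(dim=1, keepdim=True))  # weighted sum + ReLU
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear",
                        align_corners=False)
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
```

By analogy, Score-FAM would presumably replace the gradient-derived channel weights with scores obtained from forward passes on activation-masked inputs, following the Score-CAM idea; this too is an inference from the naming rather than the published method.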