Semantics for Global and Local Interpretation of Deep Convolutional Neural Networks

IJCNN 2021 (modified: 15 Nov 2022)
Abstract: A large number of saliency methods have been proposed to explain individual decisions of deep convolutional neural networks (DCNNs). They work by identifying the relevance of each input feature to the predicted output class. However, the feature representations in hidden layers remain difficult to interpret semantically. In this work, human-interpretable semantic concepts are associated with vectors in feature space. The association process is formulated as an optimization problem, and the semantic vectors obtained from its optimal solution are used to interpret deep neural networks both globally and locally. Global interpretations help characterize the knowledge learned by a DCNN, while local interpretations shed light on its individual decisions. Empirical experiments demonstrate how the identified semantics can be used to interpret existing DCNNs.
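The abstract's core idea, associating a semantic concept with a vector in a hidden layer's feature space, can be illustrated with a minimal sketch. The paper's actual optimization objective is not reproduced on this page, so the sketch below substitutes a simple logistic-regression probe over hidden activations (in the spirit of concept-activation-vector methods); all function names, hyperparameters, and the toy data are illustrative assumptions, not the authors' method.

```python
import numpy as np

def fit_concept_vector(acts_pos, acts_neg, lr=0.1, steps=500):
    """Fit a unit vector in feature space that separates activations of
    concept examples (acts_pos) from counterexamples (acts_neg).
    Sketch only: a logistic-regression probe stands in for the paper's
    optimization problem, whose exact form is not given in the abstract."""
    X = np.vstack([acts_pos, acts_neg])
    y = np.concatenate([np.ones(len(acts_pos)), np.zeros(len(acts_neg))])
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid probabilities
        w -= lr * (X.T @ (p - y)) / len(y)       # gradient step on weights
        b -= lr * np.mean(p - y)                 # gradient step on bias
    return w / np.linalg.norm(w)                 # semantic direction

def concept_score(activation, concept_vec):
    """Cosine alignment of one sample's activation with the concept
    direction: a per-decision (local) interpretation signal."""
    return activation @ concept_vec / (np.linalg.norm(activation) + 1e-12)

# Toy demo: activations for a hypothetical 'striped' concept cluster
# along one feature axis; counterexamples cluster on the opposite side.
rng = np.random.default_rng(0)
pos = rng.normal(loc=[2.0, 0.0, 0.0], scale=0.5, size=(50, 3))
neg = rng.normal(loc=[-2.0, 0.0, 0.0], scale=0.5, size=(50, 3))
v = fit_concept_vector(pos, neg)
print(concept_score(np.array([3.0, 0.1, 0.0]), v))
```

A global interpretation would compare such concept directions against a layer's learned representations across many inputs, while the `concept_score` of a single activation supports the local, per-decision reading described in the abstract.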
