- Abstract: We introduce interpretable components into a Deep Neural Network (DNN) to explain its decision mechanism. Instead of inferring explanations from an already-trained neural network, we design an interpretable neural network architecture before training. Weight values in the first layers of a Convolutional Neural Network (CNN) and a ResNet-50 are replaced by well-known predefined kernels such as sharpening, embossing, and color filters. Each filter's relative importance is measured with a variant of the saliency map and of Layer-wise Relevance Propagation (LRP), proposed by Simonyan et al. and Bach et al., respectively. We suggest that images processed by predefined kernels still contain enough information for DNNs to extract features without degrading performance on the MNIST and ImageNet datasets. Our model based on the ResNet-50 achieves 92.1% top-5 and 74.6% top-1 accuracy on the ImageNet dataset. At the same time, our model provides three different tools to explain both individual classifications and the overall properties of a given class: relative importance scores with respect to (1) each color, (2) each filter, and (3) each pixel of the image.
- Keywords: Interpretability, DNN, Hybrid Networks
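
The core architectural idea of the abstract, replacing learned first-layer convolution weights with fixed, well-known kernels, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the kernel names and values (identity, sharpen, emboss) are common textbook choices assumed here for demonstration, and the convolution is a plain NumPy valid-mode cross-correlation rather than a full CNN layer.

```python
import numpy as np

# Hypothetical bank of predefined 3x3 kernels standing in for the learned
# first-layer weights (illustrative values, not the paper's exact filter set).
KERNELS = {
    "identity": np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]], dtype=float),
    "sharpen":  np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=float),
    "emboss":   np.array([[-2, -1, 0], [-1, 1, 1], [0, 1, 2]], dtype=float),
}

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation of a single-channel image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def fixed_first_layer(image):
    """Apply every predefined kernel and stack the resulting feature maps.

    In the paper's setting these maps would feed the remaining (trainable)
    layers of the CNN / ResNet-50; the kernels themselves are never updated.
    """
    return np.stack([conv2d(image, k) for k in KERNELS.values()])

img = np.arange(25, dtype=float).reshape(5, 5)
feats = fixed_first_layer(img)
print(feats.shape)  # 3 kernels, each producing a 3x3 valid-mode map
```

Because these first-layer weights are fixed and human-interpretable, the relative importance of each filter (and of each color channel or pixel) can then be attributed with saliency-map or LRP-style methods, as the abstract describes.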