- Abstract: In recent years, researchers have been working on interpreting the internals of deep networks in pursuit of overcoming their opaqueness and the so-called ‘black-box’ label attached to them. In this work, we present a new visual interpretation technique that identifies the discriminative image locations contributing most to the network’s prediction. We select the most contributing set of neurons per layer and engineer the forward-pass operation to gradually trace back to the important locations of the input image. We explore the connectivity structure of each neuron, combining support from the succeeding and preceding layers with its evidence from the current layer to advocate for the neuron’s importance. While carrying out this operation, we also assign priorities to the supports from neighboring layers, which, in practice, provides a reliable way of selecting the discriminative set of neurons for the target layer. We conduct both objective and subjective evaluations to examine the performance of our method in terms of the model’s faithfulness and human trust, and we demonstrate its efficacy over other existing methods.
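As a toy illustration of the per-layer selection sketched in the abstract, the snippet below scores neurons by combining their own evidence with prioritized support from the preceding and succeeding layers, then keeps the top-k. The weighting scheme, function names, and numbers here are assumptions for illustration, not the paper's actual formulation:

```python
import numpy as np

def neuron_importance(evidence, prev_support, next_support,
                      w_evidence=0.5, w_prev=0.25, w_next=0.25):
    """Combine a neuron's own evidence with (hypothetical) priority
    weights on the support from neighboring layers."""
    return (w_evidence * evidence
            + w_prev * prev_support
            + w_next * next_support)

def select_top_neurons(scores, k):
    """Return indices of the k highest-scoring neurons in a layer."""
    return np.argsort(scores)[::-1][:k]

# Toy example: per-neuron scores for a layer with 6 neurons.
evidence = np.array([0.1, 0.9, 0.3, 0.7, 0.2, 0.5])
prev     = np.array([0.2, 0.8, 0.1, 0.9, 0.3, 0.4])
nxt      = np.array([0.3, 0.7, 0.2, 0.6, 0.1, 0.5])

scores = neuron_importance(evidence, prev, nxt)
top = select_top_neurons(scores, k=2)
print(top.tolist())  # -> [1, 3]: the two most important neurons
```

Repeating this selection layer by layer, from the output back toward the input, yields the set of neurons whose receptive fields mark the discriminative image locations.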