TL;DR: Towards Interpreting Deep Neural Networks via Understanding Layer Behaviors
Abstract: Deep neural networks (DNNs) have achieved unprecedented practical success in many applications.
However, how to interpret DNNs is still an open problem.
In particular, how hidden layers behave is not clearly understood.
In this paper, relying on a teacher-student paradigm, we seek to understand the layer behaviors of DNNs by ``monitoring'' how both across-layer and single-layer distributions evolve toward a target distribution during training. Here, ``across-layer'' refers to the layer behavior \emph{along the depth}, while ``single-layer'' refers to the behavior of a specific layer \emph{along training epochs}.
Relying on optimal transport theory, we employ the Wasserstein distance ($W$-distance) to measure the divergence between each layer's distribution and the target distribution.
Theoretically, we prove that: i) the $W$-distance from a layer to the target distribution tends to decrease along the depth; ii) the $W$-distance from a specific layer to the target distribution tends to decrease along training iterations; and iii) nevertheless, a deeper layer is not always better than a shallower layer for some samples. Moreover, our results help to analyze the stability of layer distributions and explain why auxiliary losses help the training of DNNs. Extensive experiments on real-world datasets justify our theoretical findings.
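To make the monitoring idea concrete, the following is a minimal sketch (not the authors' implementation) of measuring the $W$-distance between a layer's activation distribution and a target distribution. Since activations are high-dimensional, the sketch uses a sliced-Wasserstein approximation via random 1-D projections; the function name, projection count, and use of `scipy.stats.wasserstein_distance` are illustrative assumptions.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def sliced_w_distance(layer_acts, target_samples, n_projections=50, seed=0):
    """Approximate W-distance between a layer's activation distribution and a
    target distribution via random 1-D projections (sliced Wasserstein).

    layer_acts:     (n, d) array of activations from one layer
    target_samples: (m, d) array of samples from the target distribution
    """
    rng = np.random.default_rng(seed)
    d = layer_acts.shape[1]
    dists = []
    for _ in range(n_projections):
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)  # random unit direction
        dists.append(wasserstein_distance(layer_acts @ theta,
                                          target_samples @ theta))
    return float(np.mean(dists))
```

Tracking this quantity per layer (along the depth) and per epoch (along training) would give the across-layer and single-layer curves the abstract describes.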
Keywords: Interpretability of DNNs, Wasserstein distance, Layer behavior