Quantifying Layerwise Information Discarding of Neural Networks and Beyond

25 Sept 2019 (modified: 05 May 2023) · ICLR 2020 Conference Withdrawn Submission · Readers: Everyone
Keywords: Deep Learning, Information Theory, Interpretability, Convolutional Neural Networks
Abstract: This paper presents a method to explain how input information is discarded through the intermediate layers of a neural network during forward propagation. The layerwise analysis of information discarding is used to explain and diagnose various deep-learning techniques. We define two entropy-based metrics, the strict information discarding and the reconstruction uncertainty, which measure the input information encoded in a specific layer from two perspectives. We develop a method to compute these metrics that ensures fair comparisons between different layers of different networks. Preliminary experiments demonstrate the effectiveness of our metrics in analyzing benchmark networks and explaining existing deep-learning techniques. The code will be released upon acceptance.
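
The abstract does not specify how the entropy-based metrics are computed, so the following is an illustrative sketch only, not the authors' method or released code. It shows one plausible way to estimate an entropy-based information-discarding score for a chosen layer: learn the highest-entropy per-pixel Gaussian input perturbation that leaves the layer's feature approximately unchanged, so that the entropy of the tolerated noise serves as a proxy for how much input information the layer has discarded. The names perturbation_entropy, feature_fn, and lam are hypothetical.

```python
# Hypothetical sketch: estimate how much Gaussian input noise a layer
# tolerates, using the noise entropy as an information-discarding proxy.
# This is an assumption-based illustration, not the paper's method.
import torch

def perturbation_entropy(feature_fn, x, steps=200, lam=10.0, lr=0.01):
    """Estimate the entropy of input noise that a layer's feature tolerates.

    feature_fn: differentiable map from an input batch to the intermediate
                feature of interest (e.g., a truncated network in eval mode).
    x:          a single input, shape (1, C, H, W).
    lam:        weight of the feature-reconstruction penalty (assumed value).
    Returns the differential entropy (up to an additive constant) of the
    learned noise; larger values suggest more input information discarded.
    """
    x = x.detach()
    with torch.no_grad():
        f0 = feature_fn(x)                  # reference feature, fixed target
    log_sigma = torch.full_like(x, -3.0, requires_grad=True)
    opt = torch.optim.Adam([log_sigma], lr=lr)
    for _ in range(steps):
        eps = torch.randn_like(x) * log_sigma.exp()  # reparameterization trick
        f = feature_fn(x + eps)
        recon = ((f - f0) ** 2).mean()      # keep the layer's feature stable
        entropy = log_sigma.sum()           # H(N(0, diag sigma^2)) = sum log sigma + const
        loss = lam * recon - entropy        # maximize entropy s.t. small feature change
        opt.zero_grad()
        loss.backward()
        opt.step()
    return log_sigma.sum().item()
```

For a layerwise analysis, one would evaluate this score with feature_fn truncated at successive depths, e.g. feature_fn=lambda t: net.features[:k](t) for increasing k; how the paper normalizes such scores to make cross-layer and cross-network comparisons fair is not stated in the abstract.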
