What Information Does a ResNet Compress?

27 Sept 2018 (modified: 05 May 2023) · ICLR 2019 Conference Blind Submission
Abstract: The information bottleneck principle (Shwartz-Ziv & Tishby, 2017) suggests that SGD-based training of deep neural networks results in optimally compressed hidden layers, from an information-theoretic perspective. However, this claim was established on toy data. The goal of the work we present here is to test these claims in a realistic setting, using a larger and deeper convolutional architecture: a ResNet model. We trained PixelCNN++ models as inverse representation decoders to measure the mutual information between the hidden layers of a ResNet and the input images, when the ResNet is trained for (1) classification and (2) autoencoding. We find that two stages of learning occur in both training regimes, and that compression does take place, even for an autoencoder. Sampling images by conditioning on hidden layers' activations offers an intuitive visualisation of what a ResNet learns to forget.
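
For context, a minimal sketch of the decoder-based bound implied by the abstract (our reading of the setup, not verbatim from the paper; the symbols q and f are illustrative): since the data entropy H(x) is fixed, training a conditional decoder q(x | h) such as a PixelCNN++ and evaluating its cross-entropy gives an upper bound on H(x | h), and hence a lower bound on the mutual information I(x; h).

```latex
% Sketch (assumed reading of the method): lower-bounding I(x; h) with a
% learned conditional decoder q(x | h), here a PixelCNN++.
% h = f(x) is the hidden-layer representation of input image x.
\begin{aligned}
I(x; h) &= H(x) - H(x \mid h) \\
        &\geq H(x) - \mathbb{E}_{x}\big[-\log q\big(x \mid h = f(x)\big)\big]
\end{aligned}
% H(x) is constant for a fixed dataset, so tracking the decoder's
% cross-entropy E[-log q(x | h)] across training tracks (a bound on)
% how much information about x a given hidden layer retains.
```

Under this reading, "compression" at a layer corresponds to the estimated I(x; h) decreasing over the course of training.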
Keywords: Deep Learning, Information Bottleneck, Residual Neural Networks, Information Theory
TL;DR: The information bottleneck principle applied to ResNets, using PixelCNN++ models to measure mutual information and to conditionally generate images that illustrate the information retained in hidden layers
Data: [CIFAR-10](https://paperswithcode.com/dataset/cifar-10), [CINIC-10](https://paperswithcode.com/dataset/cinic-10), [ImageNet](https://paperswithcode.com/dataset/imagenet)