On the Maximum Mutual Information Capacity of Neural Architectures

Published: 11 Jul 2023, Last Modified: 11 Jul 2023, NCW ICML 2023
Keywords: Information losses, mutual information, learning theory, maximum mutual information, neural network
TL;DR: We derive the closed-form expression of the maximum mutual information - the maximum value of $I(X;Z)$ obtainable via training - for a broad family of neural network architectures.
Abstract: We derive the closed-form expression of the maximum mutual information - the maximum value of $I(X;Z)$ obtainable via training - for a broad family of neural network architectures. This quantity is essential to several branches of machine learning theory and practice. Quantitatively, we show that the maximum mutual information values for these families all stem from generalizations of a single catch-all formula. Qualitatively, we show that the maximum mutual information of an architecture is most strongly influenced by the width of the smallest layer of the network - the ``information bottleneck'' in a different sense of the phrase - and by any statistical invariances captured by the architecture.
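As a brief illustrative sketch (not the paper's closed-form result), the ``smallest layer'' claim is consistent with a standard data-processing argument; the layer notation $L_i$ and the assumption of discrete (e.g., quantized) activations are introduced here only for illustration:

% Sketch under stated assumptions: a deterministic feedforward network with
% discrete (quantized) layer activations L_1, ..., L_k = Z, so that
% X -> L_1 -> ... -> L_k forms a Markov chain.
\begin{align}
  I(X; Z)   &\le \min_{1 \le i \le k} I(X; L_i) && \text{(data-processing inequality)} \\
  I(X; L_i) &\le H(L_i)                         && \text{(valid for discrete } L_i\text{)}
\end{align}
% Any layer whose width limits H(L_i) therefore caps I(X; Z), matching the
% abstract's reading of the narrowest layer as an ``information bottleneck''.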
Submission Number: 7