A Measure of the Complexity of Neural Representations based on Partial Information Decomposition

Published: 18 May 2023, Last Modified: 18 May 2023. Accepted by TMLR.
Abstract: In neural networks, task-relevant information is represented jointly by groups of neurons. However, the specific way in which this mutual information about the classification label is distributed among the individual neurons is not well understood: while parts of it may only be obtainable from specific single neurons, other parts are carried redundantly or synergistically by multiple neurons. We show how Partial Information Decomposition (PID), a recent extension of information theory, can disentangle these different contributions. From this, we introduce the measure of "Representational Complexity", which quantifies the difficulty of accessing information spread across multiple neurons. We show how this complexity is directly computable for smaller layers. For larger layers, we propose subsampling and coarse-graining procedures and prove corresponding bounds on the latter. Empirically, for quantized deep neural networks solving the MNIST and CIFAR10 tasks, we observe that representational complexity decreases both through successive hidden layers and over training, and we compare the results to related measures. Overall, we propose representational complexity as a principled and interpretable summary statistic for analyzing the structure and evolution of neural representations and of complex systems in general.
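
To make the idea concrete, the sketch below illustrates how a PID can be turned into a complexity score for the minimal case of two neurons and one label: each information atom is weighted by the number of neurons that must be read out jointly to access it (1 for the redundant and unique atoms, 2 for the synergistic atom), and the weighted average is normalized by the total mutual information. Note the assumptions: the paper's analyses use a dedicated PID measure and the released nninfo code, whereas this sketch swaps in the simpler minimum-mutual-information (MMI) redundancy, and the helper names (`mi`, `bivariate_pid_mmi`, `representational_complexity`) are hypothetical, not the authors' API.

```python
import numpy as np

def mi(pxy):
    """Mutual information I(X;Y) in bits from a joint table p[x, y]."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def bivariate_pid_mmi(p):
    """Bivariate PID of I(Y; X1, X2) from p[x1, x2, y], using the MMI
    redundancy min(I(Y;X1), I(Y;X2)) (an assumption of this sketch; the
    paper uses a different PID measure)."""
    p1y = p.sum(axis=1)               # p(x1, y)
    p2y = p.sum(axis=0)               # p(x2, y)
    p12y = p.reshape(-1, p.shape[2])  # p((x1, x2), y)
    i1, i2, i12 = mi(p1y), mi(p2y), mi(p12y)
    red = min(i1, i2)
    return dict(red=red, unq1=i1 - red, unq2=i2 - red,
                syn=i12 - i1 - i2 + red, total=i12)

def representational_complexity(atoms):
    """Information-weighted average of how many neurons must be observed
    jointly: degree 1 for redundant/unique atoms, degree 2 for synergy."""
    if atoms["total"] == 0:
        return float("nan")
    return (1 * (atoms["red"] + atoms["unq1"] + atoms["unq2"])
            + 2 * atoms["syn"]) / atoms["total"]

# XOR label: Y = X1 xor X2 with uniform inputs. The label is purely
# synergistic, so the complexity should come out as 2.
p = np.zeros((2, 2, 2))
for x1 in range(2):
    for x2 in range(2):
        p[x1, x2, x1 ^ x2] = 0.25
atoms = bivariate_pid_mmi(p)
print(atoms)                               # syn = 1 bit, all other atoms = 0
print(representational_complexity(atoms))  # -> 2.0
```

In the XOR example, neither neuron alone carries any label information, so all of I(Y; X1, X2) = 1 bit sits in the synergistic atom and the complexity reaches its maximum of 2; conversely, a label copied into both neurons would be fully redundant and give complexity 1.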
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: N/A
Code: https://github.com/Priesemann-Group/nninfo
Supplementary Material: zip
Assigned Action Editor: ~Jean_Barbier2
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 716