Compression supports low-dimensional representations of behavior across neural circuits

Published: 21 Nov 2022, Last Modified: 05 May 2023
Venue: InfoCog @ NeurIPS 2022 (Oral)
Keywords: lossy compression, neural dimensionality, representation, brain networks, recurrent neural networks, efficient coding
TL;DR: We test a theory linking neural compression to dimensionality reduction and reduced representational capacity in brain and artificial neural networks.
Abstract: Dimensionality reduction, a form of compression, can simplify representations of information to increase efficiency and reveal general patterns. Yet, this simplification also forfeits information, thereby reducing representational capacity. Hence, the brain may benefit from generating both compressed and uncompressed activity, and may do so in a heterogeneous manner across diverse neural circuits that represent low-level (sensory) or high-level (cognitive) stimuli. However, precisely how compression and representational capacity differ across the cortex remains unknown. Here we predict different levels of compression across regional circuits by using random walks on networks to model activity flow and to formulate rate-distortion functions, which are the basis of lossy compression. Using a large sample of youth ($n=1,040$), we test predictions in two ways: by measuring the dimensionality of spontaneous activity from sensorimotor to association cortex, and by assessing the representational capacity for 24 behaviors in neural circuits and 20 cognitive variables in recurrent neural networks. Our network theory of compression predicts the dimensionality of activity ($t=12.13, p<0.001$) and the representational capacity of biological ($r=0.53, p=0.016$) and artificial ($r=0.61, p<0.001$) networks. The model suggests how a basic form of compression is an emergent property of activity flow between distributed circuits that communicate with the rest of the network.
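The abstract's central move is to model activity flow as random walks on a network and to read off compression-related quantities from that process. The following is a minimal sketch, not the authors' implementation: it assumes a toy adjacency matrix and uses only NumPy to illustrate two quantities related to the ideas above, (1) the entropy rate of a random walk, a standard information-rate measure for trajectories of activity flow, and (2) the participation ratio, a common estimate of the dimensionality of simulated activity. All variable names and parameter values are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy symmetric network: a ring of 8 nodes plus one shortcut edge.
n = 8
adjacency = np.zeros((n, n))
for i in range(n):
    adjacency[i, (i + 1) % n] = adjacency[(i + 1) % n, i] = 1.0
adjacency[0, 4] = adjacency[4, 0] = 1.0  # shortcut

degree = adjacency.sum(axis=1)
transition = adjacency / degree[:, None]   # random-walk transition matrix
stationary = degree / degree.sum()         # stationary distribution (undirected graph)

# (1) Entropy rate of the walk (bits/step): the rate needed to encode
# activity-flow trajectories without loss; lossy compression trades this
# rate against distortion.
row_entropy = -(transition * np.log2(transition, where=transition > 0,
                                     out=np.zeros_like(transition))).sum(axis=1)
entropy_rate = float(stationary @ row_entropy)

# (2) Simulate diffusive "activity" with many walkers and estimate the
# dimensionality of node-occupancy time series via the participation ratio.
n_steps, n_walkers = 500, 100
states = rng.integers(0, n, size=n_walkers)
occupancy = np.zeros((n_steps, n))
for t in range(n_steps):
    states = np.array([rng.choice(n, p=transition[s]) for s in states])
    occupancy[t] = np.bincount(states, minlength=n)

eigvals = np.linalg.eigvalsh(np.cov(occupancy.T))
participation_ratio = eigvals.sum() ** 2 / (eigvals ** 2).sum()

print(f"entropy rate of the walk: {entropy_rate:.3f} bits/step")
print(f"participation-ratio dimensionality: {participation_ratio:.2f} (max {n})")
```

Under these assumptions, denser or more uniformly connected networks yield higher entropy rates and higher-dimensional occupancy patterns, which is one intuition for why compression should vary systematically across circuits with different connectivity.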
In-person Presentation: yes