Track: Extended Abstract Track
Keywords: Probabilistic neural coding, representation learning, information bottleneck
TL;DR: Probabilistic representations do not emerge in task-optimized neural networks.
Abstract: The precise neural mechanisms of probabilistic computation remain unknown, despite growing evidence that humans track their own uncertainty. Recent work has proposed that probabilistic representations arise naturally in task-optimized neural networks. However, previous decoding approaches tested only sufficiency (whether posteriors were decodable from neural activity) without testing minimality (whether the representations filter out task-irrelevant input information). This limitation makes it difficult to distinguish genuine probabilistic representations from trivial recoding of the inputs. We introduce the functional information bottleneck (fIB) framework, which evaluates neural representations on both sufficiency (posterior decodability) and minimality (invariance to irrelevant inputs). Using this framework, we show that networks trained to perform cue combination, coordinate transformation, and Kalman filtering without probabilistic objectives encode Bayesian posteriors in their hidden-layer activities, but fail to compress their inputs in a task-optimal way, instead performing heuristic computations akin to input re-representation. It therefore remains an open question under what conditions truly probabilistic representations emerge in neural networks. More generally, our work provides a stringent framework for identifying probabilistic codes and lays the foundation for systematically examining whether, how, and which posteriors are represented in neural circuits during complex decision-making.
Submission Number: 135
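To make the sufficiency/minimality distinction from the abstract concrete, below is a minimal sketch of what an fIB-style evaluation could look like on a toy cue-combination task. The variable names, the random-feature stand-in for a trained network's hidden activations, and the use of scikit-learn ridge decoders are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch (not the authors' code) of an fIB-style evaluation on a toy
# cue-combination task, assuming numpy and scikit-learn are available.
# The "hidden activations" H below are a stand-in random feature expansion of
# the raw cues; in the paper they would come from a task-trained network.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 5000

# Latent stimulus and two noisy cues with fixed, known noise levels.
s = rng.normal(0.0, 1.0, n)
sigma1, sigma2 = 0.5, 1.0
c1 = s + rng.normal(0.0, sigma1, n)
c2 = s + rng.normal(0.0, sigma2, n)

# Bayesian posterior mean over s: precision-weighted cue combination.
w1, w2 = 1.0 / sigma1**2, 1.0 / sigma2**2
post_mean = (w1 * c1 + w2 * c2) / (w1 + w2)

# Stand-in hidden-layer activations: a fixed random nonlinear expansion
# of the raw cues (i.e., a pure input recoding, by construction).
X = np.stack([c1, c2], axis=1)
W = rng.normal(0.0, 1.0, (2, 64))
H = np.tanh(X @ W)

train, test = slice(0, 4000), slice(4000, None)

# Sufficiency: is the Bayesian posterior mean linearly decodable from H?
suff = Ridge(alpha=1.0).fit(H[train], post_mean[train])
suff_r2 = r2_score(post_mean[test], suff.predict(H[test]))

# Minimality: the cue difference is task-irrelevant (uncorrelated with the
# posterior mean here), so a minimal code should not encode it.
nuis = c1 - c2
mini = Ridge(alpha=1.0).fit(H[train], nuis[train])
nuis_r2 = r2_score(nuis[test], mini.predict(H[test]))

print(f"sufficiency: posterior-mean decoding R^2 = {suff_r2:.3f}")
print(f"minimality:  nuisance decoding R^2       = {nuis_r2:.3f} (lower is better)")
```

Because H here is just a recoding of the inputs, both decoding scores come out high, which is the failure mode the abstract describes: sufficient but not minimal. A genuinely probabilistic representation would keep posterior decodability high while driving nuisance decodability toward chance.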