Information theoretic study of the neural geometry induced by category learning

Published: 27 Oct 2023, Last Modified: 27 Nov 2023
Venue: InfoCog@NeurIPS 2023 (Oral)
Keywords: mutual information, Fisher information, neural geometry, deep learning, categorical perception, Bayesian learning
TL;DR: Information theoretic approach reveals how categorical perception and the underlying neural geometry naturally emerge from category learning.
Abstract: Categorization is an important topic for both biological and artificial neural networks. Here, we take an information theoretic approach to assess the efficiency of the representations induced by category learning. We show that one can decompose the relevant Bayesian cost into two components, one for the coding part and one for the decoding part. Minimizing the coding cost implies maximizing the mutual information between the set of categories and the neural activities. We analytically show that this mutual information can be written as the sum of two terms that can be interpreted as (i) finding an appropriate representation space, and (ii) building a representation with the appropriate metric on this space, based on the neural Fisher information. One main consequence is that category learning induces an expansion of neural space near decision boundaries. Finally, we provide numerical illustrations that show how the Fisher information of the coding neural population aligns with the boundaries between categories.
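The claim that Fisher information concentrates near decision boundaries can be illustrated with a minimal sketch (not the authors' code; the population model, tuning parameters, and noise assumption below are all hypothetical). We take a 1D stimulus with a category boundary at x = 0, a population of sigmoidal neurons whose preferred positions cluster near the boundary, and independent Gaussian noise, for which the Fisher information is I_F(x) = Σ_i f_i'(x)² / σ²:

```python
import numpy as np

def fisher_information(x, centers, gain=5.0, sigma_noise=0.1):
    """Fisher information of a population of sigmoidal neurons with
    independent Gaussian noise: I_F(x) = sum_i f_i'(x)^2 / sigma^2."""
    # Tuning curves: sigmoids centered at each neuron's preferred position.
    z = gain * (x[:, None] - centers[None, :])
    f = 1.0 / (1.0 + np.exp(-z))
    df = gain * f * (1.0 - f)  # derivative of each sigmoid tuning curve
    return (df ** 2).sum(axis=1) / sigma_noise ** 2

# Stimulus axis with a category boundary at x = 0.
x = np.linspace(-2.0, 2.0, 401)

# Hypothetical population whose preferred positions cluster at the boundary,
# as category learning would induce.
centers = np.random.default_rng(0).normal(0.0, 0.3, size=50)

I_F = fisher_information(x, centers)

# Fisher information peaks near the boundary: in the induced geometry,
# neural space is expanded there and discrimination is finest.
peak = x[np.argmax(I_F)]
print(f"Fisher information peaks at x = {peak:.2f}")
```

Since √I_F(x) acts as a local metric on stimulus space, the peak at the boundary is exactly the "expansion of neural space near decision boundaries" described in the abstract.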
Submission Number: 13