Bayesian Oracle for bounding information gain in neural encoding models

Published: 21 Nov 2022, Last Modified: 05 May 2023
InfoCog @ NeurIPS 2022 Poster
Keywords: information theory, evaluation metrics, Bayesian, Neuroscience
TL;DR: We provide a method to obtain upper bounds on information gain for evaluating neural encoding models
Abstract: Many normative theories that link neural population activity to cognitive tasks, such as neural sampling and the Bayesian brain hypothesis, make predictions for single-trial fluctuations. Linking information-theoretic principles of cognition to neural activity thus requires models that accurately capture all moments of the response distribution. To measure the quality of such models, however, commonly used correlation-based metrics are not sufficient, as they are sensitive mainly to the mean of the response distribution. An interpretable alternative evaluation metric for likelihood-based models is Information Gain (IG), which evaluates the likelihood of a model relative to a lower and an upper bound. While a lower bound is usually easy to obtain and evaluate, constructing an upper bound turns out to be challenging for neural recordings with relatively few repeated trials, high (shared) variability, and sparse responses. In this work, we generalize the jackknife oracle estimator for the mean -- commonly used for correlation metrics -- to a flexible Bayesian oracle estimator for IG based on posterior predictive distributions. We describe and address the challenges that arise when estimating the lower and upper bounds from small datasets. We then show that our upper-bound estimate is data-efficient and robust even in the case of sparse responses and low signal-to-noise ratio. Finally, we provide the derivation of the upper-bound estimator for a variety of common distributions, including state-of-the-art zero-inflated mixture models.
In-person Presentation: yes
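
To make the abstract's setup concrete, below is a minimal, illustrative sketch of how an IG score could be normalized between a lower bound (a stimulus-independent null model) and a Bayesian oracle upper bound built from leave-one-out posterior predictive distributions over repeated trials. This is not the paper's implementation: the Gamma-Poisson conjugate choice, the hyperparameters `prior_shape` and `prior_rate`, the function names, and the normalization formula are all assumptions made here for illustration only.

```python
# Hypothetical sketch (not the authors' code): information gain for a
# Poisson encoding model, with a Gamma-Poisson Bayesian oracle.
import numpy as np
from scipy.stats import nbinom, poisson


def oracle_loglik(counts, prior_shape=1.0, prior_rate=1.0):
    """Leave-one-out Bayesian oracle log-likelihood for one neuron.

    counts: array of shape (n_stimuli, n_repeats) with spike counts.
    For each held-out repeat, the remaining repeats of the same stimulus
    define a Gamma posterior over the firing rate; the held-out count is
    then scored under the negative-binomial posterior predictive.
    """
    n_stim, n_rep = counts.shape
    ll = 0.0
    for s in range(n_stim):
        for r in range(n_rep):
            rest = np.delete(counts[s], r)
            post_shape = prior_shape + rest.sum()
            post_rate = prior_rate + rest.size
            # Gamma-Poisson posterior predictive = negative binomial
            p = post_rate / (post_rate + 1.0)
            ll += nbinom.logpmf(counts[s, r], post_shape, p)
    return ll


def null_loglik(counts):
    """Lower bound: a single stimulus-independent Poisson rate.

    In practice the null rate would be fit on held-out training data;
    using the same counts here keeps the sketch self-contained.
    """
    rate = counts.mean()
    return poisson.logpmf(counts, rate).sum()


def information_gain(model_loglik, counts):
    """Normalize the model's total log-likelihood between the bounds."""
    lo, hi = null_loglik(counts), oracle_loglik(counts)
    return (model_loglik - lo) / (hi - lo)
```

Under this assumed normalization, a score of 0 corresponds to the null lower bound and 1 to the oracle upper bound, so a well-specified encoding model should land between the two.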