Keywords: Neural predictive models, Bayesian brain hypothesis, Posterior inference, Zero-shot model adaptation, Distributional shifts, Probabilistic machine learning and AI, Normative theory
TL;DR: This paper proposes a normative approach that combines theory from neuroscience and machine learning to achieve zero-shot adaptation of neural predictive models to distributional shifts and neural adaptation.
Abstract: Understanding how the brain adapts to changing sensory environments is a key challenge in neuroscience, with implications for AI.
Typical neural predictive models are trained to predict neuronal responses to stimuli from a fixed stimulus distribution.
This limits their ability to account for neural adaptation to new sensory contexts in which the stimulus distribution shifts, requiring the models to be retrained on newly recorded datasets.
In this work, we propose a zero-shot adaptation approach that leverages Bayesian theories of perception and neural representation, which suggest that (1) sensory neurons encode posterior distributions over latent variables in an internal generative model of the world, and (2) the brain preserves the mapping from latent causes to observations in its generative model while adapting the prior distribution to new contexts (see the sketch below).
By employing advances in machine learning and generative models, we validate our approach on synthetic data, demonstrating that our zero-shot adapted models perform comparably to models retrained with new neural data.
Our work not only lays the foundation for a normative approach to adapting neural predictive models to domain shifts, but also paves the way for an empirical method for testing Bayesian theories of neural representations.
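To make the adaptation idea concrete, here is a minimal Python/NumPy sketch under assumptions not taken from the paper: a linear-Gaussian generative model with a fixed likelihood mapping latents to observations, and a toy readout in which predicted neural responses are posterior means and variances. All names (`posterior_params`, `predict_responses`, `readout`) are hypothetical; the point is that zero-shot adaptation swaps in the new context's prior while keeping the likelihood and readout fixed.

```python
# Minimal sketch of zero-shot adaptation in a linear-Gaussian generative model.
# Hypothetical illustration only; not the paper's actual implementation.
import numpy as np

def posterior_params(x, A, sigma_obs, mu_prior, Sigma_prior):
    """Gaussian posterior over latent z given observation x = A z + noise.

    The likelihood p(x | z) = N(A z, sigma_obs^2 I) is held fixed across
    contexts; only the prior N(mu_prior, Sigma_prior) changes.
    """
    prec_prior = np.linalg.inv(Sigma_prior)
    prec_post = prec_prior + (A.T @ A) / sigma_obs**2      # posterior precision
    Sigma_post = np.linalg.inv(prec_post)
    mu_post = Sigma_post @ (prec_prior @ mu_prior + A.T @ x / sigma_obs**2)
    return mu_post, Sigma_post

def predict_responses(x, A, sigma_obs, mu_prior, Sigma_prior, readout):
    """Predict neural responses as a fixed readout of posterior statistics,
    following the assumption that neurons encode posterior distributions."""
    mu_post, Sigma_post = posterior_params(x, A, sigma_obs, mu_prior, Sigma_prior)
    return readout(mu_post, np.diag(Sigma_post))

rng = np.random.default_rng(0)
d_latent, d_obs = 3, 5
A = rng.normal(size=(d_obs, d_latent))        # fixed latent-to-observation map
sigma_obs = 0.5
readout = lambda mu, var: np.concatenate([mu, var])  # toy neural readout

# Original context: standard-normal prior over latents.
mu0, Sigma0 = np.zeros(d_latent), np.eye(d_latent)
# Shifted context: zero-shot adaptation replaces only the prior.
mu1, Sigma1 = np.full(d_latent, 0.8), 0.3 * np.eye(d_latent)

x = A @ rng.normal(mu1, 0.3) + sigma_obs * rng.normal(size=d_obs)
r_old = predict_responses(x, A, sigma_obs, mu0, Sigma0, readout)
r_new = predict_responses(x, A, sigma_obs, mu1, Sigma1, readout)  # adapted, no retraining
print(r_old.round(3))
print(r_new.round(3))
```

In this toy setting, the adapted prediction `r_new` requires no new neural data: the same likelihood and readout are reused, and only the prior parameters for the shifted context are substituted before recomputing the posterior.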
Submission Number: 63