Uncovering Neural Encoding Variability with Infinite Gaussian Process Factor Analysis

Published: 10 Oct 2024, Last Modified: 20 Nov 2024 · NeuroAI @ NeurIPS 2024 Poster · CC BY 4.0
Keywords: Neural variability, GPFA, Bayesian nonparametrics.
TL;DR: We propose infinite GPFA, a fully Bayesian nonparametric extension of the GPFA model, for investigating the nature of neural variability from a novel perspective.
Abstract: Gaussian Process Factor Analysis (GPFA) is a powerful factor analysis model for extracting low-dimensional latent processes underlying population neural activity. However, one limitation of standard GPFA models is that the number of latent factors must be pre-specified or selected through heuristic approaches. We propose the infinite GPFA model, a Bayesian nonparametric extension of classical GPFA that incorporates an Indian Buffet Process (IBP) prior, allowing us to infer, in a probabilistically principled manner, the potentially infinite set of latent factors active at each time point. Learning and inference in the infinite GPFA model are performed through variational expectation-maximisation, and we additionally propose a scalable extension based on sparse variational Gaussian process methods. We empirically demonstrate that the infinite GPFA model correctly infers dynamically changing activations of latent factors on a synthetic dataset. By fitting the infinite GPFA model to simultaneously recorded population neural activity, we identify non-trivial, behaviourally meaningful variability in the neural encoding process, addressing an important gap in existing interpretations of the nature of neural variability.
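For intuition, the generative process implied by the abstract can be sketched as a simulation. The following is a minimal, hypothetical NumPy sketch assuming a truncated stick-breaking construction of the IBP prior, RBF GP kernels, and a linear-Gaussian readout; the truncation level `K`, the concentration `alpha`, the lengthscale, and all variable names are illustrative assumptions, not the authors' exact formulation.

```python
# Hypothetical sketch of an infinite-GPFA-style generative model:
# GP latents x_k(t), IBP-distributed binary activations z_{t,k},
# and a linear-Gaussian observation model. All settings are illustrative.
import numpy as np

rng = np.random.default_rng(0)

T, N, K = 200, 30, 10          # time points, neurons, truncated number of factors
alpha, noise_std = 2.0, 0.1    # IBP concentration, observation noise scale

# Stick-breaking construction of per-factor activation probabilities
# (a truncated approximation to the IBP prior).
v = rng.beta(alpha, 1.0, size=K)
pi = np.cumprod(v)             # pi_k = prod_{j<=k} v_j, decreasing in k

# Binary factor activations at each time point.
Z = rng.binomial(1, pi, size=(T, K))

# GP latent trajectories, one per factor, with an RBF kernel.
t = np.linspace(0.0, 1.0, T)
lengthscale = 0.1
Kmat = np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2 / lengthscale ** 2)
Kmat += 1e-6 * np.eye(T)       # jitter for numerical stability
L = np.linalg.cholesky(Kmat)
X = L @ rng.standard_normal((T, K))   # X[t, k] = x_k(t)

# Observations: loading matrix applied to the masked latents (Z * X).
C = rng.standard_normal((N, K)) / np.sqrt(K)
Y = (Z * X) @ C.T + noise_std * rng.standard_normal((T, N))

print(Y.shape, "mean active factors per time point:", Z.sum(axis=1).mean())
```

The key structural difference from standard GPFA is the elementwise mask `Z * X`: which factors drive the observations can change over time, and inference over `Z` (via variational EM in the paper) is what identifies time-varying factor activations rather than a fixed, pre-specified latent dimensionality.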
Submission Number: 24