Keywords: streaming dimensionality reduction, neural stimulation, adaptive experiments
Abstract: Latent neural dynamics are a widely used model in neuroscience for describing the time evolution of collective neural activity. These models have been established as useful for neural decoding: for example, latent dynamical models of neural activity give state-of-the-art predictions of ongoing kinematics in motor tasks. Despite their utility, the causal mechanisms behind the effectiveness of latent variable models remain poorly understood. Uncovering how such latent variables causally encode behavior, or how they change, requires methods for stimulating neural dynamics during an experiment. Algorithms to drive neural dynamics remain limited, however, by the need to continually track and respond to changes in neural activity, to account for variation in neural responses under stimulation, and to select useful stimulations from an extensive set of possibilities. Here, we develop a novel streaming method for stimulation-response modeling in affine latent spaces and an optimization framework for selecting high-dimensional stimulation patterns that drive low-dimensional dynamics. Our method integrates streaming latent space construction, an adaptive nonparametric model of stimulation effects, and projection maximization under feasibility constraints to determine stimuli that move dynamics along a desired vector. We demonstrate our approach on both simulated and real neural data (calcium fluorescence images, intracortical electrophysiological recordings). We evaluate our method across multiple latent space representations and multiple models of dynamics in parallel, and additionally provide a novel streaming estimator of which representation is most predictive of ongoing neural dynamics at any timepoint. This allows direct comparison between different latent representations and the opportunity to adaptively select stimulations that best distinguish among neural subspace hypotheses.
Finally, we demonstrate algorithm runtimes at faster than real-time speeds ($<$100 ms), making the approach compatible with future in vivo applications.
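The core selection step described in the abstract (projection maximization under feasibility constraints) can be sketched in a simplified form. Assuming a purely linear stimulation-response model, where a response matrix `B` (a stand-in for the paper's adaptive nonparametric model) maps a high-dimensional stimulation pattern `u` to a predicted low-dimensional latent change `B @ u`, and a norm budget as the feasibility constraint, the stimulation maximizing the projection onto a desired latent direction `v` has a closed form; the names `select_stimulation`, `B`, `v`, and `budget` are illustrative, not from the paper:

```python
import numpy as np

def select_stimulation(B, v, budget):
    """Pick a feasible stimulation u maximizing the projection of the
    predicted latent change (B @ u) onto a desired direction v,
    subject to ||u||_2 <= budget.

    For this linear sketch, maximizing v.T @ B @ u over the norm ball
    gives the closed form u* = budget * (B.T @ v) / ||B.T @ v||.
    """
    g = B.T @ v                       # gradient of u -> v.T @ B @ u
    n = np.linalg.norm(g)
    if n == 0.0:                      # no feasible stimulation moves along v
        return np.zeros(B.shape[1])
    return budget * g / n

rng = np.random.default_rng(0)
B = rng.standard_normal((3, 8))       # latent dim 3, stimulation dim 8
v = np.array([1.0, 0.0, 0.0])         # desired latent direction
u = select_stimulation(B, v, budget=2.0)
```

With a nonparametric response model, as in the paper, the same objective would be optimized numerically rather than in closed form, but the geometry (pushing the predicted latent change along `v` without leaving the feasible set) is the same.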
Supplementary Material: zip
Primary Area: learning on time series and dynamical systems
Submission Number: 22138