TL;DR: We propose a method to model spatiotemporal dynamics in neural imaging data and their relationship to behavior.
Abstract: High-dimensional imaging of neural activity, such as widefield calcium and functional ultrasound imaging, provides a rich source of information for understanding the relationship between brain activity and behavior. Accurately modeling neural dynamics in these modalities is crucial for understanding this relationship but is hindered by their high dimensionality, complex spatiotemporal dependencies, and prevalent behaviorally irrelevant dynamics. Existing dynamical models often employ preprocessing steps to obtain low-dimensional representations from neural image modalities. However, this process can discard behaviorally relevant information and miss spatiotemporal structure. We propose SBIND, a novel data-driven deep learning framework that models spatiotemporal dependencies in neural images and disentangles their behaviorally relevant dynamics from other neural dynamics. We validate SBIND on widefield imaging datasets and show its extension to functional ultrasound imaging, a recent modality whose dynamical modeling has largely remained unexplored. We find that our model effectively identifies both local and long-range spatial dependencies across the brain while also dissociating behaviorally relevant neural dynamics. In doing so, SBIND outperforms existing models in neural-behavioral prediction. Overall, SBIND provides a versatile tool for investigating the neural mechanisms underlying behavior using imaging modalities.
Lay Summary: Modern brain imaging provides detailed "videos" of brain-wide activity, offering an unprecedented opportunity to understand how the brain controls behavior. However, these image data are complex: they capture numerous behavioral and mental processes that the brain generates simultaneously, and they exhibit patterns that vary over time and space, both locally within a given brain area and globally across brain areas. As a result, it is challenging to isolate the patterns in these images linked to a specific behavior, such as a mouse licking a spout, from all the other background brain activity. Existing methods often simplify these video recordings and reduce their dimensionality before any analysis, which can lead to the loss of important information.
We present a novel framework, SBIND, that learns directly from the raw, high-resolution video recordings of brain activity rather than from simplified data. SBIND automatically identifies important patterns as they evolve over time, across both local brain regions and larger, brain-wide networks. It does so by first disentangling the brain patterns most closely tied to the measured behavior and then separately modeling other ongoing brain patterns, while capturing both local and global brain-wide dependencies. This approach helps SBIND accurately distinguish behavior-related patterns, as sketched in the code below.
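The lay summary above describes the two key ingredients in prose; here is a minimal, hypothetical PyTorch sketch of how they could be realized. This is our illustration, not the authors' SBIND implementation (see the linked repository for that): all class names, layer choices, and dimensions (SpatialEncoder, TwoStageDynamics, the GRUs, etc.) are assumptions made for exposition.

```python
# Hypothetical sketch, NOT the authors' SBIND code. It illustrates:
# (1) local convolutions plus patch-level self-attention for local and
#     long-range spatial dependencies, and
# (2) two recurrent latent models, one supervised on behavior
#     (behaviorally relevant dynamics) and one for residual neural dynamics.
import torch
import torch.nn as nn


class SpatialEncoder(nn.Module):
    """Local conv features + global self-attention across spatial patches."""

    def __init__(self, embed_dim=64, n_heads=4):
        super().__init__()
        # Local spatial dependencies: strided convolutions over each frame.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, embed_dim, kernel_size=5, stride=2, padding=2), nn.ReLU(),
        )
        # Long-range spatial dependencies: attention across patch tokens.
        self.attn = nn.MultiheadAttention(embed_dim, n_heads, batch_first=True)

    def forward(self, frames):  # frames: (batch, 1, H, W)
        feats = self.conv(frames)                  # (batch, C, H', W')
        tokens = feats.flatten(2).transpose(1, 2)  # (batch, H'*W', C)
        attended, _ = self.attn(tokens, tokens, tokens)
        return attended.mean(dim=1)                # (batch, C) frame embedding


class TwoStageDynamics(nn.Module):
    """Separate behaviorally relevant latent dynamics from residual dynamics."""

    def __init__(self, embed_dim=64, latent_dim=16, behavior_dim=2):
        super().__init__()
        self.rnn_beh = nn.GRU(embed_dim, latent_dim, batch_first=True)
        self.rnn_res = nn.GRU(embed_dim, latent_dim, batch_first=True)
        self.readout_beh = nn.Linear(latent_dim, behavior_dim)
        self.readout_neural = nn.Linear(2 * latent_dim, embed_dim)

    def forward(self, embeddings):  # embeddings: (batch, time, embed_dim)
        z_beh, _ = self.rnn_beh(embeddings)
        z_res, _ = self.rnn_res(embeddings)
        behavior_pred = self.readout_beh(z_beh)
        neural_pred = self.readout_neural(torch.cat([z_beh, z_res], dim=-1))
        return behavior_pred, neural_pred


# Toy usage: 8 clips of 20 frames, each a 64x64 image.
frames = torch.randn(8 * 20, 1, 64, 64)
embeddings = SpatialEncoder()(frames).view(8, 20, 64)
behavior_pred, neural_pred = TwoStageDynamics()(embeddings)
```

In a two-stage training scheme of the kind the summary describes, the behavior branch (rnn_beh, readout_beh) would typically be fit first on a behavior-prediction loss and then frozen while the residual branch learns to reconstruct the remaining neural activity.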
On two diverse high-resolution neural imaging modalities, one optical and one acoustic, SBIND outperformed existing approaches at predicting behavior and future neural activity. By capturing both local and global brain patterns that are relevant to behavior, SBIND offers a more holistic view of brain-behavior relations. It also opens the door to more advanced, less invasive brain-computer interfaces, for example to restore lost function in individuals with movement disabilities or mental disorders.
Application-Driven Machine Learning: This submission is on Application-Driven Machine Learning.
Link To Code: https://github.com/ShanechiLab/SBIND/
Primary Area: Applications->Neuroscience, Cognitive Science
Keywords: Deep Learning, Dynamical Modeling, Neural Imaging, Behavior, Neuroscience
Submission Number: 14793