Estimating flexible across-area communication with neurally-constrained RNN

Published: 30 Jun 2024, Last Modified: 19 Feb 2025 · CCN 2024 · CC BY 4.0
Abstract: Neural computations supporting complex behaviors involve multiple brain regions, and large-scale recordings from animals engaged in complex tasks are increasingly common. A current challenge in analysing these data is to identify which part of the information contained within a brain region is shared with other regions. Here, to address this challenge, we trained multi-region recurrent neural network (RNN) models to reproduce the dynamics of large-scale single-unit recordings (more than 6000 neurons across 7 cortical areas) from monkeys engaged in a two-dimensional (color and motion direction) context-dependent decision-making task. Decoding analyses show that all areas encode both stimuli (color and direction). Moreover, using our approach we uncovered feed-forward and feedback interactions within the network of 7 interacting regions. Constraining these interactions during training or testing recovered the canonical cortical hierarchy that differentiates sensory and frontal regions. Inspecting across-region interactions, we also found that frontal regions compress the irrelevant stimulus in a context-dependent manner, whereas sensory regions always compress the same stimulus.
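The core idea of fitting an RNN so that its dynamics reproduce recorded population activity can be illustrated with a minimal sketch. The example below is an assumption-laden toy, not the authors' method: it generates synthetic "recorded" rates from a ground-truth recurrent network, then fits the recurrent weights by least squares on the one-step dynamics (inverting the tanh nonlinearity to recover the input currents). The network size, gain, and fitting procedure are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: synthetic "recorded" rates from a ground-truth RNN,
# standing in for single-unit recordings.
N, T = 20, 500
W_true = rng.normal(0.0, 1.2 / np.sqrt(N), (N, N))
r = np.zeros((T, N))
r[0] = rng.uniform(-0.5, 0.5, N)
for t in range(T - 1):
    r[t + 1] = np.tanh(W_true @ r[t])

# Fit recurrent weights so the model reproduces the recorded dynamics:
# atanh(r[t+1]) = W r[t]  =>  solve for W by least squares.
X = r[:-1]                                       # (T-1, N) current rates
Y = np.arctanh(np.clip(r[1:], -0.9999, 0.9999))  # (T-1, N) target currents
W_ls, *_ = np.linalg.lstsq(X, Y, rcond=None)
W_fit = W_ls.T                                   # so r[t+1] ~ tanh(W_fit @ r[t])

# Simulate the fitted model from the same initial condition and compare.
r_hat = np.zeros_like(r)
r_hat[0] = r[0]
for t in range(T - 1):
    r_hat[t + 1] = np.tanh(W_fit @ r_hat[t])

err = np.mean((r_hat - r) ** 2)
print(err)
```

In a multi-region version of this sketch, `W_fit` would be partitioned into within- and across-region blocks, and the across-region blocks would carry the feed-forward and feedback interactions described in the abstract.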