- Keywords: computational neuroscience, motor control, deep RL
- TL;DR: We built a physical simulation of a rodent, trained it to solve a set of tasks, and analyzed the resulting networks.
- Abstract: Parallel developments in neuroscience and deep learning have led to mutually productive exchanges, pushing our understanding of real and artificial neural networks in sensory and cognitive systems. However, this interaction between fields is less developed in the study of motor control. Existing experimental research and neural network models have focused on the production of individual behaviors, yielding little insight into how intelligent systems can produce a rich and varied set of motor behaviors. In this work, we develop a virtual rodent that learns to flexibly apply a broad motor repertoire, including righting, running, leaping, and rearing, to solve multiple tasks in a simulated world. We analyze the artificial neural mechanisms underlying the virtual rodent's motor capabilities using a neuroethological approach, in which we characterize neural activity patterns relative to the rodent's behavior and goals. We show that the rodent solves tasks by using a shared set of force patterns that are orchestrated into task-specific behaviors over longer timescales. Through methods familiar to neuroscientists, including representational similarity analysis, dimensionality reduction techniques, and targeted perturbations, we show that the networks produce these behaviors using at least two classes of behavioral representations: one that explicitly encodes behavioral kinematics in a task-invariant manner, and a second that encodes task-specific behavioral strategies. Overall, the virtual rodent promises to facilitate grounded collaborations between deep reinforcement learning and motor neuroscience.
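The abstract mentions representational similarity analysis (RSA) as one of the methods used to compare network representations. As a minimal sketch of the general technique (not the paper's specific pipeline, and with hypothetical array shapes and names), RSA compares two populations by correlating their representational dissimilarity matrices:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(activity):
    """Representational dissimilarity matrix: pairwise correlation
    distance between condition-wise activity patterns.
    activity: (n_conditions, n_units) array."""
    return pdist(activity, metric="correlation")

def rsa_similarity(act_a, act_b):
    """Spearman correlation between two populations' RDMs: the core
    second-order comparison in representational similarity analysis."""
    rho, _ = spearmanr(rdm(act_a), rdm(act_b))
    return rho

# Hypothetical example: two layers of different widths responding to
# the same 20 conditions, sharing a low-dimensional latent structure.
rng = np.random.default_rng(0)
shared = rng.normal(size=(20, 5))           # shared latent factors
layer1 = shared @ rng.normal(size=(5, 64))  # 64-unit population
layer2 = shared @ rng.normal(size=(5, 128)) # 128-unit population
print(rsa_similarity(layer1, layer2))       # high: shared geometry
```

Because RSA compares dissimilarity structure rather than raw activity, it lets one relate populations of different sizes, which is what makes it useful for comparing representations across network layers (or between networks and neural recordings).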