Geometry of abstract learned knowledge in deep RL agents

Published: 29 Nov 2023, Last Modified: 30 Nov 2023, NeurReps 2023 Oral
Submission Track: Proceedings
Keywords: Evidence accumulation, Low-dimensional embedding, Manifold learning, Population analysis
Abstract: Data from neural recordings suggest that mammalian brains represent physical and abstract task-relevant variables through low-dimensional neural manifolds. In a recent electrophysiological study (Nieh et al., 2021), mice performed an evidence accumulation task while moving along a virtual track. Nonlinear dimensionality reduction of the population activity revealed that task-relevant variables were jointly mapped in an orderly manner in the low-dimensional space. Here we trained deep reinforcement learning (RL) agents on the same evidence accumulation task and found that their neural activity can be described with a low-dimensional manifold spanned by task-relevant variables. These results provide further insight into similarities and differences between neural dynamics in mammals and deep RL agents. Furthermore, we showed that manifold learning can be used to characterize the representational space of the RL agents with the potential to improve the interpretability of decision-making in RL.
Submission Number: 71
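
As a rough illustration of the kind of analysis the abstract describes, the sketch below applies nonlinear dimensionality reduction to an RL agent's hidden-state "population activity". This is not the authors' code: the array shapes, the logging setup, and the choice of scikit-learn's Isomap as the embedding method are all assumptions made for the example.

```python
# Minimal sketch (assumed setup, not the authors' pipeline): embed an RL
# agent's recurrent activations into a low-dimensional space and inspect
# how task variables lie on the resulting manifold.
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(0)

# Placeholder for recurrent-layer activations logged while the agent
# performs the evidence accumulation task (e.g., 5000 timesteps, 128 units).
activations = rng.standard_normal((5000, 128))

# Nonlinear dimensionality reduction of the population activity.
embedding = Isomap(n_neighbors=15, n_components=3).fit_transform(activations)
print(embedding.shape)  # (5000, 3)

# Task-relevant variables logged per timestep (e.g., position along the
# virtual track, accumulated evidence) could then be colored over or
# regressed against the embedding to test whether they are mapped in an
# orderly manner in the low-dimensional space.
```

In practice the activations would come from the trained agent's recurrent layer rather than random noise, and the embedding dimensionality and method would be chosen to match the analysis in the paper.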