Abstract: The hippocampus has long been associated with spatial memory and goal-directed
spatial navigation. However, the region’s independent role in continual learning of
navigational strategies has seldom been investigated. Here we analyse population-level
activity of hippocampal CA1 neurons in the context of continual learning of
two different spatial navigation strategies. Demixed Principal Component Analysis
(dPCA) is applied to neuronal recordings from 612 hippocampal CA1 neurons
of rodents learning to perform allocentric and egocentric spatial tasks. The components
uncovered using dPCA from the firing activity reveal that hippocampal
neurons encode relevant task variables such as decisions, navigational strategies and
reward location. We compare these hippocampal features with standard reinforcement
learning algorithms, highlighting similarities and differences. Finally, we
demonstrate that a standard deep reinforcement learning model achieves average
performance similar to that of the animals, but fails to mimic their behaviour
during task switching. Overall, our results give insights into how the hippocampus
solves reinforced spatial continual learning, and put forward a framework
to explicitly compare biological and machine learning during spatial continual
learning.
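As a rough illustration of the analysis step described above, the sketch below applies demixed PCA to a synthetic pseudo-population of 612 neurons using the open-source dPCA Python package (machenslab/dPCA). The data shapes, the label scheme ('s' for navigation strategy, 't' for time) and the number of components are illustrative assumptions, not the paper's actual settings.

```python
# Minimal sketch, assuming the open-source dPCA package and synthetic
# trial-averaged firing rates; shapes and labels are illustrative only.
import numpy as np
from dPCA import dPCA

n_neurons, n_strategies, n_timebins = 612, 2, 50

# Trial-averaged firing rates: neurons x strategy (allocentric/egocentric) x time
X = np.random.randn(n_neurons, n_strategies, n_timebins)

# 's' marginalizes over the strategy variable, 't' over time
dpca = dPCA.dPCA(labels='st', n_components=10)

# Returns a dict of demixed components, one entry per marginalization
# ('s', 't', 'st'), each of shape (n_components, n_strategies, n_timebins)
Z = dpca.fit_transform(X)
print({key: val.shape for key, val in Z.items()})
```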