Keywords: Prefrontal cortex, reinforcement learning, value, representations, RNNs, composition, generalisation, cognitive maps, schema, meta learning
Abstract: Although the prefrontal cortex (PFC) is known to play a pivotal role in both value-based decision-making and schema learning, the frameworks for each remain largely distinct. By extending recent mechanistic accounts of cognitive maps in PFC to reward-based decision-making tasks, we demonstrate that PFC value responses are necessary for navigating the state-space of the task. Meta-trained RNNs are shown to learn internal value representations that act as control signals for routing activity within the network, consistent with the influential Miller & Cohen hypothesis of PFC as an executive controller. This work builds towards a unifying theory of value and schema in PFC, and offers a mechanistic understanding of meta-reinforcement learners, both biological and artificial.
Primary Area: applications to neuroscience & cognitive science
Submission Number: 22156