Track: Research Track
Keywords: Fictive learning; Reinforcement learning; Two-step task
Abstract: Reinforcement learning (RL) is a normative computational framework for reward-based learning. Classical RL algorithms learn only from experienced outcomes, whereas humans and animals may generalize learning to unexperienced events using internal world models, a process known as fictive learning. We propose a simple, brain-inspired fictive learning rule to augment model-based RL and use the rodent two-step task to examine whether fictive learning can better explain the observed behavior. The learning rule uses the same reward prediction error (RPE) to update both experienced and unexperienced states and actions, with the fictive updates scaled by the event correlation inferred from the internal model. Through simulations, we show that this model better reproduces key behavioral traits observed in the two-step task. Model fitting confirms that it accounts for the data better than existing alternatives. Furthermore, the model replicates the striatal dopaminergic dynamics observed in the same task, suggesting that the brain might rely on fictive learning for reward-based learning. The fictive learning observed here is conceptually analogous to, and partially inspired by, existing counterfactual approaches in machine learning. This convergence illustrates how machine learning offers candidate mechanisms for biology, how experiments reveal new theoretical principles, and how paradigms like the two-step task can serve as shared testbeds to evaluate both robotic and biological agents. Fictive learning exemplifies the broader opportunities for deeper collaboration between theorists and experimentalists in understanding intelligence.
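The abstract's description of the learning rule can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; it assumes a simple two-choice first stage of the two-step task, an arbitrary learning rate, and an illustrative model-inferred correlation constant (EVENT_CORR); the function name fictive_update is hypothetical. It shows the core idea: the same RPE from the experienced action also updates the unexperienced action, scaled by the inferred event correlation.

```python
import numpy as np

# Illustrative hyperparameters (not taken from the paper)
ALPHA = 0.1        # learning rate
N_ACTIONS = 2      # two first-stage choices in the two-step task

# Q-values for the first-stage actions
q = np.zeros(N_ACTIONS)

# Assumed model-inferred event correlation between the two actions.
# In the common/rare transition structure of the two-step task, the
# outcomes of the two choices are partially anticorrelated, so a
# negative value is used here purely for illustration.
EVENT_CORR = -0.5

def fictive_update(chosen, reward):
    """One trial: experienced update plus a fictive update of the unchosen action."""
    # Reward prediction error for the experienced action
    delta = reward - q[chosen]
    # Standard, experience-based update
    q[chosen] += ALPHA * delta
    # Fictive update: the same RPE, scaled by the model-inferred
    # correlation, is applied to the unexperienced action
    unchosen = 1 - chosen
    q[unchosen] += ALPHA * EVENT_CORR * delta

# Example trial: action 0 is chosen and rewarded
fictive_update(chosen=0, reward=1.0)
print(q)  # action 0 increases, action 1 decreases via the fictive update
```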
Submission Number: 136