Keywords: goals, reinforcement learning, cognitive science, computational modeling, autotelic agents, curriculum development
TL;DR: A newly defined, "latent" form of learning progress provides a valuable signal for goal selection in human reinforcement learning
Abstract: Humans are autotelic agents who learn by setting and pursuing their own goals. However, the precise mechanisms guiding human goal selection remain unclear. Learning progress, typically measured as the observed change in performance, can provide a valuable signal for goal selection in both humans and artificial agents. We hypothesize that human choices of goals may also be driven by _latent learning progress_, which humans can estimate through knowledge of their actions and the environment – even without experiencing immediate changes in performance. To test this hypothesis, we designed a hierarchical reinforcement learning task in which human participants (N = 175) repeatedly chose their own goals and learned goal-conditioned policies. Our behavioral and computational modeling results confirm the influence of latent learning progress on goal selection and uncover inter-individual differences, partially mediated by recognition of the task's hierarchical structure. By investigating the role of latent learning progress in human goal selection, we pave the way for more effective and personalized learning experiences as well as the advancement of more human-like autotelic machines.
Primary Area: Neuroscience and cognitive science (neural coding, brain-computer interfaces)
Flagged For Ethics Review: true
Submission Number: 7592