Computational discovery of human reinforcement learning dynamics from choice behavior

Published: 10 Oct 2024, Last Modified: 10 Oct 2024, NeurIPS 2024 Workshop on Behavioral ML, CC BY 4.0
Keywords: Automated model discovery, AI4Science, Reinforcement learning, Behavioral science, Cognitive science
Abstract: This paper presents a novel machine learning approach for inferring interpretable models of human reinforcement learning from behavioral data. By combining recurrent neural networks, sparse identification of nonlinear dynamics, and neural-network ensemble training, we automate the discovery of the underlying cognitive mechanisms. By constraining the network to a low-dimensional memory state, we extract latent dynamical-system variables that represent human behavior. These variables are then used to identify interpretable sparse nonlinear dynamics that describe how action values are updated by cognitive mechanisms. To address the noise inherent in human behavior, we employ an ensemble training procedure that ensures stable convergence of the network. Our approach recovers various ground-truth models in a two-armed bandit task, demonstrating its ability to infer expressive yet interpretable models of human reinforcement learning.
Submission Number: 59
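
The abstract does not spell out the sparse-identification step in code, so the sketch below is a rough, hypothetical illustration rather than the authors' implementation. It assumes direct access to noise-free latent action values from a simulated Q-learning agent (in the paper these latents would instead be extracted by the constrained RNN ensemble), and the function names (`simulate_q_agent`, `stlsq`) and parameter settings are invented for the example. It shows how a SINDy-style sequentially thresholded least-squares regression over a small candidate library can recover a Rescorla-Wagner update rule from value trajectories in a two-armed bandit task.

```python
# Minimal, hypothetical sketch (not the paper's code): recover a value-update rule
# from latent value trajectories with a SINDy-style sparse regression.
import numpy as np

rng = np.random.default_rng(0)

def simulate_q_agent(n_trials=2000, alpha=0.3, beta=5.0, p_reward=(0.8, 0.2)):
    """Simulate a Q-learning agent on a two-armed bandit; return latent values,
    choices, and rewards (values recorded *before* each trial's update)."""
    q = np.zeros(2)
    qs, choices, rewards = [], [], []
    for _ in range(n_trials):
        probs = np.exp(beta * q) / np.exp(beta * q).sum()  # softmax policy
        a = rng.choice(2, p=probs)
        r = float(rng.random() < p_reward[a])
        qs.append(q.copy()); choices.append(a); rewards.append(r)
        q[a] += alpha * (r - q[a])                          # ground-truth update
    return np.array(qs), np.array(choices), np.array(rewards)

def stlsq(theta, dq, threshold=0.05, n_iter=10):
    """Sequentially thresholded least squares: the sparse-regression core of SINDy."""
    xi = np.linalg.lstsq(theta, dq, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        if (~small).any():
            xi[~small] = np.linalg.lstsq(theta[:, ~small], dq, rcond=None)[0]
    return xi

qs, choices, rewards = simulate_q_agent()
a = choices[:-1]
idx = np.arange(len(a))
q_t = qs[:-1][idx, a]          # chosen action's value before its update
dq = qs[1:][idx, a] - q_t      # change applied to that value on this trial
r_t = rewards[:-1]

# Candidate library of terms the update rule may contain.
library = np.column_stack([np.ones_like(q_t), q_t, r_t, q_t * r_t, q_t ** 2])
names = ["1", "q", "r", "q*r", "q^2"]

xi = stlsq(library, dq)
print("dq =", " + ".join(f"{c:.2f}*{n}" for c, n in zip(xi, names) if c != 0.0))
```

With the simulated learning rate alpha = 0.3, the regression should recover approximately dq = 0.30*r - 0.30*q, i.e. the familiar q <- q + alpha*(r - q) update; the harder part of the full method, not shown here, is obtaining comparable latent trajectories from noisy human choice data via the low-dimensional RNN and ensemble training described in the abstract.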