Learning to learn dynamical associations with reward-gated local plasticity

Published: 03 Mar 2026, Last Modified: 06 Mar 2026 · NFAM 2026 Poster · CC BY 4.0
Keywords: dynamical associative memories, recurrent neural networks
Abstract: A central question in neuroscience is how learning creates and reshapes memories under the constraints faced by biological circuits: recurrent dynamics, local synaptic plasticity, and delayed feedback signals. Classical theories of associative memory formalize memory formation as content-addressable storage of patterns, often as attractors in Hopfield-like networks. Yet animals are routinely confronted with tasks that require **dynamical associations**: memories embedded in evolving neural trajectories rather than in static neural states. How such dynamical memories are formed through biologically plausible learning, and how rule variants shape the resulting circuit solutions, remains unclear. Here, we introduce a meta-learning framework that discovers families of reward-gated local plasticity rules enabling recurrent circuits to acquire dynamical associations from delayed reinforcement signals. We equip synapses with eligibility traces that accumulate pre–post synaptic co-activity and allow punctuated changes of neuronal interactions upon reward delivery. Rule parameters shape the eligibility dynamics, thereby controlling how co-activation patterns drive plasticity and the formation of dynamical memories. To avoid differentiating through the full learning trajectory, we optimize rule parameters with a policy-gradient estimator of the expected cumulative reward and use forward-mode differentiation to compute sensitivities of the eligibility dynamics with respect to rule parameters. This framework enables systematic search and analysis of biologically plausible rules for acquiring dynamical associative memories that support learning common neuroscience tasks from delayed rewards.
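The reward-gated eligibility-trace mechanism described in the abstract can be sketched in a few lines. The following is a minimal NumPy illustration, not the paper's actual model: the network size, eligibility time constant `tau_e`, learning rate `eta`, and the single end-of-trial reward are assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 20        # recurrent units (illustrative size, not from the paper)
T = 50        # trial length in time steps
tau_e = 10.0  # eligibility time constant (an assumed rule parameter)
eta = 0.1     # reward-gated learning rate (an assumed rule parameter)

W = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))  # recurrent weights
W0 = W.copy()
E = np.zeros((N, N))  # per-synapse eligibility trace
x = np.zeros(N)       # network state

for t in range(T):
    # recurrent dynamics with small input noise
    x_new = np.tanh(W @ x + rng.normal(scale=0.1, size=N))
    # eligibility accumulates pre-post co-activity and decays with tau_e;
    # no weight change happens here -- traces only mark candidate synapses
    E += (-E + np.outer(x_new, x)) / tau_e
    x = x_new

# delayed scalar reinforcement arrives only at trial end and gates a
# punctuated, local weight update proportional to the stored eligibility
reward = 1.0
W += eta * reward * E
```

In the full framework the quantities marked as rule parameters (`tau_e`, `eta`, and the form of the co-activity term) would themselves be meta-learned via the policy-gradient estimator, rather than fixed by hand as here.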
Submission Number: 51