Shaping Sequence Attractor Schema in Recurrent Neural Networks

Published: 18 Sept 2025 · Last Modified: 29 Oct 2025 · NeurIPS 2025 poster · CC BY 4.0
Keywords: schema, attractor, behavioral shaping, sequence
TL;DR: This study reveals that shaping progressively builds attractor dynamics in RNNs for generalizable sequence schemas, facilitating efficient learning and abstraction.
Abstract: Sequence schemas are abstract, reusable knowledge structures that facilitate rapid adaptation and generalization in novel sequential tasks. In both animals and humans, shaping is an efficient way of acquiring such schemas, particularly in complex sequential tasks. As a form of curriculum learning, shaping progressively advances from simple subtasks to integrated full sequences, ultimately enabling generalization across different task variations. Despite the importance of schemas in cognition and of shaping in schema acquisition, the underlying neural dynamics remain poorly understood. To explore this, we train recurrent neural networks on an odor-sequence task using a shaping protocol inspired by well-established paradigms in experimental neuroscience. Our model provides the first systematic reproduction of key features of schema learning observed in the orbitofrontal cortex, including rapid adaptation to novel tasks, structured neural representation geometry, and progressive dimensionality compression during learning. Crucially, analysis of the trained RNN reveals that the learned schema is implemented through sequence attractors. These attractor dynamics emerge gradually through the shaping process: starting with isolated discrete attractors in simple tasks, evolving into linked sequences, and eventually abstracting into generalizable attractors that capture shared task structure. Moreover, applying our method to a keyword spotting task shows that shaping facilitates the rapid development of sequence attractor-like schemas, leading to enhanced learning efficiency. In summary, our work elucidates a novel attractor-based mechanism underlying schema representation and its evolution via shaping, with the potential to provide new insights into the acquisition of abstract knowledge across biological and artificial intelligence.
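The shaping protocol described above, as a form of curriculum learning, can be sketched as a stage scheduler that advances from short subtasks toward the full sequence once performance on the current stage clears a threshold. This is a minimal illustrative sketch with hypothetical names and parameters (e.g. `full_length`, `threshold`); it is not the paper's actual training protocol, which follows specific neuroscience paradigms.

```python
class ShapingCurriculum:
    """Illustrative shaping scheduler (hypothetical, not the paper's code):
    start with the shortest subtask and advance toward the full sequence
    once accuracy on the current stage exceeds a threshold."""

    def __init__(self, full_length, threshold=0.9):
        self.full_length = full_length  # length of the complete target sequence
        self.threshold = threshold      # accuracy required to advance a stage
        self.stage = 1                  # current subtask length (starts simple)

    def current_task_length(self):
        """Length of the subtask the learner should train on now."""
        return self.stage

    def update(self, accuracy):
        """Call after each evaluation; returns True if the stage advanced."""
        if accuracy >= self.threshold and self.stage < self.full_length:
            self.stage += 1
            return True
        return False


# Usage: advance from single-element subtasks toward the full 4-step sequence.
curriculum = ShapingCurriculum(full_length=4)
curriculum.update(0.95)  # above threshold: advance to length-2 subtasks
curriculum.update(0.70)  # below threshold: remain at the current stage
```

In an actual training loop, `current_task_length()` would select which subtask data the RNN sees, so that linked sequence attractors can build on the discrete attractors learned in earlier stages.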
Supplementary Material: zip
Primary Area: Neuroscience and cognitive science (e.g., neural coding, brain-computer interfaces)
Submission Number: 18882