Brain-inspired Representation Transfer through Invariant Input-driven Continuous Attractors in a Modular RNN Framework

19 Sept 2025 (modified: 24 Nov 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Representation Transfer; Attractor Dynamics; Domain Adaptation; Cognitive-inspired Computation
TL;DR: Invariant input-driven attractors enable robust representation transfer and fast domain adaptation in brain-inspired AI
Abstract: Conventional end-to-end deep neural networks often degrade under domain shifts and require costly retraining when deployed in unpredictable, noisy environments. Inspired by biological neural computation, we propose a modular framework in which each module is a recurrent neural network pretrained with a simple, task-agnostic protocol to learn robust, transferable features. We show that low-dimensional, input-driven continuous attractor manifolds, embedded in a high-dimensional latent space, yield task-invariant representations that enable robust transfer and resilience to temporal perturbations. At deployment, only a lightweight adapter needs training, allowing rapid adaptation to new tasks. Validated on the Dynamic Vision Sensor (DVS) Gesture benchmark and a custom rehabilitation action-recognition dataset we collected, our framework achieves accuracy competitive with state-of-the-art methods, especially in few-shot settings, while requiring an order of magnitude fewer parameters and minimal training. By integrating biologically inspired attractor dynamics with cortical-like modular composition, the framework provides a practical route to robust, continual adaptation in real-world information processing.
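The deployment recipe described in the abstract — frozen, pretrained recurrent modules whose input-driven dynamics produce transferable features, with only a lightweight adapter fit on the new task — can be illustrated with a minimal sketch. This is not the paper's implementation; all dimensions, the tanh RNN, and the least-squares adapter fit are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen for illustration only.
D_IN, D_HID, D_OUT, T, N = 8, 64, 4, 20, 32

# Frozen "pretrained" recurrent module: weights are fixed at deployment.
W_in = rng.normal(scale=0.3, size=(D_HID, D_IN))
W_rec = rng.normal(scale=1.0 / np.sqrt(D_HID), size=(D_HID, D_HID))

def rnn_features(x_seq):
    """Run the frozen recurrent module on a sequence; return the
    final hidden state as the task-agnostic feature vector."""
    h = np.zeros(D_HID)
    for x_t in x_seq:                 # input-driven recurrent dynamics
        h = np.tanh(W_in @ x_t + W_rec @ h)
    return h

# Toy few-shot "new task": a handful of labelled sequences.
X = [rng.normal(size=(T, D_IN)) for _ in range(N)]
Y = rng.normal(size=(N, D_OUT))       # placeholder task targets
H = np.stack([rnn_features(x) for x in X])

# Lightweight adapter: the ONLY trained component, fit in closed form
# by least squares on the frozen module's features.
W_adapt, *_ = np.linalg.lstsq(H, Y, rcond=None)
pred = H @ W_adapt
print(pred.shape)  # (32, 4)
```

The point of the sketch is the division of labour: adapting to a new task touches only `W_adapt` (here 64×4 parameters), while the recurrent module stays untouched, which is what keeps per-task training cost minimal.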
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 15623