On Biologically Plausible Learning in Continuous Time

ICLR 2026 Conference Submission 22112 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Biologically plausible learning, plasticity, SGD, feedback alignment, dynamics, continuous-time models
TL;DR: We study a continuous-time neural model that unifies various error propagation learning algorithms, showing that learning requires input-error overlap and predicting seconds-scale eligibility traces in biology.
Abstract: Biological learning unfolds continuously in time, yet most algorithmic models rely on discrete updates and separate inference and learning phases. We study a continuous-time neural model that unifies several biologically plausible learning algorithms and removes the need for phase separation. Rules including stochastic gradient descent (SGD), feedback alignment (FA), direct feedback alignment (DFA), and Kolen–Pollack (KP) emerge naturally as limiting cases of the dynamics. Simulations show that these continuous-time networks learn stably at biological timescales, even under temporal mismatches and integration noise. Our results reveal that, in the absence of longer-range memory mechanisms, learning is constrained by the temporal overlap of inputs and errors. Robust learning requires potentiation timescales that outlast the stimulus window by at least an order of magnitude, placing the effective eligibility regime in the few-second range. More broadly, these results identify a unifying principle: learning succeeds when input and error are temporally correlated at each synapse, a rule that yields testable predictions for neuroscience and practical design guidance for analog hardware.
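To make the abstract's principle concrete, here is a minimal NumPy sketch of one rule family it names: continuous-time feedback alignment, where each synapse low-pass filters the coincidence of its input and a routed error signal (an eligibility trace) and weights drift along that trace without separate inference and learning phases. This is an illustration, not the authors' model; the network sizes, toy linear-regression task, learning rate, and exact trace dynamics are all assumptions, while the fixed random feedback matrix B and the eligibility timescale set roughly 10x the stimulus window follow the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes (not from the paper)
n_in, n_hid, n_out = 5, 8, 2

# Forward weights, plus a fixed random feedback matrix B that
# replaces W2.T in the error pathway (feedback alignment)
W1 = rng.normal(0, 0.1, (n_hid, n_in))
W2 = rng.normal(0, 0.1, (n_out, n_hid))
B  = rng.normal(0, 0.1, (n_hid, n_out))

# A fixed random linear map serves as the target task
T = rng.normal(0, 1.0, (n_out, n_in))

dt     = 1e-3   # Euler integration step (s)
t_stim = 0.2    # stimulus window (s)
tau_e  = 2.0    # eligibility timescale (s); ~10x the stimulus, per the abstract
eta    = 0.5    # learning rate (assumed)

# Per-synapse eligibility traces; they persist across stimuli,
# so there is no reset between "trials"
e1 = np.zeros((n_hid, n_in))
e2 = np.zeros((n_out, n_hid))

for trial in range(2000):
    x = rng.normal(0, 1.0, n_in)          # one stimulus
    for _ in range(int(t_stim / dt)):
        h   = np.tanh(W1 @ x)             # hidden activity
        y   = W2 @ h                      # network output
        err = (T @ x) - y                 # output error
        d_h = (B @ err) * (1 - h**2)      # error routed through fixed B

        # Low-pass filter the input-error coincidence at each synapse
        e2 += dt / tau_e * (np.outer(err, h) - e2)
        e1 += dt / tau_e * (np.outer(d_h, x) - e1)

        # Weights follow the eligibility traces continuously in time
        W2 += dt * eta * e2
        W1 += dt * eta * e1

    if trial % 500 == 0:
        print(f"trial {trial}: |err| = {np.linalg.norm(err):.3f}")
```

Shrinking tau_e toward the stimulus duration in this sketch destroys the input-error overlap at each synapse, which is one way to probe the abstract's claim that robust learning needs eligibility traces in the few-second range.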
Primary Area: applications to neuroscience & cognitive science
Submission Number: 22112