Can Biologically Plausible Temporal Credit Assignment Rules Match BPTT for Neural Similarity? E-prop as an Example

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY-NC-SA 4.0
TL;DR: We investigate whether the pursuit of biologically plausible learning rules prioritizes plausibility at the level of synaptic implementation (e.g., locality) at the expense of reproducing brain-like neural activity.
Abstract: Understanding how the brain learns may be informed by studying biologically plausible learning rules. These rules, often approximating gradient descent learning to respect biological constraints such as locality, must meet two critical criteria to be considered an appropriate brain model: (1) good neuroscience task performance and (2) alignment with neural recordings. While extensive research has assessed the first criterion, the second remains underexamined. Employing methods such as Procrustes analysis on well-known neuroscience datasets, this study demonstrates the existence of a biologically plausible learning rule — namely e-prop, which is based on gradient truncation and has demonstrated versatility across a wide range of tasks — that can achieve neural data similarity comparable to Backpropagation Through Time (BPTT) when matched for task accuracy. Our findings also reveal that model architecture and initial conditions can play a more significant role in determining neural similarity than the specific learning rule. Furthermore, we observe that BPTT-trained models and their biologically plausible counterparts exhibit similar dynamical properties at comparable accuracies. These results underscore the substantial progress made in developing biologically plausible learning rules, highlighting their potential to achieve both competitive task performance and neural data similarity.
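The abstract's neural-similarity comparison rests on Procrustes analysis, which aligns two sets of activity patterns by an optimal rotation before measuring their agreement. As a rough illustration (not the paper's exact pipeline; the function name, normalization, and score definition here are simplifying assumptions), a minimal orthogonal-Procrustes similarity between two activity matrices can be sketched as:

```python
import numpy as np

def procrustes_similarity(X, Y):
    """Illustrative Procrustes-style similarity between two activity
    matrices of shape (time, neurons), e.g. model units vs. recorded
    neurons. Returns a score in [0, 1]; 1 means Y equals X up to an
    orthogonal transform. A simplified sketch, not the paper's metric.
    """
    # Center each matrix over time and scale to unit Frobenius norm
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    X = X / np.linalg.norm(X)
    Y = Y / np.linalg.norm(Y)
    # The best orthogonal alignment of Y to X achieves an inner product
    # equal to the sum of singular values of X^T Y (nuclear norm)
    s = np.linalg.svd(X.T @ Y, compute_uv=False)
    return float(s.sum())
```

With both matrices normalized, the score is bounded by 1, so models trained with different rules (e.g. e-prop vs. BPTT) can be ranked on a common scale against the same recordings.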
Lay Summary: The brain learns by adjusting the strength of connections between neurons. But how are these countless connection updates coordinated so that learning leads to symphony — where neurons work together to solve a task — rather than cacophony? To investigate this, computational neuroscientists look to the standard training methods in deep learning for inspiration. While powerful, these methods don't align with known biological mechanisms. To bridge this gap, researchers have proposed more biologically plausible learning models — inspired by standard deep learning methods but grounded in known biological processes. But how good are these models? In particular: (1) can they support good task performance, and (2) do they produce brain-like neural activity? While (1) has been studied widely, our study focuses on (2). We show that standard deep learning training can match the brain-like activity of one such biologically plausible learning model — if the architecture and initialization are well chosen. This suggests that brain-like learning in machines may be within reach, with implications for human- or animal-aligned AI and improved understanding of learning in the brain.
Link To Code: https://github.com/Helena-Yuhan-Liu/LearningRuleSimilarities/tree/main
Primary Area: Applications->Neuroscience, Cognitive Science
Keywords: Computational neuroscience, biologically-plausible learning rules, recurrent neural networks, neural data similarity, neural representations
Submission Number: 7490