Keywords: Hebbian plasticity; dimensionality expansion; reservoir computing; mixed selectivity
TL;DR: Hebb-only learning makes excitatory recurrent networks self-decorrelate and expand representational dimensionality, improving downstream decoders.
Abstract: We show that local, unsupervised Hebbian plasticity is sufficient for purely excitatory recurrent networks to self-decorrelate their population activity, thereby expanding representational dimensionality. Using a twin-reservoir protocol to isolate the causal effect of plasticity, across both rate-based and spiking reservoirs driven by naturalistic audio (Japanese Vowels, CatsDogs), four canonical rules (Oja, BCM, pairwise STDP, triplet STDP) consistently reduce pairwise correlations and spike-time synchrony and increase PCA-based dimensionality relative to frozen controls, while maintaining stable dynamics in the echo-state regime. We provide a simple mechanistic account: when two neurons are strongly correlated, Hebbian plasticity pushes them into distinct nonlinear operating regimes, decorrelating their outputs, lowering redundancy, and yielding richer population codes. These results identify a minimal, biologically plausible route to high-dimensional coding and offer a hardware-friendly recipe for upgrading reservoir architectures with on-chip, unsupervised local plasticity. Our findings bridge machine learning and systems neuroscience by showing that Hebbian synapses alone can sculpt random recurrent substrates into high-capacity representational engines.
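A minimal sketch of the twin-reservoir protocol with a rate-based reservoir and Oja's rule, not the authors' implementation: the network size, toy input signal, learning rate, and the participation-ratio dimensionality measure are illustrative assumptions. One reservoir adapts under Oja plasticity while its frozen twin receives the same drive, and we compare mean pairwise correlation and PCA-based dimensionality.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, eta = 200, 3000, 1e-4          # neurons, timesteps, Oja learning rate (assumed values)

# Random excitatory recurrent weights, rescaled into the echo-state regime
W = np.abs(rng.normal(0, 1.0 / np.sqrt(N), (N, N)))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1
W_frozen = W.copy()                                # twin reservoir: frozen control
w_in = rng.normal(0, 1, N)                         # fixed input weights

def run(W, plastic):
    """Drive the reservoir with a toy signal; optionally apply Oja's rule."""
    x = np.zeros(N)
    X = np.empty((T, N))
    for t in range(T):
        u = np.sin(0.07 * t) + 0.1 * rng.standard_normal()  # stand-in for audio input
        x = np.tanh(W @ x + w_in * u)
        if plastic:
            # Oja's rule: Hebbian outer product with multiplicative decay
            W += eta * (np.outer(x, x) - (x ** 2)[:, None] * W)
            np.clip(W, 0.0, None, out=W)  # keep the network purely excitatory
        X[t] = x
    return X

def stats(X):
    """Mean absolute pairwise correlation and participation ratio of PCA spectrum."""
    C = np.corrcoef(X.T)
    mean_corr = np.abs(C[np.triu_indices_from(C, k=1)]).mean()
    lam = np.linalg.eigvalsh(np.cov(X.T))
    pr = lam.sum() ** 2 / (lam ** 2).sum()  # effective PCA dimensionality
    return mean_corr, pr

print("plastic twin:", stats(run(W, plastic=True)))
print("frozen twin :", stats(run(W_frozen, plastic=False)))
```

Under these assumptions, the plastic twin should show lower mean pairwise correlation and a higher participation ratio than its frozen control, mirroring the decorrelation and dimensionality-expansion effect described in the abstract.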
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 8008