Continual familiarity decoding from recurrent connections in spiking networks

Published: 12 Jan 2025 · Last Modified: 21 Feb 2025 · OpenReview Archive Direct Upload · CC BY-NC-ND 4.0
Abstract: Familiarity memory enables recognition of previously encountered inputs as familiar without recalling detailed stimulus information, supporting adaptive behavior across various timescales. We present a spiking neural network model with lateral connectivity shaped by unsupervised spike-timing-dependent plasticity (STDP) that encodes familiarity via local plasticity events. We show that familiarity can be decoded from network activity using both frequency (spike count) and temporal (spike synchrony) characteristics of spike trains. Temporal coding achieves enhanced performance under sparse input conditions, consistent with the principles of sparse coding observed in the brain. We also show how connectivity structure supports each decoding strategy, revealing distinct plasticity regimes. On the continual familiarity detection task, our approach outperforms an LSTM in temporal generalizability, with input stimuli naturally encoded in the recurrent connectivity and no separate training stage.
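The abstract contrasts two readouts of familiarity: a frequency code (total spike count) and a temporal code (spike synchrony). The toy sketch below, which is not from the paper, illustrates why the two can dissociate: if familiar inputs recruit correlated firing without raising the overall rate, a synchrony-based decoder separates familiar from novel stimuli while a count-based one does not. All names, rates, and the `sync` parameter are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def spike_trains(n_neurons, n_steps, rate, sync=0.0):
    """Binary spike raster; `sync` routes a fraction of the firing
    probability through a shared event that hits all neurons at once."""
    shared = rng.random(n_steps) < rate * sync
    indep = rng.random((n_neurons, n_steps)) < rate * (1.0 - sync)
    return indep | shared[None, :]

def spike_count(raster):
    """Frequency readout: total number of spikes in the raster."""
    return int(raster.sum())

def synchrony(raster):
    """Temporal readout: fraction of bins where over half the
    population fires together."""
    pop = raster.sum(axis=0)
    return float((pop > raster.shape[0] / 2).mean())

# Hypothetical rasters: familiar inputs evoke correlated spikes,
# novel inputs evoke independent spikes at the same mean rate.
familiar = spike_trains(20, 1000, rate=0.05, sync=0.5)
novel = spike_trains(20, 1000, rate=0.05, sync=0.0)

print(synchrony(familiar), synchrony(novel))  # synchrony separates them
print(spike_count(familiar), spike_count(novel))  # counts stay comparable
```

Because the shared-event construction keeps the per-neuron firing probability fixed, thresholding the synchrony statistic distinguishes the two conditions even when spike counts overlap, mirroring the abstract's claim that temporal decoding can outperform rate decoding under sparse input.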