Sequential Learning and Retrieval in a Sparse Distributed Memory: The K-winner Modern Hopfield Network

Published: 27 Oct 2023, Last Modified: 26 Nov 2023 · AMHN23 Oral
Keywords: Associative Memory, Modern Hopfield Network, Hippocampus, Sequential Learning, Biologically Plausible Algorithms
TL;DR: We design an MHN-inspired sparse distributed memory that shows superior one-shot retention for older memories compared to a slot-based MHN, suggesting the relevance of sparse, distributed latent representations in designing robust memory systems.
Abstract: Many autoassociative memory models rely on a localist framework, using a neuron or slot for each memory. However, neuroscience research suggests that memories depend on sparse, distributed representations over neurons with sparse connectivity. Accordingly, we extend a canonical localist memory model---the modern Hopfield network (MHN)---to a distributed variant called the K-winner modern Hopfield network, equating the number of synaptic parameters (weights) in the localist and K-winner variants. We study both models' abilities to reconstruct once-presented patterns organized into long presentation sequences, updating the parameters of the best-matching memory neuron (or k best neurons) as each new pattern is presented. We find that K-winner MHNs exhibit superior retention of older memories.
Submission Number: 36
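The abstract's core mechanism---one-shot storage that updates the k best-matching memory neurons, paired with MHN-style softmax retrieval---can be sketched as follows. This is a minimal illustrative sketch based only on the abstract's description; all names, the learning rate, and the inverse temperature `beta` are assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, dim, k, lr = 64, 32, 4, 0.5  # illustrative sizes, not from the paper

# Synaptic weight matrix: one row per memory neuron (distributed, not slot-per-memory)
W = rng.normal(scale=0.1, size=(n_neurons, dim))

def store(pattern):
    """One-shot update: move the k best-matching rows toward the pattern."""
    scores = W @ pattern                       # match score for each memory neuron
    winners = np.argsort(scores)[-k:]          # indices of the k best-matching neurons
    W[winners] += lr * (pattern - W[winners])  # nudge only the winners toward the pattern

def retrieve(cue, beta=8.0):
    """MHN-style retrieval: softmax-weighted readout over memory rows."""
    scores = beta * (W @ cue)
    p = np.exp(scores - scores.max())          # numerically stable softmax
    p /= p.sum()
    return p @ W

# Store a single normalized pattern once, then cue with it
pattern = rng.normal(size=dim)
pattern /= np.linalg.norm(pattern)
store(pattern)
recon = retrieve(pattern)
```

With k = 1 this reduces to the localist case the paper compares against, where each pattern updates a single slot; larger k spreads each memory across several neurons, which the abstract credits for better retention of older memories under sequential presentation.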