Keywords: Recognition Memory, Associative Memory, Energy-Based Models, Hopfield Networks, Predictive Coding, Biologically Plausible Models, Metric Learning, Computational Neuroscience
TL;DR: We apply Hopfield networks and predictive coding models of associative memory to recognition memory tasks, and mathematically explain their performance differences as the learning of different metrics.
Abstract: Associative memory (AM) and recognition memory (RM) are fundamental to both human and machine cognition. RM refers to the ability to recognize whether a stimulus has been seen before or is novel. Neuroscience studies reveal that brain regions such as the hippocampus, long associated with AM, are also involved in RM. Inspired by repetition suppression in the brain, this work presents an energy-based approach to RM, in which a model learns by adjusting an energy function. We apply this energy-based approach to Hopfield Networks (HNs) and Predictive Coding Networks (PCNs). Our simulations indicate that PCNs outperform HNs on RM tasks, especially with correlated patterns.
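To make the energy-based recognition setup concrete, here is a minimal sketch (our illustration, not the paper's code): a classical Hopfield network stores patterns with a Hebbian rule, and a probe's energy serves as a familiarity score, with stored (familiar) patterns expected to sit in lower-energy basins than novel ones.

```python
import numpy as np

def hebbian_weights(patterns):
    """Hebbian outer-product learning; `patterns` has shape (P, n), entries in {-1, +1}."""
    _, n = patterns.shape
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)  # classical Hopfield networks have no self-connections
    return W

def hopfield_energy(W, x):
    """Classical Hopfield energy; lower energy is read as 'more familiar'."""
    return -0.5 * x @ W @ x

# Toy recognition-memory test (hypothetical setup): random binary patterns.
rng = np.random.default_rng(0)
n = 200
familiar = rng.choice([-1.0, 1.0], size=(10, n))
novel = rng.choice([-1.0, 1.0], size=(10, n))

W = hebbian_weights(familiar)
fam_E = [hopfield_energy(W, x) for x in familiar]
nov_E = [hopfield_energy(W, x) for x in novel]
# In practice a decision threshold on energy would be calibrated;
# here we just check that familiar items score lower on average.
print(f"mean familiar energy: {np.mean(fam_E):.1f}")
print(f"mean novel energy:    {np.mean(nov_E):.1f}")
```

With uncorrelated random patterns this separation is easy; the paper's point is that it degrades for correlated patterns, which is where the PCN's whitening (discussed next) matters.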
In this work, we also unify the theoretical understanding of HNs and PCNs in RM, revealing that both perform metric learning. This theory is crucial for explaining PCNs' superior performance on correlated data, as it reveals that PCNs employ a statistical whitening step in their metric learning, which sharpens the distinction between familiar and novel stimuli. Overall, the superior performance of PCNs, together with the unique error neurons in their circuit implementation matching repetition suppression, provides a plausible account of how the brain performs RM within a network architecture known to also support AM.
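As a hedged, schematic reading of the metric-learning claim (the notation below is ours, not necessarily the paper's): a Hebbian Hopfield energy scores a probe $x$ by its raw correlation with the $P$ stored patterns $x_\mu$,

$$E_{\mathrm{HN}}(x) \;=\; -\tfrac{1}{2}\, x^\top W x, \qquad W \;=\; \tfrac{1}{n}\sum_{\mu=1}^{P} x_\mu x_\mu^\top,$$

whereas a whitening step yields a Mahalanobis-style metric that discounts structure shared across patterns,

$$d_{\mathrm{white}}(x) \;=\; (x - \bar{x})^\top \Sigma^{-1} (x - \bar{x}), \qquad \Sigma = \mathrm{Cov}(x_\mu).$$

Under this reading, the inverse-covariance factor removes correlations among the stored patterns, which is why a whitened metric can keep familiar and novel items separable even when the patterns themselves are highly correlated.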
Submission Number: 20