Associative Memory Learning Through Redundancy Maximization

Published: 05 Mar 2025, Last Modified: 20 Apr 2025 · NFAM 2025 Poster · CC BY 4.0
Track: long paper (up to 5 pages)
Keywords: Information Theory, Partial Information Decomposition, Local Learning, Hopfield Networks
TL;DR: This paper uses a redundancy measure from information theory to analyze Hebbian Hopfield Networks and constructs a new learning rule based on it.
Abstract: Hopfield networks mark an important milestone in the development of modern artificial intelligence architectures. In this work, we argue that a foundational principle for solving the associative memory problem these networks address, at the scale of individual neurons, is to promote redundancy within each neuron's activity between the input pattern and the network's internal state. We demonstrate how to quantify this redundancy in classical Hebbian Hopfield networks using Partial Information Decomposition (PID), and reveal that redundancy plays a dominant role compared to synergy or unique information when operating below capacity. Beyond analysis, we show that redundancy can be used as a learning goal for Hopfield networks by constructing associative memory networks from neurons that directly optimize PID-based goal functions. In experiments, we find that these "infomorphic" Hopfield networks substantially outperform the original Hebbian networks and achieve promising performance, with room for further improvement. This work offers novel insights into how associative memory functions at an information-theoretic level of abstraction and opens pathways to designing new learning rules for different associative memory architectures based on redundancy-maximization goals.
Submission Number: 12
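
For readers unfamiliar with the baseline the abstract refers to: the classical Hebbian Hopfield network stores binary patterns with the outer-product rule and recalls them through sign-threshold dynamics. The minimal NumPy sketch below is a standard textbook construction, not code from the paper; all names are illustrative. The paper's redundancy analysis and infomorphic learning rule are built on top of networks like this one, measuring PID redundancy in each neuron's activity rather than changing the recall dynamics shown here.

import numpy as np

def store_patterns(patterns):
    # Hebbian outer-product rule: W_ij = (1/N) * sum_mu xi_i^mu * xi_j^mu, no self-connections
    n_neurons = patterns.shape[1]
    W = patterns.T @ patterns / n_neurons
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, max_steps=20):
    # Synchronous sign updates until a fixed point (or the step limit) is reached
    for _ in range(max_steps):
        new_state = np.where(W @ state >= 0, 1, -1)
        if np.array_equal(new_state, state):
            break
        state = new_state
    return state

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(5, 100))                  # 5 patterns, 100 binary neurons: well below capacity
W = store_patterns(patterns)
probe = patterns[0] * np.where(rng.random(100) < 0.1, -1, 1)   # corrupt a stored pattern by flipping ~10% of bits
recovered = recall(W, probe)
print("overlap with stored pattern:", recovered @ patterns[0] / 100)

Operating well below the ~0.138N capacity of the Hebbian rule, as in this example, is the regime in which the paper reports redundancy dominating over synergy and unique information.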