TL;DR: A nonparametric construction of transformer-compatible modern Hopfield models, which we use to derive efficient variants.
Abstract: We present a nonparametric interpretation of deep-learning-compatible modern Hopfield models and use this new perspective to introduce efficient variants.
Our key contribution stems from interpreting the memory storage and retrieval processes in modern Hopfield models as a nonparametric regression problem over a set of query-memory pairs.
Interestingly, our framework not only recovers the known results of the original dense modern Hopfield model but also fills a gap in the literature on efficient modern Hopfield models by introducing *sparse-structured* modern Hopfield models with sub-quadratic complexity.
We establish that this sparse model inherits the appealing theoretical properties of its dense analogue: connection to transformer attention, fixed-point convergence, and exponential memory capacity.
Additionally, we showcase the versatility of our framework by constructing a family of modern Hopfield models as extensions, including linear, random-masked, top-$K$, and positive-random-feature modern Hopfield models.
Empirically, we validate our framework on memory-retrieval and learning tasks in both synthetic and real-world settings.
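To make the retrieval-as-regression idea concrete, the following is a minimal NumPy sketch, not the repository's implementation, of the dense modern Hopfield retrieval update (a softmax-weighted combination of stored memories) together with a hypothetical top-$K$ sparse variant in which each update touches only $K$ memories; the function names and the inverse-temperature parameter `beta` are illustrative assumptions.

```python
# Minimal sketch (assumed interface, not the released code's API) of dense vs.
# top-K sparse modern Hopfield retrieval.
import numpy as np

def dense_hopfield_retrieve(memories, query, beta=1.0, steps=1):
    """Dense retrieval update: q <- memories @ softmax(beta * memories.T @ q).

    memories: (d, M) array holding M stored patterns of dimension d.
    query:    (d,) initial (possibly noisy/partial) query pattern.
    """
    q = query.copy()
    for _ in range(steps):
        scores = beta * memories.T @ q            # (M,) similarities to all memories
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()                  # softmax over all M memories
        q = memories @ weights                    # convex combination of memories
    return q

def topk_hopfield_retrieve(memories, query, k, beta=1.0, steps=1):
    """Sparse-structured variant: restrict the softmax to the k largest
    similarities, so each update combines only k of the M memories."""
    q = query.copy()
    for _ in range(steps):
        scores = beta * memories.T @ q
        idx = np.argpartition(scores, -k)[-k:]    # indices of the k largest scores
        w = np.exp(scores[idx] - scores[idx].max())
        w /= w.sum()                              # softmax over the selected memories
        q = memories[:, idx] @ w
    return q

# Usage: retrieve a stored pattern from a noisy query.
rng = np.random.default_rng(0)
Xi = rng.standard_normal((64, 1000))              # 1000 stored patterns, d = 64
q0 = Xi[:, 7] + 0.1 * rng.standard_normal(64)     # noisy copy of pattern 7
q_dense = dense_hopfield_retrieve(Xi, q0, beta=4.0)
q_sparse = topk_hopfield_retrieve(Xi, q0, k=32, beta=4.0)
```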
Lay Summary: Modern AI systems often depend on an “attention” step that looks at every piece of data all at once, a procedure that quickly becomes slow and costly as models grow. Our work recasts a powerful alternative, the modern Hopfield network, as a simple form of pattern-matching regression. This fresh view lets us build sparse-structured Hopfield memories that ignore the many data items that do not matter, slashing the usual quadratic compute cost to nearly linear while keeping three key benefits: (i) one-shot recall of stored patterns, (ii) strong resistance to noise, and (iii) the ability to hold an exponential number of memories. We back these claims with new mathematical error bounds and with experiments on images, time-series data and multiple-instance learning tasks. In practice, our layers can drop into today’s deep-learning code as a cheaper replacement for attention, delivering similar or better accuracy while using far less computation.
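The lay summary's point that such layers can stand in for standard attention could look roughly like the hypothetical PyTorch module below; the class name, projections, and top-k weighting are illustrative assumptions rather than the released code's API, and this sketch still forms the full score matrix before sparsifying, whereas the paper's efficient variants are designed to avoid that quadratic cost.

```python
# Hypothetical drop-in sketch of a sparse Hopfield-style attention layer
# (assumed names and mechanism; not the repository's implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKHopfieldAttention(nn.Module):
    def __init__(self, dim, k=32, beta=1.0):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)
        self.v_proj = nn.Linear(dim, dim)
        self.k = k
        self.beta = beta

    def forward(self, queries, memories):
        # queries: (B, Lq, dim), memories: (B, Lm, dim)
        Q = self.q_proj(queries)
        K = self.k_proj(memories)
        V = self.v_proj(memories)
        scores = self.beta * Q @ K.transpose(-2, -1)         # (B, Lq, Lm)
        k = min(self.k, scores.size(-1))
        topk_scores, topk_idx = scores.topk(k, dim=-1)       # keep k memories per query
        weights = F.softmax(topk_scores, dim=-1)             # sparse association weights
        # Gather the selected value vectors and combine them.
        V_expanded = V.unsqueeze(1).expand(-1, Q.size(1), -1, -1)         # (B, Lq, Lm, dim)
        idx = topk_idx.unsqueeze(-1).expand(-1, -1, -1, V.size(-1))       # (B, Lq, k, dim)
        topk_V = torch.gather(V_expanded, 2, idx)                          # (B, Lq, k, dim)
        return (weights.unsqueeze(-1) * topk_V).sum(dim=2)                 # (B, Lq, dim)
```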
Link To Code: https://github.com/MAGICS-LAB/NonparametricHopfield
Primary Area: Deep Learning->Attention Mechanisms
Keywords: dense associative memory, modern Hopfield model, Hopfield network, efficient transformer, efficient attention, transformer, attention, foundation model, large language model
Submission Number: 836