Keywords: Associative networks, Generalization, Data structure
TL;DR: We show that Hopfield networks with structured examples have a surprisingly rich phase diagram.
Abstract: It has recently been shown that, when a Hopfield network stores examples generated as superpositions of random features, new attractors appear in the model corresponding to those features. In this work we extend that result to superpositions of a finite number of features and show numerically that the network remains capable of learning the features.
Furthermore, we reveal that the network also develops attractors corresponding to previously unseen examples generated from the same set of features. We support this result with a simple signal-to-noise argument and conjecture a phase diagram.
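The setup described in the abstract can be sketched numerically. The snippet below is an illustrative toy, not the paper's exact protocol: it assumes binary (±1) features, examples built as the sign of a superposition of a finite number of features with random ±1 coefficients, and standard Hebbian storage of the examples only. It then checks that the dynamics started near an (unstored) feature relaxes onto it, i.e. that a feature attractor has emerged.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, M = 500, 3, 200  # neurons, features, stored examples (illustrative sizes)

# Random binary features; these are never stored directly.
features = rng.choice([-1, 1], size=(L, N))

# Each example is the sign of a superposition of all L features with
# random +/-1 coefficients -- one simple way to superpose a finite
# number of features (the paper's exact recipe may differ).
coeffs = rng.choice([-1, 1], size=(M, L))
examples = np.sign(coeffs @ features)
examples[examples == 0] = 1

# Hebbian coupling matrix built from the examples only.
J = examples.T @ examples / N
np.fill_diagonal(J, 0)

def relax(s, steps=50):
    """Synchronous sign dynamics until a fixed point or the step limit."""
    for _ in range(steps):
        s_new = np.sign(J @ s)
        s_new[s_new == 0] = 1
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

# Start from a noisy copy of feature 0 and measure the final overlap
# with it: a value near 1 signals an attractor at the unseen feature.
flip = rng.random(N) < 0.05
probe = features[0] * np.where(flip, -1, 1)
m = relax(probe) @ features[0] / N
print(f"overlap with feature 0: {m:.2f}")
```

A simple signal-to-noise estimate explains why this works here: each stored example has average overlap 1/2 with (a signed copy of) each feature, so the aligned part of the local field on a feature configuration grows linearly in M while the cross-talk grows only as sqrt(M).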
Submission Number: 32