Inverse distance weighting attention

Published: 27 Oct 2023, Last Modified: 26 Nov 2023 · AMHN23 Poster
Keywords: attention, distance
TL;DR: We report how a specific form of distance-based attention leads to the formation of prototypes in a single-hidden-layer network trained with vanilla cross-entropy loss.
Abstract: We report the effects of replacing the scaled dot-product (inside the softmax) of attention with the negative log of the Euclidean distance. This form of attention simplifies to inverse-distance-weighting interpolation. Used in simple one-hidden-layer networks and trained with vanilla cross-entropy loss on classification problems, it tends to produce a key matrix containing prototypes and a value matrix with the corresponding logits. We also show that the resulting interpretable networks can be augmented with manually constructed prototypes to handle special cases with low impact on the rest of the input space.
Submission Number: 19
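
The simplification follows from the softmax identity: with scores s_i = -log d_i, exp(s_i) = 1/d_i, so the attention weights become (1/d_i) / Σ_j (1/d_j), i.e. Shepard-style inverse distance weighting with power 1. Below is a minimal PyTorch sketch of a one-hidden-layer classifier built this way; the class and parameter names (IDWPrototypeClassifier, eps, the manual-prototype workflow) are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn


class IDWPrototypeClassifier(nn.Module):
    """One-hidden-layer classifier with inverse-distance-weighting attention.

    keys: learnable prototypes in input space; values: per-prototype logits.
    softmax(-log d) reduces to weights proportional to 1/d (hypothetical
    sketch of the described setup, not the paper's reference implementation).
    """

    def __init__(self, in_dim, n_prototypes, n_classes, eps=1e-8):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(n_prototypes, in_dim))
        self.values = nn.Parameter(torch.zeros(n_prototypes, n_classes))
        self.eps = eps  # avoids log(0) when a query hits a prototype exactly

    def forward(self, x):
        d = torch.cdist(x, self.keys)                        # (batch, n_prototypes)
        w = torch.softmax(-torch.log(d + self.eps), dim=-1)  # == (1/d_i) / sum_j (1/d_j)
        return w @ self.values                               # logits; train with cross-entropy


model = IDWPrototypeClassifier(in_dim=2, n_prototypes=8, n_classes=3)
logits = model(torch.randn(4, 2))  # (4, 3)

# Assumed special-case workflow: append a hand-built prototype with its
# desired logits. Far from this key the inverse-distance weight vanishes,
# so predictions elsewhere are barely affected.
special_key = torch.tensor([[5.0, 5.0]])
special_logits = torch.tensor([[0.0, 0.0, 10.0]])  # force class 2 near this key
model.keys = nn.Parameter(torch.cat([model.keys.data, special_key]))
model.values = nn.Parameter(torch.cat([model.values.data, special_logits]))
```

Because each weight decays as 1/d, the appended prototype dominates only in its own neighborhood, which is what makes the hand-edit "low-impact" in the sense the abstract describes.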