Algebraic Positional Encodings

Published: 25 Sept 2024 · Last Modified: 15 Jan 2025 · NeurIPS 2024 Spotlight · CC BY 4.0
Keywords: positional encodings, transformers, structured attention, group theory
TL;DR: Positional encodings as group homomorphisms: it's beautiful and it works.
Abstract: We introduce a novel positional encoding strategy for Transformer-style models, addressing the shortcomings of existing, often ad hoc, approaches. Our framework implements a flexible mapping from the algebraic specification of a domain to a positional encoding scheme where positions are interpreted as orthogonal operators. This design preserves the structural properties of the source domain, thereby ensuring that the end model upholds them. The framework can accommodate various structures, including sequences, grids, and trees, as well as their compositions. We conduct a series of experiments demonstrating the practical applicability of our method. Our results suggest performance on par with or surpassing the current state of the art, without hyper-parameter optimization or "task search" of any kind. Code is available at https://aalto-quml.github.io/ape/.
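To make the "positions as orthogonal operators" idea concrete, here is a minimal, illustrative sketch for the sequence case only; it is not the paper's implementation. It assumes a single fixed orthogonal matrix `W` and maps position p to the p-th matrix power of `W`, which is a group homomorphism from (Z, +) into the orthogonal group. The function names `random_orthogonal` and `encode_positions` are hypothetical.

```python
import numpy as np

def random_orthogonal(d, seed=0):
    # QR decomposition of a random Gaussian matrix yields an orthogonal matrix.
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    return q

def encode_positions(x, W):
    # x: (seq_len, d). Position p is mapped to W**p, so the map
    # p -> W**p is a homomorphism: W**(p + q) = W**p @ W**q.
    out = np.empty_like(x)
    P = np.eye(W.shape[0])
    for p in range(x.shape[0]):
        out[p] = P @ x[p]
        P = P @ W
    return out

# Because W is orthogonal, attention scores depend only on relative position:
# (W**i q)^T (W**j k) = q^T W**(j - i) k.
d, n = 8, 5
W = random_orthogonal(d)
q = np.random.default_rng(1).standard_normal((n, d))
k = np.random.default_rng(2).standard_normal((n, d))
scores = encode_positions(q, W) @ encode_positions(k, W).T
```

Under these assumptions, the relative-position property falls out of orthogonality alone; extending the scheme to grids or trees would amount to choosing a different source group and mapping its generators to orthogonal operators in the same fashion.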
Primary Area: Deep learning architectures
Submission Number: 3373