Learnable Fourier Features for Multi-dimensional Spatial Positional Encoding

21 May 2021, 20:51 (edited 12 Jan 2022) · NeurIPS 2021 Poster
  • Keywords: Positional encoding, Transformer, Fourier features, multi-dimensional spatial tasks, image classification, image generation, object detection, widget captioning, UI modeling
  • TL;DR: The work presents a method for encoding multi-dimensional positions using learnable Fourier features for Transformer models.
  • Abstract: Attention mechanisms are order-invariant. Positional encoding is a crucial component that allows attention-based deep architectures such as the Transformer to handle sequences or images where the position of information matters. In this paper, we propose a novel positional encoding method based on learnable Fourier features. Instead of hard-coding each position as a token or a vector, we represent each position, which can be multi-dimensional, as a trainable encoding based on a learnable Fourier feature mapping, modulated with a multi-layer perceptron. This representation is particularly advantageous for spatial multi-dimensional positions, e.g., pixel positions in an image, where $L_2$ distances or more complex positional relationships need to be captured. Our experiments on several public benchmark tasks show that our learnable Fourier feature representation for multi-dimensional positional encoding outperforms existing methods, both improving accuracy and allowing faster convergence.
  • Supplementary Material: pdf
  • Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
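
The abstract describes a pipeline of multi-dimensional position → Fourier feature mapping with learnable frequencies → small MLP. A minimal NumPy sketch of that idea, under assumed parameter names and shapes (the function name, the use of ReLU instead of the MLP activation the authors may use, and all dimensions below are illustrative assumptions, not the paper's exact implementation):

```python
import numpy as np

def learnable_fourier_pe(pos, W_r, W1, b1, W2, b2):
    """Sketch: encode M-dim positions via Fourier features + MLP.
    pos: (N, M) positions; W_r: (D/2, M) learnable frequency matrix;
    W1/b1/W2/b2: weights of a 2-layer MLP. All names are assumptions."""
    D = 2 * W_r.shape[0]                    # Fourier feature dimension
    proj = pos @ W_r.T                      # (N, D/2) learnable projections
    # concatenated cos/sin features, scaled by 1/sqrt(D)
    f = np.concatenate([np.cos(proj), np.sin(proj)], axis=-1) / np.sqrt(D)
    h = np.maximum(f @ W1 + b1, 0.0)        # ReLU here for simplicity
    return h @ W2 + b2                      # per-position encoding

# Usage: encode 2-D pixel positions into a 16-dim embedding
rng = np.random.default_rng(0)
pos = rng.uniform(0, 1, size=(5, 2))        # 5 positions in R^2
W_r = rng.normal(size=(32, 2))              # 64 Fourier features total
W1, b1 = rng.normal(size=(64, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 16)), np.zeros(16)
enc = learnable_fourier_pe(pos, W_r, W1, b1, W2, b2)
print(enc.shape)  # (5, 16)
```

In a real model, `W_r` and the MLP weights would be trained end-to-end with the Transformer, so the frequencies adapt to the positional relationships the task requires.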