Group Equivariant Stand-Alone Self-Attention For Vision

28 Sept 2020, 15:52 (edited 10 Feb 2022) · ICLR 2021 Poster · Readers: Everyone
  • Keywords: group equivariant transformers, group equivariant self-attention, group equivariance, self-attention, transformers
  • Abstract: We provide a general self-attention formulation to impose group equivariance to arbitrary symmetry groups. This is achieved by defining positional encodings that are invariant to the action of the group considered. Since the group acts on the positional encoding directly, group equivariant self-attention networks (GSA-Nets) are steerable by nature. Our experiments on vision benchmarks demonstrate consistent improvements of GSA-Nets over non-equivariant self-attention networks.
  • One-sentence Summary: We provide a general self-attention formulation to impose group equivariance to arbitrary symmetry groups.
  • Supplementary Material: zip
  • Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
  • Code: [dwromero/g_selfatt](https://github.com/dwromero/g_selfatt)
  • Data: [CIFAR-10](https://paperswithcode.com/dataset/cifar-10)
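The abstract's central idea — positional encodings on which the group acts directly, so that attention becomes equivariant to the chosen symmetry group — can be illustrated with a small lifting self-attention layer. The sketch below is an assumption-laden toy (cyclic rotation group C4 on a square grid, single head, scalar positional biases stored in a lookup table), not the paper's implementation; see the linked `dwromero/g_selfatt` repository for the actual GSA-Net code.

```python
import numpy as np


def rotate_coords(coords, k):
    # Rotate integer grid coordinates by k * 90 degrees: (x, y) -> (-y, x).
    x, y = coords[..., 0], coords[..., 1]
    for _ in range(k % 4):
        x, y = -y, x
    return np.stack([x, y], axis=-1)


def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)


def lifted_self_attention(feats, coords, pos_table, n_rot=4):
    """Toy lifting self-attention over the rotation group C4.

    feats:     (N, d) content features at N grid points
    coords:    (N, 2) integer grid coordinates of those points
    pos_table: dict mapping relative-coordinate tuples (dx, dy) -> scalar bias
               (a hypothetical stand-in for a learned positional encoding)
    Returns (n_rot, N, d): one response per rotation element, i.e. the input
    is "lifted" to a function on the group.
    """
    N, d = feats.shape
    content = feats @ feats.T / np.sqrt(d)          # content-based scores
    rel = coords[None, :, :] - coords[:, None, :]   # rel[i, j] = x_j - x_i
    outs = []
    for k in range(n_rot):
        # The group acts on the positional encoding directly: look up the
        # relative position transformed by the inverse rotation g_k^{-1}.
        rel_k = rotate_coords(rel, (-k) % 4)
        bias = np.array([[pos_table[tuple(rel_k[i, j])] for j in range(N)]
                         for i in range(N)])
        attn = softmax(content + bias, axis=-1)
        outs.append(attn @ feats)
    return np.stack(outs)
```

Because the rotation only re-indexes the positional lookup, rotating the input field by 90° permutes the grid responses and cyclically shifts the group axis — the steerability-by-construction property the abstract refers to, here checked numerically rather than proved.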
