Inductive Biases and Variable Creation in Self-Attention Mechanisms

Published: 28 Jan 2022, Last Modified: 22 Oct 2023, ICLR 2022 Submission
Keywords: transformers, attention
Abstract: Self-attention, an architectural motif designed to model long-range interactions in sequential data, has driven numerous recent breakthroughs in natural language processing and beyond. This work provides a theoretical analysis of the inductive biases of self-attention modules, focusing on rigorously establishing which functions and long-range dependencies self-attention blocks prefer to represent. We show that bounded-norm Transformer layers create sparse variables: they can represent sparse Lipschitz functions of the input sequence, with sample complexity scaling only logarithmically with the context length. We propose new experimental protocols, built around the rich theory of learning sparse Boolean functions, to support the analysis and guide the practice of training Transformers.
One-sentence Summary: We analyze the inductive bias of self-attention modules through capacity analyses and give evidence that they learn sparse Boolean functions.
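
To make the experimental idea concrete, below is a minimal sketch of the kind of sparse-Boolean-function probe the abstract describes: train a single self-attention layer to predict a label that depends on only `k` of `T` input positions, then check whether the learned attention mass concentrates on those positions. This is an illustrative assumption of the setup, not the paper's exact protocol; the one-layer model, the k-sparse majority target, and all hyperparameters are hypothetical choices.

```python
# Hypothetical sketch of a sparse-Boolean-function probe for self-attention.
# Assumptions: target is a k-sparse majority over a hidden subset of positions;
# model is a single attention layer with a learned [CLS]-style query.
import torch
import torch.nn as nn

torch.manual_seed(0)

T, d, k = 64, 32, 3                  # context length, embedding dim, sparsity
relevant = torch.randperm(T)[:k]     # hidden subset the label depends on

def make_batch(n):
    x = torch.randint(0, 2, (n, T)).float()          # random Boolean inputs
    y = (x[:, relevant].sum(dim=1) > k / 2).float()  # k-sparse majority label
    return x, y

class OneLayerAttention(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(2, d)                        # tokens {0, 1}
        self.pos = nn.Parameter(torch.randn(T, d) * 0.02)      # positions
        self.cls = nn.Parameter(torch.randn(1, 1, d) * 0.02)   # learned query
        self.attn = nn.MultiheadAttention(d, num_heads=1, batch_first=True)
        self.readout = nn.Linear(d, 1)

    def forward(self, x):
        h = self.embed(x.long()) + self.pos           # (n, T, d)
        q = self.cls.expand(x.shape[0], -1, -1)       # single query per example
        out, weights = self.attn(q, h, h)             # attend over the context
        return self.readout(out).squeeze(-1).squeeze(-1), weights

model = OneLayerAttention()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    x, y = make_batch(256)
    logits, _ = model(x)
    loss = loss_fn(logits, y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Probe: if the layer has created the sparse variable, attention mass
# should concentrate on the k relevant positions.
with torch.no_grad():
    x, _ = make_batch(512)
    _, w = model(x)                  # w: (n, 1, T) attention weights
print("relevant positions:", sorted(relevant.tolist()))
print("top-k attended:    ", w.mean(dim=0).squeeze().topk(k).indices.sort().values.tolist())
```

Under this kind of setup, the logarithmic-in-context-length sample-complexity claim would predict that the number of training examples needed grows only slowly as `T` increases while `k` stays fixed.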
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2110.10090/code)