Transformers are Universal Predictors

Published: 01 Jan 2023 · Last Modified: 01 Oct 2024 · CoRR 2023 · License: CC BY-SA 4.0
Abstract: We find limits to the Transformer architecture for language modeling and show it has a universal prediction property in an information-theoretic sense. We further analyze performance in non-asymptotic data regimes to understand the role of various components of the Transformer architecture, especially in the context of data-efficient training. We validate our theoretical analysis with experiments on both synthetic and real datasets.
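As background for the abstract's claim of a "universal prediction property in an information-theoretic sense," the standard log-loss notion of universal prediction is sketched below; this is the textbook definition, not necessarily the paper's exact statement, and the symbols (predictor q, source class \mathcal{P}, sequence length n) are generic placeholders.

% Standard notion of universality under log-loss: the per-symbol redundancy of the
% predictor q vanishes against every source p in the class \mathcal{P} as n grows.
\[
  \frac{1}{n}\Bigl( \mathbb{E}_{p}\bigl[-\log q(X^{n})\bigr]
  \;-\; \mathbb{E}_{p}\bigl[-\log p(X^{n})\bigr] \Bigr)
  \;\longrightarrow\; 0
  \qquad \text{for every } p \in \mathcal{P}.
\]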