Landmark Attention: Random-Access Infinite Context Length for Transformers

Published: 20 Jun 2023, Last Modified: 16 Jul 2023, ES-FoMO 2023 Poster
Keywords: large language models, memory, context length
TL;DR: Landmark attention allows transformers to handle any inference context length, regardless of their training context length, enabling LLaMA 7B to process contexts with 32k+ tokens.
Abstract: While transformers have shown remarkable success in natural language processing, their attention mechanism's large memory requirements have limited their ability to handle longer contexts. Prior approaches, such as recurrent memory or retrieval-based augmentation, have either compromised the random-access flexibility of attention (i.e., the capability to select any token in the entire context) or relied on separate mechanisms for relevant context retrieval, which may not be compatible with the model's attention. In this paper, we present a novel approach that allows access to the complete context while retaining random-access flexibility, closely resembling running attention on the entire context. Our method uses a landmark token to represent each block of the input and trains the attention to use it for selecting relevant blocks, enabling retrieval of blocks directly through the attention mechanism rather than relying on a separate mechanism. Our approach seamlessly integrates with specialized data structures and the system's memory hierarchy, enabling processing of arbitrarily long context lengths. To demonstrate the capabilities of our method, we show that fine-tuning LLaMA 7B with our method successfully extends its context length capacity beyond 32k tokens, allowing inference at context lengths comparable to those of GPT-4.
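
To make the block-retrieval idea concrete, the sketch below illustrates attending through landmark tokens: each block of cached keys/values is summarized by a landmark key, the query scores the landmarks to select the top-k blocks, and attention is then computed over the tokens of the retrieved blocks only. The names, shapes, single-head setup, and fixed top-k are illustrative assumptions for this note, not the paper's released implementation (which trains the landmark representations jointly with the attention mechanism).

```python
# Minimal sketch of block retrieval via landmark tokens (illustrative, not the authors' code).
import torch
import torch.nn.functional as F

def landmark_block_attention(q, keys, values, landmark_keys, top_k=2):
    """
    q:             (d,)                         query for the current token
    keys, values:  (n_blocks, block_size, d)    per-block keys/values from the cached context
    landmark_keys: (n_blocks, d)                one landmark key summarizing each block
    Returns an attention output computed only over tokens of the top_k blocks
    whose landmark scores are highest, i.e., blocks retrieved through attention itself.
    """
    d = q.shape[-1]
    # Score each block by attending to its landmark key.
    block_scores = landmark_keys @ q / d ** 0.5              # (n_blocks,)
    top_blocks = torch.topk(block_scores, k=top_k).indices   # indices of retrieved blocks

    # Gather keys/values of the retrieved blocks and attend over their tokens.
    k_sel = keys[top_blocks].reshape(-1, d)                  # (top_k * block_size, d)
    v_sel = values[top_blocks].reshape(-1, d)
    token_scores = k_sel @ q / d ** 0.5
    weights = F.softmax(token_scores, dim=-1)
    return weights @ v_sel                                    # (d,)

# Usage: 8 cached blocks of 16 tokens each, model dimension 32.
torch.manual_seed(0)
n_blocks, block_size, d = 8, 16, 32
out = landmark_block_attention(
    q=torch.randn(d),
    keys=torch.randn(n_blocks, block_size, d),
    values=torch.randn(n_blocks, block_size, d),
    landmark_keys=torch.randn(n_blocks, d),
)
print(out.shape)  # torch.Size([32])
```

Because only the selected blocks' keys and values need to be resident, the remaining blocks can live in slower memory or on disk, which is what allows the context length to grow without the quadratic memory cost of full attention.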
Submission Number: 33