Characterizing Prompt Compression Methods for Long Context Inference

Published: 21 Jun 2024, Last Modified: 26 Jul 2024
Venue: ES-FoMo-II 2024 (Oral)
License: CC BY 4.0
Keywords: prompt compression, long context inference
TL;DR: We analyze important design decisions for building prompt compression methods for long context inference
Abstract: Retrieval-augmented generation has become a popular paradigm for integrating custom data sources with large language models (LLMs). However, this often leads to contexts of tens of thousands of tokens. Long context inference presents challenges at the systems level, due to increased compute and memory requirements, as well as at the accuracy level, since models must reason over long contexts. This has led to prompt compression techniques that aim to reduce the size of the provided context while preserving key information. However, despite the wide variety of recently proposed methods for compressing long contexts, there has been little standardized analysis of how different methods behave across compression rates and tasks. In this paper, we provide a comprehensive characterization and evaluation of prompt compression methods, giving insight into building compression techniques for long context applications. We analyze extractive compression, summarization-based abstractive compression, and token pruning methods. We find that extractive compression is a strong choice, often able to compress over 10x with minimal accuracy loss. Token pruning demonstrates only marginal improvements over extractive compression on summarization tasks. Furthermore, generating query-aware summaries significantly improves abstractive compression, by up to 10 points on multi-document QA tasks at 30x compression.
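To make the extractive, query-aware idea concrete, below is a minimal toy sketch of query-aware extractive prompt compression: the context is split into sentences, each sentence is scored against the query, and the highest-scoring sentences are kept (in original order) within a token budget. The lexical-overlap scorer, the `budget_tokens` parameter, and the sentence splitter are illustrative assumptions, not the scorers or budgets studied in the paper.

```python
# Toy sketch of query-aware extractive prompt compression.
# The overlap-based scorer and token budget are illustrative assumptions only.
import re


def split_sentences(text: str) -> list[str]:
    """Naive sentence splitter on terminal punctuation."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]


def score(sentence: str, query: str) -> float:
    """Toy relevance score: fraction of query tokens appearing in the sentence."""
    q = set(re.findall(r"\w+", query.lower()))
    s = set(re.findall(r"\w+", sentence.lower()))
    return len(q & s) / max(len(q), 1)


def compress(context: str, query: str, budget_tokens: int = 200) -> str:
    """Keep the highest-scoring sentences, in original order, within a token budget."""
    sentences = split_sentences(context)
    ranked = sorted(range(len(sentences)),
                    key=lambda i: score(sentences[i], query),
                    reverse=True)
    kept, used = set(), 0
    for i in ranked:
        n = len(sentences[i].split())  # crude whitespace token count
        if used + n > budget_tokens:
            continue
        kept.add(i)
        used += n
    return " ".join(sentences[i] for i in sorted(kept))


if __name__ == "__main__":
    ctx = ("The Eiffel Tower is in Paris. It was completed in 1889. "
           "Paris is the capital of France.")
    print(compress(ctx, "When was the Eiffel Tower completed?", budget_tokens=12))
```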
Submission Number: 73