Abstract: Empowering LLMs with the ability to precisely understand long contexts is crucial for many downstream applications. However, handling long contexts with the conventional transformer architecture requires substantial training and inference resources. Existing context-condensing methods cannot preserve an accurate understanding of the full context, because the condensing process incurs considerable information loss.
To address these issues, we present **FocusLLM**, a framework designed to extend the fixed context length of any decoder-only LLM, allowing the model to focus on relevant information from very long sequences.
FocusLLM first divides the long text input into chunks based on the model's original context length. It then employs a ***dynamic condensing*** process to distill the crucial information from each chunk. Finally, through a novel ***parallel decoding*** mechanism, FocusLLM integrates the extracted information into its local context.
FocusLLM stands out for its training efficiency and versatility: trained on an 8K input length at a much lower training cost than previous methods, FocusLLM achieves superior performance across downstream tasks and maintains strong language modeling ability on extensive long texts of up to 400K tokens. Our code is available at https://anonymous.4open.science/r/FocusLLM.
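To make the chunk-then-condense pipeline described above concrete, the sketch below illustrates the overall flow in plain Python. It is only a minimal illustration under assumed interfaces: `condense_chunk` and `integrate_with_local_context` are hypothetical placeholders (uniform subsampling and simple concatenation) standing in for the paper's dynamic condensing and parallel decoding mechanisms, which operate on model representations rather than raw tokens.

```python
# Illustrative sketch of the chunk -> condense -> integrate flow.
# The condensing and integration steps are hypothetical stand-ins,
# not the FocusLLM implementation.

from typing import List


def split_into_chunks(tokens: List[int], context_length: int) -> List[List[int]]:
    """Divide a long token sequence into chunks no longer than the
    model's original context length."""
    return [tokens[i:i + context_length]
            for i in range(0, len(tokens), context_length)]


def condense_chunk(chunk: List[int], num_slots: int = 8) -> List[int]:
    """Hypothetical stand-in for dynamic condensing: distill each chunk
    into a small number of representative tokens (uniform subsampling)."""
    stride = max(1, len(chunk) // num_slots)
    return chunk[::stride][:num_slots]


def integrate_with_local_context(condensed: List[List[int]],
                                 local_context: List[int]) -> List[int]:
    """Hypothetical stand-in for parallel decoding: the condensed
    information from all chunks is made visible to the local context
    (here, by simple concatenation)."""
    merged: List[int] = []
    for c in condensed:
        merged.extend(c)
    return merged + local_context


if __name__ == "__main__":
    long_input = list(range(100_000))      # stand-in for a long token sequence
    local = long_input[-512:]              # the model's local context
    chunks = split_into_chunks(long_input[:-512], context_length=8192)
    condensed = [condense_chunk(c) for c in chunks]
    prompt = integrate_with_local_context(condensed, local)
    print(f"{len(chunks)} chunks condensed into a prompt of {len(prompt)} tokens")
```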
Paper Type: Long
Research Area: Language Modeling
Research Area Keywords: Large language model, Long context, Condensing
Contribution Types: NLP engineering experiment, Publicly available software and/or pre-trained models
Languages Studied: English
Submission Number: 930