Abstract: Large language models (LLMs) have transformed AI research thanks to their powerful internal
capabilities and knowledge. However, existing LLMs still fail to effectively incorporate massive amounts of
external knowledge when interacting with the world. Although retrieval-augmented LLMs have been
proposed to mitigate this issue, they remain fundamentally constrained by the context length of
LLMs, as they can only retrieve the top-K raw data chunks from an external knowledge base that often
consists of millions of chunks. Here we propose Thought-Retriever, a novel model-agnostic
algorithm that helps LLMs generate output conditioned on arbitrarily long external data, without being
constrained by the context length or the number of retrieved data chunks. Our key insight is to let an LLM
fully leverage the intermediate responses (thoughts) it generated while solving past user queries: we filter
out meaningless and redundant thoughts, organize the rest in a thought memory, and retrieve the relevant
thoughts when addressing new queries. Beyond this algorithmic innovation, we meticulously
prepare a novel benchmark, AcademicEval, which requires an LLM to faithfully leverage ultra-
long context to answer queries based on real-world academic papers. Extensive experiments on
AcademicEval and two other public datasets validate that Thought-Retriever remarkably outperforms
state-of-the-art baselines, achieving an average increase of at least 7.6% in F1 score and 16% in
win rate across various tasks. More importantly, we demonstrate two exciting findings: (1)
Thought-Retriever indeed helps the LLM self-evolve as it solves more user queries; (2) Thought-
Retriever learns to leverage deeper thoughts to answer more abstract user queries.
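The thought-memory loop described in the abstract (filter redundant thoughts, store the rest, retrieve the relevant ones for a new query) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class name `ThoughtMemory`, the bag-of-words cosine similarity, and the deduplication threshold are all assumptions made for the sketch.

```python
# Illustrative sketch of a thought memory: filter near-duplicate thoughts,
# store the rest, and retrieve the most relevant ones for a new query.
# The similarity measure (bag-of-words cosine) is an assumption for this
# sketch; the actual system would use an LLM and learned embeddings.
import math
from collections import Counter

def _vec(text):
    """Bag-of-words vector for a piece of text (illustrative stand-in for an embedding)."""
    return Counter(text.lower().split())

def _cos(a, b):
    """Cosine similarity between two bag-of-words vectors."""
    num = sum(a[t] * b[t] for t in a)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

class ThoughtMemory:
    def __init__(self, dedup_threshold=0.9):
        self.thoughts = []                      # stored (text, vector) pairs
        self.dedup_threshold = dedup_threshold  # above this, a thought is redundant

    def add(self, thought):
        """Store a thought unless it is a near-duplicate of an existing one."""
        v = _vec(thought)
        if any(_cos(v, sv) >= self.dedup_threshold for _, sv in self.thoughts):
            return False                        # redundant thought: filtered out
        self.thoughts.append((thought, v))
        return True

    def retrieve(self, query, k=2):
        """Return the k stored thoughts most similar to the query."""
        qv = _vec(query)
        ranked = sorted(self.thoughts, key=lambda tv: _cos(qv, tv[1]), reverse=True)
        return [text for text, _ in ranked[:k]]
```

Under this sketch, a query is answered by conditioning the LLM on the thoughts returned by `retrieve`, so the usable knowledge grows with every solved query while the prompt stays within the context limit.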
Submission Type: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Shuiwang_Ji1
Submission Number: 5869