Keywords: long-context reasoning, dataset, benchmark, information aggregation
Abstract: As model context lengths continue to grow, concerns persist about whether models make effective use of their full context. While several carefully designed long-context evaluations have recently been released, these evaluations tend to rely on retrieval from one or more sections of the context, allowing nearly all of the context tokens to be disregarded as noise; retrieval represents only one type of task that might be performed with long context. We introduce Oolong, a benchmark of long-context reasoning tasks that require analyzing individual chunks of text at an atomic level and then aggregating these analyses to answer distributional questions. Oolong is separated into two task sets: Oolong-synth, a set of naturalistic synthetic tasks where we can easily ablate components of the reasoning problem; and Oolong-real, a downstream setting that requires reasoning over real-world conversational data. Oolong requires models to reason over large quantities of examples, to perform both classification and counting in-context, and to reason over temporal and user relations. Even frontier models struggle on Oolong: the best-performing model, GPT-5, achieves less than 50% accuracy on both splits at a context length of 128K tokens. We release the benchmark examples, code to construct additional evaluation examples for Oolong-synth, and full outputs and task-specific evaluation results for all models tested, to enable further development of models that can reason over large quantities of text.
Primary Area: datasets and benchmarks
Submission Number: 19537