Keywords: Deep Research, Large Language Models, Benchmarks, Rubrics, LLM-as-a-judge, Multi-step Reasoning, Cross-document Synthesis, Long-form Question Answering, Evidence-based Reasoning, Evaluation Frameworks, Natural Language Processing
Abstract: Deep Research (DR) is an emerging agent application that leverages large language models (LLMs) to address open-ended queries. It requires the integration of several capabilities, including multi-step reasoning, cross-document synthesis, and the generation of evidence-backed, long-form answers. Evaluating DR remains challenging because responses are lengthy and diverse, admit many valid solutions, and often depend on dynamic information sources. We introduce ResearchRubrics, a standardized benchmark for DR that pairs realistic, domain-diverse prompts with expert-written, fine-grained rubrics to assess factual grounding, reasoning soundness, and clarity. We also propose a new complexity framework for categorizing DR tasks along three axes: conceptual breadth, logical nesting, and exploration. In addition, we develop human and model-based evaluation protocols that measure rubric adherence for DR agents. We evaluate several state-of-the-art DR systems and find that even leading agents like Gemini's DR and OpenAI's DR achieve under 59% average compliance with our rubrics, primarily due to missed implicit context and inadequate reasoning about retrieved information. Our results highlight the need for robust, scalable assessment of deep research capabilities, to which end we release ResearchRubrics (including all prompts, rubrics, and evaluation tools) to facilitate progress toward well-justified research assistants.
Supplementary Material: zip
Primary Area: datasets and benchmarks
Submission Number: 21676