Keywords: Information Seeking, Document Question Answering, LLM Agents, Tool Use
Abstract: Document Question Answering (DocQA) focuses on answering questions grounded in given documents, yet existing DocQA agents lack effective tool utilization and largely rely on closed-source models.
In this work, we introduce DocDancer, an end-to-end trained open-source DocQA agent.
We formulate DocQA as an information-seeking problem and propose a tool-driven agent framework that explicitly models document exploration and comprehension.
To enable end-to-end training of such agents, we introduce an Exploration-then-Synthesis data synthesis pipeline that addresses the scarcity of high-quality training data for DocQA.
Models trained on the synthesized data demonstrate their effectiveness on two long-context document understanding benchmarks, MMLongBench-Doc and DocBench.
Further analysis provides valuable insights into agentic tool design and synthetic data construction.
Paper Type: Long
Research Area: Question Answering
Research Area Keywords: multihop QA, question generation, multimodal QA
Contribution Types: NLP engineering experiment, Publicly available software and/or pre-trained models
Languages Studied: English
Submission Number: 7878