Keywords: Document Visual Question Answering, Document Understanding, Multimodal Large Language Models, Reinforcement Learning
Abstract: Multi-page Document Visual Question Answering requires reasoning over semantics, layouts, and visual elements in long, visually dense documents. Existing OCR-free methods face a trade-off between capacity and precision: end-to-end models scale poorly with document length, while visual retrieval-based pipelines are brittle and passive. We propose Doc-$V^\*$, an OCR-free agentic framework that casts multi-page DocVQA as sequential evidence aggregation. Doc-$V^\*$ begins with a thumbnail overview, then actively navigates via semantic retrieval and targeted page fetching, and aggregates evidence in a structured working memory for grounded reasoning. Trained by imitation learning from expert trajectories and further optimized with Group Relative Policy Optimization, Doc-$V^\*$ balances answer accuracy with evidence-seeking efficiency. Across five benchmarks, Doc-$V^\*$ outperforms open-source baselines and approaches proprietary models, improving out-of-domain performance by up to 47.9% over a RAG baseline. Further analysis shows that the gains stem from effective evidence aggregation with selective attention rather than from an increased number of input pages.
Paper Type: Long
Research Area: Multimodality and Language Grounding to Vision, Robotics and Beyond
Research Area Keywords: vision question answering, cross-modal application, image text matching, multimodality
Languages Studied: English
Submission Number: 8881