Keywords: Long-Context Reasoning, QA Synthesis, Reinforcement Learning
Abstract: Reinforcement Learning with Verifiable Rewards (RLVR) has proven effective at enhancing LLMs' short-context reasoning but falters in long-context scenarios that require precise grounding and multi-hop reasoning. We identify the "almost-there" phenomenon in long-context reasoning RL, where trajectories are largely correct but fail at the final step, and attribute it to two factors: (1) the low reasoning density of existing long-context QA data, and (2) the indiscriminate penalization of partially correct trajectories during long-context RL. To overcome this bottleneck, we propose DeepReasonQA, a KG-driven synthesis framework that controllably constructs high-difficulty, multi-hop long-context QA pairs with inherent reasoning chains. Building on this, we introduce Long-context Process Advantage Shaping (LongPAS), a simple yet effective method that performs fine-grained credit assignment by scoring reasoning steps along Validity and Relevance dimensions, thereby capturing critical signals from "almost-there" trajectories. Experiments on three long-context reasoning benchmarks show that our approach substantially outperforms RLVR baselines and matches frontier LLMs while using far fewer parameters. Further analysis confirms that our methods strengthen long-context reasoning while keeping RL training stable.
Paper Type: Long
Research Area: Question Answering
Research Area Keywords: multi-hop QA, reasoning
Contribution Types: NLP engineering experiment, Reproduction study, Publicly available software and/or pre-trained models, Data resources
Languages Studied: English
Submission Number: 3453