PlanRAG-Audio: Planning and Retrieval Augmented Generation for Long-form Audio Understanding

ACL ARR 2026 January Submission6644 Authors

05 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: Long-form Audio Understanding, Retrieval-Augmented Generation, Planning-based Retrieval, Audio Language Models, Multimodal Reasoning
Abstract: Long-form audio understanding poses significant challenges for large audio language models (LALMs) due to the extreme length of audio sequences and the need to reason over heterogeneous acoustic cues distributed over time, such as speech content, speaker identity, emotion, and sound events. To address these challenges, we propose PlanRAG-Audio, a planning-based retrieval-augmented generation framework for scalable long-form audio understanding. Rather than having audio LALMs process entire recordings directly, PlanRAG-Audio explicitly plans which modalities and temporal spans are required for a given query, and retrieves only query-relevant information from a structured text and audio database. This retrieval planning enables effective reasoning over complex, cross-domain audio queries while substantially reducing the input length passed to the large language models. Experiments across a wide range of speech/audio retrieval demonstrate that PlanRAG-Audio improves reasoning accuracy and stabilizes performance as audio duration increases by decoupling inference cost from raw audio length.
Paper Type: Long
Research Area: Speech Processing and Spoken Language Understanding
Research Area Keywords: Multimodality and Language Grounding to Vision, Robotics and Beyond, Speech Recognition, Text-to-Speech and Spoken Language Understanding
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 6644