Keywords: Audio Question Answering, Audio Reasoning, Adversarial Benchmark, Human-authored Questions, Item Response Theory, Multimodal Evaluation, Audio Understanding, Dataset Bias, Trivia Benchmark, Auditory Comprehension
Abstract: Existing audio question answering benchmarks largely emphasize sound event classification or caption-grounded queries, often enabling models to succeed through short-duration cues, lexical priors, or dataset-specific biases rather than genuine reasoning. We therefore present AUDITA (AUDIO UNDERSTANDING FROM DIVERSE INTERNET TRIVIA AUTHORS), a large-scale, real-world benchmark designed to rigorously evaluate audio reasoning beyond surface-level acoustic recognition. AUDITA comprises carefully curated, human-authored trivia questions grounded in real-world audio, explicitly constructed with adversarial distractors, long-range temporal dependencies, and probing queries that cannot be answered from isolated text or sound cues alone. An average human accuracy of 32.13% shows the difficulty of the task while still demonstrating meaningful comprehension of the audio. In stark contrast, state-of-the-art audio question answering models perform poorly, with average accuracy below 7.97%. Beyond raw accuracy, we apply Item Response Theory (IRT) to estimate latent proficiency and question difficulty, exposing systematic deficiencies of both the models and the data.
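For context on the IRT analysis, a minimal sketch of the two-parameter logistic (2PL) model, one common instantiation (the specific IRT variant used in the paper is an assumption here): the probability that respondent $j$ with latent proficiency $\theta_j$ answers item $i$ correctly, given item difficulty $b_i$ and discrimination $a_i$, is

$$
P(y_{ij} = 1 \mid \theta_j) = \frac{1}{1 + \exp\!\big(-a_i(\theta_j - b_i)\big)}
$$

Fitting such a model jointly over human and model responses yields proficiency estimates on a shared scale alongside per-question difficulty and discrimination parameters, which is what lets failures be attributed to hard questions versus weak respondents.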
Paper Type: Long
Research Area: Question Answering
Research Area Keywords: commonsense QA, logical reasoning, multimodal QA, interpretability, generalization, reasoning
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Data resources, Data analysis
Languages Studied: English
Submission Number: 7687