Keywords: Speech Reasoning, Large Audio-Language Models
Abstract: Large audio-language models (LALMs) have exhibited human-comparable capabilities in sentence-level transcription and emotion recognition. However, existing evaluations focus mainly on surface-level perception, leaving models' capacity for contextual and inference-driven reasoning in speech-based scenarios insufficiently examined.
To address this gap, we introduce SpeechR, a unified benchmark for evaluating reasoning over speech in large audio-language models. SpeechR evaluates models along three key dimensions: factual retrieval, procedural inference, and normative judgment. It includes three distinct evaluation formats. The multiple-choice version measures answer selection accuracy. The generative version assesses both the coherence and logical consistency of reasoning chains. The acoustic-feature version investigates whether variations in stress and emotion affect reasoning performance. Evaluations of thirteen state-of-the-art LALMs reveal that high transcription accuracy does not translate into strong reasoning capabilities. SpeechR establishes a structured benchmark for evaluating reasoning in spoken language, enabling more targeted analysis of model capabilities across diverse dialogue-based tasks. We provide the SpeechR dataset through the anonymous link below: \href{https://anonymous.4open.science/r/Audio-CORE-E32E/README.md}{SpeechR}.
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: Speech Reasoning, Large Audio-Language Models
Languages Studied: English
Submission Number: 6521