VoiceAssistant-Eval: Benchmarking AI Assistants across Listening, Speaking, and Viewing

13 Sept 2025 (modified: 26 Jan 2026) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: voice assistant, multimodality, large language model
TL;DR: VoiceAssistant-Eval comprises 10,497 curated examples spanning 13 task categories, including natural sounds, music, and spoken dialogue for listening; various scenarios for speaking; and highly heterogeneous images for viewing.
Abstract: The growing capabilities of large language models and multimodal systems have spurred interest in voice-first AI assistants, yet existing benchmarks are inadequate for evaluating the full range of these systems' capabilities. We introduce VoiceAssistant-Eval, a comprehensive benchmark designed to assess AI assistants across listening, speaking, and viewing. VoiceAssistant-Eval comprises 10,497 curated examples spanning 13 task categories. These tasks include natural sounds, music, and spoken dialogue for listening; multi-turn dialogue, role-play imitation, and various scenarios for speaking; and highly heterogeneous images for viewing. To demonstrate its utility, we evaluate 21 open-source models as well as GPT-4o-Audio and Gemini-live-2.5-flash, measuring the quality of the response content and speech, as well as their consistency. The results reveal three key findings: (1) open-source models can be highly competitive with proprietary models; (2) most models excel at speaking tasks but lag in audio understanding; and (3) well-designed smaller models can rival much larger ones. Notably, the mid-sized Step-Audio-2-mini (7B) achieves more than double the listening accuracy of LLaMA-Omni2-32B-Bilingual. However, challenges remain: multimodal (audio+visual) input and role-play voice imitation tasks are difficult for current models, and significant gaps persist in robustness and safety alignment. VoiceAssistant-Eval identifies these gaps and establishes a rigorous framework for evaluating and guiding the development of next-generation multimodal voice assistants.
Primary Area: datasets and benchmarks
Submission Number: 4778