Keywords: Singing Voice Synthesis, Reward Model, Evaluation, Data Synthesis, Reaction
Abstract: Singing voice synthesis (SVS) has advanced significantly, enabling models to generate vocals with accurate pitch and consistent style.
As these capabilities improve, the need for reliable evaluation and optimization becomes increasingly critical.
However, current evaluation methods, such as reward models, often rely on a single numerical score, struggle to capture dimensions such as phrasing and expressiveness, and require costly annotations, limiting both interpretability and generalization.
To address these issues, we propose a generative feedback (i.e., reward model) framework that provides multi-dimensional language and audio feedback for SVS assessment.
Our approach leverages an audio-language model to generate text and audio critiques covering aspects such as melody, content, and auditory quality.
The model is fine-tuned on a hybrid dataset combining human music reactions with synthetic critiques from multimodal large language models (MLLMs), enhancing diversity and linguistic richness.
Quantitative experiments validate the effectiveness of the proposed dataset and training strategy, demonstrating that the framework produces musically accurate and interpretable evaluations suitable for guiding generative model improvement.
The code is available at https://github.com/opendilab/VocalCritic.
Track: Paper Track
Confirmation: Paper Track: I confirm that I have followed the formatting guideline and anonymized my submission.
(Optional) Supplementary Material: zip
Submission Number: 31