VocalRep: Structure-Aware Vocal Representations for Multimodal Generation

ACL ARR 2026 January Submission 9535 Authors

06 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: vocal representations, task-oriented, music source separation
Abstract: Modern speech and multimodal generation systems, such as singing voice conversion and audio-driven lip synchronization, critically depend on temporally stable and semantically unambiguous vocal representations. In practical pipelines, such representations are typically derived from music source separation (MSS) applied to mixed musical recordings. However, standard MSS paradigms often aggregate lead vocals and backing harmonies into a single vocal stream. Although multi-stem separation has been explored, existing approaches remain primarily optimized for signal-level reconstruction, often overlooking the intricate structural disentanglement required by downstream generation tasks. From a generation-oriented perspective, this motivates revisiting vocal separation from a representation learning standpoint. To this end, we propose VocalRep, a structure-aware learning framework designed to disentangle lead vocals, harmonies, and accompaniment while enforcing role consistency across long-form audio. By integrating global vocal identity conditioning with ranking-based objectives, VocalRep extracts role-consistent lead vocal representations without relying on explicit pitch or symbolic annotations. Experimental results demonstrate that VocalRep significantly improves performance in downstream singing voice conversion and audio-driven lip synchronization.
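The abstract mentions a ranking-based objective for extracting role-consistent lead vocal representations. The paper's exact formulation is not given here, so the following is only a minimal sketch, assuming a hinge-style margin ranking loss in which the lead-vocal embedding must be more similar to a global vocal-identity embedding than the harmony embedding is; the function names and the margin value are illustrative, not the authors' implementation.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def ranking_loss(identity, lead, harmony, margin=0.2):
    """Hypothetical hinge-style ranking objective: penalize cases where
    the lead-vocal embedding is not at least `margin` more similar to
    the global identity embedding than the harmony embedding is."""
    return max(0.0, margin - cosine(identity, lead) + cosine(identity, harmony))

# Toy example: lead embedding aligned with the identity embedding,
# harmony embedding orthogonal to it -> the ranking constraint is
# satisfied and the loss is zero.
identity = np.array([1.0, 0.0])
lead = np.array([0.9, 0.1])
harmony = np.array([0.0, 1.0])
print(ranking_loss(identity, lead, harmony))   # satisfied -> 0.0

# Swapping the roles violates the constraint and yields a positive loss.
print(ranking_loss(identity, harmony, lead))   # violated -> > 0
```

Such a loss only encodes the relative ordering of similarities, which matches the abstract's claim that no explicit pitch or symbolic annotations are needed.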
Paper Type: Long
Research Area: Speech Processing and Spoken Language Understanding
Research Area Keywords: speech technologies, speech and vision, task-oriented
Contribution Types: Model analysis & interpretability
Languages Studied: N/A
Submission Number: 9535