Keywords: universal audio generation, flow matching, multi-task learning, temporal alignment
Abstract: Audio generation, including speech, music, and sound effects, has advanced rapidly in recent years.
These tasks can be divided into two categories: time-aligned (TA) tasks, where each input unit corresponds to a specific segment of the output audio (e.g., phonemes aligned with frames in speech synthesis); and non-time-aligned (NTA) tasks, where such alignment is not available.
Because the two categories typically call for different modeling paradigms, research on audio generation has traditionally followed separate trajectories.
However, audio is not inherently divided into such categories, making a unified model a natural and necessary goal for general audio generation.
Prior work on universal audio generation remains limited: auto-regressive models struggle with NTA tasks, while diffusion-based models often overlook TA tasks.
In this work, we propose UniFlow-Audio, a universal audio generation framework based on flow matching.
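As a rough illustration of the underlying generative paradigm, a generic conditional (rectified) flow-matching training objective can be sketched as below. This is a standard linear-path formulation, not necessarily the paper's exact loss, and the model signature is a hypothetical placeholder.

```python
import torch

def flow_matching_loss(model, x1, cond):
    """Generic flow-matching loss sketch.
    x1: clean audio latents, shape (B, T, D); cond: conditioning features."""
    x0 = torch.randn_like(x1)            # noise sample at t = 0
    t = torch.rand(x1.size(0), 1, 1)     # random time in [0, 1], broadcastable
    xt = (1 - t) * x0 + t * x1           # point on the linear interpolation path
    v_target = x1 - x0                   # constant target velocity along the path
    v_pred = model(xt, t.squeeze(), cond)  # model predicts the velocity field
    return torch.mean((v_pred - v_target) ** 2)
```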
Within this framework, we propose a dual-fusion mechanism that temporally aligns audio latents with TA features and integrates NTA features via cross-attention in each model block.
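A minimal PyTorch sketch of what such a dual-fusion block could look like is given below. All names (DualFusionBlock, ta_proj, etc.), the layer ordering, and dimensions are illustrative assumptions rather than the paper's implementation; the key idea is that TA features share the latents' time axis and can be fused additively, while NTA features enter through cross-attention.

```python
import torch
import torch.nn as nn

class DualFusionBlock(nn.Module):
    def __init__(self, dim: int, ta_dim: int, nta_dim: int, n_heads: int = 8):
        super().__init__()
        # TA features are frame-aligned with the audio latents, so they can be
        # projected and added element-wise along the time axis.
        self.ta_proj = nn.Linear(ta_dim, dim)
        self.self_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        # NTA features have no frame-level alignment, so they are injected
        # through cross-attention instead.
        self.cross_attn = nn.MultiheadAttention(dim, n_heads, kdim=nta_dim,
                                                vdim=nta_dim, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.norm3 = nn.LayerNorm(dim)

    def forward(self, x, ta, nta):
        # x:   (B, T, dim)     noisy audio latents
        # ta:  (B, T, ta_dim)  time-aligned condition (same length T as x)
        # nta: (B, S, nta_dim) non-time-aligned condition (any length S)
        x = x + self.ta_proj(ta)                 # temporal-alignment fusion
        h = self.norm1(x)
        x = x + self.self_attn(h, h, h)[0]       # self-attention over latents
        h = self.norm2(x)
        x = x + self.cross_attn(h, nta, nta)[0]  # NTA fusion via cross-attention
        return x + self.ffn(self.norm3(x))
```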
Task-balanced data sampling is employed to maintain strong performance across both TA and NTA tasks.
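One simple way to realize task-balanced sampling is sketched below; uniform sampling over tasks, regardless of each dataset's size, is an assumed balancing rule and not necessarily the one used in the paper.

```python
import random

def sample_batch(task_datasets: dict, batch_size: int):
    """task_datasets maps a task name to an indexable dataset of examples."""
    tasks = list(task_datasets)
    batch = []
    for _ in range(batch_size):
        task = random.choice(tasks)  # each task equally likely, so small
        # datasets are not drowned out by large ones
        batch.append(random.choice(task_datasets[task]))
    return batch
```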
UniFlow-Audio supports omni-modalities, including text, audio, and video.
By leveraging the advantages of multi-task learning and the generative modeling capability of flow matching, UniFlow-Audio achieves strong results across 7 tasks using fewer than 8K hours of public training data and under 1B trainable parameters.
Even the small variant with only ~200M parameters shows competitive performance, highlighting UniFlow-Audio as a potential non-auto-regressive foundation model for audio generation.
Code and models will be available at https://anonymous3387a8c.github.io/uniflow_audio.
Supplementary Material: zip
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 17916