Keywords: text-to-sound generation, real-time sound generation, distillation models, diffusion models
TL;DR: We introduce Sound Consistency Trajectory Models, which enable both high-quality 1-step sound generation and higher-quality multi-step generation by addressing limitations of CTM's training framework.
Abstract: Recent high-quality diffusion-based sound generation models can serve as valuable tools for sound content creators.
However, despite producing high-quality sounds, these models often suffer from slow inference speeds. This drawback burdens creators, who typically refine their sounds through trial and error to align them with their artistic intentions. To address this issue, we introduce Sound Consistency Trajectory Models (SoundCTM). Our model enables flexible transitions between high-quality $1$-step sound generation and even higher-quality multi-step generation. This allows creators to initially control sounds with $1$-step samples before refining them through multi-step generation. We reframe the original CTM training framework and introduce a novel feature distance that leverages the teacher's network for the distillation loss. Additionally, while distilling classifier-free guided trajectories, we train conditional and unconditional student models simultaneously and interpolate between these models during inference. SoundCTM achieves both promising $1$-step and multi-step real-time sound generation. Audio samples are available at https://anonymus-soundctm.github.io/soundctm/.
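The abstract's inference-time interpolation between the conditional and unconditional student models follows the spirit of classifier-free guidance. Below is a minimal, hypothetical sketch of how such a $1$-step guided sample could be formed; the function and model names (`student_cond`, `student_uncond`, `guidance_scale`) are assumptions for illustration and not the authors' actual API.

```python
import torch

def guided_one_step_sample(x_T, cond, student_cond, student_uncond, guidance_scale=3.0):
    """Hypothetical sketch: 1-step sampling with inference-time interpolation
    between a conditional and an unconditional distilled student model.
    This mirrors classifier-free guidance applied to the students' outputs."""
    # Each student maps the initial noise x_T directly to a clean-sample estimate.
    x0_cond = student_cond(x_T, cond)    # conditional student's 1-step prediction
    x0_uncond = student_uncond(x_T)      # unconditional student's 1-step prediction
    # Interpolate (or extrapolate, for guidance_scale > 1) between the two
    # predictions at inference time.
    return x0_uncond + guidance_scale * (x0_cond - x0_uncond)
```

Whether SoundCTM interpolates model outputs, intermediate trajectory points, or network weights is not specified in the abstract; the sketch above assumes output-level interpolation.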
Submission Number: 10