TL;DR: IMPACT is a text-to-audio generation framework that combines iterative mask-based parallel decoding with continuous representations driven by latent diffusion models, achieving high audio quality and fidelity while ensuring fast inference.
Abstract: Text-to-audio generation synthesizes realistic sounds or music given a natural language prompt. Diffusion-based frameworks, including the Tango and AudioLDM series, represent the state-of-the-art in text-to-audio generation. Despite achieving high audio fidelity, they incur significant inference latency due to the slow diffusion sampling process. MAGNET, a mask-based model operating on discrete tokens, addresses slow inference through iterative mask-based parallel decoding. However, its audio quality still lags behind that of diffusion-based models. In this work, we introduce IMPACT, a text-to-audio generation framework that achieves high performance in audio quality and fidelity while ensuring fast inference. IMPACT utilizes iterative mask-based parallel decoding in a continuous latent space powered by diffusion modeling. This approach eliminates the fidelity constraints of discrete tokens while maintaining competitive inference speed. Results on AudioCaps demonstrate that IMPACT achieves state-of-the-art performance on key metrics, including Fréchet Distance (FD) and Fréchet Audio Distance (FAD), while significantly reducing latency compared to prior models. The project website is available at https://audio-impact.github.io/.
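To make the decoding loop concrete, below is a minimal, hypothetical sketch of iterative mask-based parallel decoding over a continuous latent sequence with a per-position diffusion head, in the spirit the abstract describes. Everything here (the `TinyBackbone` module, the `diffusion_sample` stand-in, the cosine mask schedule, random rather than confidence-based position selection, and all dimensions) is an illustrative assumption, not IMPACT's actual architecture or hyperparameters.

```python
# Hypothetical sketch: iterative mask-based parallel decoding in a
# continuous latent space. Not the paper's implementation.
import math
import torch
import torch.nn as nn

SEQ_LEN, LATENT_DIM, NUM_STEPS = 64, 8, 8

class TinyBackbone(nn.Module):
    """Toy transformer mapping (partially masked) latents + text
    conditioning to per-position conditioning vectors."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(LATENT_DIM, LATENT_DIM)
        layer = nn.TransformerEncoderLayer(
            d_model=LATENT_DIM, nhead=2, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, latents, text_cond):
        return self.encoder(self.proj(latents) + text_cond)

def diffusion_sample(cond):
    """Placeholder for a lightweight diffusion head that turns each
    conditioning vector into a continuous latent; here it just adds
    Gaussian noise to the conditioning."""
    return cond + 0.1 * torch.randn_like(cond)

@torch.no_grad()
def decode(backbone, text_cond):
    latents = torch.zeros(1, SEQ_LEN, LATENT_DIM)       # masked = zeros
    unmasked = torch.zeros(1, SEQ_LEN, dtype=torch.bool)
    for step in range(NUM_STEPS):
        # Cosine schedule: fraction of positions still masked afterwards.
        frac_masked = math.cos(math.pi / 2 * (step + 1) / NUM_STEPS)
        n_keep_masked = int(SEQ_LEN * frac_masked)
        n_reveal = (SEQ_LEN - n_keep_masked) - int(unmasked.sum())
        if n_reveal <= 0:
            continue
        cond = backbone(latents, text_cond)
        # Choose masked positions to reveal this iteration (randomly here;
        # a confidence heuristic could be used instead).
        masked_idx = (~unmasked[0]).nonzero(as_tuple=True)[0]
        reveal = masked_idx[torch.randperm(len(masked_idx))[:n_reveal]]
        # Sample continuous latents for all revealed positions in parallel.
        latents[0, reveal] = diffusion_sample(cond[0, reveal])
        unmasked[0, reveal] = True
    return latents  # a latent-to-waveform decoder would follow (not shown)

if __name__ == "__main__":
    text_cond = torch.randn(1, SEQ_LEN, LATENT_DIM)  # placeholder text embedding
    print(decode(TinyBackbone(), text_cond).shape)   # torch.Size([1, 64, 8])
```

The key property the sketch illustrates is that each iteration fills in many positions at once (parallel decoding), so the number of backbone passes is a small constant rather than one per token, while the continuous diffusion head avoids quantizing latents into discrete tokens.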
Lay Summary: Imagine typing a sentence like “a dog barking in the park” and having a computer generate a realistic audio clip to match. This is the goal of text-to-audio generation, but current methods often take a long time to produce high-quality sounds. Some fast models generate sound quickly but sacrifice realism; others sound great but are painfully slow.
Our research introduces IMPACT, a new method that combines the best of both worlds. It generates audio using a smart technique that masks and fills in missing parts step by step, guided by a simplified version of a powerful method called diffusion modeling. Unlike earlier systems that use inefficient components or only work with rough sound units, IMPACT works in a smooth, continuous space, enabling both realism and speed.
Why does this matter? IMPACT achieves state-of-the-art audio quality on standard benchmarks while being much faster than previous high-quality models. This opens the door for real-time applications like sound design, immersive gaming, and creative tools where both fidelity and responsiveness are crucial.
Application-Driven Machine Learning: This submission is on Application-Driven Machine Learning.
Link To Code: https://audio-impact.github.io/
Primary Area: Deep Learning->Generative Models and Autoencoders
Keywords: Text-to-audio, Diffusion models, Iterative parallel decoding, Mask-based generative modeling
Submission Number: 2605