ETTA: Elucidating the Design Space of Text-to-Audio Models

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: We elucidate the design space of text-to-audio models and present ETTA, which achieves state-of-the-art results and an improved ability to generate creative audio.
Abstract: Recent years have seen significant progress in Text-To-Audio (TTA) synthesis, enabling users to enrich their creative workflows with synthetic audio generated from natural language prompts. Despite this progress, the effects of data, model architecture, training objective functions, and sampling strategies on target benchmarks are not well understood. With the purpose of providing a holistic understanding of the design space of TTA models, we set up a large-scale empirical experiment focused on diffusion and flow matching models. Our contributions include: 1) AF-Synthetic, a large dataset of high-quality synthetic captions obtained from an audio understanding model; 2) a systematic comparison of different architectural, training, and inference design choices for TTA models; 3) an analysis of sampling methods and their Pareto curves with respect to generation quality and inference speed. We leverage the knowledge obtained from this extensive analysis to propose our best model, dubbed Elucidated Text-To-Audio (ETTA). When evaluated on AudioCaps and MusicCaps, ETTA provides improvements over the baselines trained on publicly available data, while being competitive with models trained on proprietary data. Finally, we show ETTA's improved ability to generate creative audio following complex and imaginative captions -- a task that is more challenging than current benchmarks.
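For readers unfamiliar with the two training objectives compared in the abstract, the following is a minimal, hedged sketch (not the paper's implementation) contrasting a DDPM-style epsilon-prediction diffusion loss with a conditional flow matching loss. The network `net(x_t, t, text_emb)`, its signature, and all tensor shapes are illustrative assumptions only.

```python
# Illustrative sketch of the two training objectives studied for TTA models.
# Assumes a conditional network net(x_t, t, text_emb); names and shapes are hypothetical.
import torch

def diffusion_loss(net, x0, text_emb, alphas_cumprod):
    """DDPM-style objective: predict the noise added at a random timestep."""
    b = x0.shape[0]
    t = torch.randint(0, alphas_cumprod.shape[0], (b,), device=x0.device)
    a = alphas_cumprod[t].view(b, *([1] * (x0.dim() - 1)))
    noise = torch.randn_like(x0)
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * noise
    return torch.mean((net(x_t, t, text_emb) - noise) ** 2)

def flow_matching_loss(net, x0, text_emb):
    """Conditional flow matching: regress the constant velocity of a linear path."""
    b = x0.shape[0]
    t = torch.rand(b, device=x0.device).view(b, *([1] * (x0.dim() - 1)))
    noise = torch.randn_like(x0)
    x_t = (1 - t) * noise + t * x0   # linear interpolation between noise and data
    target_v = x0 - noise            # velocity of the straight-line path
    return torch.mean((net(x_t, t.view(b), text_emb) - target_v) ** 2)
```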
Lay Summary: Text-to-audio models convert text descriptions into sound. While recent models can produce realistic audio from simple descriptions, they often struggle with complex and imaginative ones — such as generating music made from fictional or abstract sources. Understanding what makes these models work well, and what limits their capabilities, remains an open question. In this work, we conduct a large-scale study to analyze how different factors — including data quality, model architecture, training methods, and sampling strategies — affect the results of text-to-audio models. We introduce AF-Synthetic, a new dataset containing over one million high-quality text–audio pairs, and use it to train our model called ETTA. ETTA demonstrates strong results on standard benchmarks and shows a significantly improved ability to follow complex and creative descriptions, including generating audio that has no real-world counterpart. This suggests that models can go beyond mimicking the real world to synthesizing entirely novel sounds, powered by model and data scaling.
Application-Driven Machine Learning: This submission is on Application-Driven Machine Learning.
Link To Code: https://github.com/NVIDIA/elucidated-text-to-audio
Primary Area: Applications->Everything Else
Keywords: audio generation, text-to-audio, synthetic data, diffusion, flow matching
Submission Number: 7535