Zero-Shot Unsupervised and Text-Based Audio Editing Using DDPM Inversion

Published: 17 Jun 2024, Last Modified: 13 Jul 2024 · 2nd SPIGM @ ICML Poster · CC BY 4.0
Keywords: Diffusion models, Audio Editing, Music Editing, Unsupervised editing, Text-based editing, Text-guided editing, Prompt-based editing
TL;DR: We present two zero-shot methods for both text-based and unsupervised editing of real audio and music signals using DDPM inversion.
Abstract: Editing signals using large pre-trained models, in a zero-shot manner, has recently seen rapid advancements in the image domain. However, this wave has yet to reach the audio domain. In this paper, we explore two zero-shot editing techniques for audio signals, which use DDPM inversion with pre-trained diffusion models. The first, which we coin *ZEro-shot Text-based Audio (ZETA)* editing, is adopted from the image domain. The second, named *ZEro-shot UnSupervised (ZEUS)* editing, is a novel approach for discovering semantically meaningful editing directions without supervision. When applied to music signals, this method exposes a range of musically interesting modifications, from controlling the participation of specific instruments to improvisations on the melody. Samples, code, and the full version of this paper can be found on the [project web page](https://hilamanor.github.io/AudioEditing/).
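
To make the text-based approach concrete, below is a minimal PyTorch sketch of edit-friendly DDPM inversion followed by prompt-based editing in the spirit of ZETA. This is an illustrative sketch, not the authors' code: the `denoiser(x, t, emb)` noise-prediction network, the linear noise schedule, the step count `T`, and the prompt embeddings `src_emb`/`tgt_emb` are all placeholder assumptions, and practical details such as skipping early timesteps and classifier-free guidance are omitted.

```python
import torch

T = 1000                                  # number of diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)     # assumed linear noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

def ddpm_mean_std(denoiser, x, t, emb):
    """Mean and std of the DDPM reverse step at timestep t (1-indexed)."""
    i = t - 1                             # 0-indexed schedule entry
    eps_hat = denoiser(x, t, emb)         # hypothetical eps-prediction model
    mu = (x - betas[i] / (1 - alpha_bars[i]).sqrt() * eps_hat) / alphas[i].sqrt()
    ab_prev = alpha_bars[i - 1] if i > 0 else torch.tensor(1.0)
    sigma = (betas[i] * (1 - ab_prev) / (1 - alpha_bars[i])).sqrt()
    return mu, sigma

@torch.no_grad()
def invert(denoiser, x0, src_emb):
    """Extract noise maps {z_t} that reproduce x0 under the reverse process."""
    # Sample each x_t *independently* from q(x_t | x0) -- the "edit-friendly"
    # choice that imprints the signal's structure into the noise maps.
    xs = [x0]
    for i in range(T):
        eps = torch.randn_like(x0)
        xs.append(alpha_bars[i].sqrt() * x0 + (1 - alpha_bars[i]).sqrt() * eps)
    # Solve for the z_t that maps x_t to x_{t-1} under the DDPM update.
    zs = [None] * (T + 1)
    for t in range(T, 1, -1):             # t = 1 is noiseless (sigma_1 = 0)
        mu, sigma = ddpm_mean_std(denoiser, xs[t], t, src_emb)
        zs[t] = (xs[t - 1] - mu) / sigma
    return xs[T], zs

@torch.no_grad()
def edit(denoiser, xT, zs, tgt_emb):
    """Re-run the reverse process with the *target* prompt, reusing the stored
    noise maps so the source signal's structure is preserved."""
    x = xT
    for t in range(T, 1, -1):
        mu, sigma = ddpm_mean_std(denoiser, x, t, tgt_emb)
        x = mu + sigma * zs[t]
    mu, _ = ddpm_mean_std(denoiser, x, 1, tgt_emb)
    return mu                             # final step adds no noise
```

For the unsupervised variant, one way to realize "editing directions without supervision" is to take principal components of the posterior covariance at an intermediate timestep; by Tweedie's formula this covariance is proportional to the Jacobian of the predicted clean signal. The power-iteration sketch below assumes that Jacobian is approximately symmetric PSD (exact for an ideal MMSE denoiser, approximate for a learned one); the resulting direction `v` would then be added to `x_t` with some strength during sampling.

```python
from torch.func import jvp

def top_posterior_pc(denoiser, x_t, t, emb, n_iters=20):
    """Approximate the top principal component of the posterior covariance
    at timestep t via power iteration with Jacobian-vector products."""
    i = t - 1

    def x0_pred(x):
        # Predicted clean signal E[x0 | x_t] (Tweedie-style estimate).
        eps_hat = denoiser(x, t, emb)
        return (x - (1 - alpha_bars[i]).sqrt() * eps_hat) / alpha_bars[i].sqrt()

    v = torch.randn_like(x_t)
    v = v / v.norm()
    for _ in range(n_iters):
        _, jv = jvp(x0_pred, (x_t,), (v,))  # forward-mode JVP
        v = jv / jv.norm()
    return v
```
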
Submission Number: 62