Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Source Separation, Deep Learning, Diffusion Models, Upper Bound, Information Bound
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: The problem of speech separation, also known as the cocktail party problem,
refers to the task of isolating a single speech signal from a mixture of speech
signals. Previous work on source separation derived an upper bound for the
source separation task in the domain of human speech; this bound was derived for
deterministic models. Recent advancements in generative models challenge this
bound. We show how the upper bound can be generalized to the case of random
generative models. Applying a diffusion-model vocoder, pretrained to
model single-speaker voices, to the output of a deterministic separation model leads
to state-of-the-art separation results. We show that this requires combining
the output of the separation model with that of the diffusion model. In our method,
a linear combination is performed in the frequency domain, using weights that are
inferred by a learned model. We report state-of-the-art results for 2, 3, 5, 10, and 20
speakers on multiple benchmarks. In particular, for two speakers, our method
surpasses what was previously considered the upper performance bound.
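Below is a minimal, hypothetical sketch of the fusion step the abstract describes: a small learned model infers per-bin weights, and the separator's and diffusion vocoder's waveform estimates are linearly combined in the frequency (STFT) domain. This is not the authors' implementation; the `FusionNet` architecture, the STFT parameters, and the use of real-valued (rather than possibly complex) weights are all illustrative assumptions.

```python
# Sketch of frequency-domain fusion of two source estimates (all names
# and shapes are assumptions, not the paper's code).
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    """Hypothetical weight-inference model: maps the two magnitude
    spectrograms to a per-time-frequency-bin mixing weight in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),  # constrain weights to [0, 1]
        )

    def forward(self, mag_sep, mag_diff):
        x = torch.stack([mag_sep, mag_diff], dim=1)  # (B, 2, F, T)
        return self.net(x).squeeze(1)                # (B, F, T)

def fuse(wave_sep, wave_diff, fusion_net, n_fft=512, hop=128):
    """Linearly combine the separator's and the diffusion vocoder's
    waveform estimates per STFT bin, with learned weights."""
    win = torch.hann_window(n_fft)
    S = torch.stft(wave_sep, n_fft, hop, window=win, return_complex=True)
    D = torch.stft(wave_diff, n_fft, hop, window=win, return_complex=True)
    w = fusion_net(S.abs(), D.abs())   # learned per-bin weights, (B, F, T)
    fused = w * S + (1.0 - w) * D      # convex combination per bin
    return torch.istft(fused, n_fft, hop, window=win,
                       length=wave_sep.shape[-1])

# Usage (shapes illustrative):
#   net = FusionNet()
#   fused_wave = fuse(sep_wave, diff_wave, net)  # (B, samples)
```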
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
Supplementary Material: zip
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Primary Area: generative models
Submission Number: 3618