Keywords: contrastive learning, music representation learning, music source separation, music similarity
TL;DR: A self-supervised method for learning useful representations from musical audio signals by associating isolated vocal or instrumental tracks with the complete musical mixtures they originate from.
Abstract: Contrastive learning constitutes an emerging branch of self-supervised learning that leverages large amounts of unlabeled data by learning a latent space in which pairs of different views of the same sample are associated. In this paper, we propose musical source association as a pair generation strategy in the context of contrastive music representation learning. To this end, we modify COLA, a widely used contrastive learning audio framework, to learn to associate a song excerpt with a stochastically selected and automatically extracted vocal or instrumental source. We further introduce a novel modification to the contrastive loss that incorporates information about the presence or absence of specific sources. Our experimental evaluation on three different downstream tasks (music auto-tagging, instrument classification, and music genre classification), using the publicly available Magna-Tag-A-Tune (MTAT) dataset for pre-training, yields results competitive with existing methods in the literature, as well as faster network convergence. The results also show that this pre-training method can be steered towards specific features, according to the selected musical source, while also being dependent on the quality of the separated sources.
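For readers unfamiliar with COLA-style objectives, below is a minimal PyTorch sketch of the pairing the abstract describes: each song excerpt (anchor) is matched against a source separated from the same excerpt, with the sources of all other excerpts in the batch serving as negatives. The function name, the bilinear similarity, and in particular the `src_present` weighting are illustrative assumptions rather than the authors' exact implementation; the paper's loss modification for source presence or absence may take a different form.

```python
import torch
import torch.nn.functional as F

def source_association_loss(mix_emb, src_emb, bilinear_w, src_present=None):
    # Bilinear similarity between every (mixture, source) embedding pair in
    # the batch, as in COLA; diagonal entries correspond to positive pairs
    # (a mixture and a source extracted from the same excerpt).
    sim = mix_emb @ bilinear_w @ src_emb.t()          # (B, B)
    targets = torch.arange(mix_emb.size(0), device=sim.device)
    loss = F.cross_entropy(sim, targets, reduction="none")  # (B,)
    if src_present is not None:
        # Hypothetical per-pair weighting by source presence (0 = the
        # selected source is absent/silent in the excerpt).
        loss = loss * src_present
    return loss.mean()

# Toy usage with random embeddings (batch of 8, 512-dimensional).
B, D = 8, 512
mix = torch.randn(B, D)
src = torch.randn(B, D)
W = torch.randn(D, D, requires_grad=True)
present = torch.ones(B)
print(source_association_loss(mix, src, W, present))
```

The in-batch negatives make the objective a B-way classification per anchor, which is what lets the method scale to large unlabeled collections without explicit negative mining.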
This paper has been accepted for publication in the Sound and Music Computing Conference 2023 (SMC-2023), Stockholm, Sweden.
Paper reference: C. Garoufis, A. Zlatintsi, and P. Maragos, "Multi-Source Contrastive Learning from Musical Audio", Proc. SMC 2023, Stockholm, Sweden, 2023.
Link to the accepted paper: http://cvsp.cs.ntua.gr/publications/confr/Garoufis_SMC2023_paper.pdf
Submission track: 6. Other
Submission Number: 104