Masked Autoencoders that Listen

Published: 31 Oct 2022, Last Modified: 03 Jul 2024
Venue: NeurIPS 2022 (Accept)
Keywords: self-supervised learning, audio representation learning, audio classification
Abstract: This paper studies a simple extension of image-based Masked Autoencoders (MAE) to self-supervised representation learning from audio spectrograms. Following the Transformer encoder-decoder design in MAE, our Audio-MAE first encodes audio spectrogram patches with a high masking ratio, feeding only the non-masked tokens through the encoder layers. The decoder then re-orders and decodes the encoded context padded with mask tokens to reconstruct the input spectrogram. We find it beneficial to incorporate local window attention in the decoder, as audio spectrograms are highly correlated in local time and frequency bands. We then fine-tune the encoder with a lower masking ratio on target datasets. Empirically, Audio-MAE sets new state-of-the-art performance on six audio and speech classification tasks, outperforming other recent models that use external supervised pre-training. Our code and models are available at https://github.com/facebookresearch/AudioMAE.
TL;DR: Audio-MAE learns SoTA embeddings from audio spectrograms. Without external pretraining, it achieves best performance with high masking ratio (80%) and decoders with local attention. Qualitative audible reconstructions demonstrate its effectiveness.
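Below is a minimal sketch (not the authors' released code) of the high-ratio random masking step described in the abstract: spectrogram patch tokens are shuffled, only the unmasked ~20% are passed to the encoder, and the restore indices let the decoder re-order the context and pad it with mask tokens. Shapes, helper names, and the 80% ratio are illustrative assumptions taken from the abstract/TL;DR.

```python
import torch

def random_mask(tokens: torch.Tensor, mask_ratio: float = 0.8):
    """Keep a random (1 - mask_ratio) subset of spectrogram patch tokens.

    tokens: (batch, num_patches, dim) embedded spectrogram patches.
    Returns the kept tokens, a boolean mask (True = masked), and the
    indices needed to restore the original patch order in the decoder.
    """
    b, n, d = tokens.shape
    num_keep = int(n * (1.0 - mask_ratio))

    # Per-example random permutation; the first `num_keep` indices are kept.
    noise = torch.rand(b, n, device=tokens.device)
    shuffle_idx = noise.argsort(dim=1)
    restore_idx = shuffle_idx.argsort(dim=1)

    keep_idx = shuffle_idx[:, :num_keep]
    kept = torch.gather(tokens, 1, keep_idx.unsqueeze(-1).expand(-1, -1, d))

    mask = torch.ones(b, n, dtype=torch.bool, device=tokens.device)
    mask.scatter_(1, keep_idx, False)
    return kept, mask, restore_idx

# Example: 512 patch tokens per spectrogram; at an 80% masking ratio only
# ~102 tokens reach the encoder. The decoder later re-inserts learnable mask
# tokens at the masked positions (via restore_idx) and reconstructs the input.
x = torch.randn(2, 512, 768)
kept, mask, restore_idx = random_mask(x, mask_ratio=0.8)
```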
Supplementary Material: zip
Community Implementations: 1 code implementation (https://www.catalyzex.com/paper/masked-autoencoders-that-listen/code)