Learned in Speech Recognition: Contextual Acoustic Word Embeddings

Anonymous

22 Oct 2018 (modified: 05 May 2023), NIPS 2018 Workshop IRASL Blind Submission
Abstract: End-to-end acoustic-to-word speech recognition models have recently gained popularity because they are easy to train, scale well to large amounts of training data, and do not require a lexicon. In addition, word models may be easier to integrate with downstream tasks such as spoken language understanding, because inference (search) is much simpler than with phoneme, character, or other sub-word units. In this paper, we describe methods to construct contextual acoustic word embeddings directly from a supervised sequence-to-sequence acoustic-to-word speech recognition model, using the learned attention distribution. On a suite of 16 standard sentence evaluation tasks, our embeddings show competitive performance against a word2vec model trained on the speech transcriptions. In addition, we evaluate these embeddings on a spoken language understanding task and observe that they match the performance of text-based embeddings produced by a pipeline that first performs speech recognition and then constructs word embeddings from the transcriptions.
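To make the core idea concrete, below is a minimal sketch of one way such an embedding could be formed: attention-weighted pooling of the acoustic encoder states under the decoder's attention distribution for a single decoded word. The function name, array shapes, and the exact pooling formulation are illustrative assumptions for this sketch, not the paper's confirmed implementation.

import numpy as np

def acoustic_word_embedding(encoder_states: np.ndarray,
                            attention: np.ndarray) -> np.ndarray:
    """Attention-weighted pooling of encoder states for one decoded word.

    encoder_states: (T, d) hidden states of the acoustic encoder over T frames.
    attention:      (T,)   attention distribution the decoder placed over the
                           encoder states when emitting this word.
    Returns a (d,) contextual acoustic word embedding.
    """
    # The attention weights should form a probability distribution over frames.
    assert np.isclose(attention.sum(), 1.0), "attention must sum to 1"
    # Weighted sum of encoder states: (T,) @ (T, d) -> (d,)
    return attention @ encoder_states

# Toy usage: 5 encoder frames of dimension 4, with softmax-normalized attention.
rng = np.random.default_rng(0)
states = rng.normal(size=(5, 4))
logits = rng.normal(size=5)
attn = np.exp(logits) / np.exp(logits).sum()
embedding = acoustic_word_embedding(states, attn)
print(embedding.shape)  # (4,)

Because the attention weights depend on the surrounding utterance, the same word receives a different embedding in different contexts, which is what makes the resulting embeddings contextual rather than static.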
Keywords: acoustic word embeddings, contextual embeddings, attention, acoustic-to-word speech recognition
TL;DR: Methods to learn contextual acoustic word embeddings from an end-to-end speech recognition model; the embeddings perform competitively with text-based word embeddings.