Wav2Letter: an End-to-End ConvNet-based Speech Recognition System

Submitted to ICLR 2017
Abstract: This paper presents a simple end-to-end model for speech recognition, combining a convolutional network-based acoustic model with graph decoding. It is trained to output letters directly from transcribed speech, without the need for forced alignment of phonemes. We introduce an automatic segmentation criterion for training from sequence annotation without alignment that is on par with CTC (Graves et al., 2006) while being simpler. We show competitive word error rates on the Librispeech corpus (Panayotov et al., 2015) with MFCC features, and promising results from the raw waveform.
TL;DR: We propose convnet models and new sequence criteria for training end-to-end letter-based speech systems.
Conflicts: fb.com
Keywords: Deep learning, Speech, Structured prediction
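
For a concrete picture of the kind of model the abstract describes, below is a minimal sketch (not the authors' code): a small 1D-convnet acoustic model over MFCC frames that emits per-frame letter scores, trained here with PyTorch's CTCLoss as a stand-in for the paper's ASG criterion (which the abstract reports as being on par with CTC). The layer sizes, feature dimension, and alphabet size are illustrative assumptions.

```python
# Minimal sketch of a letter-emitting convnet acoustic model with a CTC criterion.
# Hypothetical sizes: 40 MFCC coefficients per frame, 28-symbol alphabet
# (26 letters + apostrophe + blank); not the architecture from the paper.
import torch
import torch.nn as nn

MFCC_DIM = 40        # assumed number of MFCC coefficients per frame
NUM_LETTERS = 28     # assumed alphabet size, index 0 reserved for the blank symbol

acoustic_model = nn.Sequential(
    nn.Conv1d(MFCC_DIM, 256, kernel_size=7, padding=3),
    nn.ReLU(),
    nn.Conv1d(256, 256, kernel_size=7, padding=3),
    nn.ReLU(),
    nn.Conv1d(256, NUM_LETTERS, kernel_size=1),  # per-frame letter scores
)

ctc = nn.CTCLoss(blank=0)

# Dummy batch: 2 utterances of 200 MFCC frames, each with a 30-letter transcript.
features = torch.randn(2, MFCC_DIM, 200)
targets = torch.randint(1, NUM_LETTERS, (2, 30))          # letter indices (no blanks)
input_lengths = torch.full((2,), 200, dtype=torch.long)
target_lengths = torch.full((2,), 30, dtype=torch.long)

log_probs = acoustic_model(features).log_softmax(dim=1)   # (batch, letters, time)
log_probs = log_probs.permute(2, 0, 1)                    # CTCLoss expects (time, batch, letters)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()
```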