Lexicon Size and Language Model Order Optimization for Russian LVCSR

Published: 01 Jan 2013, Last Modified: 28 Mar 2025 · SPECOM 2013 · CC BY-SA 4.0
Abstract: This paper presents a comparison of 2-, 3-, and 4-gram language models built with recognition lexicons of various sizes. The training text corpus was collected from recent Internet news sites and contains about 350 million words (2.4 GB of data). Language models were built for recognition lexicons of 110K, 150K, 219K, and 303K words. The models were evaluated in terms of perplexity, out-of-vocabulary (OOV) word rate, and n-gram hit rate. Experimental results on continuous Russian speech recognition are also reported.
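
Of the evaluation metrics named in the abstract, the OOV word rate is the simplest to reproduce: it is the fraction of test-corpus tokens absent from a fixed recognition lexicon. Below is a minimal Python sketch (not from the paper) of such a computation; the file names and the whitespace tokenization are illustrative assumptions, not details from the authors' setup.

def oov_rate(lexicon_path: str, test_path: str) -> float:
    """Fraction of test-corpus tokens not covered by the lexicon."""
    # Assumes a plain-text lexicon with one word per line.
    with open(lexicon_path, encoding="utf-8") as f:
        lexicon = {line.strip() for line in f if line.strip()}

    total = oov = 0
    # Assumes a whitespace-tokenized, one-sentence-per-line test corpus.
    with open(test_path, encoding="utf-8") as f:
        for line in f:
            for token in line.split():
                total += 1
                if token not in lexicon:
                    oov += 1
    return oov / total if total else 0.0

if __name__ == "__main__":
    # Hypothetical file names, for illustration only.
    rate = oov_rate("lexicon_110k.txt", "test_corpus.txt")
    print(f"OOV rate: {rate:.2%}")

In practice one would expect this rate to fall as the lexicon grows from 110K toward 303K words, which is the trade-off the paper studies against model size and recognition accuracy.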