Intelligent Selection of Language Model Training Data

ACL (Short Papers) 2010 (modified: 12 Nov 2022)
Abstract: We address the problem of selecting non-domain-specific language model training data to build auxiliary language models for use in tasks such as machine translation. Our approach is based on comparing the cross-entropy, according to domain-specific and non-domain-specific language models, for each sentence of the text source used to produce the latter language model. We show that this produces better language models, trained on less data, than both random data selection and two other previously proposed methods.
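A minimal sketch of the cross-entropy-difference selection the abstract describes, using toy add-one-smoothed unigram models as stand-ins for the paper's language models (the function names, smoothing choice, and example data are illustrative, not from the paper):

```python
import math
from collections import Counter

def train_unigram(corpus, vocab):
    # Add-one smoothed unigram model over a fixed shared vocabulary
    # (a toy stand-in for a real n-gram language model).
    counts = Counter(w for sent in corpus for w in sent.split())
    total = sum(counts.values())
    return {w: (counts[w] + 1) / (total + len(vocab)) for w in vocab}

def cross_entropy(model, sentence):
    # Per-word cross-entropy (bits) of the sentence under the model.
    words = sentence.split()
    return -sum(math.log2(model[w]) for w in words) / len(words)

def select_by_cross_entropy_diff(in_domain, general, pool):
    # Score each candidate sentence by H_in(s) - H_out(s); lower scores
    # mean the sentence looks more like the in-domain corpus relative
    # to the general corpus. Return the pool sorted best-first.
    vocab = {w for s in in_domain + general + pool for w in s.split()}
    m_in = train_unigram(in_domain, vocab)
    m_out = train_unigram(general, vocab)
    return sorted(pool,
                  key=lambda s: cross_entropy(m_in, s) - cross_entropy(m_out, s))

# Illustrative toy corpora.
in_domain = ["the patient has fever", "the patient needs medicine"]
general = ["the stock market fell", "the weather is nice"]
pool = ["the stock market is nice", "the patient has medicine"]

ranked = select_by_cross_entropy_diff(in_domain, general, pool)
# The in-domain-like sentence is ranked first.
```

Selection then keeps the top-scoring fraction of the pool, which is how the method trains better models on less data than random selection.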