Hierarchical Latent Words Language Models for Robust Modeling to Out-of-Domain Tasks

EMNLP 2015 (modified: 16 Jul 2019)
Abstract: This paper focuses on language modeling with adequate robustness to support different domain tasks. To this end, we propose a hierarchical latent word language model (h-LWLM). The proposed model can be regarded as a generalized form of standard LWLMs. The key advance is the introduction of a multiple latent variable space with a hierarchical structure. This structure can flexibly account for linguistic phenomena not present in the training data. This paper details the model definition, a training method based on layer-wise inference, and a practical usage in natural language processing tasks via an approximation technique. Experiments on speech recognition show the effectiveness of h-LWLM on out-of-domain tasks.
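To make the hierarchy concrete, the following is a minimal toy sketch (not the paper's actual model or parameters) of the generative idea behind a hierarchical latent word LM: each surface word is produced by descending a chain of latent words, here a coarse latent class that emits a finer latent word, which emits the observed word. All vocabularies and probabilities are invented for illustration.

```python
import random

random.seed(0)

# Layer 2 (coarse) latent classes -> layer 1 latent words (assumed toy values)
emit_l2 = {
    "ANIMAL": {"cat": 0.5, "dog": 0.5},
    "VEHICLE": {"car": 0.6, "bus": 0.4},
}
# Layer 1 latent words -> surface words (assumed toy values)
emit_l1 = {
    "cat": {"kitten": 0.5, "cat": 0.5},
    "dog": {"puppy": 0.5, "dog": 0.5},
    "car": {"sedan": 0.5, "car": 0.5},
    "bus": {"coach": 0.5, "bus": 0.5},
}

def sample(dist):
    """Draw one key from a {item: probability} distribution."""
    r, acc = random.random(), 0.0
    for item, p in dist.items():
        acc += p
        if r < acc:
            return item
    return item  # numerical safety fallback

def generate(latent_class):
    """Generate a surface word by descending the latent hierarchy."""
    h1 = sample(emit_l2[latent_class])   # coarse latent -> fine latent
    return sample(emit_l1[h1])           # fine latent -> surface word

def marginal_prob(word, latent_class):
    """P(word | class), summing out the intermediate latent layer."""
    return sum(p1 * emit_l1[h1].get(word, 0.0)
               for h1, p1 in emit_l2[latent_class].items())

print(generate("ANIMAL"))
print(marginal_prob("kitten", "ANIMAL"))  # 0.5 * 0.5 = 0.25
```

Because unseen surface words can still receive probability mass through shared latent words, a hierarchy of this kind is one way such a model can generalize beyond its training domain.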