A Neural Knowledge Language Model

Sungjin Ahn, Heeyoul Choi, Tanel Parnamaa, Yoshua Bengio

Nov 03, 2016 (modified: Dec 27, 2016) · ICLR 2017 conference submission · Readers: everyone
  • Abstract: Current language models have significant limitations in their ability to encode and decode knowledge. This is mainly because they acquire knowledge from statistical co-occurrences, even though most knowledge-related words are rarely observed named entities. In this paper, we propose a Neural Knowledge Language Model (NKLM) which combines symbolic knowledge provided by a knowledge graph with the RNN language model. At each time step, the model predicts the fact on which the observed word is to be based. Then, a word is either generated from the vocabulary or copied from the knowledge graph. We train and test the model on a new dataset, WikiFacts. In experiments, we show that the NKLM significantly improves perplexity while generating far fewer unknown words. In addition, we demonstrate that the sampled descriptions include named entities that previously appeared as unknown words in RNN language models. (A hedged sketch of the generate-or-copy step appears after the metadata below.)
  • TL;DR: A neural recurrent language model that can extract knowledge from a knowledge base to generate knowledge-related words such as person names, locations, years, etc.
  • Keywords: Natural language processing, Deep learning
  • Conflicts: umontreal.ca, iro.umontreal.ca, samsung.com
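
The generate-or-copy mechanism summarized in the abstract can be pictured with a minimal sketch. The PyTorch code below is an illustration under assumptions, not the paper's implementation: the class `NKLMSketch`, its layer names, and all dimensions are hypothetical. At one decoding step, the RNN hidden state scores the facts attached to the current topic entity, and a gate decides whether the next word comes from the vocabulary softmax or is copied from the selected fact.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NKLMSketch(nn.Module):
    """Hypothetical sketch of one NKLM-style decoding step: predict a fact,
    then choose between generating a vocabulary word and copying a word
    from that fact. Names and dimensions are assumptions for illustration."""

    def __init__(self, vocab_size, word_dim, fact_dim, hidden_dim):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        self.rnn = nn.LSTMCell(word_dim + fact_dim, hidden_dim)
        self.fact_key = nn.Linear(hidden_dim, fact_dim)       # scores candidate facts
        self.copy_gate = nn.Linear(hidden_dim + fact_dim, 1)  # generate vs. copy
        self.vocab_out = nn.Linear(hidden_dim, vocab_size)    # ordinary word softmax

    def step(self, prev_word, prev_fact_emb, state, topic_facts):
        """topic_facts: (num_facts, fact_dim) embeddings of the facts attached
        to the current topic entity in the knowledge graph."""
        x = torch.cat([self.word_emb(prev_word), prev_fact_emb], dim=-1)
        h, c = self.rnn(x, state)

        # 1) Predict the fact on which the next word is to be based.
        fact_scores = topic_facts @ self.fact_key(h).squeeze(0)
        fact_probs = F.softmax(fact_scores, dim=-1)
        fact_emb = fact_probs @ topic_facts   # soft fact selection, for the sketch only

        # 2) Decide whether to copy from the fact or generate from the vocabulary.
        p_copy = torch.sigmoid(self.copy_gate(torch.cat([h.squeeze(0), fact_emb], dim=-1)))
        vocab_probs = F.softmax(self.vocab_out(h), dim=-1)

        return fact_probs, p_copy, vocab_probs, (h, c)

# Usage with made-up sizes: 5 facts for the current topic, one previous word.
model = NKLMSketch(vocab_size=10000, word_dim=64, fact_dim=32, hidden_dim=128)
state = (torch.zeros(1, 128), torch.zeros(1, 128))
facts = torch.randn(5, 32)
fact_probs, p_copy, vocab_probs, state = model.step(
    torch.tensor([3]), torch.zeros(1, 32), state, facts)
```

In this toy version the fact is selected softly for differentiability; the actual model commits to a discrete fact and copies a word from its object description, which is what lets it emit rare named entities without putting them in the output vocabulary.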
