Abstract: Text in many domains involves a significant number of named entities. Predicting entity names is often challenging for a language model because they appear less frequently in the training corpus. In this paper, we propose a novel and effective approach to building a discriminative language model that can learn entity names by leveraging their entity type information. We also introduce two benchmark datasets based on recipes and Java programming code, on which we evaluate the proposed model. Experimental results show that our model achieves 52.2% better perplexity on recipe generation and 22.06% better perplexity on code generation than state-of-the-art language models.
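To illustrate the general idea of leveraging entity type information, below is a minimal sketch of a type-backoff language model: entity tokens are abstracted to their types when estimating the context model, and names are scored from a per-type distribution so that rare entity names share statistics with their type. The toy corpus, type labels, and helper functions are invented for demonstration only; this is not the paper's actual discriminative model or architecture.

```python
# Illustrative sketch (not the paper's model): a bigram-style language model
# that backs off entity names to their entity types.
from collections import Counter, defaultdict

# Toy "recipe" corpus: each token is (word, entity_type or None).
corpus = [
    ("melt", None), ("butter", "INGREDIENT"), ("in", None), ("pan", "TOOL"),
    ("add", None), ("flour", "INGREDIENT"), ("and", None), ("sugar", "INGREDIENT"),
    ("stir", None), ("with", None), ("whisk", "TOOL"),
]

def symbol(tok):
    """Model entities by their type so rare names share statistics."""
    word, etype = tok
    return etype if etype else word

# Bigram and unigram counts over the type-abstracted token stream.
bigrams = Counter(zip(map(symbol, corpus), map(symbol, corpus[1:])))
unigrams = Counter(map(symbol, corpus))

# Per-type name distributions, estimated separately from the context model.
names_by_type = defaultdict(Counter)
for word, etype in corpus:
    if etype:
        names_by_type[etype][word] += 1

def prob(prev_tok, tok):
    """P(token | previous token) = P(symbol | prev symbol) * P(name | type)."""
    prev_sym, sym = symbol(prev_tok), symbol(tok)
    p_sym = bigrams[(prev_sym, sym)] / max(unigrams[prev_sym], 1)
    word, etype = tok
    if etype:
        type_total = sum(names_by_type[etype].values())
        p_sym *= names_by_type[etype][word] / type_total
    return p_sym

# Example: probability of "flour" (an INGREDIENT) following "add".
print(prob(("add", None), ("flour", "INGREDIENT")))
```

The two-step factorization, first predicting the entity type from context and then the concrete name given the type, is one common way to exploit type information; the paper's discriminative model may combine these signals differently.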