Abstract: Most named entity recognition (NER) models comprehend words based solely on their contexts and hand-crafted features, neglecting relevant external knowledge. Incorporating prior knowledge from a knowledge base is straightforward in principle but non-trivial in practice due to two challenges: knowledge noise from unrelated span-entity mappings and the knowledge gap between a text and a knowledge base. To tackle these challenges, we propose KA-NER, a novel knowledge-augmented named entity recognition model, in which sanitized entities from a knowledge base are injected into sentences as prior knowledge. Specifically, our model consists of two components: a knowledge filtering module that retains domain-relevant entities and a knowledge fusion module that bridges the knowledge gap when incorporating knowledge into the NER model. Experimental results show that our model achieves significant improvements over baseline models on datasets from different domains.
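The abstract does not specify how the two modules are implemented, but the filter-then-fuse pipeline it describes can be sketched as below. This is a minimal, hypothetical illustration: `EntityMention`, `filter_entities`, `fuse_knowledge`, and the relevance threshold are all placeholder names and assumptions, not the paper's actual method.

```python
# Hypothetical sketch of the filter-then-fuse pipeline described in the
# abstract. All names here are illustrative assumptions, not the paper's API.

from dataclasses import dataclass

@dataclass
class EntityMention:
    span: tuple          # (start, end) character offsets in the sentence
    kb_entity: str       # candidate entity from the knowledge base
    relevance: float     # assumed domain-relevance score for filtering

def filter_entities(mentions, threshold=0.5):
    """Knowledge filtering: keep only span-entity mappings whose
    domain-relevance score clears the threshold, discarding the
    noisy, unrelated mappings the abstract calls knowledge noise."""
    return [m for m in mentions if m.relevance >= threshold]

def fuse_knowledge(sentence, mentions):
    """Knowledge fusion (placeholder): inject the retained entities into
    the input text, e.g. by appending span-to-entity annotations, so the
    NER encoder sees the prior knowledge alongside the original words."""
    annotations = ", ".join(
        f"{sentence[m.span[0]:m.span[1]]} -> {m.kb_entity}" for m in mentions
    )
    return f"{sentence} [KNOWLEDGE: {annotations}]"

# Usage: candidate mentions from KB linking are filtered, then fused.
sentence = "Aspirin reduces fever."
candidates = [
    EntityMention((0, 7), "acetylsalicylic_acid", relevance=0.92),
    EntityMention((16, 21), "Fever_(band)", relevance=0.08),  # noisy mapping
]
print(fuse_knowledge(sentence, filter_entities(candidates)))
```

Under these assumptions, only the high-relevance mapping survives filtering, so the fused input carries the aspirin entity but not the spurious band entity.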