Abstract: We introduce the task of correcting named
entity recognition (NER) errors without retraining the model. After a NER model is
trained and deployed in production, it makes
prediction errors, which usually need to be
fixed quickly. To address this problem, we
first construct a gazetteer containing named
entities and their possible entity types.
We then propose type-enhanced BERT
(TyBERT), a method that integrates the named
entity’s type information into BERT through an
adapter layer. When errors are identified, we
can repair the model by updating the gazetteer.
In other words, the gazetteer becomes a trigger
to control the NER model’s output. Experimental results on multiple corpora demonstrate the effectiveness of our method, which outperforms
strong baselines.
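The repair mechanism described above can be sketched minimally: a gazetteer is a mapping from entity surface forms to candidate entity types, and fixing a deployed model's error amounts to editing that mapping rather than retraining. The entries and the `lookup_types` helper below are hypothetical illustrations, not the paper's implementation; in TyBERT the retrieved types would be fed to the adapter layer rather than returned directly.

```python
# Hypothetical sketch of a gazetteer: surface form -> possible entity types.
gazetteer = {
    "Washington": ["PER", "LOC"],  # ambiguous: person or location
    "Apple": ["ORG"],
}

def lookup_types(entity: str) -> list:
    """Return candidate types for an entity (empty list if unknown).
    In TyBERT these types would condition the model via the adapter layer."""
    return gazetteer.get(entity, [])

# Suppose the deployed model mistags "Amazon"; the repair is a gazetteer
# update, with no retraining of the underlying model:
gazetteer["Amazon"] = ["ORG", "LOC"]
```

Because the gazetteer acts as a trigger on the model's output, a single dictionary update can immediately change predictions for every future occurrence of the repaired entity.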