SKD-NER: Continual Named Entity Recognition via Span-based Knowledge Distillation with Reinforcement Learning

Published: 07 Oct 2023, Last Modified: 01 Dec 2023 · EMNLP 2023 Main
Submission Type: Regular Long Paper
Submission Track: Information Extraction
Submission Track 2: Efficient Methods for NLP
Keywords: Continual learning, Named Entity Recognition, Knowledge Distillation, Reinforcement Learning
TL;DR: We use knowledge distillation (KD) to preserve knowledge of previously learned entity types and apply reinforcement learning during the KD process to optimize the soft labels and distillation losses produced by the teacher model.
Abstract: Continual learning for named entity recognition (CL-NER) aims to enable models to continuously learn new entity types while retaining the ability to recognize previously learned ones. However, current strategies fall short of effectively addressing catastrophic forgetting of previously learned entity types. To tackle this issue, we propose SKD-NER, an efficient span-based continual learning NER model that incorporates reinforcement learning strategies to strengthen the model's resistance to catastrophic forgetting. Specifically, we leverage knowledge distillation (KD) to retain memory and employ reinforcement learning during the KD process to optimize the soft labels and distillation losses generated by the teacher model. This allows the model to retain previously learned knowledge while acquiring new knowledge, effectively preventing or mitigating catastrophic forgetting during continual learning. Our experiments on two benchmark datasets demonstrate that our model significantly improves performance on the CL-NER task, outperforming state-of-the-art methods.
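
Note: The abstract does not spell out the distillation objective. As a rough illustration, a standard KD loss for continual NER combines cross-entropy on the current task's entity types with a temperature-softened KL term that ties the student to the frozen teacher over previously learned types. The PyTorch sketch below uses this generic formulation with fixed hyperparameters; all names (student_logits, teacher_logits, temperature, alpha) are illustrative assumptions, and the reinforcement learning policy that SKD-NER uses to adapt the soft labels and loss weighting is not modeled here.

import torch
import torch.nn.functional as F

def continual_kd_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Generic KD objective for continual NER (illustrative, not the paper's).

    student_logits: (num_spans, num_old + num_new) scores from the current model
    teacher_logits: (num_spans, num_old) scores from the frozen previous model
    labels:         (num_spans,) gold labels over the full current label space
    temperature, alpha: fixed here; SKD-NER instead tunes the soft labels and
    loss weighting with a reinforcement learning strategy.
    """
    # Supervised cross-entropy on the current task's annotations.
    ce = F.cross_entropy(student_logits, labels)

    # KL divergence between temperature-softened teacher and student
    # distributions, restricted to the previously learned entity types.
    num_old = teacher_logits.size(-1)
    kd = F.kl_div(
        F.log_softmax(student_logits[:, :num_old] / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    return alpha * ce + (1 - alpha) * kd

Where this sketch uses fixed values of temperature and alpha, the abstract indicates that SKD-NER instead lets an RL agent optimize the corresponding quantities (the teacher's soft labels and the distillation loss) during training.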
Submission Number: 2583