Win-Win Cooperation: Bundling Sequence and Span Models for Named Entity Recognition

Published: 2025 · Last Modified: 27 Sept 2025 · IEEE Trans. Knowl. Data Eng. 2025 · CC BY-SA 4.0
Abstract: For Named Entity Recognition (NER), the sequence labeling-based and span-based paradigms are quite different. Previous studies have demonstrated the clearly complementary advantages of the two paradigms, but, to our knowledge, few models incorporate both into a single NER model. In our previous work, we proposed a paradigm called Bundling Learning (BL) to explore this issue: it bundles the two NER paradigms, enabling NER models to jointly tune their parameters via a weighted sum of each paradigm's training loss. However, three critical issues remain unresolved: When does BL work? Why does BL work? Can BL enhance existing state-of-the-art NER models? To address the first two issues, we design three NER models: a sequence labeling-based model (SeqNER), a span-based model (SpanNER), and BL-NER, which bundles SeqNER and SpanNER. We draw two conclusions regarding these issues based on experimental results on eleven NER datasets. To investigate the third issue, we apply BL to five existing state-of-the-art NER models, including three sequence labeling-based and two span-based models. Experimental results indicate consistent NER performance gains, suggesting a feasible way to construct new state-of-the-art NER systems by applying BL to current state-of-the-art systems. Further analysis shows that BL reduces both entity boundary and entity type prediction errors. In addition, we compare two commonly used label tagging methods and three types of span semantic representations.
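The core training objective the abstract describes can be sketched as a weighted sum of the two paradigms' losses. The sketch below is a minimal illustration, not the authors' implementation: the function name, the weight parameters, and the assumption that the sequence labeling head and the span head produce scalar losses from a shared encoder are all hypothetical.

```python
def bundled_loss(seq_loss: float, span_loss: float,
                 w_seq: float = 0.5, w_span: float = 0.5) -> float:
    """Bundling Learning (BL) objective as described in the abstract:
    a weighted sum of the sequence labeling loss and the span-based loss.

    In a real system, seq_loss and span_loss would be scalar tensors
    produced by the two decoding heads over a shared encoder, and the
    summed loss would be backpropagated through both heads jointly.
    The weight names and defaults here are illustrative assumptions.
    """
    return w_seq * seq_loss + w_span * span_loss


# Illustrative use: equal weighting of the two paradigms' losses.
total = bundled_loss(seq_loss=2.0, span_loss=4.0)
```

Because both losses share the encoder's parameters, minimizing the bundled objective lets each paradigm act as a regularizer for the other, which is the mechanism behind the complementary-advantage claim.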