Majority or Minority: Data Imbalance Learning Method for Named Entity Recognition

Published: 05 Mar 2024, Last Modified: 12 May 2024 · PML4LRS Oral · CC BY 4.0
Keywords: natural language processing, named entity recognition, data imbalance, cost-sensitive learning
TL;DR: We propose a simple and effective learning method named majority or minority (MoM) learning which incorporates the loss computed only for samples whose ground truth is the majority class into the loss of the conventional ML model.
Abstract: Data imbalance presents a significant challenge in various machine learning (ML) tasks, particularly named entity recognition (NER) within natural language processing (NLP). NER exhibits a data imbalance with a long-tail distribution, featuring numerous minority classes (i.e., entity classes) and a single majority class (i.e., the O-class). This imbalance leads to misclassifications of the entity classes as the O-class. To tackle this issue, we propose a simple and effective learning method named majority or minority (MoM) learning. MoM learning incorporates the loss computed only for samples whose ground truth is the majority class into the loss of the conventional ML model. Evaluation experiments on four NER datasets (Japanese and English) showed that MoM learning improves the prediction performance of the minority classes without sacrificing the performance of the majority class, and is more effective than widely known and state-of-the-art methods. We also evaluated MoM learning using frameworks such as sequential labeling and machine reading comprehension, which are commonly used in NER. MoM learning achieved consistent performance improvements regardless of language or framework.
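The abstract's core idea is to add, on top of the conventional loss, a loss term computed only on samples whose gold label is the majority O-class. A minimal NumPy sketch of that idea follows; the cross-entropy base loss, the O-class index `O_CLASS`, and the `weight` hyperparameter are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

O_CLASS = 0  # assumed index of the majority O-class


def cross_entropy(probs, labels):
    """Mean negative log-likelihood of the gold labels.

    probs: (n_tokens, n_classes) predicted probabilities
    labels: (n_tokens,) gold class indices
    """
    return -np.mean(np.log(probs[np.arange(len(labels)), labels]))


def mom_loss(probs, labels, weight=1.0):
    """MoM-style loss sketch: the conventional loss over all tokens
    plus a loss computed only on tokens whose gold label is the
    majority (O) class. `weight` is a hypothetical scaling knob."""
    loss = cross_entropy(probs, labels)
    majority = labels == O_CLASS
    if majority.any():
        loss = loss + weight * cross_entropy(probs[majority], labels[majority])
    return loss
```

Because the extra term only sees O-class tokens, it penalizes the model for assigning probability mass away from O on those tokens, which is how the method discourages entity/O confusion without reweighting the entity classes themselves.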
Submission Number: 6