Keywords: Synthetic oversampling; Imbalanced classification; Concentration inequality; Excess risk bound; AM-risk
TL;DR: The paper establishes concentration inequalities and excess risk bounds for imbalanced classification when classifiers are trained on data generated by synthetic oversampling techniques such as SMOTE.
Abstract: Synthetic oversampling of minority examples using SMOTE and its variants is a leading strategy for addressing imbalanced classification problems. Despite the success of this approach in practice, its theoretical foundations remain underexplored. We develop a theoretical framework to analyze the behavior of SMOTE and related methods when classifiers are trained on synthetic data. We first derive a uniform concentration bound on the discrepancy between the empirical risk over synthetic minority samples and the population risk on the true minority distribution. We then provide a nonparametric excess risk guarantee for kernel-based classifiers trained using such synthetic data. These results lead to practical guidelines for better parameter tuning of both SMOTE and the downstream learning algorithm. Numerical experiments are provided to illustrate and support the theoretical findings.
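The abstract's analysis concerns classifiers trained on synthetic minority samples produced by SMOTE-style interpolation. As a point of reference, here is a minimal sketch of that interpolation step: each synthetic point is drawn on the segment between a minority example and one of its k nearest minority neighbors. The function name and parameters are illustrative, not taken from the paper.

```python
import numpy as np

def smote_oversample(X_min, n_synthetic, k=5, seed=None):
    """Minimal SMOTE sketch (illustrative, not the paper's implementation).

    Each synthetic sample is X[i] + u * (X[j] - X[i]), where j is one of
    the k nearest minority neighbors of i and u ~ Uniform(0, 1).
    """
    rng = np.random.default_rng(seed)
    n, d = X_min.shape
    k = min(k, n - 1)

    # Pairwise Euclidean distances among minority points.
    dists = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)          # exclude self-matches
    neighbors = np.argsort(dists, axis=1)[:, :k]

    synth = np.empty((n_synthetic, d))
    for t in range(n_synthetic):
        i = rng.integers(n)                  # base minority point
        j = neighbors[i, rng.integers(k)]    # one of its k nearest neighbors
        u = rng.random()                     # interpolation weight in [0, 1)
        synth[t] = X_min[i] + u * (X_min[j] - X_min[i])
    return synth
```

Because every synthetic point is a convex combination of two minority points, the generated samples stay inside the coordinate-wise range of the minority class, which is one reason the synthetic empirical risk can be related to the true minority-class risk.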
Primary Area: Theory (e.g., control theory, learning theory, algorithmic game theory)
Submission Number: 21221