A replica analysis of under-bagging

TMLR Paper2583 Authors

25 Apr 2024 (modified: 17 Jul 2024) · Decision pending for TMLR · CC BY-SA 4.0
Abstract: Under-bagging (UB), which combines under-sampling and bagging, is a popular ensemble learning method for training classifiers on imbalanced data. Using bagging to reduce the increased variance caused by the reduction in sample size due to under-sampling is a natural approach. However, it has recently been pointed out that, in generalized linear models, naive bagging, which does not account for the class imbalance structure, and ridge regularization can produce the same results. It is therefore not obvious whether UB, whose computational cost grows in proportion to the number of under-sampled data sets, is preferable when training linear models. Given this situation, in this study we heuristically derive sharp asymptotics of UB and use them to compare UB with several other standard methods for learning from imbalanced data, in a scenario where a linear classifier is trained on two-component mixture data. The methods compared include the under-sampling (US) method, which trains a model on a single realization of the subsampled data, and the simple weighting (SW) method, which trains a model with a weighted loss on the entire data. We show that the performance of UB improves as the size of the majority class grows, with the size of the minority class fixed, even when the class imbalance is large, and especially when the minority class is small. This is in contrast to US, whose performance is almost independent of the majority class size. In this sense, bagging and simple regularization differ as methods for reducing the variance increased by under-sampling. On the other hand, the performance of SW with the optimal weighting coefficients is almost equal to that of UB, indicating that the combination of reweighting and regularization may be similar to UB.
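As a hedged illustration of the under-bagging procedure the abstract describes (not the authors' code), the following minimal pure-Python sketch repeatedly under-samples the majority class down to the minority-class size, trains a linear classifier (here a simple perceptron, an assumption for concreteness; the paper studies generalized linear models) on each balanced subsample, and aggregates the bagged models by majority vote:

```python
import random

def perceptron(X, y, epochs=20, lr=0.1):
    """Train a simple linear classifier on labels in {-1, +1}.

    A stand-in for any linear model; the choice of perceptron is an
    illustrative assumption, not the method analyzed in the paper.
    """
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            s = sum(wj * xj for wj, xj in zip(w, xi)) + b
            if yi * s <= 0:  # misclassified: update weights
                w = [wj + lr * yi * xj for wj, xj in zip(w, xi)]
                b += lr * yi
    return w, b

def under_bagging(X, y, n_bags=10, seed=0):
    """Under-bagging: train one linear model per balanced subsample."""
    rng = random.Random(seed)
    minority = [i for i, yi in enumerate(y) if yi == +1]
    majority = [i for i, yi in enumerate(y) if yi == -1]
    models = []
    for _ in range(n_bags):
        # Under-sample the majority class to the minority-class size.
        sub = minority + rng.sample(majority, len(minority))
        models.append(perceptron([X[i] for i in sub], [y[i] for i in sub]))
    return models

def predict(models, x):
    """Aggregate the bagged linear classifiers by majority vote."""
    votes = 0
    for w, b in models:
        s = sum(wj * xj for wj, xj in zip(w, x)) + b
        votes += 1 if s > 0 else -1
    return 1 if votes > 0 else -1

# Toy two-component mixture with strong class imbalance:
# minority (+1) centered at (2, 2), majority (-1) centered at (-2, -2).
rng = random.Random(1)
X = [[2 + rng.gauss(0, 0.5), 2 + rng.gauss(0, 0.5)] for _ in range(10)]
y = [+1] * 10
X += [[-2 + rng.gauss(0, 0.5), -2 + rng.gauss(0, 0.5)] for _ in range(200)]
y += [-1] * 200

models = under_bagging(X, y, n_bags=5)
print(predict(models, [2.0, 2.0]), predict(models, [-2.0, -2.0]))
```

The single-realization under-sampling (US) baseline corresponds to `n_bags=1` above; the sketch is only meant to make the UB/US distinction in the abstract concrete.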
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission:
* We have deanonymized the submission.
* Appendix D, which compares under-bagging and the weighting method on a real-world dataset, is now included in Section 4.2.1.
* The error bars in the real-world data experiment are now generated by a "random train-validation split" instead of a "random test-validation split". This does not change the claims at all. See Figure 9.
* References have been updated to reflect their current status.
* Some typos are fixed.
Assigned Action Editor: ~Bruno_Loureiro1
Submission Number: 2583