TL;DR: We propose the Dual-Balance Collaborative Experts (DCE) framework to address intra-domain class imbalance and cross-domain class distribution shifts in domain-incremental learning.
Abstract: Domain-Incremental Learning (DIL) focuses on continual learning in non-stationary environments, requiring models to adjust to evolving domains while preserving historical knowledge. DIL faces two critical challenges in the context of imbalanced data: intra-domain class imbalance and cross-domain class distribution shifts. These challenges significantly hinder model performance: intra-domain imbalance leads to underfitting of few-shot classes, while cross-domain shifts require both preserving performance on well-learned many-shot classes and transferring knowledge to improve few-shot class performance in earlier domains. To overcome these challenges, we introduce the Dual-Balance Collaborative Experts (DCE) framework. DCE employs a frequency-aware expert group, in which each expert is guided by a specialized loss function to learn features for a specific frequency group, effectively addressing intra-domain class imbalance. A dynamic expert selector is then learned on pseudo-features synthesized via balanced Gaussian sampling from historical class statistics. This mechanism navigates the trade-off between preserving many-shot knowledge of previous domains and leveraging new data to improve few-shot class performance on earlier tasks. Extensive experiments on four benchmark datasets demonstrate DCE’s state-of-the-art performance.
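To make the selector mechanism concrete, here is a minimal PyTorch sketch of balanced Gaussian pseudo-feature sampling and a single selector update. It assumes per-class feature statistics are stored as a mean with a diagonal variance, and that the selector softly weights the class logits of already-trained frequency-group experts; all names (`sample_balanced_pseudo_features`, `selector_training_step`, `expert_heads`) are illustrative, not the authors' actual implementation. See the linked repository for the real code.

```python
import torch
import torch.nn.functional as F

def sample_balanced_pseudo_features(class_stats, n_per_class):
    """Draw an equal number of pseudo-features per class from stored
    per-class Gaussian statistics (assumed: mean + diagonal variance)."""
    feats, labels = [], []
    for cls, (mu, var) in class_stats.items():
        eps = torch.randn(n_per_class, mu.numel())
        feats.append(mu + eps * var.sqrt())  # reparameterized Gaussian draw
        labels.append(torch.full((n_per_class,), cls, dtype=torch.long))
    return torch.cat(feats), torch.cat(labels)

def selector_training_step(selector, expert_heads, class_stats, optimizer,
                           n_per_class=8):
    """One update of the dynamic expert selector on a class-balanced
    pseudo batch: the selector softly weights each expert's class logits."""
    feats, labels = sample_balanced_pseudo_features(class_stats, n_per_class)
    weights = torch.softmax(selector(feats), dim=-1)                 # (B, E)
    logits = torch.stack([head(feats) for head in expert_heads], 1)  # (B, E, C)
    mixed = (weights.unsqueeze(-1) * logits).sum(dim=1)              # (B, C)
    loss = F.cross_entropy(mixed, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage: 3 classes with 16-d features, 2 frequency-group experts.
class_stats = {c: (torch.randn(16), torch.rand(16) + 0.1) for c in range(3)}
selector = torch.nn.Linear(16, 2)
expert_heads = [torch.nn.Linear(16, 3) for _ in range(2)]
optimizer = torch.optim.SGD(selector.parameters(), lr=0.1)
print(selector_training_step(selector, expert_heads, class_stats, optimizer))
```

Note that only the selector's parameters are updated here, mirroring the split described in the abstract: the experts address intra-domain imbalance during domain training, while the selector, trained on class-balanced pseudo-features, arbitrates between old many-shot knowledge and new few-shot gains.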
Lay Summary: Many machine learning systems struggle to keep learning over time, especially when the data they receive comes from different environments and is unevenly distributed. For example, in some domains a system may get many examples of certain categories and very few of others. When these imbalances exist both within and across learning stages, the model may forget what it learned before or fail to learn the less frequent categories well.

Our research addresses this problem by designing a method that helps the model balance both old and new knowledge. We train a group of specialized networks, each focusing on a different type of category, from common to rare, to better handle uneven data. Then we teach the system how to choose and combine these networks based on past experience and new data. This dual approach helps the model retain important past knowledge while still adapting to new information, especially for rare or previously under-learned categories. We show that our method significantly improves learning on several benchmark tasks.
Link To Code: https://github.com/Lain810/DCE
Primary Area: Deep Learning->Algorithms
Keywords: Domain-incremental learning, Class-imbalanced learning
Submission Number: 15062