Less Forgetting, More OOD Generalization: Adaptive Augmented Reweighted Replay (AA-RR) for Continual Learning
Abstract: Machine learning models often forget previously learned classes when trained sequentially. Rehearsal-based methods mitigate this by replaying stored samples, but their reliance on memorization leads to poor out-of-distribution (OOD) generalization—a problem that remains largely unstudied. This memorization is driven by unbalanced gradient updates, spurious correlations, and class-imbalanced replay buffers. To address these issues, we introduce Adaptive Augmented Reweighted Replay (AA-RR), a lightweight framework designed to improve generalization in rehearsal-based continual learning (CL). AA-RR applies adaptive, class-aware loss reweighting to correct gradient imbalance while accounting for data recency and limited buffer capacity. It further incorporates data-centric augmentation and a principled sample-selection strategy based on forgetting dynamics to retain representative, consistently learned examples. Experiments on standard CL benchmarks show that AA-RR markedly boosts generalization and surpasses state-of-the-art baselines, especially under covariate shift.
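The abstract names class-aware loss reweighting over a replay buffer as one AA-RR component but gives no formula. As a minimal illustrative sketch only (not the paper's actual method), one common way to correct gradient imbalance from a class-skewed replay batch is inverse-frequency weighting, normalized so the weights average to 1; the function names and smoothing term below are hypothetical:

```python
from collections import Counter

def class_weights(labels, smoothing=1.0):
    """Inverse-frequency class weights for one (replay) batch.

    Weights are normalized so their per-sample mean is 1.0, which keeps
    the overall loss scale unchanged while up-weighting rare classes.
    The additive `smoothing` term is an assumption, not from the paper.
    """
    counts = Counter(labels)
    raw = {c: 1.0 / (n + smoothing) for c, n in counts.items()}
    mean = sum(raw[y] for y in labels) / len(labels)
    return {c: w / mean for c, w in raw.items()}

def reweighted_loss(per_sample_losses, labels):
    """Mean of per-sample losses scaled by their class weights."""
    w = class_weights(labels)
    return sum(w[y] * l for y, l in zip(labels, per_sample_losses)) / len(labels)
```

For example, on a batch with labels `[0, 0, 0, 1]` the minority class 1 receives a larger weight than class 0, so its gradient contribution is amplified; AA-RR's actual scheme additionally adapts for data recency and buffer capacity, which this sketch omits.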
Submission Type: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Yen-Chang_Hsu1
Submission Number: 7454