Out-of-Distribution Forgetting: Vulnerability of Continual Learning to Intra-class Distribution Shift
Abstract: Continual learning (CL) is a key technique that enables neural networks to acquire new tasks while retaining performance on previous ones. Standard CL evaluations revisit old tasks after learning new ones and assume a stable data distribution, which is often unrealistic. Meanwhile, it is well known that the out-of-distribution (OOD) problem severely impairs a network's ability to generalize, yet little research has considered how CL affects the generalization ability of neural networks. Our work highlights a special form of catastrophic forgetting caused by the OOD problem in CL settings. Through continual image classification experiments, we find that introducing a tiny intra-class distribution shift within a specific category significantly impairs the recognition accuracy of many CL methods. We name this phenomenon out-of-distribution forgetting (OODF). Moreover, the performance degradation caused by OODF is specific to CL, as the same level of distribution shift has only a negligible effect in the joint-learning scenario. We verify that most CL strategies, except parameter-isolation ones, are vulnerable to OODF. Taken together, our work identifies an under-explored risk in CL, highlighting the importance of developing approaches that can overcome OODF. Code available: https://github.com/Hiroid/OODF.
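Below is a minimal illustrative sketch (not the authors' experimental code) of how a tiny, fixed intra-class distribution shift could be injected into one category of a task's training set before running a CL method. The class index, shift strength, and the rotation-based perturbation are assumptions chosen purely for demonstration.

```python
# Illustrative sketch: apply a small fixed transformation (a slight rotation)
# to every sample of one class, creating an intra-class distribution shift.
# The choice of rotation, angle, and class index is hypothetical.
import torchvision.transforms.functional as TF
from torch.utils.data import Dataset


class ShiftedClassDataset(Dataset):
    """Wraps an image-classification dataset and perturbs one class only."""

    def __init__(self, base_dataset, shifted_class: int, angle: float = 5.0):
        self.base = base_dataset          # yields (image, label) pairs
        self.shifted_class = shifted_class
        self.angle = angle                # tiny shift; larger angle = stronger shift

    def __len__(self):
        return len(self.base)

    def __getitem__(self, idx):
        img, label = self.base[idx]       # PIL image or tensor [C, H, W]
        if label == self.shifted_class:
            img = TF.rotate(img, self.angle)  # intra-class distribution shift
        return img, label


# Example usage (hypothetical task split):
#   task_train = ShiftedClassDataset(task_train, shifted_class=3)
# Train any CL method on the task sequence with this wrapped split, then
# compare accuracy against a joint-training run given the same shift.
```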