Progressive Prototype Evolving for Dual-Forgetting Mitigation in Non-Exemplar Online Continual Learning

Published: 20 Jul 2024, Last Modified: 21 Jul 2024 · MM2024 Poster · CC BY 4.0
Abstract: Online Continual Learning (OCL) aims to learn a model from a sequence of single-pass data, and typically faces catastrophic forgetting both between different learning stages and within a stage. Existing OCL methods address these issues by replaying part of the previous data, which inevitably raises data privacy concerns and contradicts the online learning setting, where data can only be accessed once. Moreover, their performance drops dramatically without a replay buffer. In this paper, we propose a Non-Exemplar Online Continual Learning method named Progressive Prototype Evolving (PPE). The core of PPE is to progressively learn class-specific prototypes during the online learning phase without reusing any previously seen data. Meanwhile, the progressive prototypes of the current learning stage, serving as the accumulated knowledge of different classes, are fed back to the model to mitigate intra-stage forgetting. Additionally, to resist inter-stage forgetting, we introduce the Prototype Similarity Preserving and Prototype-Guided Gradient Constraint modules, which distill and leverage the historical knowledge conveyed by the prototypes to regularize the single-pass model learning. Extensive experiments on three widely used datasets demonstrate the superiority of the proposed PPE over state-of-the-art exemplar-based OCL approaches. Our code will be released.
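The central idea of progressively evolving class prototypes without storing exemplars can be illustrated with a minimal sketch. The class name `ProgressivePrototypes`, the incremental-mean update rule, and the nearest-prototype classifier below are assumptions for illustration; the paper's actual PPE update and the two regularization modules are more involved.

```python
import numpy as np

class ProgressivePrototypes:
    """Hypothetical sketch: per-class prototypes maintained as running means
    of feature embeddings, updated in a single pass over the stream."""

    def __init__(self, feat_dim):
        self.feat_dim = feat_dim
        self.prototypes = {}  # class id -> prototype vector
        self.counts = {}      # class id -> number of samples folded in

    def update(self, features, labels):
        # Each sample is folded into its class prototype once and then
        # discarded, so no raw exemplars are ever stored.
        for f, y in zip(features, labels):
            f = np.asarray(f, dtype=np.float64)
            if y not in self.prototypes:
                self.prototypes[y] = f.copy()
                self.counts[y] = 1
            else:
                self.counts[y] += 1
                # incremental mean: p <- p + (f - p) / n
                self.prototypes[y] += (f - self.prototypes[y]) / self.counts[y]

    def classify(self, feature):
        # Nearest-prototype prediction, one plausible way to use the
        # accumulated class knowledge at inference time.
        classes = list(self.prototypes)
        dists = [np.linalg.norm(feature - self.prototypes[c]) for c in classes]
        return classes[int(np.argmin(dists))]
```

Because the running mean only needs the current prototype and a counter, memory stays constant per class regardless of stream length, which is what makes the non-exemplar setting feasible.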
Primary Subject Area: [Experience] Multimedia Applications
Secondary Subject Area: [Content] Vision and Language
Relevance To Conference: Our work focuses on Non-Exemplar Online Continual Learning (NEOCL). Online Continual Learning (OCL) plays a pivotal role in advancing multimedia and multimodal processing by enabling multimedia models to efficiently handle dynamic data streams. This allows multimedia systems to continuously update their models in real time, incorporating new information and adapting to changing trends and patterns in multimedia content. Most existing OCL methods address the catastrophic forgetting of continual learning by replaying part of the previous data, which inevitably raises data privacy concerns. We therefore focus on a challenging scenario where no previous data can be accessed during training and propose a progressive prototype evolving method to mitigate catastrophic forgetting. Experiments show that our method achieves superior results to state-of-the-art exemplar-based online continual learning approaches. By leveraging our non-exemplar online continual learning approach, multimedia and multimodal processing systems can remain up-to-date without the burden of storing previous training data, ultimately advancing the capabilities of intelligent technologies.
Supplementary Material: zip
Submission Number: 340