CLoG: Benchmarking Continual Learning of Image Generation Models

Published: 10 Oct 2024, Last Modified: 10 Oct 2024 · Continual FoMo Poster · CC BY 4.0
Keywords: Continual Learning, Generative Model, Diffusion Model
Abstract: Continual Learning (CL) poses a significant challenge in Artificial Intelligence: incrementally acquiring knowledge and skills without forgetting. While extensive research has focused on CL in the context of classification tasks, the advent of increasingly powerful generative models necessitates the exploration of Continual Learning of Generative models (CLoG). This paper advocates shifting the research focus from classification-based CL to CLoG. We systematically identify the unique challenges that CLoG presents compared to traditional classification-based CL. We adapt three types of existing CL methodologies—replay-based, regularization-based, and parameter-isolation-based methods—to generative tasks and introduce comprehensive benchmarks for CLoG with high diversity and broad task coverage. Our benchmarks and results yield intriguing insights that can be valuable for developing future CLoG methods. We believe shifting the research focus to CLoG will benefit the CL community and illuminate the path for AI-generated content (AIGC) in a lifelong learning paradigm.
Submission Number: 23
