Image Synthesis Under Limited Data: A Survey and Taxonomy

Published: 01 Jan 2025, Last Modified: 20 Jul 2025. Int. J. Comput. Vis. 2025. License: CC BY-SA 4.0
Abstract: Deep generative models, which aim to reproduce the data distribution in order to produce novel images, have made unprecedented advancements in recent years. However, one critical prerequisite for their tremendous success is the availability of a sufficient number of training samples, which in turn demands massive computational resources. When trained on limited data, generative models tend to suffer from severe performance deterioration due to overfitting and memorization. Accordingly, researchers have recently devoted considerable attention to developing novel models capable of generating plausible and diverse images from limited training data. Despite numerous efforts to enhance training stability and synthesis quality in limited-data scenarios, there is no systematic survey that provides (1) a clear problem definition, the associated challenges, and a taxonomy of the various tasks; (2) an in-depth analysis of the pros, cons, and limitations of the existing literature; and (3) a thorough discussion of potential applications and future directions in this field. To fill this gap and provide an informative introduction for researchers who are new to this topic, this survey offers a comprehensive review and a novel taxonomy of the development of image synthesis under limited data. In particular, it covers the problem definition, requirements, main solutions, popular benchmarks, and remaining challenges in a comprehensive and all-around manner. We hope this survey provides an informative overview and a valuable resource for researchers and practitioners. In addition to the referenced literature, we maintain a continuously updated repository to track the latest advances at awesome-few-shot-generation.