EXPLORING FEW-SHOT IMAGE GENERATION WITH MINIMIZED RISK OF OVERFITTING

Published: 18 Sept 2024 (modified: 15 Nov 2024) · ICLR 2025 Conference Withdrawn Submission · CC BY 4.0
Keywords: few-shot learning, generative model, diffusion model
TL;DR: We present a novel representation learning framework for few-shot image generation, featuring a tunable parameter to explicitly mitigate overfitting while adapting to a specific domain.
Abstract: Few-shot image generation (FSIG) using deep generative models (DGMs) presents a significant challenge: accurately estimating the distribution of the target domain from extremely limited samples. Recent work has addressed the problem with a transfer learning approach, i.e., fine-tuning a DGM pre-trained on a large-scale source domain dataset and then adapting it to the target domain with very limited samples. However, despite various proposed regularization techniques, existing frameworks lack a systematic mechanism to analyze the degree of overfitting, relying primarily on empirical validation without rigorous theoretical grounding. We present Few-Shot Diffusion-regularized Representation Learning (FS-DRL), an approach designed to minimize the risk of overfitting while preserving distribution consistency during target image adaptation. Our method is distinct from conventional methods in two respects. First, instead of fine-tuning, FS-DRL employs a novel scalable Invariant Guidance Matrix (IGM) during the diffusion process, which acts as a regularizer in the feature space of the model. The IGM is designed to have the same dimensionality as the target images, effectively constraining its capacity and encouraging it to learn a low-dimensional manifold that captures the essential structure of the target domain. Second, our method introduces a controllable parameter called the sharing degree, which determines how many target images correspond to each IGM, enabling a fine-grained balance between overfitting risk and model flexibility and thus providing a quantifiable mechanism to analyze and mitigate overfitting. Extensive experiments demonstrate that our approach effectively mitigates overfitting, enabling efficient and robust few-shot learning across diverse domains.
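The sharing-degree mechanism described in the abstract can be sketched as a simple bookkeeping rule: with N target images and a sharing degree of k, the method maintains ⌈N/k⌉ IGMs, and each image is regularized against the IGM of its group. The helper names below (`num_igms`, `assign_igms`) are illustrative assumptions, not identifiers from the paper:

```python
import math

def num_igms(n_images: int, sharing_degree: int) -> int:
    """Number of IGMs needed when each IGM is shared by up to
    `sharing_degree` target images (assumed grouping rule)."""
    return math.ceil(n_images / sharing_degree)

def assign_igms(n_images: int, sharing_degree: int) -> list[int]:
    """Map each target-image index to the index of its shared IGM.
    Consecutive images are grouped together (an assumption for
    illustration; the paper does not specify the grouping)."""
    return [i // sharing_degree for i in range(n_images)]

# A sharing degree of 1 gives one IGM per image (maximal flexibility,
# highest overfitting risk); a sharing degree equal to the dataset size
# gives a single IGM for the whole domain (strongest regularization).
print(num_igms(10, 1))        # one IGM per image
print(num_igms(10, 10))       # a single shared IGM
print(assign_igms(6, 2))      # pairs of images share an IGM
```

This makes the trade-off quantifiable: sweeping the sharing degree from 1 to N interpolates between per-image memorization and a single domain-wide constraint.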
Primary Area: generative models
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 1537