EqGAN: Reformation-based Feature Equalization Fusion for Few-shot Image Generation

Published: 2025 (ICASSP 2025) · Last Modified: 25 Jan 2026 · License: CC BY-SA 4.0
Abstract: Due to absent or mismatched semantic information, existing few-shot image generation methods suffer from unsatisfactory generation quality and diversity, yielding minimal benefit as data augmentation for downstream classification tasks. By reforming the contextual and textural information of features at different scales, we propose a novel Feature Equalization Fusion Generative Adversarial Network (EqGAN) for few-shot image generation. Specifically, we first decompose the encoded features into textural and structural components to mitigate the influence of irrelevant and redundant information. Based on feature correlation learning and an attention mechanism, we then obtain fused features by refining the different contents (i.e., textures and structures) with a more fine-grained semantic alignment. Moreover, an attention-based reconstruction loss and a consistency-based equalization loss are devised to improve training stability and generation performance. Comprehensive experiments on three public datasets demonstrate that EqGAN not only significantly improves the FID scores (by up to 14.10%) and LPIPS scores (by up to 3.17%) of generated images, but also outperforms the state of the art in downstream classification accuracy (by up to 3.89%).
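The abstract does not specify the concrete operators EqGAN uses, so the following is only an illustrative NumPy sketch of the general decompose-then-fuse idea it describes: split each feature map into a coarse "structure" component and a residual "texture" component, then fuse the two sources per component with a correlation-driven gate (the box-filter decomposition, the sigmoid gate, and all function names are assumptions, not the paper's method):

```python
import numpy as np

def decompose(feat, k=3):
    # Illustrative decomposition (not the paper's exact operator):
    # a k-by-k box filter gives a coarse "structure" map; the
    # residual serves as the "texture" component.
    pad = k // 2
    padded = np.pad(feat, ((0, 0), (pad, pad), (pad, pad)), mode="edge")
    structure = np.zeros_like(feat)
    C, H, W = feat.shape
    for i in range(H):
        for j in range(W):
            structure[:, i, j] = padded[:, i:i + k, j:j + k].mean(axis=(1, 2))
    return structure, feat - structure  # (structure, texture)

def equalized_fusion(feat_a, feat_b):
    # Hypothetical fusion: refine each content type separately,
    # gating the two sources by their per-channel correlation
    # (a crude stand-in for the paper's attention-based alignment).
    fused_parts = []
    for part_a, part_b in zip(decompose(feat_a), decompose(feat_b)):
        corr = np.sum(part_a * part_b, axis=(1, 2))   # per-channel correlation
        w = 1.0 / (1.0 + np.exp(-corr))               # sigmoid gate in [0, 1]
        w = w[:, None, None]
        fused_parts.append(w * part_a + (1.0 - w) * part_b)
    return sum(fused_parts)  # recombine structure + texture

rng = np.random.default_rng(0)
a = rng.normal(size=(4, 8, 8))  # toy C x H x W feature maps
b = rng.normal(size=(4, 8, 8))
fused = equalized_fusion(a, b)
print(fused.shape)  # (4, 8, 8)
```

In the actual model the gate would be learned jointly with the generator rather than computed from raw correlations; the sketch only shows why handling textures and structures separately lets each content type be aligned on its own terms before recombination.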