Keywords: Generative Models, Representation Learning
Abstract: Generative and representation models, whether trained or evolved independently of one another, require high-quality, diverse training data, which limits their advancement.
Specifically, self-supervised learning, a popular paradigm for representation learning, reduces the reliance of representation models on labeled data.
However, it still necessitates large datasets, specialized data augmentation techniques, and tailored training strategies.
While generative models have shown promise in producing diverse data, ensuring the semantic consistency of the generated data remains a challenge.
This paper introduces a novel co-evolution framework (referred to as CORE) designed to address these challenges through the mutual enhancement of generative and representation models.
Without incurring unacceptable additional training overhead compared to independent training, the generative model leverages semantic information from the representation model to improve the quality and semantic consistency of the generated data.
Simultaneously, the representation model gains from the diverse data produced by the generative model, leading to richer and more generalized representations.
By iteratively applying this co-evolution framework, both models can be continuously enhanced.
Experiments demonstrate the effectiveness of the co-evolution framework across datasets of varying scales and resolutions.
For example, applying our framework to LDM reduces the FID from $43.40$ to $20.13$ on unconditional generation on the ImageNet-1K dataset.
In more challenging scenarios, such as tasks with limited data, this framework significantly outperforms independent training of either the generative or the representation model.
Furthermore, employing the framework in a self-consuming loop effectively mitigates model collapse.
Our code will be publicly released.
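The abstract only sketches the co-evolution loop at a high level. Below is a minimal, hypothetical illustration of how one alternating round of such a loop could be structured; it is not the authors' released CORE implementation, and all module names, loss choices, and hyperparameters are assumptions made purely for illustration (toy MLPs and MSE objectives stand in for the actual generative model, self-supervised encoder, and their losses).

```python
# Hypothetical sketch of an iterative co-evolution loop (illustration only;
# not the authors' CORE implementation). A toy encoder plays the role of the
# representation model and a toy decoder plays the role of the generator,
# trained on random tensors as a stand-in for real data.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
dim, z_dim, batch = 64, 16, 32

representation_model = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, z_dim))
generator = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(), nn.Linear(128, dim))

opt_rep = torch.optim.Adam(representation_model.parameters(), lr=1e-3)
opt_gen = torch.optim.Adam(generator.parameters(), lr=1e-3)

real_data = torch.randn(256, dim)  # stand-in for the real training set

for round_idx in range(3):  # iterative co-evolution rounds
    # Step 1: update the generator with semantic guidance from the (frozen)
    # representation model: generated samples should reconstruct the input
    # and re-encode to the semantic codes they were conditioned on.
    for _ in range(10):
        x = real_data[torch.randint(0, len(real_data), (batch,))]
        with torch.no_grad():
            z = representation_model(x)      # semantic codes of real data
        x_gen = generator(z)                 # generate conditioned on the codes
        z_rec = representation_model(x_gen)  # re-encode the generated samples
        loss_gen = F.mse_loss(x_gen, x) + F.mse_loss(z_rec, z)
        opt_gen.zero_grad(); loss_gen.backward(); opt_gen.step()

    # Step 2: update the representation model on generated data, which acts
    # as additional diverse views of the real samples.
    for _ in range(10):
        x = real_data[torch.randint(0, len(real_data), (batch,))]
        with torch.no_grad():
            x_aug = generator(representation_model(x))  # generated views
        z_real, z_aug = representation_model(x), representation_model(x_aug)
        loss_rep = F.mse_loss(z_aug, z_real)             # toy alignment objective
        opt_rep.zero_grad(); loss_rep.backward(); opt_rep.step()

    print(f"round {round_idx}: gen loss {loss_gen.item():.4f}, rep loss {loss_rep.item():.4f}")
```

In the actual framework, the generator would be a model such as LDM and the representation model a self-supervised encoder, with their respective training objectives replacing the toy MSE losses above.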
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 10318