Hierarchical Divide-and-Conquer Grouping for Zero-Shot Learning

18 Sept 2024 (modified: 13 Nov 2024) · ICLR 2025 Conference Withdrawn Submission · CC BY 4.0
Keywords: Generalized Zero-Shot Learning, Visual Language Models, Hierarchical Divide-and-Conquer
Abstract: Generalized Zero-Shot Learning (GZSL) faces a key challenge: transferring knowledge from base classes to classify samples from both base and novel classes. This transfer paradigm inherently risks prediction bias, wherein test samples are disproportionately assigned to base classes because the model grows familiar with, and overfits to, those classes during training. To tackle this bias, we introduce a divide-and-conquer strategy that segregates the unified label space into distinct base and novel subspaces. Within each subspace, we train a customized model, ensuring learning specialized to the characteristics of the respective classes. To compensate for the absence of novel-class training data, we use off-the-shelf diffusion-based generative models, conditioned on class-level descriptions crafted by Large Language Models (LLMs), to synthesize diverse visual samples for the novel classes. To further relieve class confusion within each subspace, we divide each subspace into two smaller subspaces, whose class memberships are obtained by unsupervised clustering in the text embedding space. With this hierarchical divide-and-conquer approach, each test sample is first routed to a smaller subspace and then assigned a class label by the specialized model trained on the classes present in that subspace. Comprehensive evaluations across three GZSL benchmarks underscore the effectiveness of our method, demonstrating that it performs competitively with, and often outperforms, existing approaches.
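To make the routing concrete, below is a minimal sketch of the hierarchical divide-and-conquer idea described in the abstract. It is an illustration under assumptions not specified there: CLIP as the text/image encoder, k-means as the unsupervised clustering step, and zero-shot nearest-text-embedding matching standing in for the specialized per-subspace models; the diffusion-based synthesis of novel-class samples is only indicated in a comment. All class names and model identifiers are hypothetical examples.

```python
# Sketch of hierarchical divide-and-conquer grouping for GZSL.
# Assumptions (not from the paper): CLIP text/image encoders, k-means
# clustering, and nearest-text-embedding matching as the "specialized model".
import torch
from sklearn.cluster import KMeans
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def text_embeddings(class_names):
    """Embed class names with the CLIP text encoder (L2-normalized)."""
    inputs = processor(text=[f"a photo of a {c}" for c in class_names],
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        feats = model.get_text_features(**inputs)
    return torch.nn.functional.normalize(feats, dim=-1)

def split_into_subspaces(class_names, k=2):
    """Unsupervised clustering in the text-embedding space -> k class subsets."""
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(
        text_embeddings(class_names).numpy())
    return [[c for c, l in zip(class_names, labels) if l == i] for i in range(k)]

# Level 1: separate base and novel label subspaces.
base_classes = ["horse", "zebra", "dolphin", "whale"]   # hypothetical
novel_classes = ["giraffe", "seal"]                     # hypothetical
# Level 2: split each subspace again by clustering in text-embedding space.
hierarchy = {name: split_into_subspaces(classes)
             for name, classes in [("base", base_classes),
                                   ("novel", novel_classes)]}

# Novel-class training samples (not used in this sketch) could be synthesized
# with an off-the-shelf diffusion pipeline conditioned on LLM descriptions, e.g.:
#   from diffusers import StableDiffusionPipeline
#   pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
#   fake = pipe("a photo of a giraffe, <LLM-crafted description>").images[0]

def route_and_classify(image, hierarchy):
    """Route the image to the nearest leaf subspace, then classify within it."""
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        img = torch.nn.functional.normalize(
            model.get_image_features(**inputs), dim=-1)
    leaves = [subset for groups in hierarchy.values() for subset in groups]
    # Score each leaf by the centroid of its class text embeddings.
    centroids = torch.stack([text_embeddings(s).mean(0) for s in leaves])
    leaf = leaves[int((img @ centroids.T).argmax())]
    # Stand-in for the specialized model: nearest text embedding in the leaf.
    return leaf[int((img @ text_embeddings(leaf).T).argmax())]

# Usage: route_and_classify(PIL.Image.open("test.jpg"), hierarchy)
```

In the paper's full method, each leaf would instead hold a trained classifier over its class subset; the nearest-centroid routing above is only the simplest stand-in consistent with the two-level splitting the abstract describes.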
Primary Area: transfer learning, meta learning, and lifelong learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 1566