Image-Free Zero-Shot Learning via Adaptive Semantic-Guided Classifier Injection

18 Sept 2025 (modified: 03 Dec 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Zero-Shot Learning, Image-Free Learning, Model-Free Learning, Semantic Guidance
TL;DR: We propose a novel Image-free Zero-Shot Learning framework that eliminates reliance on manually curated descriptions.
Abstract: *Zero-Shot Learning* (ZSL) aims to classify images from *unseen* classes by leveraging semantic relationships with *seen* classes. Most ZSL methods require access to visual data for training or adaptation, limiting their applicability in image-free scenarios. *Image-free Zero-Shot Learning* (I-ZSL) addresses this challenge by enabling pre-trained models to recognize unseen classes without image data. However, existing I-ZSL approaches rely on pre-defined class descriptions and task-agnostic text encoders, which often fail to capture domain-specific semantics. We propose *Adaptive Semantic-Guided Classifier Injection* (ASCI), a novel I-ZSL framework that eliminates reliance on manually curated descriptions. ASCI leverages large language models to generate class-pair affinity descriptions, capturing structured relationships between seen and unseen classes. A trainable text encoder refines these descriptions, ensuring alignment with task-specific semantics. Dynamically computed affinity scores guide the injection of robust classifiers for unseen classes while preserving the structural consistency of the pre-trained classification space. Experiments on benchmark datasets demonstrate that ASCI outperforms existing I-ZSL methods, particularly in fine-grained classification tasks.
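The abstract's core mechanism, composing classifiers for unseen classes from affinity scores over the seen-class head, can be illustrated with a minimal sketch. This is an illustrative assumption, not the paper's actual implementation: `inject_unseen_classifiers`, the softmax temperature, and the mixture form are all hypothetical choices standing in for ASCI's learned injection.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def inject_unseen_classifiers(W_seen, affinity, temperature=0.1):
    """Hypothetical sketch: build unseen-class classifier weights as an
    affinity-weighted mixture of the frozen seen-class weights, then
    append them to the classification head. The seen rows are left
    untouched, preserving the pre-trained classification space.

    W_seen:   (C_s, d) seen-class classifier weights
    affinity: (C_u, C_s) seen/unseen affinity scores (e.g. similarities
              of LLM-generated class-pair descriptions in text-encoder space)
    """
    mix = softmax(affinity / temperature, axis=1)  # (C_u, C_s) mixture weights
    W_unseen = mix @ W_seen                        # (C_u, d) injected classifiers
    return np.concatenate([W_seen, W_unseen], axis=0)

# Toy example: 3 seen classes, 2 unseen classes, 4-dim features.
rng = np.random.default_rng(0)
W_seen = rng.normal(size=(3, 4))
affinity = rng.normal(size=(2, 3))
W_full = inject_unseen_classifiers(W_seen, affinity)
print(W_full.shape)  # (5, 4): seen head plus two injected unseen classifiers
```

The low temperature sharpens the mixture so each injected classifier is dominated by its most related seen classes, which loosely mirrors the paper's claim that affinity scores guide the injection while the seen-class structure is preserved.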
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 10883