Abstract: Zero-shot learning (ZSL) enables the recognition of novel classes by transferring semantic knowledge from seen to unseen categories. This knowledge, typically encapsulated in attribute descriptions, aids in identifying class-specific visual features and thus facilitates visual-semantic alignment, improving ZSL performance. However, real-world challenges such as distribution imbalance and attribute co-occurrence among instances often hinder the discernment of local variances in images, a problem exacerbated by the scarcity of fine-grained, region-specific attribute annotations. Moreover, the variability in visual presentation within categories can skew attribute-category associations. In response, we propose CREST, a bidirectional cross-modal ZSL approach. It begins by extracting representations for attribute and visual localization and employs Evidential Deep Learning (EDL) to measure the underlying epistemic uncertainty, thereby enhancing the model's resilience against hard negatives. CREST incorporates dual learning pathways, focusing on both visual-category and attribute-category alignments, to ensure robust correlation between latent and observable spaces. In addition, we introduce an uncertainty-informed cross-modal fusion technique to refine visual-attribute inference. Extensive experiments across multiple datasets demonstrate our model's effectiveness and distinctive explainability. Our code and data are available at: https://anonymous.4open.science/r/CREST-1CEC.
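For context on the EDL component: in the standard evidential formulation (Sensoy et al., 2018), the network outputs non-negative evidence that parameterizes a Dirichlet distribution, and epistemic uncertainty falls out in closed form. Below is a minimal PyTorch sketch of that computation; the function name and tensor shapes are illustrative and are not taken from the CREST codebase.

```python
import torch
import torch.nn.functional as F

def edl_uncertainty(logits: torch.Tensor):
    """Map raw outputs to Dirichlet belief masses and epistemic uncertainty.

    logits: (batch, K) raw scores over K classes (or attributes).
    Standard EDL: evidence e = softplus(logits), alpha = e + 1,
    S = sum(alpha); belief b_k = e_k / S and uncertainty u = K / S.
    """
    evidence = F.softplus(logits)               # non-negative evidence e_k
    alpha = evidence + 1.0                      # Dirichlet parameters alpha_k
    strength = alpha.sum(dim=-1, keepdim=True)  # Dirichlet strength S
    belief = evidence / strength                # per-class belief mass b_k
    uncertainty = logits.size(-1) / strength    # u = K / S, in (0, 1]
    return belief, uncertainty
```

A high u flags instances whose evidence is weak across all classes, which is the kind of signal the abstract refers to when describing resilience against hard negatives.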
Primary Subject Area: [Content] Vision and Language
Secondary Subject Area: [Content] Multimodal Fusion
Relevance To Conference: In this submission, we introduce CREST, a bidirectional cross-modal zero-shot learning (ZSL) framework designed to recognize unseen visual categories by leveraging semantic attribute transfer. To address the inherent challenge of limited fine-grained, region-specific annotations in multimedia content, CREST integrates Evidential Deep Learning (EDL) to quantify and navigate epistemic uncertainty, a significant step toward handling the imbalanced and co-occurring attribute distributions common in multimodal datasets.
The motivation behind CREST is to bridge the visual and semantic domains more effectively, countering the visual variability that weakens attribute-category associations in traditional ZSL methods. Through dual learning pathways, CREST aligns both visual-category and attribute-category relationships, strengthening the correlation between latent and observable spaces within multimedia frameworks.
We believe CREST advances ZSL through its fusion of uncertainty estimation and cross-modal reasoning, enhancing adaptability in multimedia and multimodal processing.
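To make the "fusion of uncertainty and cross-modal insights" concrete, here is a hypothetical sketch (ours, not the paper's actual operator) of one simple way the two pathways' predictions could be combined, weighting each by its evidential confidence:

```python
import torch

def uncertainty_weighted_fusion(p_vis: torch.Tensor, p_att: torch.Tensor,
                                u_vis: torch.Tensor, u_att: torch.Tensor,
                                eps: float = 1e-8) -> torch.Tensor:
    """Fuse two (batch, K) probability tensors from the visual-category and
    attribute-category pathways, weighting each by confidence (1 - u).

    u_vis / u_att: (batch, 1) epistemic uncertainties in (0, 1], e.g. the
    u = K / S values from the EDL sketch above.
    """
    fused = (1.0 - u_vis) * p_vis + (1.0 - u_att) * p_att
    return fused / (fused.sum(dim=-1, keepdim=True) + eps)  # renormalize
```

Dempster-Shafer combination of the two Dirichlet evidence vectors would be a natural alternative; either way, the pathway the model is less certain about contributes less to the final visual-attribute inference.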
Supplementary Material: zip
Submission Number: 5158