Semantic-Guided Multi-Attention Localization for Zero-Shot Learning

Yizhe Zhu, Jianwen Xie, Zhiqiang Tang, Xi Peng, Ahmed Elgammal

06 Sept 2019 (modified: 05 May 2023) · NeurIPS 2019
Abstract: Zero-shot learning extends conventional object classification to unseen-class recognition by introducing semantic representations of classes. Existing approaches predominantly focus on learning a proper mapping function for visual-semantic embedding while neglecting the effect of learning discriminative visual features. In this paper, we study the significance of discriminative region localization. We propose a semantic-guided multi-attention localization model that automatically discovers the most discriminative parts of objects for zero-shot learning without any human annotations. We further present a joint global and local feature learning model that takes as input the whole object as well as the detected parts, serving as global and local cues, and learns discriminative features to categorize objects based on semantic descriptions. Moreover, under the joint supervision of an embedding softmax loss and a class-agent triplet loss, the model is encouraged to learn features with high inter-class dispersion and intra-class compactness. We conduct comprehensive experiments on three widely used zero-shot learning benchmarks and show that our proposed approach improves the state-of-the-art results by a considerable margin with different types of semantic representations.
Code Link: https://github.com/EthanZhu90/ZSL_SGMA
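As a rough illustration of the joint supervision described in the abstract, the sketch below combines an embedding softmax loss (cross-entropy over visual-semantic compatibility scores) with a class-agent-style triplet loss. This is a minimal sketch, not the authors' exact formulation: the dot-product compatibility, the hardest-negative mining, and the `margin` and `triplet_weight` values are all assumptions; consult the paper and the linked repository for the actual losses.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointEmbeddingLoss(nn.Module):
    """Sketch of joint supervision: embedding softmax plus a class-agent
    triplet loss. Hypothetical simplification of the paper's objective."""

    def __init__(self, margin=0.2, triplet_weight=1.0):  # assumed hyperparameters
        super().__init__()
        self.margin = margin
        self.triplet_weight = triplet_weight

    def forward(self, visual_emb, class_semantics, labels):
        # visual_emb:      (B, D) image features projected into semantic space
        # class_semantics: (C, D) per-class semantic vectors (e.g., attributes)
        # labels:          (B,)   ground-truth class indices
        logits = visual_emb @ class_semantics.t()       # (B, C) compatibility scores
        softmax_loss = F.cross_entropy(logits, labels)  # embedding softmax loss

        # Class-agent triplet term (assumed form): pull each feature toward its
        # own class semantic vector (the "agent") and push it away from the
        # hardest competing class by at least the margin.
        pos = logits.gather(1, labels.unsqueeze(1)).squeeze(1)    # score with own class
        neg_logits = logits.clone()
        neg_logits.scatter_(1, labels.unsqueeze(1), float('-inf'))
        neg = neg_logits.max(dim=1).values                        # hardest negative class
        triplet_loss = F.relu(self.margin - pos + neg).mean()

        return softmax_loss + self.triplet_weight * triplet_loss

# Usage with illustrative shapes:
# loss_fn = JointEmbeddingLoss(margin=0.2)
# loss = loss_fn(visual_emb, class_semantics, labels)
```

Pairing the two terms this way matches the abstract's stated goal: the softmax term drives correct categorization against the semantic class representations, while the triplet margin explicitly enforces inter-class dispersion and intra-class compactness in the embedding space.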