A transformer-based dual contrastive learning approach for zero-shot learning

Published: 01 Jan 2025 · Last Modified: 15 Apr 2025 · Neurocomputing 2025 · CC BY-SA 4.0
Abstract: The goal of zero-shot learning is to leverage attribute information from seen classes to generalize the learned knowledge to unseen classes. However, current algorithms often overlook the fact that the same attribute may exhibit different visual features across domains, leading to domain shift when knowledge is transferred. Furthermore, for visual feature extraction, networks such as ResNet are ineffective at capturing global information from images, which adversely impacts recognition accuracy. To address these challenges, we propose an end-to-end Transformer-Based Dual Contrastive Learning Approach (TFDNet) for zero-shot learning. The network leverages the Vision Transformer (ViT) to extract visual features and includes an attribute-localization mechanism that identifies the image regions most relevant to each attribute. It then employs a dual contrastive learning method as a constraint, optimizing the learning process to better capture global feature representations. The proposed method makes the classifier more robust and enhances its ability to discriminate and generalize to unseen classes. Experimental results on three public datasets demonstrate the superiority of TFDNet over current state-of-the-art algorithms, validating its effectiveness in zero-shot learning.
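To make the pipeline described in the abstract concrete, the sketch below illustrates one plausible reading of it in PyTorch: ViT patch features are pooled per attribute via attention (attribute localization), attribute scores are matched against a class-attribute matrix, and a symmetric two-direction InfoNCE-style objective stands in for the dual contrastive constraint. The paper's exact losses and layer choices are not given here, so every function name, shape, and loss term below is an illustrative assumption, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def dual_contrastive_loss(region_feats, attr_embeds, labels, class_attrs, tau=0.1):
    """Illustrative sketch of a TFDNet-style objective (hypothetical formulation).

    region_feats: (B, P, D) patch features from a ViT backbone
    attr_embeds:  (A, D)    learnable attribute embeddings
    labels:       (B,)      ground-truth class indices for the batch
    class_attrs:  (C, A)    class-attribute matrix (e.g., from CUB/AWA2/SUN)
    """
    B = region_feats.shape[0]

    # Attribute localization: attend over patches to pool one visual
    # feature per attribute, focusing on the most relevant regions.
    attn = torch.einsum('bpd,ad->bap', region_feats, attr_embeds)   # (B, A, P)
    attn = attn.softmax(dim=-1)
    attr_visual = torch.einsum('bap,bpd->bad', attn, region_feats)  # (B, A, D)

    # Attribute scores: similarity between localized visual features
    # and the corresponding attribute embeddings.
    scores = F.cosine_similarity(attr_visual, attr_embeds.unsqueeze(0), dim=-1)  # (B, A)

    # Class logits via compatibility with the class-attribute matrix;
    # at test time the matrix can include unseen classes.
    logits = scores @ class_attrs.t() / tau                         # (B, C)

    # "Dual" contrast, assumed here as a symmetric InfoNCE pattern:
    # (1) image-to-class classification, (2) in-batch image/semantic
    # matching contrasted in both directions.
    loss_v2a = F.cross_entropy(logits, labels)
    batch_attrs = class_attrs[labels]                               # (B, A)
    sim = (scores @ batch_attrs.t()) / tau                          # (B, B)
    targets = torch.arange(B, device=sim.device)
    loss_a2v = 0.5 * (F.cross_entropy(sim, targets) +
                      F.cross_entropy(sim.t(), targets))
    return loss_v2a + loss_a2v

# Usage with dummy shapes (312 attributes, 200 classes, as in CUB):
feats = torch.randn(8, 196, 768)          # ViT-B/16 patch tokens
attrs = torch.randn(312, 768)             # attribute embeddings
cls_attrs = torch.rand(200, 312)          # class-attribute matrix
loss = dual_contrastive_loss(feats, attrs, torch.randint(0, 200, (8,)), cls_attrs)
```

Because the classifier scores images against the class-attribute matrix rather than against fixed class weights, swapping in the unseen classes' attribute rows at inference is what enables zero-shot transfer in this kind of design.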