Zero-shot classification with unseen prototype learning

Published: 01 Jan 2023, Last Modified: 17 Nov 2023, Neural Comput. Appl. 2023
Abstract: Zero-shot learning (ZSL) aims to recognize instances from unseen classes by training a classification model with only seen data. Because the models are trained only on seen data, most existing approaches suffer from a classification bias that shifts predictions from unseen toward seen categories. In this paper, we tackle ZSL with a novel Unseen Prototype Learning (UPL) model, a simple yet effective framework that learns visual prototypes for unseen categories from the corresponding class-level semantic information and uses the learned prototypes directly as latent classifiers. Two constraints are proposed to improve the quality of the learned prototypes. First, an autoencoder framework learns visual prototypes from the semantic prototypes and reconstructs the original semantic information with a decoder, ensuring that the prototypes remain strongly correlated with their categories. Second, a triplet loss uses the per-class mean of visual features to supervise the learned visual prototypes. In this way, the visual prototypes become more discriminative and the classification bias is alleviated. Moreover, by following the episodic training paradigm from meta-learning, the model accumulates rich experience in predicting unseen classes. Extensive experiments on four datasets under both conventional and generalized ZSL settings demonstrate the effectiveness of the proposed UPL method.
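
To make the two constraints concrete, below is a minimal PyTorch sketch of the UPL idea as described in the abstract: an encoder maps class-level semantic prototypes to visual prototypes, a decoder reconstructs the semantics, and a triplet loss supervises each prototype with the per-class mean of visual features. All module names, layer sizes, the margin, and the loss weighting are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of the UPL constraints (assumed PyTorch implementation;
# dimensions, margin, and loss weight below are illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F


class UPL(nn.Module):
    """Maps class-level semantic prototypes to visual prototypes and back."""

    def __init__(self, sem_dim=85, vis_dim=2048, hid_dim=1024):
        super().__init__()
        # Encoder: semantic prototype -> visual prototype (used as a latent classifier)
        self.encoder = nn.Sequential(
            nn.Linear(sem_dim, hid_dim), nn.ReLU(),
            nn.Linear(hid_dim, vis_dim),
        )
        # Decoder: visual prototype -> reconstructed semantic prototype
        self.decoder = nn.Sequential(
            nn.Linear(vis_dim, hid_dim), nn.ReLU(),
            nn.Linear(hid_dim, sem_dim),
        )

    def forward(self, sem_protos):
        vis_protos = self.encoder(sem_protos)   # (C, vis_dim)
        sem_recon = self.decoder(vis_protos)    # (C, sem_dim)
        return vis_protos, sem_recon


def upl_loss(model, sem_protos, class_means, margin=0.5, lam=1.0):
    """Autoencoder reconstruction constraint plus triplet supervision.

    sem_protos:  (C, sem_dim) semantic vectors of the classes in one episode
    class_means: (C, vis_dim) mean visual feature of each of those classes
    """
    vis_protos, sem_recon = model(sem_protos)

    # Constraint 1: decoded prototypes should recover the original semantics,
    # keeping each visual prototype tied to its category.
    recon_loss = F.mse_loss(sem_recon, sem_protos)

    # Constraint 2: each class mean (anchor) should lie closer to its own learned
    # prototype (positive) than to a prototype of a different class (negative).
    negatives = vis_protos.roll(shifts=1, dims=0)
    triplet_loss = F.triplet_margin_loss(class_means, vis_protos, negatives,
                                         margin=margin)

    return triplet_loss + lam * recon_loss
```

Under these assumptions, inference reduces to a nearest-prototype decision: encode the unseen-class semantics once and assign each image feature to the class whose visual prototype scores highest, e.g. `(img_feats @ model.encoder(sem_protos).t()).argmax(dim=1)`. Episodic training, as mentioned in the abstract, would repeatedly sample subsets of seen classes and treat them as "unseen" when computing this loss.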