Abstract: Owing to the lack of diversity in generated samples, most existing generative models cannot achieve satisfactory performance on Generalized Zero-Shot Learning (GZSL) tasks. Generative methods applied to GZSL mainly focus on data generation over the entire class domain and pay insufficient attention to intra-instance relationships, which severely limits the robustness of the generator. We therefore propose a model that leverages the Cramer distance and the Wasserstein GAN divergence to generate diverse visual features, which effectively alleviates the domain shift problem and facilitates better classification than the traditional Wasserstein distance and its variants. In addition, we use a perturbation-based attack strategy to constrain the range of generated features, so that overly aberrant features are not generated and do not lead to incorrect classification results. Finally, building on the above, we employ two different classifiers to obtain better GZSL performance: the traditional softmax classifier and a normalized-prototype classifier. Extensive experimental results show that our proposed method outperforms most state-of-the-art methods on five benchmark datasets under the generalized zero-shot learning setting and various evaluation criteria.