Enhancing Generalized Zero-Shot Learning with Dynamic Selective Knowledge Distillation

Published: 2024 · Last Modified: 06 Mar 2025 · WASA (1) 2024 · CC BY-SA 4.0
Abstract: Generalized Zero-Shot Learning (GZSL) aims to recognize categories unseen during training. Despite the success of generative-based methods in GZSL, they often suffer from a bias toward seen data, which degrades performance. In this paper, we introduce a novel approach that improves GZSL methods through dynamic selective knowledge distillation. We address this bias from two angles: 1) ensuring the teacher generates unbiased unseen data, turning GZSL into a pseudo-fully-supervised problem; 2) incorporating the student's learning requirements into the teacher's feedback loop, improving the distribution of generated unseen data. During student training, we employ a filtering mechanism that selects unbiased unseen features from a well-trained teacher as guidance. During student feedback, we use the student's performance on unseen-data identification to regulate the teacher's output distribution. Additionally, we propose a dynamic fusion strategy that effectively combines seen and unseen information, providing suitable supervision for the student. Our framework is compatible with a range of generative-based methods, and extensive experiments on multiple benchmarks show that it outperforms state-of-the-art approaches.
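To make the filtering idea concrete, the selection step could be sketched as follows. This is a minimal illustrative sketch only: the function name, the use of teacher classification confidence as the "unbiased" criterion, and the threshold `tau` are assumptions for exposition, not the paper's actual mechanism.

```python
import numpy as np

def filter_unseen_features(features, teacher_probs, unseen_ids, tau=0.9):
    """Keep only generated unseen-class features that the teacher
    assigns confidently (>= tau) to some unseen class, as a simple
    proxy for filtering out seen-biased generations."""
    kept = []
    for feat, probs in zip(features, teacher_probs):
        pred = int(np.argmax(probs))
        if pred in unseen_ids and probs[pred] >= tau:
            kept.append(feat)
    return np.array(kept)

# Toy usage: two generated features; only the first is confidently unseen.
features = np.array([[1.0, 2.0], [3.0, 4.0]])
teacher_probs = np.array([[0.95, 0.05],   # confident, class 0 (unseen)
                          [0.40, 0.60]])  # class 1 (seen) wins
selected = filter_unseen_features(features, teacher_probs, unseen_ids={0})
```

Only the selected features would then serve as distillation targets for the student, keeping seen-biased generations out of its supervision.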