Abstract: Data-Free Knowledge Distillation (DFKD) enables knowledge transfer from teacher networks without access to the original training data. However, generator-based DFKD methods often suffer from insufficient diversity or low confidence in the synthetic images, which degrades student network performance. This paper introduces DFMC, a generative feature-driven framework that mitigates these inherent limitations of DFKD. We propose exploiting semantic descriptions across generative feature domains to guide augmentation strategies, avoiding the random, abstract inputs caused by inconsistent semantic quality. We then apply noise to the generative features to indirectly produce contrastive learning pairs, limiting the sampling range of the feature domain and encouraging the student network to learn domain-invariant features. Finally, we guide the student network to closely mimic the teacher's layer-wise implicit classification behavior on the augmented synthetic images. Extensive experiments across various datasets and downstream tasks demonstrate the effectiveness of DFMC, which achieves significant improvements while preventing the student network from overfitting to semantically ambiguous images.
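To make the two learning signals in the abstract concrete, the following is a minimal PyTorch-style sketch of (a) building contrastive pairs by perturbing generative features with noise, and (b) a layer-wise mimicry term. The names `decoder`, `student.embed`, the Gaussian noise model, the scale `sigma`, and the NT-Xent/MSE losses are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def contrastive_pairs_from_features(feat, sigma=0.1):
    """Perturb a generative feature map with Gaussian noise to form two
    views of the same sample (noise model and sigma are assumptions)."""
    view_a = feat + sigma * torch.randn_like(feat)
    view_b = feat + sigma * torch.randn_like(feat)
    return view_a, view_b

def domain_invariance_loss(student, decoder, feat, tau=0.5):
    """Pull the student's embeddings of the two noise-perturbed views
    together (NT-Xent-style stand-in; `decoder` maps features to images
    and `student.embed` returns a representation -- both hypothetical)."""
    view_a, view_b = contrastive_pairs_from_features(feat)
    z_a = F.normalize(student.embed(decoder(view_a)), dim=1)
    z_b = F.normalize(student.embed(decoder(view_b)), dim=1)
    logits = z_a @ z_b.t() / tau                      # pairwise similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, targets)

def layerwise_mimic_loss(student_feats, teacher_feats):
    """Match intermediate activations layer by layer; plain MSE is used
    here as a stand-in for the paper's implicit-classification mimicry."""
    return sum(F.mse_loss(s, t.detach())
               for s, t in zip(student_feats, teacher_feats))
```

In a training loop, these two terms would be weighted and added to the usual logit-distillation loss; the weighting scheme is not specified in the abstract.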
External IDs: dblp:journals/tcsv/ZhangXWXCXXG25