DFMC: Feature-Driven Data-Free Knowledge Distillation

Zherui Zhang, Rongtao Xu, Changwei Wang, Wenhao Xu, Shunpeng Chen, Shibiao Xu, Guangyuan Xu, Li Guo

Published: 01 Oct 2025 | Last Modified: 29 Jan 2026 | IEEE Transactions on Circuits and Systems for Video Technology | CC BY-SA 4.0
Abstract: Data-Free Knowledge Distillation (DFKD) enables knowledge transfer from teacher networks without access to the real dataset. However, generator-based DFKD methods often suffer from insufficient diversity or low confidence in synthetic images, which degrades student network performance. This paper introduces DFMC, a generative feature-driven framework that mitigates these inherent limitations of DFKD. We propose exploiting semantic descriptions between generative feature domains to guide augmentation strategies, avoiding the random abstract inputs caused by inconsistent semantic quality. We then apply noise to the generative features to produce contrastive learning pairs indirectly, limiting the sampling range of the feature domain and encouraging the student network to learn domain-invariant features. Finally, we guide the student network to closely mimic the teacher's layer-wise implicit classification behavior on the augmented synthetic images. Extensive experiments across various datasets and downstream tasks demonstrate the effectiveness of DFMC, which achieves significant improvements while preventing student networks from overfitting to semantically ambiguous images.
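The abstract's second step, producing contrastive pairs by perturbing generative features, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the Gaussian noise model, the noise scale `sigma`, and the InfoNCE-style objective are all our assumptions.

```python
import numpy as np

def noisy_views(features, sigma=0.1, rng=None):
    """Form two perturbed views of generator features as positive pairs.

    Gaussian noise and the magnitude `sigma` are illustrative assumptions;
    they bound how far samples drift within the feature domain.
    """
    rng = rng or np.random.default_rng(0)
    v1 = features + rng.normal(0.0, sigma, features.shape)
    v2 = features + rng.normal(0.0, sigma, features.shape)
    return v1, v2

def info_nce(v1, v2, tau=0.5):
    """InfoNCE-style loss: matching rows of v1/v2 are positives.

    Minimizing this pushes the two noisy views of the same feature
    together, encouraging noise- (domain-) invariant representations.
    """
    a = v1 / np.linalg.norm(v1, axis=1, keepdims=True)
    b = v2 / np.linalg.norm(v2, axis=1, keepdims=True)
    logits = (a @ b.T) / tau
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))  # positives sit on the diagonal

# Toy usage: 8 synthetic features of dimension 16.
feats = np.random.default_rng(1).normal(size=(8, 16))
v1, v2 = noisy_views(feats, sigma=0.05)
loss = info_nce(v1, v2)
```

In the paper's setting, this loss would be applied to the student's representations of the augmented synthetic images rather than to raw features, alongside the distillation objective.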