Dual Contrastive Inversion with Distributional Priors for Diversity-Aware Data-Free Knowledge Distillation
Keywords: Model Inversion; Data-Free Knowledge Distillation; Distributional Priors; Contrastive Learning; Diversity Enhancement
Abstract: Model inversion (MI) has emerged as a key paradigm for data-free knowledge distillation (DFKD), yet existing MI methods suffer from limited diversity in synthetic data due to simplistic unimodal priors and the absence of explicit mechanisms enforcing instance-level separability. We propose **D2CIP** (*Dual Contrastive Inversion with Distributional Priors*), a two-stage framework that enhances diversity by first recovering a class-conditional distributional prior with a Gaussian Mixture Model (GMM) aligned to teacher predictions and batch-normalization statistics, and then applying dual contrastive learning at both the latent and instance levels, with memory banks that enlarge the pool of negative samples. We further formalize data diversity as expected pairwise separability and establish its monotonic relationship with the contrastive loss, providing a principled justification for diversity maximization. Experiments on **CIFAR-10**, **CIFAR-100**, and **Tiny-ImageNet** demonstrate that D2CIP consistently outperforms state-of-the-art MI-based DFKD methods in both synthetic data diversity and distillation accuracy. The code is available at [https://anonymous.4open.science/r/gmdmi4dfkd-53E3](https://anonymous.4open.science/r/gmdmi4dfkd-53E3).
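Since the abstract names two concrete components, a class-conditional GMM prior and memory-bank contrastive learning, the following minimal PyTorch sketch illustrates how those pieces typically fit together. It is not the authors' implementation (see their repository above): the dimensions, the random stand-in latents, and the names `MemoryBank` and `info_nce` are all illustrative assumptions.

```python
# Sketch only: (1) sample latents from a per-class GMM prior, and
# (2) apply an InfoNCE-style contrastive loss against a memory bank
# that enlarges the set of negatives. Not the D2CIP implementation.
import torch
import torch.nn.functional as F
from sklearn.mixture import GaussianMixture

latent_dim, n_modes = 64, 5

# (1) Class-conditional distributional prior: one GMM per class, fitted to
# latent codes associated with that class by the teacher. Random stand-in
# data here, since this sketch has no teacher or real latents.
class_latents = torch.randn(500, latent_dim).numpy()
gmm_prior = GaussianMixture(n_components=n_modes).fit(class_latents)
z, _ = gmm_prior.sample(32)                    # multimodal latents, one class
z = torch.as_tensor(z, dtype=torch.float32)

# (2) Memory bank of past features, used as extra negatives.
class MemoryBank:
    def __init__(self, size: int, dim: int):
        self.feats = F.normalize(torch.randn(size, dim), dim=1)
        self.ptr = 0

    def enqueue(self, batch: torch.Tensor):
        n = batch.size(0)
        self.feats[self.ptr:self.ptr + n] = batch.detach()
        self.ptr = (self.ptr + n) % self.feats.size(0)

def info_nce(anchor, positive, bank_feats, tau: float = 0.1):
    """InfoNCE: pull two views together, push away all bank entries."""
    anchor = F.normalize(anchor, dim=1)
    positive = F.normalize(positive, dim=1)
    pos = (anchor * positive).sum(dim=1, keepdim=True) / tau  # (B, 1)
    neg = anchor @ bank_feats.t() / tau                       # (B, K)
    logits = torch.cat([pos, neg], dim=1)
    labels = torch.zeros(anchor.size(0), dtype=torch.long)    # positive at index 0
    return F.cross_entropy(logits, labels)

bank = MemoryBank(size=1024, dim=latent_dim)
view1 = z + 0.05 * torch.randn_like(z)         # two perturbed views of z
view2 = z + 0.05 * torch.randn_like(z)
loss = info_nce(view1, view2, bank.feats)
bank.enqueue(F.normalize(view2, dim=1))
print(f"contrastive loss: {loss.item():.4f}")
```

In this sketch the bank is assumed to hold a multiple of the batch size so `enqueue` never wraps mid-batch; a production queue (e.g., MoCo-style) would handle the wrap-around explicitly.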
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 12556