Iterative knowledge distillation and pruning for model compression in unsupervised domain adaptation

Published: 01 Jan 2025 · Last Modified: 15 May 2025 · Pattern Recognit. 2025 · CC BY-SA 4.0
Abstract: Highlights
• Device resource constraints and the diversity of cross-domain data distributions challenge the effective deployment of deep learning models.
• Current model compression methods suffer from accuracy degradation and struggle to balance model size against accuracy.
• Progressive, iterative knowledge distillation and pruning effectively balances model size and performance in the target domain (a simplified code sketch follows these highlights).
• Transfer knowledge distillation compresses models by transferring knowledge from a teacher to a student model while facilitating cross-domain adaptation.
• A flexible pruning strategy adaptively removes redundant channels, reducing model size and computational cost without sacrificing accuracy.
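To make the distill-then-prune idea concrete, below is a minimal sketch of one possible realization in PyTorch. It is not the paper's actual method: the toy teacher/student networks, the temperature `T`, the mixing weight `alpha`, and the pruning ratio are illustrative assumptions, and the paper's domain-adaptation components (cross-domain alignment and its adaptive pruning criterion) are omitted. It only shows the generic pattern of alternating knowledge-distillation updates with L1-based structured channel pruning.

```python
# Illustrative sketch only (not the paper's exact algorithm):
# alternate knowledge-distillation updates of a student model with
# L1-norm structured channel pruning. All hyperparameters and the
# toy networks below are assumptions for demonstration.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.utils.prune as prune

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Soft-target KL term (teacher -> student) plus hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

def prune_conv_channels(model, amount=0.1):
    """Zero out the lowest-L1-norm output channels of each Conv2d layer
    (a stand-in for physically removing redundant channels)."""
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            prune.ln_structured(module, name="weight", amount=amount, n=1, dim=0)
            prune.remove(module, "weight")  # make the pruning mask permanent

# Toy teacher and student networks standing in for the real models.
teacher = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10))
student = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
optimizer = torch.optim.SGD(student.parameters(), lr=0.01)

images, labels = torch.randn(4, 3, 32, 32), torch.randint(0, 10, (4,))
for _ in range(3):  # a few iterative rounds of distill-then-prune
    with torch.no_grad():
        teacher_logits = teacher(images)
    loss = distillation_loss(student(images), teacher_logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    prune_conv_channels(student, amount=0.1)
```

In this generic pattern, the distillation loss keeps the compact student aligned with the teacher's outputs while pruning progressively removes low-importance channels between updates; the paper's contribution lies in how these two steps are iterated and adapted to the target domain, which this sketch does not reproduce.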