Cross-task knowledge distillation for few-shot detection

Published: 27 Aug 2025 · Last Modified: 01 Oct 2025 · LIMIT 2025 Poster · CC BY 4.0
Keywords: Object detection, visual encoders, knowledge distillation
TL;DR: Few-shot object detection improved using correlation-based distillation from large-scale pretrained image classifiers.
Abstract: While powerful pretrained visual encoders have advanced many vision tasks, their knowledge is not fully leveraged by object detectors, especially in few-shot settings. A key challenge in transferring this knowledge via cross-task distillation is the semantic mismatch between outputs: classifiers produce clean probability distributions, while detector scores implicitly encode both class and objectness. To address this, we propose a lightweight fine-tuning strategy guided by a novel, correlation-based distillation loss. This loss aligns the detector's relative class preferences with those of a strong image classifier, effectively decoupling the learning of class semantics from objectness. Applied to a state-of-the-art detector, our method consistently improves performance in a low-data regime, demonstrating an effective way to bridge the gap between powerful classifiers and object detectors.
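To make the idea of a correlation-based distillation loss concrete, below is a minimal sketch of one plausible formulation: a per-region Pearson correlation between the detector's class logits and a frozen classifier's logits on the same regions. The function name, the use of logits rather than probabilities, and the exact normalization are illustrative assumptions, not the authors' implementation. Note how row-wise centering makes the loss invariant to a per-region additive shift (absorbing an objectness-like offset), so only relative class preferences are matched, consistent with the decoupling described in the abstract.

```python
import torch


def correlation_distillation_loss(det_logits: torch.Tensor,
                                  cls_logits: torch.Tensor,
                                  eps: float = 1e-8) -> torch.Tensor:
    """Align a detector's relative class preferences with a classifier's.

    det_logits: (N, C) per-region class logits from the detector.
    cls_logits: (N, C) logits from a frozen pretrained classifier on the
                same N regions (e.g., cropped proposals).
    """
    # The classifier acts as a fixed teacher; no gradients flow into it.
    teacher = cls_logits.detach()

    # Row-wise centering removes each region's mean score (an additive,
    # objectness-like offset); normalization removes per-region scale.
    d = det_logits - det_logits.mean(dim=1, keepdim=True)
    t = teacher - teacher.mean(dim=1, keepdim=True)
    d = d / (d.norm(dim=1, keepdim=True) + eps)
    t = t / (t.norm(dim=1, keepdim=True) + eps)

    # Dot product of centered, unit-norm rows = Pearson correlation in [-1, 1].
    corr = (d * t).sum(dim=1)

    # Loss of 0 when rankings agree perfectly, up to 2 when anti-correlated.
    return (1.0 - corr).mean()
```

In a fine-tuning setup of this kind, such a term would typically be added to the detector's standard classification and localization losses with a weighting coefficient, leaving objectness calibration to the detector's own objectives.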
Submission Number: 19