Abstract: Practical applications often face a challenging continuous low-shot detection scenario, in which each new detection task provides only a few annotated training images and such tasks arrive in sequence. To address this challenge, we propose a generic detection scheme based on Disentangling-Imprinting-Distilling (DID). DID incorporates transfer-learning insights into the three main stages of deep-learning development: architecture design (Disentangling), model initialization (Imprinting), and training methodology (Distilling). This makes DID a simple yet effective solution for continuous low-shot detection. In addition, DID integrates supervision from different detection tasks into a progressive learning procedure, so that the previous detector can be efficiently adapted to a new low-shot task while retaining the detection knowledge learned earlier. Finally, we evaluate DID on a number of challenging continuous/incremental low-shot detection settings, and all the results demonstrate that DID outperforms recent state-of-the-art approaches. The code and models are available at https://github.com/chenxy99/DID.
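The Imprinting stage refers to the general weight-imprinting idea: initializing a new class's classifier weight from the normalized mean embedding of its few-shot examples. A minimal sketch of that idea is below; the function names and the exact procedure are assumptions for illustration, not the paper's actual code.

```python
import math

def l2_normalize(v):
    """Scale a vector to unit L2 norm (guarding against the zero vector)."""
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def imprint_weight(embeddings):
    """Hypothetical imprinting helper: return the L2-normalized mean of the
    few-shot example embeddings as the new class's classifier weight."""
    dim = len(embeddings[0])
    mean = [sum(e[i] for e in embeddings) / len(embeddings) for i in range(dim)]
    return l2_normalize(mean)

# Two toy 2-D embeddings for a new class; their normalized mean serves
# as the imprinted classifier weight for that class.
w = imprint_weight([[1.0, 0.0], [0.0, 1.0]])
```

Under a cosine-similarity classifier, such an imprinted weight lets the detector score the new class immediately, before any fine-tuning or distillation.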