Task Regularized Hybrid Knowledge Distillation For Incremental Object Detection

19 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: transfer learning, meta learning, and lifelong learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Knowledge Distillation, Continual Object Detection
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: The incremental object detection (IOD) task is plagued by the well-known problem of catastrophic forgetting. Knowledge distillation has been used to mitigate this problem, and previous works mainly focus on combining different distillation methods, including feature, classification, location, and relation distillation, into a mixed scheme. In this paper, we identify two causes of catastrophic forgetting: knowledge fuzziness and imbalanced learning. We propose a task regularized hybrid knowledge distillation method for the IOD task that integrates a knowledge selection strategy and a knowledge transfer strategy. First, we propose an image-level hybrid knowledge representation that combines instance-level hard knowledge and soft knowledge, so that teacher knowledge is used critically. Second, we propose a task-based regularization distillation loss that accounts for the loss difference between old and new tasks, making incremental learning more balanced. Extensive experiments conducted on MS COCO and Pascal VOC demonstrate that our method achieves state-of-the-art performance. Remarkably, we reduce the mAP gap between incremental learning and joint learning to 6\% under the most difficult Five-Step scenario of MS COCO, significantly surpassing the previous best method.
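(The abstract does not give the exact formulation of the two components; the sketch below is a minimal PyTorch illustration of the general idea only. The confidence-threshold split between hard pseudo-labels and soft targets, the detached loss-ratio re-weighting, and all names and hyperparameters such as conf_thresh and T are assumptions for illustration, not the authors' actual method.)

import torch
import torch.nn.functional as F

def hybrid_knowledge_loss(student_logits, teacher_logits, conf_thresh=0.5, T=2.0):
    # Hybrid knowledge selection (illustrative): confident teacher predictions
    # become hard pseudo-labels; the rest are distilled as soft targets.
    teacher_probs = F.softmax(teacher_logits / T, dim=-1)
    conf, hard_labels = teacher_probs.max(dim=-1)
    hard_mask = conf >= conf_thresh
    zero = student_logits.new_zeros(())
    hard_loss = (F.cross_entropy(student_logits[hard_mask], hard_labels[hard_mask])
                 if hard_mask.any() else zero)
    soft_loss = (F.kl_div(F.log_softmax(student_logits[~hard_mask] / T, dim=-1),
                          teacher_probs[~hard_mask], reduction="batchmean") * T * T
                 if (~hard_mask).any() else zero)
    return hard_loss + soft_loss

def task_regularized_loss(new_task_loss, distill_loss, eps=1e-8):
    # Task-based regularization (illustrative): scale the old-task distillation
    # term by the detached ratio of the two losses so neither task dominates.
    ratio = (new_task_loss / (distill_loss + eps)).detach()
    return new_task_loss + ratio * distill_loss

In a detector these terms would be computed per instance and then aggregated into the image-level representation the abstract describes; the sketch only shows the per-batch shape of the two losses.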
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 1532