Keywords: Continual Learning, Semi-Supervised Learning, Object Detection, Online Continual Learning
TL;DR: We address the problem of label-efficient online continual object detection by introducing ContinualCropBank, an object-level replay module that mitigates catastrophic forgetting while improving detection performance under limited supervision.
Abstract: Deep learning has achieved remarkable progress in object detection, but most advances rely on static, fully labeled datasets: an unrealistic assumption in dynamic, real-world environments. Continual Learning (CL) aims to overcome this limitation by enabling models to acquire new knowledge without forgetting prior tasks; however, many approaches assume known task boundaries and require multiple passes over the data. Online Continual Learning (OCL) offers a more practical alternative by processing data in a single pass, but it remains limited by its dependence on costly annotations. To address this limitation, Label-Efficient Online Continual Object Detection (LEOCOD) extends OCL with a semi-supervised formulation, enabling detectors to leverage unlabeled data alongside limited labeled samples. In this paper, we propose ContinualCropBank, an object-level replay module for LEOCOD that stores object patches cropped from bounding box regions and pastes them into stream images during training. This solution enables fine-grained replay, mitigating catastrophic forgetting while addressing foreground–background imbalance and the scarcity of small objects. Experiments on two benchmark datasets demonstrate that incorporating ContinualCropBank improves detection accuracy and resilience to forgetting, achieving gains of up to $9.57$ percentage points in average accuracy and reducing degradation from forgetting by up to $2.32$ points.
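The crop-and-paste replay mechanism described in the abstract can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: the class name mirrors the paper's module, but the capacity-bounded reservoir sampling, the `store`/`paste` method names, and all parameters are assumptions for illustration only.

```python
import random
import numpy as np

class ContinualCropBank:
    """Illustrative sketch of an object-level replay bank: stores image
    patches cropped from labeled bounding boxes and pastes them into
    later stream images so previously seen classes keep appearing
    during training (details here are assumed, not from the paper)."""

    def __init__(self, capacity=200, seed=0):
        self.capacity = capacity
        self.bank = []                 # list of (crop HxWxC, class label)
        self.rng = random.Random(seed)
        self.seen = 0                  # running count for reservoir sampling

    def store(self, image, boxes, labels):
        # Crop each bounding box (x1, y1, x2, y2 in pixels) into the bank,
        # keeping a bounded, uniformly sampled subset of all crops seen.
        for (x1, y1, x2, y2), lab in zip(boxes, labels):
            crop = image[y1:y2, x1:x2].copy()
            self.seen += 1
            if len(self.bank) < self.capacity:
                self.bank.append((crop, lab))
            else:
                j = self.rng.randrange(self.seen)
                if j < self.capacity:
                    self.bank[j] = (crop, lab)

    def paste(self, image, max_objects=2):
        # Paste up to max_objects stored crops at random positions and
        # return the augmented image plus the pasted boxes and labels.
        out = image.copy()
        new_boxes, new_labels = [], []
        H, W = out.shape[:2]
        k = min(max_objects, len(self.bank))
        for crop, lab in self.rng.sample(self.bank, k):
            h, w = crop.shape[:2]
            if h > H or w > W:
                continue
            y = self.rng.randrange(H - h + 1)
            x = self.rng.randrange(W - w + 1)
            out[y:y + h, x:x + w] = crop
            new_boxes.append((x, y, x + w, y + h))
            new_labels.append(lab)
        return out, new_boxes, new_labels
```

In this reading, pasting object crops (rather than replaying whole images) lets small or rare objects be reinserted densely into new scenes, which is one plausible way the module could counter foreground–background imbalance.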
Supplementary Material: zip
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 25010