Abstract: Lifelong person re-identification (LReID) aims to learn continuously from sequential data streams, enabling cross-camera matching of individuals over time. A critical challenge in LReID lies in balancing the preservation of previously acquired knowledge with the incremental acquisition of new information, due to task-level gaps and limited representation capacity. Conventional methods relying on CNN backbones struggle to fully capture the diverse perspectives of each instance, leading to suboptimal model performance. To tackle these limitations, we propose a diverse representation embedding (DRE) framework that balances preserving old knowledge with adapting to new information. Specifically, our DRE incorporates a robust Transformer-based backbone that utilizes maximum embedding (ME) and multiple class tokens to generate overlapping representations for each instance. To further enhance the model’s representation capacity, we design an adaptive constraint module (ACM), which performs integration and discrimination operations on overlapping representations to yield diverse yet discriminative representations. Furthermore, we propose two strategies: knowledge update (KU) and knowledge preservation (KP), implemented within the adjustment and learner models, respectively. The KU strategy enhances the learner model’s ability to adapt to new information by leveraging prior knowledge from the adjustment model. The KP strategy ensures the retention of historical knowledge while maintaining the model’s adaptability. Extensive experiments validate that our DRE surpasses state-of-the-art approaches across large-scale, occluded, and holistic datasets, demonstrating significant performance gains. Our code is available at https://github.com/LiuShiBen/DRE
External IDs: dblp:journals/tnn/LiuFWCHT25
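The maximum-embedding (ME) step over multiple class tokens can be sketched as follows. This is a minimal illustration under assumed conventions, not the paper's released code: we assume each of K class tokens, after passing through the Transformer backbone, yields one D-dimensional representation of the instance, and ME takes the elementwise maximum across them to form a single aggregate representation.

```python
import numpy as np

def maximum_embedding(token_reprs: np.ndarray) -> np.ndarray:
    """Elementwise max over K class-token outputs.

    token_reprs: (K, D) array, one row per class-token representation
    (hypothetical shape convention, not taken from the paper's code).
    Returns a single (D,) aggregate representation.
    """
    return token_reprs.max(axis=0)

# Stand-in for Transformer outputs: 4 class tokens, embedding dim 8.
rng = np.random.default_rng(0)
K, D = 4, 8
reprs = rng.normal(size=(K, D))
me = maximum_embedding(reprs)
```

Each coordinate of `me` keeps the strongest response any class token produced, so the overlapping per-token views are fused without averaging them away.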