Exemplar-Free Lifelong Person Re-identification via Prompt-Guided Adaptive Knowledge Consolidation

Published: 01 Jan 2024, Last Modified: 28 Jan 2025 · Int. J. Comput. Vis. 2024 · License: CC BY-SA 4.0
Abstract: Lifelong person re-identification (LReID) aims to match people across different cameras given continuous data streams. Catastrophic forgetting of old knowledge and the effective acquisition of new knowledge pose a significant dilemma for LReID. Most current LReID methods retain abundant exemplars from historical data, which are rehearsed to fully fine-tune the whole model. However, such a learning paradigm inevitably compromises data privacy and incurs substantial computation costs. In this paper, we propose a paradigm for exemplar-free LReID through model re-parameterization. Without retaining any exemplars, our method adopts a novel Prompt-guided Adaptive Exponential Moving Average (PAEMA) strategy to achieve dynamic knowledge consolidation. Our key idea is to leverage visual prompting as the guidance for model re-parameterization to benefit knowledge preservation. Conventional Exponential Moving Average (EMA) methods rely on fixed or time-varying constants as weighting parameters, yet the unpredictable correlation between new and old data streams may lead to varying levels of model parameter drift during LReID learning. Hence, we argue that a proper weighting parameter should be conditioned on the variation between the new and old models to provide adaptive knowledge consolidation for LReID. To this end, an adaptive mechanism is proposed that utilizes the visual prompt as a surrogate for estimating model variation. Consequently, without using any exemplars, the forgetting issue in LReID is greatly alleviated. Experiments on various LReID benchmarks verify the superiority of our method over state-of-the-art lifelong learning and LReID approaches. Code is available at https://github.com/zhoujiahuan1991/IJCV2024-PAEMA/.
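
For intuition, the sketch below shows one way a prompt-guided adaptive EMA update could be written in PyTorch: the relative change of the visual prompt between the old and new models serves as a surrogate for parameter drift, and that drift is mapped to the EMA coefficient used to consolidate the two sets of weights. This is a minimal illustrative assumption based on the abstract only, not the released PAEMA implementation; the function `adaptive_ema_update`, the drift-to-coefficient mapping, and the parameters `alpha_min`/`alpha_max` are all hypothetical.

```python
import torch

@torch.no_grad()
def adaptive_ema_update(old_model, new_model, prompt_old, prompt_new,
                        alpha_min=0.5, alpha_max=0.999):
    """Consolidate old and new model weights with an EMA coefficient
    derived from the drift of the visual prompt parameters (toy sketch)."""
    # Relative change of the prompt as a surrogate for model variation.
    drift = (prompt_new - prompt_old).norm() / (prompt_old.norm() + 1e-8)
    # Toy mapping from drift to the EMA coefficient alpha; the actual
    # conditioning used by PAEMA may differ.
    alpha = alpha_min + (alpha_max - alpha_min) * torch.sigmoid(drift).item()
    # Blend old and new weights in place: w_old <- alpha * w_old + (1 - alpha) * w_new.
    for p_old, p_new in zip(old_model.parameters(), new_model.parameters()):
        p_old.mul_(alpha).add_(p_new, alpha=1.0 - alpha)
    return alpha
```

In this sketch the coefficient is recomputed at every consolidation step rather than fixed or scheduled by time, which is the distinction the abstract draws against conventional EMA; whether larger drift should favor the old or the new weights is a design choice left open here.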