Abstract: In real applications, person re-identification (ReID) aims to retrieve the target person at any time, whether daytime or nighttime, and over both short-term and long-term horizons. However, existing ReID tasks and datasets cannot meet this requirement, as they cover only limited time spans and provide training and evaluation for specific scenarios alone.
Therefore, we investigate a new task called Anytime Person Re-identification (AT-ReID), which aims to achieve effective retrieval across multiple scenarios arising from variations in time. To address the AT-ReID problem, we collect the first large-scale dataset, AT-USTC, which contains 135k images of individuals wearing multiple outfits, captured by RGB and IR cameras. Our data collection spans an entire year, and the 270 volunteers were photographed 29.1 times on average across different dates or scenes, 4-15 times more often than in current datasets, providing the conditions for follow-up investigations of AT-ReID. Further, to tackle the new challenge of multi-scenario retrieval, we propose a unified model named Uni-AT, which comprises a multi-scenario ReID (MS-ReID) framework for scenario-specific feature learning, a Mixture-of-Attribute-Experts (MoAE) module to alleviate inter-scenario interference, and a Hierarchical Dynamic Weighting (HDW) strategy to ensure balanced training across all scenarios. Extensive experiments show that our model achieves satisfactory results and generalizes well to all scenarios.