Top-Push Constrained Modality-Adaptive Dictionary Learning for Cross-Modality Person Re-Identification

22 Feb 2020 · OpenReview Archive Direct Upload
Abstract: Person re-identification aims to match people captured by multiple non-overlapping cameras, which are typically standard RGB cameras. In contemporary surveillance systems, cameras of other modalities, such as infrared and depth cameras, are introduced because of their unique advantages in poor-illumination scenarios. However, re-identifying people across cameras of different modalities is extremely difficult and, unfortunately, seldom discussed. The difficulty mainly stems from the drastically different appearances a person exhibits under different camera modalities. In this paper, we tackle this challenging cross-modality person re-identification problem through top-push constrained modality-adaptive dictionary learning. The proposed model asymmetrically projects the heterogeneous features from the dissimilar modalities onto a common space, which mitigates the modality-specific bias and allows the heterogeneous data to be encoded by a shared dictionary in that canonical space. Moreover, a top-push ranking graph regularization is embedded in the model to improve discriminability, which further boosts matching accuracy. To solve the proposed model, an iterative procedure is developed that jointly optimizes the projections and the dictionary. Extensive experiments on the benchmark SYSU-MM01 and BIWI RGBD-ID person re-identification datasets show promising results that outperform state-of-the-art methods.
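The abstract does not give the exact formulation; as a rough sketch, a modality-adaptive dictionary learning objective with a top-push term could take the following form, where $X_1, X_2$ are the feature matrices of the two modalities, $P_1, P_2$ are modality-specific projections, $D$ is the shared dictionary with atoms $d_k$, $A_1, A_2$ are the sparse codes, and $\lambda_1, \lambda_2, \rho$ are hyperparameters (all notation here is illustrative and assumed, not taken from the paper):

$$
\min_{P_1, P_2, D, A_1, A_2}\;
\sum_{m=1}^{2} \Big( \big\| P_m X_m - D A_m \big\|_F^2 + \lambda_1 \|A_m\|_1 \Big)
\;+\; \lambda_2\, \Omega_{\mathrm{top}}(A_1, A_2)
\quad \text{s.t.}\ \ \|d_k\|_2 \le 1 \ \ \forall k,
$$

$$
\Omega_{\mathrm{top}}(A_1, A_2) \;=\; \sum_{i} \Big[\, \big\|a_i^{(1)} - a_{p(i)}^{(2)}\big\|_2^2 \;-\; \min_{j:\, y_j \neq y_i} \big\|a_i^{(1)} - a_j^{(2)}\big\|_2^2 \;+\; \rho \,\Big]_+,
$$

where $a_i^{(m)}$ is the code of sample $i$ in modality $m$, $p(i)$ indexes a same-identity sample in the other modality, $y_i$ is the identity label, and $[\cdot]_+ = \max(0, \cdot)$. The first term encodes both modalities with one dictionary after modality-specific projection, while the top-push term pushes the hardest cross-modality negative farther away than the matched positive; alternately updating $(P_1, P_2)$, $D$, and $(A_1, A_2)$ would realize the kind of joint iterative optimization described above.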