Pretrain then Adapt: Uncertainty-Aware Test-Time Adaptation for Text-based Person Search

04 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: person retrieval, domain gap, test-time adaptation, uncertainty
Abstract: Text-based person retrieval faces inherent limitations due to data scarcity, driven by stringent privacy constraints and the high cost of manual annotation. To mitigate this, existing methods usually rely on a Pretrain-then-Finetune paradigm, where models are first pretrained on synthetic person-caption data to establish cross-modal alignment and then fine-tuned on labeled real-world datasets. However, this paradigm lacks practicality in real-world deployment scenarios, where large-scale annotated target-domain data is typically inaccessible. In this work, we propose a new Pretrain-then-Adapt paradigm that eliminates reliance on extensive target-domain supervision. The key idea underpinning our approach is Uncertainty-Aware Test-Time Adaptation (UATTA), a framework enabling dynamic model adaptation using only unlabeled test data, with minimal computational overhead. UATTA introduces a bidirectional retrieval disagreement mechanism to estimate uncertainty: low uncertainty is assigned when an image-text pair ranks highly in both image-to-text and text-to-image retrieval, indicating strong alignment; otherwise, high uncertainty is detected. This indicator drives test-time model recalibration without labels, effectively mitigating domain shift. We validate UATTA on four benchmarks, i.e., CUHK-PEDES, ICFG-PEDES, RSTPReid, and PAB, showing consistent improvements across both CLIP-based (one-stage) and XVLM-based (two-stage) frameworks. Ablation studies confirm that UATTA outperforms existing test-time adaptation strategies, establishing a new benchmark for label-efficient, deployable person retrieval systems.
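The bidirectional retrieval disagreement signal can be sketched as below. This is a minimal illustration of the idea as stated in the abstract, not the authors' implementation: the function name `bidirectional_uncertainty`, the similarity-matrix layout, the top-k agreement threshold, and the binary uncertainty score are all assumptions.

```python
import numpy as np

def bidirectional_uncertainty(sim, k=5):
    """Estimate per-pair uncertainty from bidirectional retrieval ranks.

    sim: (N, N) image-text similarity matrix; sim[i, j] is the score
    between image i and text j, with diagonal entries the candidate pairs.
    Returns an (N,) array: 0.0 (low uncertainty) when a pair ranks in the
    top-k in BOTH image-to-text and text-to-image retrieval, else 1.0.
    """
    n = sim.shape[0]
    diag = sim[np.arange(n), np.arange(n)]
    # i2t rank: for image i, how many texts score above its paired text (0 = best)
    i2t_rank = (sim > diag[:, None]).sum(axis=1)
    # t2i rank: for text j, how many images score above its paired image
    t2i_rank = (sim > diag[None, :]).sum(axis=0)
    # low uncertainty only when both retrieval directions agree the pair is top-k
    agree = (i2t_rank < k) & (t2i_rank < k)
    return np.where(agree, 0.0, 1.0)
```

Pairs flagged with low uncertainty by this indicator could then serve as reliable anchors for label-free recalibration at test time, while high-uncertainty pairs are down-weighted.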
Supplementary Material: pdf
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 2103