TPOUR: Temporal Preference Optimization for Unsupervised Retrieval

ICLR 2026 Conference Submission 16758 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Temporal Retrieval, Information Retrieval, Unsupervised Learning, Contrastive Learning, Preference Optimization, Time Vector
TL;DR: We introduce an unsupervised retrieval training method (TPOUR) that combines contrastive learning with our Temporal Retrieval Preference Optimization (TRPO) to align document retrieval with both implicit and explicit temporal contexts.
Abstract: Unsupervised retrievers offer scalability by learning semantic similarity from unlabeled documents via contrastive learning. However, they struggle to capture temporal relevance, often retrieving semantically related but temporally misaligned documents. This matters when a document collection spans multiple time periods: for the query "Who is the president in 2019?", retrieving from related documents spanning 2018–2025 introduces temporal ambiguity if the retriever relies solely on semantics. Existing methods rely on supervised training with explicit timestamps, which is not always feasible. We propose TPOUR (Temporal Preference Optimization for Unsupervised Retriever), which integrates our novel training method, Temporal Retrieval Preference Optimization (TRPO). TRPO reinterprets preference learning in the temporal dimension, guiding the retriever to favor temporally aligned documents. TPOUR constructs temporally aligned and misaligned document pairs from document corpora collected at different times and trains the retriever, without supervision, to prioritize aligned documents over misaligned ones. Furthermore, TPOUR generalizes to unseen time periods by interpolating time vectors, enabling continuous temporal alignment. Experiments on temporal QA with a mixed-timestamp document collection show that TPOUR outperforms both unsupervised and supervised baselines. Compared to Nomic Embed v2 MoE, TPOUR Contriever improves nDCG@5 by +7.13 (+23.5%) on explicit queries and +7.76 (+25.5%) on implicit queries on average.
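
The abstract describes TRPO as preference learning over temporally aligned versus misaligned document pairs, but does not spell out the objective. A minimal sketch, assuming a DPO-style pairwise loss on retrieval scores: the function name `temporal_preference_loss`, the cosine scoring, and the `beta` temperature are our illustrative assumptions, not the paper's definition.

```python
import torch
import torch.nn.functional as F

def temporal_preference_loss(query_emb: torch.Tensor,
                             aligned_emb: torch.Tensor,
                             misaligned_emb: torch.Tensor,
                             beta: float = 1.0) -> torch.Tensor:
    """Hypothetical pairwise preference loss: push the retriever to score
    the temporally aligned document above the semantically similar but
    temporally misaligned one. All inputs have shape (batch, dim)."""
    s_pos = F.cosine_similarity(query_emb, aligned_emb, dim=-1)
    s_neg = F.cosine_similarity(query_emb, misaligned_emb, dim=-1)
    # -log sigmoid(beta * margin): minimized when aligned documents win.
    return -F.logsigmoid(beta * (s_pos - s_neg)).mean()
```

Because the pairs come from corpora collected at different times rather than from human labels, a loss of this shape needs no supervision beyond the collection timestamps.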
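The abstract also mentions generalizing to unseen periods by interpolating time vectors. Assuming these behave like task vectors (period-finetuned weights minus base weights), a hypothetical interpolation between two observed periods might look like the following; the helper name and the linear blend are assumptions for illustration only.

```python
import torch

def interpolate_time_vectors(base_state: dict,
                             tau_a: dict,
                             tau_b: dict,
                             alpha: float) -> dict:
    """Hypothetical sketch: blend two time vectors (state dicts of weight
    deltas) to target a period between their timestamps.
    alpha=1.0 recovers period A; alpha=0.0 recovers period B."""
    return {name: base_state[name] + alpha * tau_a[name]
                  + (1.0 - alpha) * tau_b[name]
            for name in base_state}
```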
Supplementary Material: zip
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 16758