Keywords: time prediction, geolocalization, contrastive learning, metric learning
Abstract: Timestamp prediction aims to determine the date and hour at which an image was captured using only visual cues, with applications ranging from image retrieval and metadata correction to digital forensics. In outdoor scenes, the hour can be inferred from cues such as overall brightness, hue, and shadow positions, while the date can be inferred from weather patterns and seasonal changes. However, these factors vary greatly with geographic location, making time-of-capture prediction closely related to geo-localization. To address this problem, we introduce GeoTimeCLIP, a novel method that simultaneously estimates both the capture time (i.e., hour and month) and the geo-location (i.e., GPS coordinates) of an image using a retrieval approach. Our model employs an image encoder, a time encoder, and a location encoder, aligning the time and GPS embeddings with the image embeddings in a continuous high-dimensional feature space. To account for the cyclical nature of days and years, we propose an effective time representation based on Random Fourier Features. To learn the image-time alignment, rather than applying a standard contrastive loss with hard positives and negatives, we propose a more effective metric-learning objective that provides soft targets by considering the time difference between samples over a toroidal manifold. We introduce new benchmarks for time prediction, on which our jointly optimized time-and-location method outperforms baselines optimized solely for time. We also evaluate our method on existing geo-localization protocols, demonstrating that it performs competitively with specialized geo-localization methods. Our shared embedding space enables various downstream tasks, such as compositional retrieval and text-based retrieval.
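The abstract gives no implementation details, so the following is a minimal PyTorch sketch of how the two ideas it describes might be realized: a cyclic time representation built from Random Fourier Features, and a soft-target alignment loss derived from the wrap-around (toroidal) distance between capture times. All names, dimensions, and hyperparameters here (RFFTimeEncoder, num_freqs, tau, sigma) are illustrative assumptions, not the authors' actual implementation.

```python
import math
import torch
import torch.nn.functional as F

def cyclic_coords(hour, month):
    """Map hour in [0, 24) and month in [0, 12) onto a torus via sin/cos pairs."""
    h = 2 * math.pi * hour / 24.0
    m = 2 * math.pi * month / 12.0
    return torch.stack([torch.sin(h), torch.cos(h), torch.sin(m), torch.cos(m)], dim=-1)

class RFFTimeEncoder(torch.nn.Module):
    """Random Fourier Features over cyclic time coordinates, projected into the shared space."""
    def __init__(self, embed_dim=512, num_freqs=64, sigma=1.0):
        super().__init__()
        # fixed (untrained) Gaussian projection matrix, as in standard RFF
        self.register_buffer("B", torch.randn(4, num_freqs) * sigma)
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(2 * num_freqs, 512),
            torch.nn.ReLU(),
            torch.nn.Linear(512, embed_dim),
        )

    def forward(self, hour, month):
        x = cyclic_coords(hour, month)                # (N, 4)
        proj = 2 * math.pi * x @ self.B               # (N, num_freqs)
        rff = torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)
        return F.normalize(self.mlp(rff), dim=-1)     # unit-norm embedding

def toroidal_distance(t):
    """Pairwise wrap-around distance between times t = (hour/24, month/12) in [0, 1)^2."""
    d = (t[:, None, :] - t[None, :, :]).abs()         # (N, N, 2)
    d = torch.minimum(d, 1.0 - d)                     # shortest arc on each circle
    return d.norm(dim=-1)                             # (N, N)

def soft_time_alignment_loss(img_emb, time_emb, t, tau=0.07, sigma=0.1):
    """Cross-entropy against soft targets: temporally closer pairs get higher target mass."""
    logits = img_emb @ time_emb.t() / tau             # (N, N) similarity matrix
    targets = torch.softmax(-toroidal_distance(t) / sigma, dim=-1)
    return F.cross_entropy(logits, targets)

# Usage sketch (image embeddings stand in for a real image encoder's output):
hours = torch.tensor([6.0, 13.5, 22.0])
months = torch.tensor([1.0, 6.0, 11.0])
time_emb = RFFTimeEncoder()(hours, months)
img_emb = F.normalize(torch.randn(3, 512), dim=-1)
t = torch.stack([hours / 24.0, months / 12.0], dim=-1)
loss = soft_time_alignment_loss(img_emb, time_emb, t)
```

One way to read the soft targets as a relaxation of hard positives/negatives: as sigma shrinks toward 0, the target distribution concentrates on the zero-distance diagonal and the loss reduces to a standard InfoNCE-style contrastive objective.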
Primary Area: applications to computer vision, audio, language, and other modalities
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 3906