Keywords: Novel View Synthesis, Visual Relocalization, Egocentric, SLAM, 3D Reconstruction
TL;DR: A large-scale egocentric dataset with ground-truth 3D geometry and lighting variation for benchmarking novel view synthesis and visual relocalisation.
Abstract: We introduce Oxford Day-and-Night, a large-scale, egocentric dataset for novel view synthesis (NVS) and visual relocalisation under challenging lighting conditions. Existing datasets often lack crucial combinations of features such as ground-truth 3D geometry, wide-ranging lighting variation, and full 6DoF motion. Oxford Day-and-Night addresses these gaps by leveraging Meta Project Aria glasses to capture egocentric video and applying multi-session SLAM to estimate camera poses, reconstruct 3D point clouds, and align sequences captured under varying lighting conditions, including both day and night. The dataset spans over 30 km of recorded trajectories and covers an area of $40{,}000\,\mathrm{m}^2$, offering a rich foundation for egocentric 3D vision research. It supports two core benchmarks, NVS and relocalisation, providing a unique platform for evaluating models in realistic and diverse environments. Project page: https://oxdan.active.vision/
Croissant File: json
Dataset URL: https://huggingface.co/datasets/active-vision-lab/oxford-day-and-night
Supplementary Material: zip
Primary Area: Datasets & Benchmarks for applications in computer vision
Submission Number: 4