Seeing in the Dark: Benchmarking Egocentric 3D Vision with the Oxford Day-and-Night Dataset
By: Zirui Wang, Wenjing Bian, Xinghui Li, and more
Potential Business Impact:
Helps robots see in both day and night.
We introduce Oxford Day-and-Night, a large-scale, egocentric dataset for novel view synthesis (NVS) and visual relocalisation under challenging lighting conditions. Existing datasets often lack crucial combinations of features such as ground-truth 3D geometry, wide-ranging lighting variation, and full 6DoF motion. Oxford Day-and-Night addresses these gaps by leveraging Meta ARIA glasses to capture egocentric video and applying multi-session SLAM to estimate camera poses, reconstruct 3D point clouds, and align sequences captured under varying lighting conditions, including both day and night. The dataset spans over 30 km of recorded trajectories and covers an area of 40,000 m², offering a rich foundation for egocentric 3D vision research. It supports two core benchmarks, NVS and relocalisation, providing a unique platform for evaluating models in realistic and diverse environments.
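The abstract does not spell out the dataset's evaluation protocol, but visual relocalisation benchmarks of this kind are typically scored by comparing each predicted 6DoF camera pose against its SLAM-derived ground truth. The sketch below is a minimal, illustrative example of that standard metric; the `pose_error` function and the sample poses are hypothetical and not taken from the paper.

```python
import numpy as np

def pose_error(R_est, t_est, R_gt, t_gt):
    """Score an estimated camera pose against ground truth.

    R_*: 3x3 rotation matrices, t_*: 3-vector camera centres.
    Returns (translation error in metres, rotation error in degrees).
    """
    # Translation error: Euclidean distance between camera centres.
    t_err = np.linalg.norm(t_est - t_gt)

    # Rotation error: geodesic angle of the relative rotation.
    R_rel = R_est.T @ R_gt
    cos_angle = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    r_err = np.degrees(np.arccos(cos_angle))
    return t_err, r_err

# Example: a pose 5 cm and 2 degrees away from its ground truth.
angle = np.radians(2.0)
R_gt = np.eye(3)
R_est = np.array([
    [np.cos(angle), -np.sin(angle), 0.0],
    [np.sin(angle),  np.cos(angle), 0.0],
    [0.0,            0.0,           1.0],
])
t_gt = np.array([1.0, 2.0, 0.5])
t_est = t_gt + np.array([0.05, 0.0, 0.0])

t_err, r_err = pose_error(R_est, t_est, R_gt, t_gt)
print(f"translation error: {t_err:.3f} m, rotation error: {r_err:.2f} deg")
```

Benchmarks in this area commonly report the fraction of queries localised within thresholds such as (0.25 m, 2°) or (0.5 m, 5°); whether Oxford Day-and-Night uses these exact thresholds is not stated in the abstract.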
Similar Papers
EgoNight: Towards Egocentric Vision Understanding at Night with a Challenging Benchmark
CV and Pattern Recognition
Helps cameras see and understand things in the dark.
Spatial Reasoning with Vision-Language Models in Ego-Centric Multi-View Scenes
CV and Pattern Recognition
Helps robots understand 3D space from their own eyes.
EgoCampus: Egocentric Pedestrian Eye Gaze Model and Dataset
CV and Pattern Recognition
Helps computers guess where people look while walking.