Unconstrained Large-scale 3D Reconstruction and Rendering across Altitudes
By: Neil Joshi, Joshua Carney, Nathanael Kuo, and more
Potential Business Impact:
Creates 3D maps from a few messy photos.
Producing photorealistic, navigable 3D site models requires a large volume of carefully collected images, which is often unavailable to first responders in disaster relief or law enforcement. Real-world challenges include limited numbers of images, heterogeneous unposed cameras, inconsistent lighting, and extreme viewpoint differences across images collected from varying altitudes. To promote research aimed at addressing these challenges, we have developed the first public benchmark dataset for 3D reconstruction and novel view synthesis based on multiple calibrated ground-level, security-level, and airborne cameras. We present datasets that pose real-world challenges, independently evaluate the calibration of unposed cameras and the quality of novel rendered views, demonstrate baseline performance using recent state-of-practice methods, and identify challenges for further research.
Similar Papers
Beyond a Single Light: A Large-Scale Aerial Dataset for Urban Scene Reconstruction Under Varying Illumination
CV and Pattern Recognition
Makes 3D maps look good in any light.
3DAeroRelief: The first 3D Benchmark UAV Dataset for Post-Disaster Assessment
CV and Pattern Recognition
Helps drones map damaged buildings after disasters.
AerialMegaDepth: Learning Aerial-Ground Reconstruction and View Synthesis
CV and Pattern Recognition
Helps computers understand pictures from sky and ground.