UAV4D: Dynamic Neural Rendering of Human-Centric UAV Imagery using Gaussian Splatting
By: Jaehoon Choi, Dongki Jung, Christopher Maxey, and more
Potential Business Impact:
Makes drone videos look real from any angle.
Despite significant advancements in dynamic neural rendering, existing methods fail to address the unique challenges posed by UAV-captured scenarios, particularly those involving monocular camera setups, top-down perspectives, and multiple small, moving humans, which are not adequately represented in existing datasets. In this work, we introduce UAV4D, a framework that enables photorealistic rendering of dynamic real-world scenes captured by UAVs. Specifically, we address the challenge of reconstructing dynamic scenes with multiple moving pedestrians from monocular video alone, without additional sensors. We combine a 3D foundation model with a human mesh reconstruction model to reconstruct both the scene background and the humans. We propose a novel approach that resolves the scene scale ambiguity and places both the humans and the scene in world coordinates by identifying human-scene contact points. Additionally, we exploit the SMPL model and the background mesh to initialize Gaussian splats, enabling holistic scene rendering. We evaluate our method on three complex UAV-captured datasets, VisDrone, Manipal-UAV, and Okutama-Action, each with distinct characteristics and 10 to 50 humans. Our results demonstrate the benefits of our approach over existing methods in novel view synthesis, achieving a 1.5 dB PSNR improvement and superior visual sharpness.
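To make the pipeline described in the abstract concrete, below is a minimal Python sketch (not the authors' code) of two of its steps: estimating a global scale from human-scene contact points, and seeding Gaussian-splat centers from the SMPL and background meshes. All function names, inputs, and the least-squares scale formulation are illustrative assumptions; the paper's actual method may differ.

```python
# Illustrative sketch only; names and the scale formulation are assumptions.
import numpy as np

def estimate_scene_scale(contact_points_cam, contact_points_ground):
    """Estimate a global scale for a monocular reconstruction.

    contact_points_cam:    (N, 3) foot-contact points from the human mesh
                           reconstruction, in unscaled reconstruction units.
    contact_points_ground: (N, 3) corresponding points on the background
                           mesh where those feet should touch the ground.

    A ratio of centered point-spread norms gives one plausible scale
    estimate; this is a stand-in for the paper's actual formulation.
    """
    a = contact_points_cam - contact_points_cam.mean(axis=0)
    b = contact_points_ground - contact_points_ground.mean(axis=0)
    return float(np.sum(np.linalg.norm(b, axis=1)) /
                 np.sum(np.linalg.norm(a, axis=1)))

def init_gaussian_means(smpl_vertices_list, background_vertices, scale):
    """Concatenate scaled per-person SMPL vertices with background-mesh
    vertices to seed one holistic set of 3D Gaussian centers."""
    humans = [scale * v for v in smpl_vertices_list]  # (6890, 3) per person
    return np.concatenate(humans + [background_vertices], axis=0)

# Toy usage with random stand-ins for real reconstructions.
rng = np.random.default_rng(0)
scale = estimate_scene_scale(rng.normal(size=(8, 3)), rng.normal(size=(8, 3)))
means = init_gaussian_means([rng.normal(size=(6890, 3))],
                            rng.normal(size=(10000, 3)), scale)
print(means.shape)  # (16890, 3)
```

In a real system, the contact points would come from detected foot joints of the reconstructed SMPL meshes and their nearest points on the background mesh, and the concatenated vertices would serve only as the initialization that subsequent Gaussian-splatting optimization refines.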
Similar Papers
UAVTwin: Neural Digital Twins for UAVs using Gaussian Splatting
CV and Pattern Recognition
Creates fake worlds to train flying robots.
Event-guided 3D Gaussian Splatting for Dynamic Human and Scene Reconstruction
CV and Pattern Recognition
Makes blurry videos show moving people clearly.
AHA! Animating Human Avatars in Diverse Scenes with Gaussian Splatting
CV and Pattern Recognition
Makes animated people look real in 3D videos.