Event-guided 3D Gaussian Splatting for Dynamic Human and Scene Reconstruction
By: Xiaoting Yin, Hao Shi, Kailun Yang, and more
Potential Business Impact:
Shows moving people clearly even in blurry videos.
Reconstructing dynamic humans together with static scenes from monocular videos remains difficult, especially under fast motion, where RGB frames suffer from motion blur. Event cameras exhibit distinct advantages, e.g., microsecond temporal resolution, making them a superior sensing choice for dynamic human reconstruction. Accordingly, we present a novel event-guided human-scene reconstruction framework that jointly models the human and the scene from a single monocular event camera via 3D Gaussian Splatting. Specifically, each Gaussian in a unified set carries a learnable semantic attribute; only Gaussians classified as human undergo deformation for animation, while scene Gaussians stay static. To combat blur, we propose an event-guided loss that matches simulated brightness changes between consecutive renderings to the event stream, improving local fidelity in fast-moving regions. Our approach removes the need for external human masks and for maintaining separate Gaussian sets. On two benchmark datasets, ZJU-MoCap-Blur and MMHPSD-Blur, it delivers state-of-the-art human-scene reconstruction, with notable gains over strong baselines in PSNR/SSIM and reduced LPIPS, especially for high-speed subjects.
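The two mechanisms in the abstract lend themselves to a short sketch. Below is a minimal, hypothetical PyTorch illustration, not the authors' code: (1) semantic-gated deformation, where a learnable per-Gaussian logit decides whether a Gaussian moves with the human or stays static with the scene, and (2) an event-guided loss that compares the simulated log-brightness change between two consecutive renderings against the accumulated event map. The function names, the luma conversion, and the contrast threshold are plausible assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F


def gated_deformation(xyz: torch.Tensor,
                      semantic_logit: torch.Tensor,
                      deform_offsets: torch.Tensor) -> torch.Tensor:
    """Deform only the Gaussians classified as human.

    xyz: (N, 3) Gaussian centers; semantic_logit: (N, 1) learnable
    human/scene attribute; deform_offsets: (N, 3) predicted motion.
    """
    human_prob = torch.sigmoid(semantic_logit)   # soft human/scene gate
    return xyz + human_prob * deform_offsets     # scene Gaussians stay put


def event_guided_loss(render_t0: torch.Tensor,
                      render_t1: torch.Tensor,
                      event_map: torch.Tensor,
                      contrast_threshold: float = 0.2,
                      eps: float = 1e-6) -> torch.Tensor:
    """Match the simulated brightness change between two consecutive
    renderings (3, H, W) against the signed event count accumulated
    over the same interval (H, W)."""
    # Event cameras respond to log-intensity changes, so compare in log space.
    luma_t0 = 0.299 * render_t0[0] + 0.587 * render_t0[1] + 0.114 * render_t0[2]
    luma_t1 = 0.299 * render_t1[0] + 0.587 * render_t1[1] + 0.114 * render_t1[2]
    pred_delta = torch.log(luma_t1 + eps) - torch.log(luma_t0 + eps)

    # Each event corresponds to a log-brightness step of size contrast_threshold
    # (a sensor-dependent constant; 0.2 is only a common default).
    target_delta = contrast_threshold * event_map
    return F.l1_loss(pred_delta, target_delta)
```

In a training loop, a loss of this shape would typically be added on top of the usual photometric loss, and the soft gate could be thresholded at inference time; both are guesses at plausible defaults rather than the paper's exact formulation.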
Similar Papers
AHA! Animating Human Avatars in Diverse Scenes with Gaussian Splatting
CV and Pattern Recognition
Makes animated people look real in 3D videos.
UAV4D: Dynamic Neural Rendering of Human-Centric UAV Imagery using Gaussian Splatting
CV and Pattern Recognition
Makes drone videos look real from any angle.
EBAD-Gaussian: Event-driven Bundle Adjusted Deblur Gaussian Splatting
CV and Pattern Recognition
Fixes blurry pictures to make 3D scenes clear.