SHARE: Scene-Human Aligned Reconstruction
By: Joshua Li, Brendan Chharawala, Chang Shu, and more
Potential Business Impact:
Accurately places people in 3D scenes reconstructed from ordinary videos.
Animating realistic character interactions with the surrounding environment is important for autonomous agents in gaming, AR/VR, and robotics. However, current methods for human motion reconstruction struggle to place humans accurately in 3D space. We introduce Scene-Human Aligned REconstruction (SHARE), a technique that leverages the scene geometry's inherent spatial cues to accurately ground human motion reconstruction. Each reconstruction relies solely on a monocular RGB video from a stationary camera. SHARE first estimates a human mesh and segmentation mask for every frame, along with a scene point map at keyframes. It then iteratively refines the human's position at these keyframes by comparing the human mesh against the human point map extracted from the scene using the mask. Crucially, non-keyframe human meshes remain consistent because their root joint positions relative to keyframe root joints are preserved during optimization. Our approach enables more accurate 3D human placement while reconstructing the surrounding scene, supporting use cases on both curated datasets and in-the-wild web videos. Extensive experiments demonstrate that SHARE outperforms existing methods.
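To make the keyframe-alignment idea concrete, here is a minimal sketch, not the authors' implementation: it assumes a simplified centroid-based alignment as a stand-in for SHARE's mesh-to-point-map comparison, and the function names (`refine_keyframe_translation`, `propagate_to_non_keyframes`) are hypothetical.

```python
import numpy as np

def refine_keyframe_translation(mesh_vertices, human_points, n_iters=100, lr=0.1):
    """Iteratively shift the human mesh so its centroid matches the centroid of the
    scene points falling inside the human's segmentation mask (a simplified
    stand-in for SHARE's keyframe refinement)."""
    t = np.zeros(3)  # root translation offset being optimized
    target = human_points.mean(axis=0)
    for _ in range(n_iters):
        residual = target - (mesh_vertices + t).mean(axis=0)
        t += lr * residual  # gradient-style update on the centroid error
    return t

def propagate_to_non_keyframes(root_joints, keyframe_ids, keyframe_offsets):
    """Shift each frame's root joint by its nearest keyframe's offset, preserving
    root positions relative to keyframe roots (again a simplified stand-in)."""
    roots = root_joints.copy()
    for i in range(len(roots)):
        nearest = min(keyframe_ids, key=lambda k: abs(k - i))
        roots[i] += keyframe_offsets[nearest]
    return roots

# Toy usage: one keyframe, a random mesh, and masked scene points shifted by a known offset.
rng = np.random.default_rng(0)
mesh = rng.normal(size=(100, 3))                     # per-frame human mesh vertices
scene_human_pts = mesh + np.array([0.5, 0.0, 1.0])   # masked scene point map (shifted copy)
offset = refine_keyframe_translation(mesh, scene_human_pts)
roots = propagate_to_non_keyframes(rng.normal(size=(30, 3)), [0], {0: offset})
print(offset, roots.shape)
```

In this toy setup the recovered offset converges to the known shift, illustrating how a keyframe correction can then be propagated to the remaining frames.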
Similar Papers
WATCH: World-aware Allied Trajectory and pose reconstruction for Camera and Human
CV and Pattern Recognition
Recovers how people move through 3D space from video.
Dynamic Avatar-Scene Rendering from Human-centric Context
CV and Pattern Recognition
Makes videos of people look real, even with backgrounds.
AnimateScene: Camera-controllable Animation in Any Scene
CV and Pattern Recognition
Makes animated people fit perfectly into real scenes.