Diffusion-based 3D Hand Motion Recovery with Intuitive Physics
By: Yufei Zhang, Zijun Cui, Jeffrey O. Kephart, and more
Potential Business Impact:
Makes computer-generated hands move realistically when touching objects.
While 3D hand reconstruction from monocular images has made significant progress, generating accurate and temporally coherent motion estimates from videos remains challenging, particularly during hand-object interactions. In this paper, we present a novel 3D hand motion recovery framework that enhances image-based reconstructions through a diffusion-based and physics-augmented motion refinement model. Our model captures the distribution of refined motion estimates conditioned on initial ones, generating improved sequences through an iterative denoising process. Instead of relying on scarce annotated video data, we train our model only using motion capture data without images. We identify valuable intuitive physics knowledge during hand-object interactions, including key motion states and their associated motion constraints. We effectively integrate these physical insights into our diffusion model to improve its performance. Extensive experiments demonstrate that our approach significantly improves various frame-wise reconstruction methods, achieving state-of-the-art (SOTA) performance on existing benchmarks.
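The refinement described above can be illustrated with a toy sketch: starting from noise, a denoiser iteratively pulls the sample toward a result conditioned on the initial frame-wise estimates, and a physics step enforces a simple constraint at each iteration. Everything here is an assumption for illustration; the stand-in `denoise_step`, the "static pose during contact" rule, and all sizes are hypothetical, not the authors' actual model or constraints.

```python
import numpy as np

T, D = 16, 3       # frames and degrees of freedom per frame (toy sizes)
STEPS = 50         # number of denoising iterations (illustrative)

def denoise_step(x, cond, t):
    """Stand-in denoiser: nudges the sample toward the conditioning
    motion. A real model would be a learned network eps_theta(x, cond, t)."""
    return x + 0.2 * (cond - x)

def apply_contact_constraint(x, contact_frames):
    """Illustrative intuitive-physics step: during contact, keep the
    hand pose static by copying the previous frame's pose forward."""
    x = x.copy()
    for f in contact_frames:
        x[f] = x[f - 1]
    return x

rng = np.random.default_rng(0)
init_motion = rng.normal(size=(T, D))   # noisy frame-wise estimates
contact = [7, 8, 9]                     # frames flagged as in-contact

x = rng.normal(size=(T, D))             # start from Gaussian noise
for t in reversed(range(STEPS)):
    x = denoise_step(x, init_motion, t)
    x = apply_contact_constraint(x, contact)

refined = x
```

With this toy denoiser the non-contact frames converge toward the conditioning estimates, while the constrained frames stay locked to the pose at contact onset, mimicking how a motion-state-aware constraint shapes the denoising trajectory.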
Similar Papers
Follow My Hold: Hand-Object Interaction Reconstruction through Geometric Guidance
CV and Pattern Recognition
Makes 3D object shapes from one picture.
Bimanual 3D Hand Motion and Articulation Forecasting in Everyday Images
CV and Pattern Recognition
Lets computers guess how hands will move.
Learning to Align and Refine: A Foundation-to-Diffusion Framework for Occlusion-Robust Two-Hand Reconstruction
CV and Pattern Recognition
Fixes computer drawings of hands that overlap.