DexAvatar: 3D Sign Language Reconstruction with Hand and Body Pose Priors
By: Kaustubh Kundu, Hrishav Bakul Barua, Lucy Robertson-Bell, et al.
Sign language generation is increasingly dominated by data-driven generative methods, which require vast amounts of precise 2D and 3D human pose data to achieve acceptable generation quality. However, most current sign language datasets are video-based, limited to automatically estimated 2D human poses (i.e., keypoints), and lack accurate 3D information. Furthermore, existing state-of-the-art methods for automatic 3D human pose estimation from sign language videos are prone to self-occlusion, noise, and motion blur, resulting in poor reconstruction quality. In response, we introduce DexAvatar, a novel framework that reconstructs biomechanically accurate fine-grained hand articulations and body movements from in-the-wild monocular sign language videos, guided by learned 3D hand and body priors. DexAvatar achieves strong performance on the SGNify motion capture dataset, the only benchmark available for this task, improving body and hand pose estimation by 35.11% over the state of the art. The official website of this work is: https://github.com/kaustesseract/DexAvatar.
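To make the "prior-guided reconstruction" idea concrete, below is a minimal PyTorch sketch of optimization-based pose fitting under a learned prior. It is not the authors' implementation: the linear pose model, the Gaussian prior, the orthographic camera, and all names and loss weights are hypothetical stand-ins for a parametric body/hand model (e.g., SMPL-X/MANO) and a learned pose prior.

    # Hypothetical sketch of prior-guided pose fitting (not the DexAvatar code).
    # Idea: optimize pose parameters so projected 3D joints match detected 2D
    # keypoints, while a prior term keeps the pose biomechanically plausible.
    import torch

    def project(joints_3d: torch.Tensor) -> torch.Tensor:
        """Toy orthographic camera: drop the depth coordinate."""
        return joints_3d[..., :2]

    class ToyPoseModel(torch.nn.Module):
        """Stand-in for a parametric body/hand model (e.g., SMPL-X/MANO):
        maps pose parameters to 3D joint locations via a fixed linear basis."""
        def __init__(self, n_params: int = 63, n_joints: int = 21):
            super().__init__()
            self.register_buffer("basis", torch.randn(n_joints * 3, n_params) * 0.1)

        def forward(self, pose: torch.Tensor) -> torch.Tensor:
            return (self.basis @ pose).view(-1, 3)

    def prior_energy(pose: torch.Tensor) -> torch.Tensor:
        """Stand-in for a learned pose prior (e.g., a VAE latent Gaussian):
        penalizes poses far from the plausible-pose manifold."""
        return (pose ** 2).sum()

    model = ToyPoseModel()
    keypoints_2d = torch.randn(21, 2)          # detected 2D keypoints (dummy data)
    confidence = torch.rand(21)                # per-keypoint detector confidence
    pose = torch.zeros(63, requires_grad=True) # pose parameters to optimize

    opt = torch.optim.Adam([pose], lr=0.05)
    for step in range(200):
        opt.zero_grad()
        joints_2d = project(model(pose))
        # Confidence-weighted reprojection loss plus prior regularization;
        # the 1e-2 weight is an arbitrary choice for this toy example.
        loss = (confidence[:, None] * (joints_2d - keypoints_2d) ** 2).sum() \
               + 1e-2 * prior_energy(pose)
        loss.backward()
        opt.step()

Weighting the reprojection term by detector confidence is one common way such fitting pipelines cope with the occlusion and motion-blur failure modes mentioned above: unreliable keypoints contribute less, and the prior fills in plausible structure where the 2D evidence is weak.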
Similar Papers
Text-Driven 3D Hand Motion Generation from Sign Language Data
CV and Pattern Recognition
Generates 3D hand motion from text descriptions, trained on sign language data.
Avatar4D: Synthesizing Domain-Specific 4D Humans for Real-World Pose Estimation
CV and Pattern Recognition
Synthesizes realistic virtual humans (e.g., in sports videos) to improve real-world pose estimation.
ViSA: 3D-Aware Video Shading for Real-Time Upper-Body Avatar Creation
CV and Pattern Recognition
Creates realistic 3D upper-body avatars from a single image in real time.