Hand-Aware Egocentric Motion Reconstruction with Sequence-Level Context
By: Kyungwon Cho, Hanbyul Joo
Egocentric vision systems are becoming widely available, creating new opportunities for human-computer interaction. A core challenge is estimating the wearer's full-body motion from first-person videos, which is crucial for understanding human behavior. However, the task is difficult because most body parts are invisible from the egocentric view. Prior approaches either rely mainly on head trajectories, which leads to ambiguity, or assume continuously tracked hands, which is unrealistic for lightweight egocentric devices. In this work, we present HaMoS, the first hand-aware, sequence-level diffusion framework that directly conditions on both the head trajectory and hand cues that are only intermittently visible due to field-of-view limitations and occlusions, as on real-world egocentric devices. To overcome the lack of datasets pairing diverse camera views with human motion, we introduce a novel augmentation method that models these real-world conditions. We also show that sequence-level contexts such as body shape and field of view are crucial for accurate motion reconstruction, and we employ local attention to infer long sequences efficiently. Experiments on public benchmarks show that our method achieves state-of-the-art accuracy and temporal smoothness, marking a practical step toward reliable in-the-wild egocentric 3D motion understanding.
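To make the intermittent-visibility setting concrete, the sketch below is an illustrative example only, not the authors' released code: the function and parameter names (simulate_hand_dropout, fov_deg, occlusion_prob) are assumptions. It shows one simple way hand observations could be dropped whenever the hand leaves an assumed head-camera field of view or is randomly occluded, producing masked positions and a per-frame visibility flag.

import numpy as np

def simulate_hand_dropout(hand_pos_cam, fov_deg=90.0, occlusion_prob=0.1, rng=None):
    """Simulate intermittent hand observations from an egocentric camera.
    hand_pos_cam: (T, 3) hand positions in the head-camera frame (z forward).
    Returns (masked_pos, visible), where visible is a (T,) boolean mask.
    NOTE: illustrative sketch under assumed conventions, not the paper's code."""
    rng = rng if rng is not None else np.random.default_rng()
    half_fov = np.deg2rad(fov_deg) / 2.0
    x, y, z = hand_pos_cam[:, 0], hand_pos_cam[:, 1], hand_pos_cam[:, 2]
    in_front = z > 1e-6  # hand must be in front of the camera
    ang_x = np.abs(np.arctan2(x, np.maximum(z, 1e-6)))
    ang_y = np.abs(np.arctan2(y, np.maximum(z, 1e-6)))
    in_fov = in_front & (ang_x < half_fov) & (ang_y < half_fov)
    occluded = rng.random(hand_pos_cam.shape[0]) < occlusion_prob  # random occlusion
    visible = in_fov & ~occluded
    # Frames without a visible hand carry no cue: zero them out and return the
    # mask so a conditional model can be trained on intermittent observations.
    masked_pos = np.where(visible[:, None], hand_pos_cam, 0.0)
    return masked_pos, visible

In such a setup, the masked hand positions and the visibility mask would be supplied as per-frame conditions alongside the head trajectory, so a sequence-level model can exploit hand cues when they are present and fall back on head-only reasoning when they are not.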
Similar Papers
Flowing from Reasoning to Motion: Learning 3D Hand Trajectory Prediction from Egocentric Human Interaction Videos
CV and Pattern Recognition
Helps robots anticipate human hand movements by learning 3D hand trajectory prediction from egocentric interaction videos.
Uni-Hand: Universal Hand Motion Forecasting in Egocentric Views
CV and Pattern Recognition
Forecasts hand motion in egocentric views and pinpoints the moments when hands contact objects.
UniEgoMotion: A Unified Model for Egocentric Motion Reconstruction, Forecasting, and Generation
CV and Pattern Recognition
A single model that reconstructs, forecasts, and generates egocentric human motion.