Score: 2

Learning Human Motion with Temporally Conditional Mamba

Published: October 14, 2025 | arXiv ID: 2510.12573v1

By: Quang Nguyen, Tri Le, Baoru Huang and more

Potential Business Impact:

Makes computer-made people move like real humans.

Business Areas:
Motion Capture, Media and Entertainment, Video

Learning human motion from a time-dependent input signal is a challenging yet impactful task with many applications. The goal is to generate or estimate human movement that consistently reflects the temporal patterns of the conditioning inputs. Existing methods typically rely on cross-attention mechanisms to fuse the condition with motion. However, this approach primarily captures global interactions and struggles to maintain step-by-step temporal alignment. To address this limitation, we introduce Temporally Conditional Mamba, a new Mamba-based model for human motion generation. Our approach integrates conditional information into the recurrent dynamics of the Mamba block, enabling better temporally aligned motion. To validate the effectiveness of our method, we evaluate it on a variety of human motion tasks. Extensive experiments demonstrate that our model significantly improves temporal alignment, motion realism, and condition consistency over state-of-the-art approaches. Our project page is available at https://zquang2202.github.io/TCM.
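The core idea in the abstract — injecting the condition signal into the recurrence itself rather than fusing it via cross-attention — can be illustrated with a toy, selective-SSM-style scan. This is a hypothetical sketch, not the authors' actual TCM architecture: all weights are random stand-ins for learned parameters, and the way the condition modulates the step size and input matrix is an assumption for illustration only.

```python
import numpy as np

def conditional_scan(x, c, d_state=8, seed=0):
    """Toy sketch of a recurrence whose per-step dynamics depend on a
    time-aligned condition c (hypothetical, not the paper's exact model).
    x: (T, d) motion features; c: (T, d_cond) condition signal.
    Returns: (T, d) output sequence."""
    rng = np.random.default_rng(seed)
    T, d = x.shape
    d_cond = c.shape[1]
    # Random stand-ins for learned projection weights.
    W_dt = rng.standard_normal((d + d_cond, 1)) * 0.1   # step size from [x_t, c_t]
    W_B = rng.standard_normal((d_cond, d_state)) * 0.1  # condition -> input gate
    W_x = rng.standard_normal((d, d_state)) * 0.1       # motion -> state input
    W_C = rng.standard_normal((d_state, d)) * 0.1       # state -> output readout
    A = -np.exp(rng.standard_normal(d_state) * 0.1)     # stable (negative) dynamics

    h = np.zeros(d_state)
    ys = []
    for t in range(T):
        u = np.concatenate([x[t], c[t]])
        dt = np.log1p(np.exp(u @ W_dt))      # softplus step size, condition-aware
        Abar = np.exp(dt * A)                # discretized per-step decay
        Bt = 1.0 + np.tanh(c[t] @ W_B)       # condition-modulated input matrix
        h = Abar * h + dt * Bt * (x[t] @ W_x)  # condition enters the recurrence
        ys.append(h @ W_C)
    return np.stack(ys)
```

Because the condition participates at every step of the scan (here through `dt` and `Bt`), the state update stays locally aligned with the conditioning signal, in contrast to cross-attention, which mixes all timesteps globally.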

Country of Origin
πŸ‡¦πŸ‡Ή πŸ‡ΈπŸ‡¬ πŸ‡ΊπŸ‡Έ Austria, Singapore, United States

Page Count
10 pages

Category
Computer Science:
CV and Pattern Recognition