Lang2Motion: Bridging Language and Motion through Joint Embedding Spaces
By: Bishoy Galoaa, Xiangyu Bai, Sarah Ostadabbas
Potential Business Impact:
Lets robots and other systems generate realistic object motion directly from written descriptions.
We present Lang2Motion, a framework for language-guided point trajectory generation that aligns motion manifolds with joint embedding spaces. Unlike prior work focused on human motion or video synthesis, we generate explicit trajectories for arbitrary objects using motion extracted from real-world videos via point tracking. Our transformer-based auto-encoder learns trajectory representations through dual supervision: textual motion descriptions and rendered trajectory visualizations, both mapped through CLIP's frozen encoders. Lang2Motion achieves 34.2% Recall@1 on text-to-trajectory retrieval, outperforming video-based methods by 12.5 points, and improves motion accuracy by 33-52% over video generation baselines, reducing average displacement error (ADE) from 18.3-25.3 to 12.4. We demonstrate 88.3% Top-1 accuracy on human action recognition despite training only on diverse object motions, showing effective transfer across motion domains. Lang2Motion also supports style transfer, semantic interpolation, and latent-space editing through its CLIP-aligned trajectory representations.
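To make the dual-supervision idea concrete, below is a minimal sketch, not the authors' released code, of how a trajectory auto-encoder's latent could be aligned to CLIP's frozen text and image encoders. The module name `TrajectoryAutoEncoder`, the loss weights, and the random stand-in for embeddings of rendered trajectory plots are illustrative assumptions; only the overall pattern (reconstruction loss plus cosine alignment to frozen CLIP embeddings) follows the abstract.

```python
# Sketch only: names and hyperparameters are assumptions, not the Lang2Motion implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import CLIPModel, CLIPTokenizer

class TrajectoryAutoEncoder(nn.Module):
    """Transformer auto-encoder over point trajectories of shape (B, T, N, 2)."""
    def __init__(self, n_points=64, d_model=256, clip_dim=512, n_layers=4):
        super().__init__()
        self.in_proj = nn.Linear(n_points * 2, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=n_layers)
        dec_layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerEncoder(dec_layer, num_layers=n_layers)
        self.out_proj = nn.Linear(d_model, n_points * 2)
        self.to_clip = nn.Linear(d_model, clip_dim)      # map pooled latent into CLIP space

    def forward(self, traj):                              # traj: (B, T, N, 2)
        B, T, N, _ = traj.shape
        x = self.in_proj(traj.reshape(B, T, N * 2))       # (B, T, d_model)
        h = self.encoder(x)
        latent = h.mean(dim=1)                            # pooled trajectory latent
        recon = self.out_proj(self.decoder(h)).reshape(B, T, N, 2)
        return recon, self.to_clip(latent)

def dual_supervision_loss(recon, traj, z_clip, text_emb, img_emb, w_text=1.0, w_img=1.0):
    """Reconstruction + cosine alignment to frozen CLIP text/image embeddings."""
    rec = F.mse_loss(recon, traj)
    z = F.normalize(z_clip, dim=-1)
    l_text = 1 - F.cosine_similarity(z, F.normalize(text_emb, dim=-1)).mean()
    l_img = 1 - F.cosine_similarity(z, F.normalize(img_emb, dim=-1)).mean()
    return rec + w_text * l_text + w_img * l_img

# Usage sketch: CLIP stays frozen; only the auto-encoder is trained.
clip_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
for p in clip_model.parameters():
    p.requires_grad_(False)

model = TrajectoryAutoEncoder()
traj = torch.randn(2, 16, 64, 2)                          # (batch, time, points, xy)
texts = ["a ball bouncing to the right", "a leaf spiraling downward"]
tokens = tokenizer(texts, padding=True, return_tensors="pt")
text_emb = clip_model.get_text_features(**tokens)
# In the paper, img_emb would come from clip_model.get_image_features on rendered
# trajectory plots; a random stand-in keeps this sketch self-contained.
img_emb = torch.randn(2, 512)

recon, z_clip = model(traj)
loss = dual_supervision_loss(recon, traj, z_clip, text_emb, img_emb)
loss.backward()
```

Because the trajectory latent lands in CLIP's embedding space, text-to-trajectory retrieval, interpolation, and latent editing reduce to nearest-neighbor search and arithmetic on these aligned vectors.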
Similar Papers
Motion is the Choreographer: Learning Latent Pose Dynamics for Seamless Sign Language Generation
CV and Pattern Recognition
Creates sign language videos of anyone signing anything.
Generative Action Tell-Tales: Assessing Human Motion in Synthesized Videos
CV and Pattern Recognition
Checks if fake human videos look real.
FoundationMotion: Auto-Labeling and Reasoning about Spatial Movement in Videos
CV and Pattern Recognition
Teaches computers to understand how things move.