Score: 1

Lang2Motion: Bridging Language and Motion through Joint Embedding Spaces

Published: December 11, 2025 | arXiv ID: 2512.10617v1

By: Bishoy Galoaa, Xiangyu Bai, Sarah Ostadabbas

Potential Business Impact:

Turns written descriptions into realistic motion paths for arbitrary objects, from animated media to robot movement.

Business Areas:
Motion Capture, Media and Entertainment, Video

We present Lang2Motion, a framework for language-guided point trajectory generation by aligning motion manifolds with joint embedding spaces. Unlike prior work focusing on human motion or video synthesis, we generate explicit trajectories for arbitrary objects using motion extracted from real-world videos via point tracking. Our transformer-based auto-encoder learns trajectory representations through dual supervision: textual motion descriptions and rendered trajectory visualizations, both mapped through CLIP's frozen encoders. Lang2Motion achieves 34.2% Recall@1 on text-to-trajectory retrieval, outperforming video-based methods by 12.5 points, and improves motion accuracy by 33-52% (12.4 ADE vs 18.3-25.3) compared to video generation baselines. We demonstrate 88.3% Top-1 accuracy on human action recognition despite training only on diverse object motions, showing effective transfer across motion domains. Lang2Motion supports style transfer, semantic interpolation, and latent-space editing through CLIP-aligned trajectory representations.
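
The abstract describes the method only at a high level, so the following is a minimal, illustrative PyTorch sketch of what a dual-supervision objective of this kind could look like: a small transformer auto-encoder over point trajectories whose latent is regularized toward two frozen CLIP embeddings, one of the text description and one of a rendered trajectory image. All module sizes, loss weights, and the `clip_text_emb`/`clip_img_emb` stand-ins are assumptions for illustration, not the authors' implementation; an ADE helper (the average displacement error metric behind the 12.4 figure) is included.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TrajectoryAutoEncoder(nn.Module):
    """Toy stand-in for the paper's transformer auto-encoder.

    Encodes a point trajectory (T timesteps x 2 coords) into a single
    latent sized to match CLIP's 512-d embedding space, then decodes
    it back. All hyperparameters here are illustrative guesses.
    """

    def __init__(self, traj_len: int = 32, d_model: int = 128, latent_dim: int = 512):
        super().__init__()
        self.input_proj = nn.Linear(2, d_model)          # (x, y) -> d_model
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.to_latent = nn.Linear(d_model, latent_dim)  # pooled token -> CLIP-sized latent
        self.decoder = nn.Linear(latent_dim, traj_len * 2)
        self.traj_len = traj_len

    def forward(self, traj: torch.Tensor):
        # traj: (B, T, 2)
        h = self.encoder(self.input_proj(traj))          # (B, T, d_model)
        z = self.to_latent(h.mean(dim=1))                # mean-pool over time
        recon = self.decoder(z).view(-1, self.traj_len, 2)
        return z, recon


def ade(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """Average displacement error: mean L2 distance per trajectory point."""
    return (pred - gt).norm(dim=-1).mean()


def dual_supervision_loss(z, recon, traj, clip_text_emb, clip_img_emb,
                          w_text=1.0, w_img=1.0):
    """Reconstruction plus cosine alignment to two frozen CLIP embeddings.

    In the real system, clip_text_emb / clip_img_emb would come from
    CLIP's frozen text encoder (motion description) and image encoder
    (rendered trajectory plot); here they are just given tensors.
    """
    rec = F.mse_loss(recon, traj)
    align_t = 1 - F.cosine_similarity(z, clip_text_emb, dim=-1).mean()
    align_i = 1 - F.cosine_similarity(z, clip_img_emb, dim=-1).mean()
    return rec + w_text * align_t + w_img * align_i


if __name__ == "__main__":
    model = TrajectoryAutoEncoder()
    traj = torch.randn(4, 32, 2)                        # batch of 4 dummy trajectories
    text_e = F.normalize(torch.randn(4, 512), dim=-1)   # stand-ins for CLIP outputs
    img_e = F.normalize(torch.randn(4, 512), dim=-1)
    z, recon = model(traj)
    loss = dual_supervision_loss(z, recon, traj, text_e, img_e)
    print(f"loss={loss.item():.3f}  ADE={ade(recon, traj).item():.3f}")
```

Under a setup like this, text-to-trajectory retrieval reduces to nearest-neighbor search: embed the query with CLIP's frozen text encoder and rank stored trajectory latents by cosine similarity, which is presumably how the Recall@1 numbers above are computed.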

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
11 pages

Category
Computer Science:
Computer Vision and Pattern Recognition