TSTMotion: Training-free Scene-aware Text-to-motion Generation
By: Ziyan Guo, Haoxuan Qu, Hossein Rahmani, and more
Potential Business Impact:
Makes characters move realistically in any scene.
Text-to-motion generation has recently garnered significant research interest, primarily focusing on generating human motion sequences against blank backgrounds. However, human motions commonly occur within diverse 3D scenes, which has prompted exploration into scene-aware text-to-motion generation. Existing scene-aware methods, however, often rely on large-scale ground-truth motion sequences captured in diverse 3D scenes, which are expensive to collect. To mitigate this challenge, we propose the first Training-free Scene-aware Text-to-Motion framework, dubbed TSTMotion, which efficiently equips pre-trained blank-background motion generators with scene awareness. Specifically, conditioned on the given 3D scene and text description, we use foundation models to reason about, predict, and validate a scene-aware motion guidance. This guidance is then incorporated into a blank-background motion generator through two modifications, yielding scene-aware text-driven motion sequences. Extensive experiments demonstrate the efficacy and generalizability of our framework. Code is available on the project page: https://tstmotion.github.io/.
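To make the described pipeline more concrete, here is a minimal sketch of how a training-free, scene-aware text-to-motion flow might be wired together. All class and function names below are illustrative placeholders (not the actual TSTMotion API), and the foundation-model reasoning and motion-generator steps are stubbed out with dummy logic:

```python
# Hypothetical sketch of a training-free, scene-aware text-to-motion pipeline.
# Every name here is an illustrative placeholder, not the paper's implementation.

from dataclasses import dataclass


@dataclass
class MotionGuidance:
    """Scene-aware guidance, e.g. a target trajectory plus contact constraints."""
    trajectory: list   # sequence of 3D waypoints in the scene
    constraints: dict  # e.g. {"avoid": [...], "contact": [...]}


def reason_guidance(scene_description: str, text_prompt: str) -> MotionGuidance:
    """Stand-in for querying a foundation model to propose where and how the
    motion should unfold in the scene; the returned values are dummies."""
    return MotionGuidance(
        trajectory=[(0.0, 0.0, 0.0), (1.0, 0.0, 2.0)],
        constraints={"avoid": ["table"], "contact": ["chair"]},
    )


def validate_guidance(guidance: MotionGuidance, scene_description: str) -> bool:
    """Stand-in for validating the proposed guidance against the scene
    (collision-free path, reachable contacts); here it is a trivial check."""
    return len(guidance.trajectory) > 1


def generate_motion(text_prompt: str, guidance: MotionGuidance) -> list:
    """Stand-in for a pre-trained blank-background motion generator whose
    sampling is steered by the scene-aware guidance; here it simply places
    one placeholder frame per waypoint."""
    return [{"root_position": p, "pose": None} for p in guidance.trajectory]


def scene_aware_text_to_motion(scene_description: str, text_prompt: str) -> list:
    guidance = reason_guidance(scene_description, text_prompt)
    if not validate_guidance(guidance, scene_description):
        # Re-query the foundation model if validation fails.
        guidance = reason_guidance(scene_description, text_prompt)
    return generate_motion(text_prompt, guidance)


if __name__ == "__main__":
    motion = scene_aware_text_to_motion(
        "a living room with a sofa and a table",
        "walk to the sofa and sit down",
    )
    print(f"Generated {len(motion)} motion frames")
```

The key design point this sketch tries to capture is that no component is trained: the foundation model supplies and checks the scene-aware guidance, and the pre-trained motion generator consumes it at inference time.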
Similar Papers
Text-driven Motion Generation: Overview, Challenges and Directions
CV and Pattern Recognition
Lets computers make characters move from words.
SimMotionEdit: Text-Based Human Motion Editing with Motion Similarity Prediction
CV and Pattern Recognition
Makes animated characters move like you describe.
MOVi: Training-free Text-conditioned Multi-Object Video Generation
CV and Pattern Recognition
Makes videos show many moving objects correctly.