PartMotionEdit: Fine-Grained Text-Driven 3D Human Motion Editing via Part-Level Modulation
By: Yujie Yang, Zhichao Zhang, Jiazhou Chen, and more
Existing text-driven 3D human motion editing methods have demonstrated significant progress, but they still struggle to precisely control detailed, part-specific motions because of their global modeling nature. In this paper, we propose PartMotionEdit, a novel fine-grained motion editing framework that operates via part-level semantic modulation. The core of PartMotionEdit is a Part-aware Motion Modulation (PMM) module, which builds upon a predefined five-part body decomposition. PMM dynamically predicts time-varying modulation weights for each body part, enabling precise and interpretable editing of local motions. To guide the training of PMM, we also introduce a part-level similarity-curve supervision mechanism enhanced with dual-layer normalization. This mechanism helps PMM learn semantically consistent and editable distributions across all body parts. Furthermore, we design a Bidirectional Motion Interaction (BMI) module, which leverages bidirectional cross-modal attention to achieve more accurate alignment between textual instructions and motion semantics. Extensive quantitative and qualitative evaluations on a well-known benchmark demonstrate that PartMotionEdit outperforms state-of-the-art methods.
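To make the part-level modulation idea concrete, here is a minimal, hypothetical sketch of how time-varying per-part weights might scale motion features under a five-part body decomposition. All names (`PARTS`, `part_aware_modulation`) and the softmax weighting are illustrative assumptions, not the paper's released implementation.

```python
import numpy as np

# Illustrative five-part body decomposition (an assumption; the paper's
# exact part definition may differ).
PARTS = ["torso", "left_arm", "right_arm", "left_leg", "right_leg"]

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def part_aware_modulation(motion, weight_logits):
    """Sketch of part-level modulation.

    motion:        (T, 5, D) per-part motion features over T frames.
    weight_logits: (T, 5) predicted per-part logits for each frame.
    Returns the modulated features and the time-varying weights.
    """
    w = softmax(weight_logits, axis=-1)   # time-varying weights per part
    return motion * w[:, :, None], w      # scale each part's features

T, D = 8, 16
rng = np.random.default_rng(0)
motion = rng.standard_normal((T, len(PARTS), D))
logits = rng.standard_normal((T, len(PARTS)))
edited, w = part_aware_modulation(motion, logits)
print(edited.shape)  # each frame's parts are re-weighted independently
```

Because the weights vary over time and are normalized per frame, a text instruction targeting one part (e.g. "raise the left arm higher") can, in principle, concentrate editing on that part's features while leaving the others nearly unchanged.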
Similar Papers
FineMotion: A Dataset and Benchmark with both Spatial and Temporal Annotation for Fine-grained Motion Generation and Editing
CV and Pattern Recognition
Creates better animated people from text.
Dynamic Motion Blending for Versatile Motion Editing
CV and Pattern Recognition
Makes animated characters move how you describe.
Fg-T2M++: LLMs-Augmented Fine-Grained Text Driven Human Motion Generation
CV and Pattern Recognition
Makes computer animations move exactly as you describe.