PersonaBooth: Personalized Text-to-Motion Generation
By: Boeun Kim, Hea In Jeong, JungHoon Sung, and more
Potential Business Impact:
Creates unique character movements from text.
This paper introduces Motion Personalization, a new task that generates personalized motions aligned with text descriptions, using several basic motions containing a persona. To support this novel task, we introduce a new large-scale motion dataset called PerMo (PersonaMotion), which captures the unique personas of multiple actors. We also propose PersonaBooth, a multi-modal finetuning method for a pretrained motion diffusion model. PersonaBooth addresses two main challenges: i) a significant distribution gap between the persona-focused PerMo dataset and the pretraining datasets, which lack persona-specific data, and ii) the difficulty of capturing a consistent persona from motions that vary in content (action type). To tackle the dataset distribution gap, we introduce a persona token to accept new persona features and perform multi-modal adaptation for both text and visuals during finetuning. To capture a consistent persona, we incorporate a contrastive learning technique to enhance intra-cohesion among samples with the same persona. Furthermore, we introduce a context-aware fusion mechanism to maximize the integration of persona cues from multiple input motions. PersonaBooth outperforms state-of-the-art motion style transfer methods, establishing a new benchmark for motion personalization.
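The abstract mentions a contrastive learning technique that enhances intra-cohesion among samples sharing a persona. The paper's exact loss is not given here, so the following is a minimal sketch of a generic supervised-contrastive objective over persona labels (function name, temperature, and data are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def intra_persona_contrastive_loss(embeddings, persona_ids, temperature=0.1):
    """SupCon-style sketch: pulls together motion embeddings that share a
    persona label and pushes apart those with different personas.
    (Hypothetical stand-in for the paper's intra-cohesion loss.)"""
    # L2-normalize embeddings so the dot product is cosine similarity
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    n = len(persona_ids)
    loss, terms = 0.0, 0
    for i in range(n):
        # drop self-similarity before the softmax
        logits = np.delete(sim[i], i)
        labels = np.delete(persona_ids, i)
        log_prob = logits - np.log(np.exp(logits).sum())
        pos = log_prob[labels == persona_ids[i]]  # same-persona samples
        if len(pos) > 0:
            loss += -pos.mean()
            terms += 1
    return loss / max(terms, 1)

# toy example: two personas, two motion embeddings each
rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 8))
ids = np.array([0, 0, 1, 1])
loss_value = intra_persona_contrastive_loss(emb, ids)
```

Minimizing such a loss would encourage a shared persona representation across motions with different action content, which is the stated goal of the intra-cohesion term.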
Similar Papers
MotionPersona: Characteristics-aware Locomotion Control
Graphics
Makes characters move like real people.
PersonaAnimator: Personalized Motion Transfer from Unconstrained Videos
CV and Pattern Recognition
Makes characters move like real people in videos.
Text-driven Motion Generation: Overview, Challenges and Directions
CV and Pattern Recognition
Lets computers make characters move from words.