OmniHuman-1.5: Instilling an Active Mind in Avatars via Cognitive Simulation
By: Jianwen Jiang, Weihong Zeng, Zerong Zheng, and more
Potential Business Impact:
Makes video characters act like they have real feelings.
Existing video avatar models can produce fluid human animations, yet they struggle to move beyond mere physical likeness to capture a character's authentic essence. Their motions typically synchronize with low-level cues like audio rhythm, lacking a deeper semantic understanding of emotion, intent, or context. To bridge this gap, we propose a framework designed to generate character animations that are not only physically plausible but also semantically coherent and expressive. Our model, OmniHuman-1.5, is built upon two key technical contributions. First, we leverage Multimodal Large Language Models to synthesize a structured textual representation of conditions that provides high-level semantic guidance. This guidance steers our motion generator beyond simplistic rhythmic synchronization, enabling the production of actions that are contextually and emotionally resonant. Second, to ensure the effective fusion of these multimodal inputs and mitigate inter-modality conflicts, we introduce a specialized Multimodal DiT architecture with a novel Pseudo Last Frame design. The synergy of these components allows our model to accurately interpret the joint semantics of audio, images, and text, thereby generating motions that are deeply coherent with the character, scene, and linguistic content. Extensive experiments demonstrate that our model achieves leading performance across a comprehensive set of metrics, including lip-sync accuracy, video quality, motion naturalness, and semantic consistency with textual prompts. Furthermore, our approach shows remarkable extensibility to complex scenarios, such as those involving multi-person and non-human subjects. Homepage: https://omnihuman-lab.github.io/v1_5/
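The abstract describes the Multimodal DiT and its Pseudo Last Frame design only at a high level, so the following is a minimal, hypothetical PyTorch sketch of one way such fusion could work: the reference-image latent is appended to the noisy video latent sequence as an extra "pseudo" frame (so identity flows through ordinary self-attention), while audio tokens and the MLLM-derived structured text guidance are injected via cross-attention. All names here (PseudoLastFrameDiTBlock, ref_token, cond_tokens) are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn as nn


class PseudoLastFrameDiTBlock(nn.Module):
    """Illustrative transformer block for a multimodal DiT.

    The reference-image latent is appended to the noisy video latents as a
    "pseudo last frame", so identity information propagates through plain
    self-attention; audio tokens and MLLM-derived text guidance enter via
    cross-attention. A sketch of the idea, not the paper's code.
    """

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm3 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, video_tokens, ref_token, cond_tokens):
        # Append the reference-image latent as a pseudo last frame.
        x = torch.cat([video_tokens, ref_token], dim=1)
        h = self.norm1(x)
        x = x + self.self_attn(h, h, h, need_weights=False)[0]
        # Fuse audio + structured text guidance via cross-attention.
        x = x + self.cross_attn(self.norm2(x), cond_tokens, cond_tokens,
                                need_weights=False)[0]
        x = x + self.mlp(self.norm3(x))
        # Drop the pseudo frame: it conditions generation but is never output.
        return x[:, : video_tokens.shape[1]]


block = PseudoLastFrameDiTBlock(dim=512)
video = torch.randn(2, 16, 512)  # 16 noisy video latent frames per sample
ref = torch.randn(2, 1, 512)     # reference-image latent
cond = torch.randn(2, 32, 512)   # audio tokens + MLLM text-guidance tokens
out = block(video, ref, cond)    # shape (2, 16, 512); pseudo frame removed
```

One plausible motivation for placing the reference as a trailing frame rather than a separate condition branch is that it lets identity compete on equal footing with the audio and text signals inside attention, which could reduce the inter-modality conflicts the abstract mentions; this reading is an inference from the abstract, not a confirmed detail.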
Similar Papers
Towards Interactive Intelligence for Digital Humans
CV and Pattern Recognition
Makes digital people act and learn like real ones.
Soul: Breathe Life into Digital Human for High-fidelity Long-term Multimodal Animation
CV and Pattern Recognition
Makes still pictures talk and move like real people.
InfinityHuman: Towards Long-Term Audio-Driven Human Animation
CV and Pattern Recognition
Makes talking people in videos look real.