Score: 2

MOVi: Training-free Text-conditioned Multi-Object Video Generation

Published: May 29, 2025 | arXiv ID: 2505.22980v1

By: Aimon Rahman, Jiang Liu, Ze Wang, and more

Potential Business Impact:

Enables generated videos to depict multiple distinct moving objects accurately.

Business Areas:
Motion Capture, Media and Entertainment, Video

Recent advances in diffusion-based text-to-video (T2V) models have demonstrated remarkable progress, but these models still struggle to generate videos with multiple objects. Most models fail to capture complex object interactions accurately, often treating some objects as static background elements and limiting their movement. They also frequently fail to generate the multiple distinct objects specified in the prompt, producing incorrect generations or mixed features across objects. In this paper, we present a novel training-free approach to multi-object video generation that leverages the open-world knowledge of diffusion models and large language models (LLMs). We use an LLM as the "director" of object trajectories, and apply those trajectories through noise re-initialization to achieve precise control over realistic movements. We further refine the generation process by manipulating the attention mechanism to better capture object-specific features and motion patterns and to prevent cross-object feature interference. Extensive experiments validate the effectiveness of our training-free approach in significantly enhancing the multi-object generation capabilities of existing video diffusion models, achieving a 42% absolute improvement in motion dynamics and object generation accuracy while maintaining high fidelity and motion smoothness.
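To give a concrete sense of the noise re-initialization idea the abstract describes, the sketch below copies a shared, object-specific noise patch along a trajectory (standing in for the LLM-provided one) across frames, so the initial latent noise is correlated along the object's motion path. The function name `reinit_noise`, the trajectory format, and the patch size are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def reinit_noise(frames, height, width, traj, patch=8, seed=0):
    """Sketch of trajectory-guided noise re-initialization.

    A single object-specific noise patch is stamped at the
    (assumed) LLM-provided location in each frame, so the
    diffusion model sees correlated initial noise along the
    motion path instead of independent per-frame noise.
    """
    rng = np.random.default_rng(seed)
    # independent base noise for every frame
    noise = rng.standard_normal((frames, height, width))
    # one shared patch of noise tied to the object
    obj_patch = rng.standard_normal((patch, patch))
    for f, (y, x) in enumerate(traj):
        # overwrite the base noise at the object's location
        noise[f, y:y + patch, x:x + patch] = obj_patch
    return noise

# a straight-line left-to-right trajectory over 4 frames
traj = [(0, 0), (0, 8), (0, 16), (0, 24)]
z = reinit_noise(frames=4, height=16, width=32, traj=traj)
```

In a real pipeline this tensor would replace the i.i.d. Gaussian latent fed to the first denoising step; the correlated patches bias the model toward moving a coherent object along the specified path.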

Repos / Data Links

Page Count
16 pages

Category
Computer Science:
CV and Pattern Recognition