LongDiff: Training-Free Long Video Generation in One Go

Published: March 23, 2025 | arXiv ID: 2503.18150v1

By: Zhuoling Li, Hossein Rahmani, Qiuhong Ke, and more

Potential Business Impact:

Enables short-video generation tools to create long videos without retraining.

Business Areas:
Video Streaming Content and Publishing, Media and Entertainment, Video

Video diffusion models have recently achieved remarkable results in video generation. Despite their encouraging performance, most of these models are mainly designed and trained for short video generation, leading to challenges in maintaining temporal consistency and visual details in long video generation. In this paper, we propose LongDiff, a novel training-free method consisting of two carefully designed components, Position Mapping (PM) and Informative Frame Selection (IFS), to tackle two key challenges that hinder generalization from short to long video generation: temporal position ambiguity and information dilution. Our LongDiff unlocks the potential of off-the-shelf video diffusion models to achieve high-quality long video generation in one go. Extensive experiments demonstrate the efficacy of our method.
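The abstract names Position Mapping (PM) as a remedy for temporal position ambiguity but does not detail the mechanism here. As a purely illustrative sketch, one way such a scheme could work is to remap the frame indices of a long sequence into the positional range a short-video model saw during training; the function name, signature, and mapping rule below are all assumptions, not the paper's actual method.

```python
# Hypothetical sketch of a position-mapping idea: compress the frame
# indices of a long sequence into the position range [0, train_window)
# that a short-video diffusion model was exposed to during training.
# The linear-compression rule here is an assumption for illustration.

def map_positions(num_frames: int, train_window: int) -> list[int]:
    """Map long-sequence frame indices into [0, train_window)."""
    if num_frames <= train_window:
        # Sequence already fits the training window: identity mapping.
        return list(range(num_frames))
    # Linearly rescale so the last frame lands on the largest position
    # the model encountered during training.
    scale = (train_window - 1) / (num_frames - 1)
    return [round(i * scale) for i in range(num_frames)]

print(map_positions(8, 16))   # short clip: identity mapping
print(max(map_positions(64, 16)))  # long clip stays within the training range
```

Under this sketch, every frame of a 64-frame sequence receives a position a 16-frame model has already seen, which is one plausible way to avoid out-of-range positional inputs.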

Country of Origin
🇦🇺 Australia

Page Count
19 pages

Category
Computer Science:
Computer Vision and Pattern Recognition