VidCLearn: A Continual Learning Approach for Text-to-Video Generation

Published: September 21, 2025 | arXiv ID: 2509.16956v1

By: Luca Zanchetta, Lorenzo Papa, Luca Maiano, and more

Potential Business Impact:

Lets AI video generators learn from new data without forgetting what they have already learned, avoiding costly retraining from scratch.

Business Areas:
Video Streaming Content and Publishing, Media and Entertainment, Video

Text-to-video generation is an emerging field in generative AI, enabling the creation of realistic, semantically accurate videos from text prompts. While current models achieve impressive visual quality and alignment with input text, they typically rely on static knowledge, making it difficult to incorporate new data without retraining from scratch. To address this limitation, we propose VidCLearn, a continual learning framework for diffusion-based text-to-video generation. VidCLearn features a student-teacher architecture where the student model is incrementally updated with new text-video pairs, and the teacher model helps preserve previously learned knowledge through generative replay. Additionally, we introduce a novel temporal consistency loss to enhance motion smoothness and a video retrieval module to provide structural guidance at inference. Our architecture is also designed to be more computationally efficient than existing models while retaining satisfactory generation performance. Experimental results show VidCLearn's superiority over baseline methods in terms of visual quality, semantic alignment, and temporal coherence.
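The abstract names three ingredients: a denoising loss on new text-video pairs, generative replay from a frozen teacher to preserve old knowledge, and a temporal consistency loss for smoother motion. Below is a minimal, hypothetical PyTorch-style sketch of how such pieces could combine in a single training step. The model interfaces (`student`, `teacher.sample`, `add_noise`), the x0-prediction parameterization, and the loss weights are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F


def temporal_consistency_loss(video: torch.Tensor) -> torch.Tensor:
    """Penalize abrupt changes between consecutive frames.

    video: tensor of shape (batch, frames, channels, height, width).
    """
    return F.mse_loss(video[:, 1:], video[:, :-1])


def continual_training_step(student, teacher, new_texts, new_videos,
                            replay_texts, add_noise,
                            lambda_replay=1.0, lambda_temporal=0.1):
    """One hypothetical continual-learning update in the style the abstract describes.

    `student` and `teacher` are assumed to be text-conditioned video diffusion
    models; `add_noise(video)` is assumed to return (noisy_video, timestep).
    """
    # (a) Reconstruction loss on the *new* text-video pairs: the student
    #     denoises a noised clip back toward the original.
    noisy, t = add_noise(new_videos)
    pred_new = student(noisy, t, new_texts)
    loss_new = F.mse_loss(pred_new, new_videos)

    # (b) Generative replay: the frozen teacher synthesizes clips for prompts
    #     from earlier data, and the student trains on them as pseudo-data,
    #     which is what keeps previously learned behaviour from being overwritten.
    with torch.no_grad():
        replay_videos = teacher.sample(replay_texts)  # assumed sampling API
    noisy_r, t_r = add_noise(replay_videos)
    pred_replay = student(noisy_r, t_r, replay_texts)
    loss_replay = F.mse_loss(pred_replay, replay_videos)

    # (c) Temporal consistency on the student's predictions, discouraging
    #     frame-to-frame jitter in the generated motion.
    loss_temporal = (temporal_consistency_loss(pred_new)
                     + temporal_consistency_loss(pred_replay))

    return loss_new + lambda_replay * loss_replay + lambda_temporal * loss_temporal
```

In this reading, the replay term plays the role of a distillation signal from the teacher, while the temporal term is a simple frame-difference penalty; the paper's actual loss definitions and weightings may differ.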

Country of Origin
🇮🇹 Italy

Page Count
13 pages

Category
Computer Science:
Computer Vision and Pattern Recognition