VidCLearn: A Continual Learning Approach for Text-to-Video Generation
By: Luca Zanchetta, Lorenzo Papa, Luca Maiano, and more
Potential Business Impact:
Teaches AI to make new kinds of videos without forgetting how to make earlier ones.
Text-to-video generation is an emerging field in generative AI, enabling the creation of realistic, semantically accurate videos from text prompts. While current models achieve impressive visual quality and alignment with input text, they typically rely on static knowledge, making it difficult to incorporate new data without retraining from scratch. To address this limitation, we propose VidCLearn, a continual learning framework for diffusion-based text-to-video generation. VidCLearn features a student-teacher architecture where the student model is incrementally updated with new text-video pairs, and the teacher model helps preserve previously learned knowledge through generative replay. Additionally, we introduce a novel temporal consistency loss to enhance motion smoothness and a video retrieval module to provide structural guidance at inference. Our architecture is also designed to be more computationally efficient than existing models while retaining satisfactory generation performance. Experimental results show VidCLearn's superiority over baseline methods in terms of visual quality, semantic alignment, and temporal coherence.
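The interplay of generative replay and the temporal consistency term can be illustrated with a minimal PyTorch sketch. Everything here is an assumption for illustration: the `generate` and `diffusion_loss` methods, the replay-prompt buffer, and the weighting `lambda_tc` are hypothetical stand-ins, since the abstract does not specify VidCLearn's actual interfaces or loss formulation.

```python
import torch


def temporal_consistency_loss(frames: torch.Tensor) -> torch.Tensor:
    """Penalize abrupt changes between consecutive generated frames.

    frames: tensor of shape (batch, time, channels, height, width).
    Smooth motion keeps successive-frame differences small without
    forcing frames to be identical.
    """
    diffs = frames[:, 1:] - frames[:, :-1]
    return diffs.abs().mean()


def continual_step(student, teacher, new_batch, replay_prompts, lambda_tc=0.1):
    """One incremental update mixing new data with generative replay.

    student, teacher: diffusion-based text-to-video models; the teacher
    is frozen and stands in for the previous state of the student.
    new_batch: (prompts, videos) from the current task.
    replay_prompts: prompts from earlier tasks; the teacher regenerates
    the corresponding videos, so old knowledge is rehearsed without
    storing the original training data.
    """
    prompts, videos = new_batch

    # Generative replay: pseudo-targets for old prompts come from the teacher.
    with torch.no_grad():
        replay_videos = teacher.generate(replay_prompts)

    all_prompts = list(prompts) + list(replay_prompts)
    all_videos = torch.cat([videos, replay_videos], dim=0)

    # Standard diffusion denoising objective over new and replayed pairs
    # (hypothetical method name; the real objective is model-specific).
    denoise_loss = student.diffusion_loss(all_prompts, all_videos)

    # Temporal term on a short student sample to encourage smooth motion.
    sample = student.generate(prompts[:1])
    tc_loss = temporal_consistency_loss(sample)

    loss = denoise_loss + lambda_tc * tc_loss
    loss.backward()
    return loss.item()
```

In this sketch the frozen teacher rehearses earlier tasks on the student's behalf, while the temporal term penalizes frame-to-frame jumps; the paper's video retrieval module, which supplies structural guidance at inference time, is omitted here.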
Similar Papers
Bridging Text and Video Generation: A Survey
Graphics
Makes videos from written words.
Bring Your Dreams to Life: Continual Text-to-Video Customization
CV and Pattern Recognition
Teaches AI to make videos of new things.
Continual Learning for Image Captioning through Improved Image-Text Alignment
CV and Pattern Recognition
Teaches computers to describe new pictures over time.