Multimodal Cinematic Video Synthesis Using Text-to-Image and Audio Generation Models

Published: April 6, 2025 | arXiv ID: 2506.10005v1

By: Sridhar S, Nithin A, Shakeel Rifath, and others

Potential Business Impact:

Automatically generates short cinematic videos from text descriptions alone.

Business Areas:
Video Editing, Content and Publishing, Media and Entertainment, Video

Advances in generative artificial intelligence have transformed multimedia creation, enabling automatic cinematic video synthesis from text inputs. This work describes a method for creating 60-second cinematic videos that combines Stable Diffusion for high-fidelity image synthesis, GPT-2 for narrative structuring, and a hybrid audio pipeline using gTTS and YouTube-sourced music. The system builds each video around a five-scene framework, augmented by linear frame interpolation, cinematic post-processing (e.g., sharpening), and audio-video synchronization to produce professional-quality results. Developed in a GPU-accelerated Google Colab environment using Python 3.11, it offers a dual-mode Gradio interface (Simple and Advanced) supporting resolutions up to 1024x768 and frame rates of 15-30 FPS. Optimizations such as CUDA memory management and error handling ensure reliability. Experiments demonstrate strong visual quality, narrative coherence, and efficiency, advancing text-to-video synthesis for creative, educational, and industrial applications.
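The linear frame interpolation mentioned in the abstract can be sketched as a per-pixel blend between consecutive keyframes: each intermediate frame is a weighted average of the two scene images around it, smoothing the transition between generated stills. This is a minimal illustration, not the paper's actual implementation; the function name, frame sizes, and pixel values below are assumptions.

```python
import numpy as np

def interpolate_frames(frame_a, frame_b, num_intermediate):
    """Linearly blend two keyframes into evenly spaced intermediate frames.

    frame_a, frame_b: uint8 arrays of the same shape (e.g. H x W x 3 images).
    num_intermediate: how many in-between frames to generate.
    """
    frames = []
    for i in range(1, num_intermediate + 1):
        t = i / (num_intermediate + 1)  # blend weight in (0, 1)
        blended = (1.0 - t) * frame_a.astype(np.float32) + t * frame_b.astype(np.float32)
        frames.append(blended.astype(np.uint8))
    return frames

# Hypothetical 4x4 grayscale "scene" keyframes: black fading to gray.
a = np.zeros((4, 4), dtype=np.uint8)
b = np.full((4, 4), 200, dtype=np.uint8)
mid = interpolate_frames(a, b, 3)  # three in-between frames at t = 0.25, 0.5, 0.75
```

In a pipeline like the one described, blending between the five scene keyframes this way raises the effective frame rate (toward the stated 15-30 FPS) without generating every frame with the diffusion model.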

Country of Origin
🇮🇳 India

Page Count
11 pages

Category
Computer Science:
Computer Vision and Pattern Recognition