Multimodal Cinematic Video Synthesis Using Text-to-Image and Audio Generation Models
By: Sridhar S, Nithin A, Shakeel Rifath, and more
Potential Business Impact:
Makes movies from just words.
Advances in generative artificial intelligence have transformed multimedia creation, enabling automatic cinematic video synthesis from text inputs. This work describes a method for creating 60-second cinematic videos that combines Stable Diffusion for high-fidelity image synthesis, GPT-2 for narrative structuring, and a hybrid audio pipeline built on gTTS and YouTube-sourced music. A five-scene framework is augmented with linear frame interpolation, cinematic post-processing (e.g., sharpening), and audio-video synchronization to produce professional-quality results. The system was developed in a GPU-accelerated Google Colab environment using Python 3.11 and exposes a dual-mode Gradio interface (Simple and Advanced) supporting resolutions up to 1024x768 and frame rates of 15-30 FPS. Optimizations such as CUDA memory management and error handling ensure reliability. Experiments demonstrate strong visual quality, narrative coherence, and efficiency, advancing text-to-video synthesis for creative, educational, and industrial applications.
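The abstract outlines a keyframe-then-interpolate pipeline. The sketch below illustrates one plausible arrangement of the named components (Stable Diffusion keyframes, linear frame interpolation, gTTS narration, and audio-video muxing); the model ID, scene prompts, frame counts, and the moviepy-based assembly are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of the described pipeline, assuming a diffusers Stable Diffusion
# checkpoint, gTTS for narration, and moviepy 1.x for assembly. Prompts, model ID,
# and frame counts are placeholders, not the paper's configuration.
import numpy as np
import torch
from diffusers import StableDiffusionPipeline
from gtts import gTTS
from moviepy.editor import AudioFileClip, ImageSequenceClip

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Five scene prompts stand in for the GPT-2-structured narrative.
scene_prompts = [
    "a lone astronaut crossing a red desert, cinematic lighting",
    "a storm gathering over the desert at dusk, wide shot",
    "the astronaut discovering a glowing monolith, dramatic close-up",
    "the monolith opening, beams of light, epic scale",
    "the astronaut stepping into the light, silhouette, film grain",
]

# One keyframe per scene at the interface's maximum resolution.
keyframes = [
    np.array(pipe(p, height=768, width=1024).images[0]) for p in scene_prompts
]

def interpolate(frames, steps_between=24):
    """Linear cross-fade (pixel-wise blend) between consecutive keyframes."""
    out = []
    for a, b in zip(frames[:-1], frames[1:]):
        for t in np.linspace(0.0, 1.0, steps_between, endpoint=False):
            out.append(((1 - t) * a + t * b).astype(np.uint8))
    out.append(frames[-1])
    return out

frames = interpolate(keyframes, steps_between=24)

# Narration track via gTTS; background music would be mixed in similarly.
gTTS("A lone traveler crosses a world on the edge of change.").save("narration.mp3")

clip = ImageSequenceClip(frames, fps=24).set_audio(AudioFileClip("narration.mp3"))
clip.write_videofile("cinematic.mp4", codec="libx264", audio_codec="aac")
```

The frame counts above are kept small for readability; a full 60-second clip at 24 FPS needs about 1,440 frames, i.e., roughly 360 interpolated steps per transition across the four scene boundaries, with the diffusion model still invoked only once per scene.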
Similar Papers
3MDiT: Unified Tri-Modal Diffusion Transformer for Text-Driven Synchronized Audio-Video Generation
Multimedia
Makes videos and sounds match perfectly.
TA-V2A: Textually Assisted Video-to-Audio Generation
CV and Pattern Recognition
Makes videos talk with matching sounds.
Video-GPT via Next Clip Diffusion
CV and Pattern Recognition
Teaches computers to predict what happens next in videos.