From Sound to Sight: Towards AI-authored Music Videos
By: Leo Vitasovic, Stella Graßhof, Agnes Mercedes Kloft, and more
Potential Business Impact:
Makes music videos automatically from any song.
Conventional music visualisation systems rely on handcrafted, ad hoc transformations of shapes and colours that offer only limited expressiveness. We propose two novel pipelines for automatically generating music videos from any user-specified vocal or instrumental song using off-the-shelf deep learning models. Inspired by the manual workflows of music video producers, we investigate how well latent feature-based techniques can analyse audio to detect musical qualities, such as emotional cues and instrumental patterns, and distil them into textual scene descriptions using a language model. We then employ a generative model to produce the corresponding video clips. To assess the generated videos, we identify several critical aspects, then design and conduct a preliminary user evaluation that demonstrates storytelling potential, visual coherence and emotional alignment with the music. Our findings underscore the potential of latent feature techniques and deep generative models to expand music visualisation beyond traditional approaches.
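To make the three-stage pipeline in the abstract concrete, here is a minimal sketch of the same structure: audio analysis, language-model scene description, then text-to-video generation. The specific choices are illustrative assumptions, not the authors' setup: librosa for audio features, a Hugging Face text-generation checkpoint (gpt2, a placeholder) for the language model, and the damo-vilab/text-to-video-ms-1.7b diffusion checkpoint via diffusers for the generative model. The 10-second segmentation and the RMS-based mood heuristic are likewise simplifications for illustration.

```python
import librosa
import torch
from transformers import pipeline as hf_pipeline
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

def describe_segment(y, sr):
    """Distil coarse musical qualities of one audio segment into text."""
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)   # rhythmic pattern
    energy = librosa.feature.rms(y=y).mean()         # rough loudness proxy
    mood = "calm" if energy < 0.05 else "energetic"  # naive emotional cue
    return f"a {mood} scene paced at about {float(tempo):.0f} BPM"

# 1) Audio analysis: split the song into 10-second segments and extract cues.
y, sr = librosa.load("song.mp3")
segments = [y[i:i + 10 * sr] for i in range(0, len(y), 10 * sr)]
cues = [describe_segment(seg, sr) for seg in segments]

# 2) Language model: expand each cue into a textual scene description.
llm = hf_pipeline("text-generation", model="gpt2")   # placeholder checkpoint
scenes = [llm(f"A music-video shot showing {cue}:", max_new_tokens=40)[0]
          ["generated_text"] for cue in cues]

# 3) Generative model: render one short clip per scene description.
t2v = DiffusionPipeline.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16).to("cuda")
for i, scene in enumerate(scenes):
    frames = t2v(scene, num_frames=24).frames[0]     # frames of one clip
    export_to_video(frames, f"clip_{i:03d}.mp4")     # concatenate afterwards
```

A real system would replace the heuristic cues with learned latent features and condition the video model on timing so that clips cut on the beat; the sketch only shows how the stages hand data to one another.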
Similar Papers
YingVideo-MV: Music-Driven Multi-Stage Video Generation
CV and Pattern Recognition
Makes music videos with moving cameras automatically.
Enhancing Video Music Recommendation with Transformer-Driven Audio-Visual Embeddings
Multimedia
Finds the perfect music for any video automatically.
Zero-Effort Image-to-Music Generation: An Interpretable RAG-based VLM Approach
Sound
Turns pictures into music with explanations.