Cross-Modal Learning for Music-to-Music-Video Description Generation
By: Zhuoyuan Mao, Mengjie Zhao, Qiyu Wu, and more
Potential Business Impact:
Makes music videos from just songs.
Music-to-music-video generation is a challenging task due to the intrinsic differences between the music and video modalities. The advent of powerful text-to-video diffusion models has opened a promising pathway for music-video (MV) generation: first addressing the music-to-MV description task, then leveraging these models for video generation. In this study, we focus on the MV description generation task and propose a comprehensive pipeline encompassing training data construction and multimodal model fine-tuning. We fine-tune existing pre-trained multimodal models on a newly constructed music-to-MV description dataset built on the Music4All dataset, which integrates both musical and visual information. Our experimental results demonstrate that music representations can be effectively mapped to the textual domain, enabling the generation of meaningful MV descriptions directly from music inputs. We also identify key components in the dataset construction pipeline that critically impact the quality of MV descriptions and highlight specific musical attributes that warrant greater focus for improved MV description generation.
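The abstract does not specify the models or libraries used, so the following is only a minimal sketch of the idea of mapping music representations into a text decoder's space for MV description generation. Every module, dimension, and name below is an illustrative assumption (a GRU stand-in for a pre-trained music encoder, a small Transformer decoder for text), not the authors' actual architecture.

```python
# Schematic sketch of the music -> MV-description mapping described above.
# All module names and dimensions are illustrative assumptions: a music
# encoder produces frame-level embeddings, a trainable projection maps them
# into a text decoder's embedding space, and the decoder is trained to emit
# MV descriptions from paired (music, description) data.

import torch
import torch.nn as nn

VOCAB_SIZE = 32_000   # size of the text tokenizer's vocabulary (assumed)
MUSIC_DIM = 768       # dimensionality of the music encoder's features (assumed)
TEXT_DIM = 512        # hidden size of the text decoder (assumed)


class MusicToDescriptionModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Stand-in for a pre-trained music encoder (e.g. a self-supervised
        # audio model); in practice its weights would be loaded and frozen.
        self.music_encoder = nn.GRU(MUSIC_DIM, MUSIC_DIM, batch_first=True)
        # Trainable bridge from the music feature space to the text space.
        self.projection = nn.Linear(MUSIC_DIM, TEXT_DIM)
        # Lightweight autoregressive text decoder conditioned on music tokens.
        self.token_embed = nn.Embedding(VOCAB_SIZE, TEXT_DIM)
        decoder_layer = nn.TransformerDecoderLayer(
            d_model=TEXT_DIM, nhead=8, batch_first=True
        )
        self.decoder = nn.TransformerDecoder(decoder_layer, num_layers=4)
        self.lm_head = nn.Linear(TEXT_DIM, VOCAB_SIZE)

    def forward(self, music_feats, desc_tokens):
        # music_feats: (batch, music_frames, MUSIC_DIM) pre-extracted features
        # desc_tokens: (batch, text_len) token ids of the target MV description
        music_states, _ = self.music_encoder(music_feats)
        memory = self.projection(music_states)      # map into the text space
        tgt = self.token_embed(desc_tokens)
        causal_mask = nn.Transformer.generate_square_subsequent_mask(
            desc_tokens.size(1)
        )
        hidden = self.decoder(tgt, memory, tgt_mask=causal_mask)
        return self.lm_head(hidden)                 # (batch, text_len, vocab)


if __name__ == "__main__":
    model = MusicToDescriptionModel()
    music = torch.randn(2, 100, MUSIC_DIM)           # 2 clips, 100 frames each
    tokens = torch.randint(0, VOCAB_SIZE, (2, 24))   # paired MV descriptions
    logits = model(music, tokens[:, :-1])            # teacher forcing
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, VOCAB_SIZE), tokens[:, 1:].reshape(-1)
    )
    loss.backward()
    print("toy training loss:", loss.item())
```

In the paper's actual pipeline, pre-trained multimodal models are fine-tuned on the constructed music-to-MV description dataset; the toy projection-plus-decoder above is just one common way such a cross-modal mapping is realized.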
Similar Papers
Extending Visual Dynamics for Video-to-Music Generation
Multimedia
Makes music that matches a video's motion and mood.
A Survey on Music Generation from Single-Modal, Cross-Modal, and Multi-Modal Perspectives
Sound
Makes music from words, pictures, and videos.
MusFlow: Multimodal Music Generation via Conditional Flow Matching
Sound
Makes music from pictures, stories, or words.