Cross-Modal Learning for Music-to-Music-Video Description Generation

Published: March 14, 2025 | arXiv ID: 2503.11190v1

By: Zhuoyuan Mao, Mengjie Zhao, Qiyu Wu, and more

BigTech Affiliations: Sony

Potential Business Impact:

Generates music-video descriptions directly from songs, a step toward automatically creating music videos from audio alone.

Business Areas:
Music Education, Media and Entertainment, Music and Audio

Music-to-music-video generation is a challenging task due to the intrinsic differences between the music and video modalities. The advent of powerful text-to-video diffusion models has opened a promising pathway for music-video (MV) generation by first addressing the music-to-MV description task and subsequently leveraging these models for video generation. In this study, we focus on the MV description generation task and propose a comprehensive pipeline encompassing training data construction and multimodal model fine-tuning. We fine-tune existing pre-trained multimodal models on our newly constructed music-to-MV description dataset, built on the Music4All dataset, which integrates both musical and visual information. Our experimental results demonstrate that music representations can be effectively mapped to textual domains, enabling the generation of meaningful MV descriptions directly from music inputs. We also identify key components in the dataset construction pipeline that critically impact the quality of MV descriptions and highlight specific musical attributes that warrant greater focus for improved MV description generation.
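The two-stage pipeline the abstract describes (music → MV description → video) can be sketched as below. This is a minimal illustrative stub, not the authors' implementation: the class names, the templated description, and the placeholder frames are all hypothetical stand-ins for a fine-tuned multimodal model (stage 1) and an off-the-shelf text-to-video diffusion model (stage 2).

```python
# Hypothetical sketch of the two-stage music-to-MV pipeline: map music to a
# textual MV description, then hand that description to a text-to-video model.

class MusicToDescriptionModel:
    """Stage 1 stand-in: a multimodal model fine-tuned on music/MV-description
    pairs would condition on audio embeddings; this stub templates a few
    musical attributes instead."""

    def generate_description(self, music_features: dict) -> str:
        return (f"A music video for a {music_features['genre']} track at "
                f"{music_features['tempo_bpm']} BPM with a "
                f"{music_features['mood']} mood.")


class TextToVideoModel:
    """Stage 2 stand-in: a text-to-video diffusion model would synthesize
    frames; this stub returns labeled placeholders so the pipeline runs
    end to end."""

    def generate_video(self, description: str) -> list:
        return [f"frame_{i}: {description}" for i in range(3)]


def music_to_mv(music_features: dict) -> list:
    """Chain the two stages: music features -> description -> video frames."""
    description = MusicToDescriptionModel().generate_description(music_features)
    return TextToVideoModel().generate_video(description)


frames = music_to_mv({"genre": "rock", "tempo_bpm": 120, "mood": "energetic"})
```

The key design point from the paper survives even in this toy form: the description acts as the interface between modalities, so the video-generation stage can be any pre-trained text-to-video model, and only stage 1 needs music-specific training.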

Country of Origin
🇯🇵 Japan

Page Count
8 pages

Category
Computer Science:
Sound