Art2Music: Generating Music for Art Images with Multi-modal Feeling Alignment
By: Jiaying Hong, Ting Zhu, Thanet Markchom, and more
Potential Business Impact:
Creates music from pictures and words.
With the rise of AI-generated content (AIGC), generating perceptually natural and feeling-aligned music from multimodal inputs has become a central challenge. Existing approaches often rely on explicit emotion labels that require costly annotation, underscoring the need for more flexible feeling-aligned methods. To support multimodal music generation, we construct ArtiCaps, a pseudo feeling-aligned image-music-text dataset created by semantically matching descriptions from ArtEmis and MusicCaps. We further propose Art2Music, a lightweight cross-modal framework that synthesizes music from artistic images and user comments. In the first stage, images and text are encoded with OpenCLIP and fused using a gated residual module; the fused representation is decoded by a bidirectional LSTM into Mel-spectrograms, trained with a frequency-weighted L1 loss to enhance high-frequency fidelity. In the second stage, a fine-tuned HiFi-GAN vocoder reconstructs high-quality audio waveforms. Experiments on ArtiCaps show clear improvements in Mel-Cepstral Distortion, Fréchet Audio Distance, and Log-Spectral Distance (lower is better) and in cosine similarity (higher is better). A small LLM-based rating study further verifies consistent cross-modal feeling alignment and offers interpretable explanations of matches and mismatches across modalities. These results demonstrate improved perceptual naturalness, spectral fidelity, and semantic consistency. Art2Music also maintains robust performance with only 50k training samples, providing a scalable solution for feeling-aligned creative audio generation in interactive art, personalized soundscapes, and digital art exhibitions.
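The abstract names the pipeline's components but not their wiring. Below is a minimal PyTorch sketch of the two most concrete pieces: a gated residual fusion of OpenCLIP image and text embeddings, and a frequency-weighted L1 loss over Mel-spectrograms. The embedding size, the sigmoid gate, and the linear high-frequency ramp are all illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GatedResidualFusion(nn.Module):
    """Sketch of a gated residual fusion of image/text embeddings.

    Assumes both OpenCLIP embeddings share a common dimension `dim`;
    the gate form and projection are illustrative guesses.
    """
    def __init__(self, dim: int = 512):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, img_emb: torch.Tensor, txt_emb: torch.Tensor) -> torch.Tensor:
        pair = torch.cat([img_emb, txt_emb], dim=-1)
        g = self.gate(pair)  # per-dimension gate in (0, 1)
        # Residual path: the image embedding is the base signal,
        # modulated by a gated joint projection of both modalities.
        return img_emb + g * self.proj(pair)


def frequency_weighted_l1(pred_mel: torch.Tensor,
                          target_mel: torch.Tensor,
                          alpha: float = 1.0) -> torch.Tensor:
    """L1 loss whose weights rise toward high Mel bins.

    Inputs have shape (batch, n_mels, frames). The linear ramp and the
    `alpha` scale are assumptions; the paper only states that the loss
    is frequency-weighted to boost high-frequency fidelity.
    """
    n_mels = pred_mel.size(1)
    weights = 1.0 + alpha * torch.linspace(0.0, 1.0, n_mels,
                                           device=pred_mel.device)
    return (weights.view(1, n_mels, 1) * (pred_mel - target_mel).abs()).mean()


# Usage with dummy tensors standing in for OpenCLIP outputs:
fusion = GatedResidualFusion(dim=512)
img = torch.randn(4, 512)   # image embeddings (batch of 4)
txt = torch.randn(4, 512)   # text embeddings for the user comments
z = fusion(img, txt)        # (4, 512) fused conditioning vector
loss = frequency_weighted_l1(torch.randn(4, 80, 200),
                             torch.randn(4, 80, 200))
```

In the described pipeline, `z` would condition the bidirectional LSTM that decodes Mel-spectrograms, and the predicted spectrograms would then be passed to the fine-tuned HiFi-GAN vocoder; neither of those stages is sketched here.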
Similar Papers
Zero-Effort Image-to-Music Generation: An Interpretable RAG-based VLM Approach
Sound
Turns pictures into music with explanations.
MusicAIR: A Multimodal AI Music Generation Framework Powered by an Algorithm-Driven Core
Sound
Makes songs from just words and pictures.
Story2MIDI: Emotionally Aligned Music Generation from Text
Sound
Turns stories into music that matches feelings.