Controllable Embedding Transformation for Mood-Guided Music Retrieval

Published: October 23, 2025 | arXiv ID: 2510.20759v1

By: Julia Wilkins, Jaehun Kim, Matthew E. P. Davies, and more

Potential Business Impact:

Changes a song's mood without changing its style.

Business Areas:
Music Streaming, Internet Services, Media and Entertainment, Music and Audio

Music representations are the backbone of modern recommendation systems, powering playlist generation, similarity search, and personalized discovery. Yet most embeddings offer little control for adjusting a single musical attribute, e.g., changing only the mood of a track while preserving its genre or instrumentation. In this work, we address the problem of controllable music retrieval through embedding-based transformation, where the objective is to retrieve songs that remain similar to a seed track but are modified along one chosen dimension. We propose a novel framework for mood-guided music embedding transformation, which learns a mapping from a seed audio embedding to a target embedding guided by mood labels, while preserving other musical attributes. Because mood cannot be directly altered in the seed audio, we introduce a sampling mechanism that retrieves proxy targets to balance diversity with similarity to the seed. We train a lightweight translation model using this sampling strategy and introduce a novel joint objective that encourages transformation and information preservation. Extensive experiments on two datasets show strong mood transformation performance while retaining genre and instrumentation far better than training-free baselines, establishing controllable embedding transformation as a promising paradigm for personalized music retrieval.
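As a rough illustration of the approach described in the abstract, the sketch below shows one way the three pieces could fit together: a lightweight translation model conditioned on a target mood label, a proxy-target sampler that balances similarity to the seed with diversity, and a joint objective combining transformation and preservation terms. All names, dimensions, and the loss weighting are illustrative assumptions in PyTorch, not the authors' exact formulation.

```python
# Hypothetical sketch of mood-guided embedding transformation (not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoodTranslator(nn.Module):
    """Lightweight translation model: maps a seed embedding plus a target
    mood label to a transformed embedding in the same space."""
    def __init__(self, emb_dim: int = 512, num_moods: int = 16, hidden: int = 256):
        super().__init__()
        self.mood_emb = nn.Embedding(num_moods, hidden)
        self.net = nn.Sequential(
            nn.Linear(emb_dim + hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, emb_dim),
        )

    def forward(self, seed: torch.Tensor, mood: torch.Tensor) -> torch.Tensor:
        cond = self.mood_emb(mood)                       # (B, hidden) mood condition
        out = self.net(torch.cat([seed, cond], dim=-1))  # (B, emb_dim)
        return F.normalize(out, dim=-1)                  # keep embeddings unit-norm


def sample_proxy_targets(seed, candidates, temperature=0.1):
    """Sample proxy target embeddings from a pool of candidates that already
    carry the desired mood. Softmax sampling over seed-candidate similarity
    (rather than a hard nearest-neighbour pick) trades similarity to the seed
    against diversity. Assumes all batch items share one target-mood pool."""
    sims = F.normalize(seed, dim=-1) @ F.normalize(candidates, dim=-1).T  # (B, N)
    probs = F.softmax(sims / temperature, dim=-1)
    idx = torch.multinomial(probs, num_samples=1).squeeze(-1)             # (B,)
    return candidates[idx]


def joint_loss(pred, proxy_target, seed, alpha=0.5):
    """Joint objective: pull the prediction toward the mood-bearing proxy
    target (transformation) while keeping it close to the seed (preservation)."""
    transform = 1.0 - F.cosine_similarity(pred, proxy_target, dim=-1).mean()
    preserve = 1.0 - F.cosine_similarity(pred, seed, dim=-1).mean()
    return transform + alpha * preserve


# Illustrative training step with random tensors standing in for real audio embeddings.
model = MoodTranslator()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
seed = F.normalize(torch.randn(32, 512), dim=-1)    # seed track embeddings
mood = torch.randint(0, 16, (32,))                  # target mood labels
pool = F.normalize(torch.randn(1000, 512), dim=-1)  # candidates carrying the target mood
proxy = sample_proxy_targets(seed, pool)
loss = joint_loss(model(seed, mood), proxy, seed)
opt.zero_grad(); loss.backward(); opt.step()
```

At retrieval time, the transformed embedding would be used as a query against the catalogue, so mood-shifted results stay anchored to the seed track's other attributes.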

Page Count
5 pages

Category
Computer Science: Sound