MR-FlowDPO: Multi-Reward Direct Preference Optimization for Flow-Matching Text-to-Music Generation
By: Alon Ziv, Sanyuan Chen, Andros Tjandra, and more
Potential Business Impact:
Helps music generators create songs that listeners prefer.
A key challenge in music generation models is their lack of direct alignment with human preferences, as music evaluation is inherently subjective and varies widely across individuals. We introduce MR-FlowDPO, a novel approach that enhances flow-matching-based music generation models (a major class of modern generative music models) using Direct Preference Optimization (DPO) with multiple musical rewards. The rewards are crafted to assess music quality across three key dimensions: text alignment, audio production quality, and semantic consistency, utilizing scalable off-the-shelf models for each reward prediction. We employ these rewards in two ways: (i) by constructing preference data for DPO, and (ii) by integrating the rewards into text prompting. To address the ambiguity in evaluating musicality, we propose a novel scoring mechanism leveraging semantic self-supervised representations, which significantly improves the rhythmic stability of generated music. We conduct an extensive evaluation using a variety of music-specific objective metrics as well as a human study. Results show that MR-FlowDPO significantly enhances overall music generation quality and is consistently preferred over highly competitive baselines in terms of audio quality, text alignment, and musicality. Our code is publicly available at https://github.com/lonzi/mrflow_dpo; samples are provided on our demo page at https://lonzi.github.io/mr_flowdpo_demopage/.
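To make the preference-data construction step more concrete, here is a minimal Python sketch of how scores from several reward models could be combined to pick (chosen, rejected) pairs for DPO. This is not the authors' implementation; the reward names (`text_alignment`, `audio_quality`, `semantic_consistency`), the linear aggregation, and the best-vs-worst pairing are illustrative assumptions based only on the abstract's description.

```python
# Illustrative sketch: building DPO preference pairs from multi-reward scores.
# All names and the aggregation scheme are assumptions, not the paper's code.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Candidate:
    audio_id: str
    text_alignment: float       # e.g. a text-audio similarity score
    audio_quality: float        # e.g. a production-quality predictor
    semantic_consistency: float  # e.g. a score from semantic SSL representations


def aggregate_reward(c: Candidate, weights=(1.0, 1.0, 1.0)) -> float:
    """Combine the three reward dimensions into one scalar (simple weighted sum)."""
    w_t, w_q, w_s = weights
    return w_t * c.text_alignment + w_q * c.audio_quality + w_s * c.semantic_consistency


def build_preference_pair(candidates: List[Candidate]) -> Tuple[Candidate, Candidate]:
    """Return the best- and worst-scoring generations for one prompt
    as a (chosen, rejected) pair for DPO fine-tuning."""
    ranked = sorted(candidates, key=aggregate_reward, reverse=True)
    return ranked[0], ranked[-1]


if __name__ == "__main__":
    pool = [
        Candidate("gen_a", 0.71, 0.55, 0.60),
        Candidate("gen_b", 0.64, 0.80, 0.72),
        Candidate("gen_c", 0.50, 0.45, 0.40),
    ]
    chosen, rejected = build_preference_pair(pool)
    print("chosen:", chosen.audio_id, "| rejected:", rejected.audio_id)
```

In practice the pairing rule, per-reward normalization, and how rewards are also surfaced in the text prompt would follow the released code at the repository linked above.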
Similar Papers
Beyond Single-Reward: Multi-Pair, Multi-Perspective Preference Optimization for Machine Translation
Computation and Language
Teaches computers to translate languages better.
SGDPO: Self-Guided Direct Preference Optimization for Language Model Alignment
Machine Learning (CS)
Makes AI understand what you like better.
Difficulty-Based Preference Data Selection by DPO Implicit Reward Gap
Computation and Language
Chooses smart examples to teach AI better.