Preference-Based Learning in Audio Applications: A Systematic Analysis
By: Aaron Broukhim, Yiran Shen, Prithviraj Ammanabrolu, and more
Potential Business Impact:
Makes AI better at understanding and creating music and speech.
Despite the parallel challenges that audio and text domains face in evaluating generative model outputs, preference learning remains remarkably underexplored in audio applications. Through a PRISMA-guided systematic review of approximately 500 papers, we find that only 30 (6%) apply preference learning to audio tasks. Our analysis reveals a field in transition: pre-2021 works focused on emotion recognition using traditional ranking methods (rankSVM), while post-2021 studies have pivoted toward generation tasks employing modern RLHF frameworks. We identify three critical patterns: (1) the emergence of multi-dimensional evaluation strategies combining synthetic, automated, and human preferences; (2) inconsistent alignment between traditional metrics (WER, PESQ) and human judgments across different contexts; and (3) convergence on multi-stage training pipelines that combine reward signals. Our findings suggest that while preference learning shows promise for audio, particularly in capturing subjective qualities like naturalness and musicality, the field requires standardized benchmarks, higher-quality datasets, and systematic investigation of how temporal factors unique to audio impact preference learning frameworks.
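For readers unfamiliar with the RLHF-style reward modeling the review surveys, below is a minimal sketch of the pairwise (Bradley-Terry) preference loss commonly used to train reward models from human comparisons. All names here (`preference_loss`, `reward_model`, the 128-dimensional embeddings) are illustrative assumptions, not drawn from any paper in the review.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def preference_loss(reward_model: nn.Module,
                    preferred: torch.Tensor,
                    rejected: torch.Tensor) -> torch.Tensor:
    """Pairwise Bradley-Terry loss: push the score of the human-preferred
    clip above the score of the rejected clip."""
    r_pref = reward_model(preferred)  # scalar score per audio clip
    r_rej = reward_model(rejected)
    # -log sigmoid(r_pref - r_rej) is minimized when r_pref >> r_rej.
    return -F.logsigmoid(r_pref - r_rej).mean()

# Toy usage: scores computed from precomputed 128-dim audio embeddings
# (the embedding model is assumed, not specified by the abstract).
reward_model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))
preferred = torch.randn(8, 128)   # embeddings of human-preferred clips
rejected = torch.randn(8, 128)    # embeddings of rejected clips
loss = preference_loss(reward_model, preferred, rejected)
loss.backward()
```

A reward model trained this way can then supply one of the reward signals in the multi-stage training pipelines the abstract describes.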
Similar Papers
Aligning Generative Music AI with Human Preferences: Methods and Challenges
Sound
AI makes music that people actually like.
Aligning Audio Captions with Human Preferences
Audio and Speech Processing
Makes AI describe sounds like people do.
Revisiting Audio-language Pretraining for Learning General-purpose Audio Representation
Audio and Speech Processing
Teaches computers to understand all sounds.