Aligning Audio Captions with Human Preferences
By: Kartik Hegde, Rehana Mahfuz, Yinyi Guo, and more
Potential Business Impact:
Makes AI describe sounds like people do.
Current audio captioning systems rely heavily on supervised learning with paired audio-caption datasets, which are expensive to curate and may not reflect human preferences in real-world scenarios. To address this limitation, we propose a preference-aligned audio captioning framework based on Reinforcement Learning from Human Feedback (RLHF). To effectively capture nuanced human preferences, we train a Contrastive Language-Audio Pretraining (CLAP)-based reward model using human-labeled pairwise preference data. This reward model is integrated into a reinforcement learning framework to fine-tune any baseline captioning system without relying on ground-truth caption annotations. Extensive human evaluations across multiple datasets show that our method produces captions preferred over those from baseline models, particularly in cases where the baseline models fail to provide correct and natural captions. Furthermore, our framework achieves performance comparable to supervised approaches with ground-truth data, demonstrating its effectiveness in aligning audio captioning with human preferences and its scalability in real-world scenarios.
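The abstract does not include implementation details, but the core idea of the reward model, scoring (audio, caption) pairs and training on human pairwise preferences, can be illustrated with a short sketch. The snippet below is an assumption-laden illustration, not the authors' code: it assumes CLAP-style embeddings for audio and captions, uses cosine similarity as the reward, and applies a standard Bradley-Terry (logistic) pairwise preference loss. All names, shapes, and the toy data are hypothetical.

```python
# Minimal sketch (not the paper's implementation): a pairwise preference
# loss for a reward model that scores (audio, caption) pairs via
# CLAP-style embedding similarity.

import torch
import torch.nn.functional as F

def pairwise_preference_loss(audio_emb: torch.Tensor,
                             cap_emb_preferred: torch.Tensor,
                             cap_emb_rejected: torch.Tensor) -> torch.Tensor:
    """Push the reward of the human-preferred caption above the rejected one.

    Each argument is a (batch, dim) tensor of embeddings; the reward is the
    cosine similarity between the audio and caption embeddings.
    """
    r_pref = F.cosine_similarity(audio_emb, cap_emb_preferred, dim=-1)
    r_rej = F.cosine_similarity(audio_emb, cap_emb_rejected, dim=-1)
    # Bradley-Terry / logistic preference loss: -log sigmoid(r_pref - r_rej)
    return -F.logsigmoid(r_pref - r_rej).mean()

if __name__ == "__main__":
    # Toy usage with random tensors standing in for CLAP embeddings.
    batch, dim = 8, 512
    audio = torch.randn(batch, dim)
    preferred = torch.randn(batch, dim)
    rejected = torch.randn(batch, dim)
    loss = pairwise_preference_loss(audio, preferred, rejected)
    print(f"pairwise preference loss: {loss.item():.4f}")
```

In the full framework described above, a reward model trained this way would then provide the scalar reward signal used to fine-tune the baseline captioning model with reinforcement learning, without requiring ground-truth captions.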
Similar Papers
Maximizing the efficiency of human feedback in AI alignment: a comparative analysis
Human-Computer Interaction
Teaches AI to learn faster from people's choices.
Explainable reinforcement learning from human feedback to improve alignment
Machine Learning (CS)
Fixes bad AI answers by finding and removing wrong training data.