Reducing Object Hallucination in Large Audio-Language Models via Audio-Aware Decoding
By: Tzu-wen Hsu, Ke-Han Lu, Cheng-Han Chiang, and more
Potential Business Impact:
Stops AI from making up sounds that aren't in the audio.
Large Audio-Language Models (LALMs) take audio and text as input and answer questions about the audio. While prior LALMs have shown strong performance on standard benchmarks, there is alarming evidence that LALMs can hallucinate about what is present in the audio. To mitigate this hallucination, we introduce Audio-Aware Decoding (AAD), a lightweight inference-time strategy that uses contrastive decoding to compare the token prediction logits with and without the audio context. Through this contrast, AAD promotes tokens whose probability increases when the audio is present. We conduct experiments on object-hallucination datasets with three LALMs and show that AAD improves the F1 score by 0.046 to 0.428. We also show that AAD improves accuracy on general audio QA datasets like Clotho-AQA by 5.4% to 10.3%. We conduct thorough ablation studies to understand the effectiveness of each component of AAD.
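To make the mechanism concrete, here is a minimal Python sketch of one contrastive decoding step in the spirit of AAD. The `model(input_ids, audio=...)` interface and the `alpha`/`beta` hyperparameters are illustrative assumptions rather than the paper's actual API; the contrastive score follows the common formulation used in contrastive decoding work, which the abstract's description matches.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def audio_aware_next_token(model, input_ids, audio_embeds, alpha=1.0, beta=0.1):
    """One audio-aware contrastive decoding step (sketch, not the paper's code).

    `model(input_ids, audio=...)` is a hypothetical LALM call that returns
    next-token logits of shape (vocab_size,); alpha and beta are illustrative
    hyperparameters controlling contrast strength and plausibility cutoff.
    """
    # Logits conditioned on the audio context.
    logits_audio = model(input_ids, audio=audio_embeds)
    # Logits for the same prompt with the audio context removed.
    logits_noaudio = model(input_ids, audio=None)

    log_p_audio = F.log_softmax(logits_audio, dim=-1)
    log_p_noaudio = F.log_softmax(logits_noaudio, dim=-1)

    # Plausibility constraint: keep only tokens that are reasonably likely
    # under the audio-conditioned distribution.
    cutoff = torch.log(torch.tensor(beta)) + log_p_audio.max()
    plausible = log_p_audio >= cutoff

    # Contrastive score: promote tokens whose probability rises when the
    # audio is present, demote tokens the model would emit regardless.
    scores = (1 + alpha) * log_p_audio - alpha * log_p_noaudio
    scores = scores.masked_fill(~plausible, float("-inf"))

    return scores.argmax(dim=-1)
```

In this sketch, the `beta` cutoff prevents an implausible token from being promoted just because the audio-free branch happens to dislike it, mirroring the adaptive plausibility constraint from the contrastive decoding literature.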
Similar Papers
Decoupling Contrastive Decoding: Robust Hallucination Mitigation in Multimodal Large Language Models
Machine Learning (CS)
Stops AI from making up fake answers.
Efficient Contrastive Decoding with Probabilistic Hallucination Detection - Mitigating Hallucinations in Large Vision Language Models
CV and Pattern Recognition
Stops AI from making up fake answers about pictures.
Towards Audio Token Compression in Large Audio Language Models
Audio and Speech Processing
Makes AI understand long sounds with less computer power.