Optimizing LVLMs with On-Policy Data for Effective Hallucination Mitigation
By: Chengzhi Yu, Yifan Xu, Yifan Chen and more
Potential Business Impact:
Makes AI stop making up fake answers.
Recently, large vision-language models (LVLMs) have emerged as a promising approach for multimodal tasks. However, principled hallucination mitigation remains a critical challenge. In this work, we first analyze the data generation process in LVLM hallucination mitigation and confirm that on-policy data significantly outperforms off-policy data, which calls for efficient and reliable preference annotation of on-policy data. We then point out that existing annotation methods introduce additional hallucination into training samples, which may reinforce the model's hallucination patterns. To address this problem, we propose training a hallucination classifier that gives binary annotations, guaranteeing clean chosen samples for the subsequent alignment. To further harness the power of on-policy data, we design a robust iterative direct preference optimization (DPO) algorithm that adopts a dynamic sample reweighting scheme. We conduct comprehensive experiments on three benchmarks, comparing against 8 state-of-the-art baselines. In particular, our approach reduces the hallucination rate of LLaVA-1.5-7B on MMHalBench by 50.8% and the average hallucination rate on Object HalBench by 79.5%; more significantly, our method fully taps the potential of open-source models, enabling LLaVA-1.5-13B to even surpass the performance of GPT-4V.
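To make the core idea concrete, below is a minimal sketch (not the paper's exact implementation) of a DPO loss with per-sample reweighting in PyTorch. The function names, the `sample_weights` input, and the way weights might come from a hallucination classifier are all assumptions used purely for illustration.

```python
import torch
import torch.nn.functional as F

def weighted_dpo_loss(policy_chosen_logps, policy_rejected_logps,
                      ref_chosen_logps, ref_rejected_logps,
                      sample_weights, beta=0.1):
    """Standard DPO objective with a dynamic per-sample weight.

    Each *_logps tensor holds the summed log-probability of the chosen /
    rejected response under the policy or the frozen reference model.
    `sample_weights` is a hypothetical per-pair weight (e.g. a confidence
    score from a binary hallucination classifier); the exact weighting
    scheme in the paper may differ.
    """
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Bradley-Terry style preference loss on the implicit reward margin
    losses = -F.logsigmoid(chosen_rewards - rejected_rewards)
    # Dynamic reweighting: down-weight noisy or low-confidence pairs
    return (sample_weights * losses).mean()

# Toy usage with dummy log-probabilities for a batch of 4 preference pairs
if __name__ == "__main__":
    b = 4
    loss = weighted_dpo_loss(
        policy_chosen_logps=torch.randn(b),
        policy_rejected_logps=torch.randn(b),
        ref_chosen_logps=torch.randn(b),
        ref_rejected_logps=torch.randn(b),
        sample_weights=torch.tensor([1.0, 0.8, 0.5, 1.0]),
    )
    print(loss.item())
```

In an iterative on-policy setup, one would regenerate responses from the current policy each round, re-annotate them (here, hypothetically, with the binary hallucination classifier), and refresh the weights before the next DPO round.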
Similar Papers
Mitigating Hallucinations in Multimodal LLMs via Object-aware Preference Optimization
CV and Pattern Recognition
Makes AI stop making up fake answers.
Mitigating Hallucinations in Large Vision-Language Models via Entity-Centric Multimodal Preference Optimization
CV and Pattern Recognition
Fixes AI mistakes when it sees pictures.
Mitigating Image Captioning Hallucinations in Vision-Language Models
Multimedia
Fixes AI mistakes when it sees and talks.