AdaViP: Aligning Multi-modal LLMs via Adaptive Vision-enhanced Preference Optimization
By: Jinda Lu, Jinghan Li, Yuan Gao, and more
Potential Business Impact:
Teaches AI to see and understand pictures better.
Preference alignment through Direct Preference Optimization (DPO) has demonstrated significant effectiveness in aligning multimodal large language models (MLLMs) with human preferences. However, existing methods focus primarily on language preferences while neglecting the critical visual context. In this paper, we propose an Adaptive Vision-enhanced Preference Optimization (AdaViP) that addresses these limitations through two key innovations: (1) vision-based preference pair construction, which integrates multiple visual foundation models to strategically remove key visual elements from the image, enhancing MLLMs' sensitivity to visual details; and (2) adaptive preference optimization that dynamically balances vision- and language-based preferences for more accurate alignment. Extensive evaluations across different benchmarks demonstrate the effectiveness of our approach. Notably, our AdaViP-7B achieves 93.7% and 96.4% reductions in response-level and mentioned-level hallucination respectively on the Object HalBench, significantly outperforming current state-of-the-art methods.
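The abstract does not spell out the training objective, so the sketch below is only an illustration of the general idea: a standard DPO term for the language-based pair (chosen vs. rejected response, same image) combined with a second DPO-style term for a vision-based pair (same response, original vs. element-removed image), with an assumed adaptive weighting between the two. The function names, the softmax-over-losses weighting, and all hyperparameters are hypothetical, not the authors' published formulation.

```python
import torch
import torch.nn.functional as F

def dpo_logit(logp_chosen, logp_rejected,
              ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Standard DPO logit: beta * (policy log-ratio margin - reference log-ratio margin)."""
    policy_margin = logp_chosen - logp_rejected
    ref_margin = ref_logp_chosen - ref_logp_rejected
    return beta * (policy_margin - ref_margin)

def adaptive_vision_language_loss(lang_logit, vis_logit):
    """Hypothetical AdaViP-style combination of two preference terms.

    lang_logit: DPO logit for the language-based pair
                (preferred vs. dispreferred response, same image).
    vis_logit:  DPO logit for the vision-based pair
                (same response, original image vs. image with key
                 visual elements removed).
    The adaptive weights here are an assumption: each term is weighted
    by its current (detached) loss, so the harder term dominates.
    """
    lang_loss = -F.logsigmoid(lang_logit)
    vis_loss = -F.logsigmoid(vis_logit)
    weights = torch.softmax(torch.stack([lang_loss.detach(), vis_loss.detach()]), dim=0)
    return weights[0] * lang_loss + weights[1] * vis_loss

# Usage with placeholder per-example sequence log-probabilities:
lang = dpo_logit(torch.tensor(-12.0), torch.tensor(-15.0),
                 torch.tensor(-13.0), torch.tensor(-14.0))
vis = dpo_logit(torch.tensor(-12.0), torch.tensor(-18.0),
                torch.tensor(-13.0), torch.tensor(-17.0))
loss = adaptive_vision_language_loss(lang, vis)
```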
Similar Papers
PaMi-VDPO: Mitigating Video Hallucinations by Prompt-Aware Multi-Instance Video Preference Learning
CV and Pattern Recognition
Teaches AI to describe videos without making things up.
Aligning Large Vision-Language Models by Deep Reinforcement Learning and Direct Preference Optimization
Machine Learning (CS)
Teaches AI to understand pictures and words better.
AdPO: Enhancing the Adversarial Robustness of Large Vision-Language Models with Preference Optimization
CV and Pattern Recognition
Protects AI from tricks, keeps answers correct.