Benchmarking Direct Preference Optimization for Medical Large Vision-Language Models
By: Dain Kim, Jiwoo Lee, Jaehoon Yun, et al.
Potential Business Impact:
Improves the reliability of AI models that interpret medical images.
Large Vision-Language Models (LVLMs) hold significant promise for medical applications, yet their deployment is often constrained by insufficient alignment and reliability. While Direct Preference Optimization (DPO) has emerged as a potent framework for refining model responses, its efficacy in high-stakes medical contexts remains underexplored, lacking the rigorous empirical groundwork necessary to guide future methodological advances. To bridge this gap, we present the first comprehensive examination of diverse DPO variants within the medical domain, evaluating nine distinct formulations across two medical LVLMs: LLaVA-Med and HuatuoGPT-Vision. Our results reveal several critical limitations: current DPO approaches often yield inconsistent gains over supervised fine-tuning, with their efficacy varying significantly across tasks and backbones. Furthermore, they frequently fail to resolve fundamental visual misinterpretation errors. Building on these insights, we propose a targeted preference construction strategy as a proof-of-concept that explicitly addresses the visual misinterpretation errors frequently observed in existing DPO models. This design yields a 3.6% improvement over the strongest existing DPO baseline on visual question-answering tasks. To support future research, we release our complete framework, including all training data, model checkpoints, and our codebase, at https://github.com/dmis-lab/med-vlm-dpo.
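For readers unfamiliar with the objective the benchmarked variants build on, the sketch below shows the standard DPO loss for a single preference pair. This is a minimal pure-Python illustration, not the paper's implementation; the function name and scalar form are assumptions, and in practice the log-probabilities would come from the policy and a frozen reference model.

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Standard DPO loss for one (chosen, rejected) preference pair.

    logp_* are summed log-probabilities of each response under the
    policy being trained; ref_logp_* are the same quantities under
    the frozen reference model. beta scales the implicit reward.
    """
    # Implicit rewards: how much the policy has moved away from the
    # reference model on each response.
    chosen_reward = beta * (logp_chosen - ref_logp_chosen)
    rejected_reward = beta * (logp_rejected - ref_logp_rejected)
    margin = chosen_reward - rejected_reward
    # -log sigmoid(margin): small when the policy already prefers
    # the chosen response over the rejected one.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

The DPO variants the paper benchmarks differ mainly in how this margin term is modified or regularized, and in how the chosen/rejected pairs are constructed.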
Similar Papers
Aligning Large Vision-Language Models by Deep Reinforcement Learning and Direct Preference Optimization
Machine Learning (CS)
Teaches AI to understand pictures and words better.
Beyond Single: A Data Selection Principle for LLM Alignment via Fine-Grained Preference Signals
Machine Learning (CS)
Teaches AI to follow many different rules better.
VistaDPO: Video Hierarchical Spatial-Temporal Direct Preference Optimization for Large Video Models
Computer Vision and Pattern Recognition
Makes AI understand videos better, like people do.