Score: 1

Benchmarking Direct Preference Optimization for Medical Large Vision-Language Models

Published: January 25, 2026 | arXiv ID: 2601.17918v1

By: Dain Kim, Jiwoo Lee, Jaehoon Yun, and more

Potential Business Impact:

Makes AI models more reliable at interpreting medical images and answering questions about them.

Business Areas:
Image Recognition, Data and Analytics, Software

Large Vision-Language Models (LVLMs) hold significant promise for medical applications, yet their deployment is often constrained by insufficient alignment and reliability. While Direct Preference Optimization (DPO) has emerged as a potent framework for refining model responses, its efficacy in high-stakes medical contexts remains underexplored, lacking the rigorous empirical groundwork necessary to guide future methodological advances. To bridge this gap, we present the first comprehensive examination of diverse DPO variants within the medical domain, evaluating nine distinct formulations across two medical LVLMs: LLaVA-Med and HuatuoGPT-Vision. Our results reveal several critical limitations: current DPO approaches often yield inconsistent gains over supervised fine-tuning, with their efficacy varying significantly across different tasks and backbones. Furthermore, they frequently fail to resolve fundamental visual misinterpretation errors. Building on these insights, we present a targeted preference construction strategy as a proof-of-concept that explicitly addresses visual misinterpretation errors frequently observed in existing DPO models. This design yields a 3.6% improvement over the strongest existing DPO baseline on visual question-answering tasks. To support future research, we release our complete framework, including all training data, model checkpoints, and our codebase at https://github.com/dmis-lab/med-vlm-dpo.
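
For readers unfamiliar with DPO, the sketch below illustrates the standard DPO objective that the benchmarked variants build on. It is a generic illustration under common assumptions, not code from the released repository; the tensor names and the beta value are placeholders.

```python
# Minimal sketch of the standard DPO loss, assuming per-response summed
# log-probabilities under the policy and a frozen reference model are
# already available. Names and beta are illustrative, not from the paper.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Push the policy to prefer the chosen response over the rejected one,
    relative to the reference model (Bradley-Terry style preference loss)."""
    # Implicit rewards: log-ratio of policy vs. reference for each response
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the margin between chosen and rejected rewards
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```

The DPO variants compared in the paper (and the targeted preference construction it proposes) differ mainly in how the preference pairs are built and how this basic objective is modified, rather than in the overall training loop.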

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Repos / Data Links
https://github.com/dmis-lab/med-vlm-dpo

Page Count
16 pages

Category
Computer Science:
Computer Vision and Pattern Recognition