MedAlign: A Synergistic Framework of Multimodal Preference Optimization and Federated Meta-Cognitive Reasoning
By: Siyong Chen, Jinbo Wen, Jiawen Kang, and more
Potential Business Impact:
Helps doctors understand medical images better.
Recently, large models have shown significant potential for smart healthcare. However, the deployment of Large Vision-Language Models (LVLMs) for clinical services is currently hindered by three critical challenges: a tendency to hallucinate answers not grounded in visual evidence, the inefficiency of fixed-depth reasoning, and the difficulty of multi-institutional collaboration. To address these challenges, in this paper, we develop MedAlign, a novel framework to ensure visually accurate LVLM responses for Medical Visual Question Answering (Med-VQA). Specifically, we first propose a multimodal Direct Preference Optimization (mDPO) objective to explicitly align preference learning with visual context. We then design a Retrieval-Aware Mixture-of-Experts (RA-MoE) architecture that utilizes image and text similarity to route queries to a specialized, context-augmented LVLM (i.e., an expert), thereby mitigating hallucinations in LVLMs. To achieve adaptive reasoning and facilitate multi-institutional collaboration, we propose a federated governance mechanism, where the selected expert, fine-tuned on clinical datasets with mDPO, locally performs iterative Chain-of-Thought (CoT) reasoning guided by a local meta-cognitive uncertainty estimator. Extensive experiments on three representative Med-VQA datasets demonstrate that MedAlign achieves state-of-the-art performance, outperforming strong retrieval-augmented baselines by up to 11.85% in F1-score while reducing the average reasoning length by 51.60% compared with fixed-depth CoT approaches.
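The abstract describes two mechanisms only at a high level: similarity-based routing to a domain expert (RA-MoE) and iterative CoT reasoning that stops once a meta-cognitive uncertainty estimate is low enough. The sketch below is a minimal, illustrative Python mock-up of how those two pieces might be wired together; every name in it (cosine, route_to_expert, answer_with_adaptive_cot, step_fn, the threshold tau, and the stub experts) is an assumption for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): similarity-based expert routing plus
# an uncertainty-gated iterative CoT loop, with stubbed-out model components.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity with a small epsilon to avoid division by zero.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def route_to_expert(img_emb, txt_emb, experts):
    """Pick the expert whose prototype embeddings are most similar to the
    query, averaging image and text similarity (RA-MoE-style routing)."""
    def score(e):
        return 0.5 * cosine(img_emb, e["img_proto"]) + 0.5 * cosine(txt_emb, e["txt_proto"])
    return max(experts, key=score)

def answer_with_adaptive_cot(expert, question, img_emb, max_steps=4, tau=0.2):
    """Iterative CoT: keep adding reasoning steps until the (hypothetical)
    uncertainty estimate drops below tau or the step budget is exhausted."""
    chain = []
    for _ in range(max_steps):
        thought, answer, uncertainty = expert["step_fn"](question, img_emb, chain)
        chain.append(thought)
        if uncertainty < tau:          # confident enough: stop early
            return answer, chain
    return answer, chain               # budget exhausted: return best effort

# --- toy usage with stub components ------------------------------------------
rng = np.random.default_rng(0)

def make_stub_expert(name):
    def step_fn(question, img_emb, chain):
        # Stub: uncertainty shrinks as the chain grows; a real expert would be
        # an mDPO-fine-tuned LVLM paired with a learned uncertainty estimator.
        u = max(0.05, 0.5 - 0.2 * len(chain))
        return f"{name} step {len(chain) + 1}", f"{name}: answer to '{question}'", u
    return {"name": name, "img_proto": rng.normal(size=8),
            "txt_proto": rng.normal(size=8), "step_fn": step_fn}

EXPERTS = [make_stub_expert("radiology"), make_stub_expert("pathology")]
img_emb, txt_emb = rng.normal(size=8), rng.normal(size=8)

expert = route_to_expert(img_emb, txt_emb, EXPERTS)
ans, chain = answer_with_adaptive_cot(expert, "Is there a pleural effusion?", img_emb)
print(expert["name"], "->", ans, f"({len(chain)} reasoning steps)")
```

In this toy run the uncertainty falls below the threshold after three steps, so the loop stops early; that early-exit behavior is the intuition behind the reported reduction in average reasoning length versus fixed-depth CoT.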
Similar Papers
Aligning Large Vision-Language Models by Deep Reinforcement Learning and Direct Preference Optimization
Machine Learning (CS)
Teaches AI to understand pictures and words better.
M3PO: Multimodal-Model-Guided Preference Optimization for Visual Instruction Following
Computation and Language
Teaches AI to follow picture instructions better.
AdaViP: Aligning Multi-modal LLMs via Adaptive Vision-enhanced Preference Optimization
Computer Vision and Pattern Recognition
Teaches AI to see and understand pictures better.