Self-Rewarded Multimodal Coherent Reasoning Across Diverse Visual Domains
By: Jesen Zhang, Ningyuan Liu, Kaitong Cai, and more
Potential Business Impact:
Makes AI explain its thinking more clearly.
Multimodal LLMs often produce fluent yet unreliable reasoning, exhibiting weak step-to-step coherence and insufficient visual grounding, largely because existing alignment approaches supervise only the final answer while ignoring the reliability of the intermediate reasoning process. We introduce SR-MCR, a lightweight and label-free framework that aligns reasoning by exploiting intrinsic process signals derived directly from model outputs. Five self-referential cues -- semantic alignment, lexical fidelity, non-redundancy, visual grounding, and step consistency -- are integrated into a normalized, reliability-weighted reward that provides fine-grained process-level guidance. A critic-free GRPO objective, enhanced with a confidence-aware cooling mechanism, further stabilizes training and suppresses trivial or overly confident generations. Built on Qwen2.5-VL, SR-MCR improves both answer accuracy and reasoning coherence across a broad set of visual benchmarks; among open-source models of comparable size, SR-MCR-7B achieves state-of-the-art performance with an average accuracy of 81.4%. Ablation studies confirm the independent contributions of each reward term and the cooling module.
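The abstract does not spell out the exact formulation, but a minimal sketch of the idea it describes, folding five normalized cue scores into a reliability-weighted reward, damping overly confident generations, and computing critic-free group-relative advantages in the GRPO style, might look like the following. All function names, weights, and thresholds here are illustrative assumptions, not SR-MCR's actual implementation.

```python
import numpy as np

# Hypothetical labels mirroring the five self-referential cues named in the
# abstract; the real scoring functions in SR-MCR are not specified here.
CUE_NAMES = ["semantic_alignment", "lexical_fidelity", "non_redundancy",
             "visual_grounding", "step_consistency"]

def reliability_weighted_reward(cue_scores, reliabilities, eps=1e-8):
    """Combine per-cue scores (assumed to lie in [0, 1]) into one
    process-level reward, weighting each cue by an estimated reliability."""
    s = np.asarray(cue_scores, dtype=float)
    w = np.asarray(reliabilities, dtype=float)
    w = w / (w.sum() + eps)          # normalize weights to sum to 1
    return float(np.dot(w, s))       # weighted reward stays in [0, 1]

def confidence_cooling(reward, confidence, tau=0.7, gamma=2.0):
    """Illustrative cooling rule: shrink the reward once the model's
    confidence exceeds a threshold tau, leaving low-confidence samples
    untouched, to suppress trivial or overly confident generations."""
    if confidence <= tau:
        return reward
    excess = (confidence - tau) / (1.0 - tau)
    return reward * (1.0 - excess) ** gamma

def group_relative_advantages(rewards, eps=1e-8):
    """Critic-free advantages in the GRPO style: standardize rewards within
    a group of rollouts sampled for the same prompt."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + eps)

if __name__ == "__main__":
    # Toy group of four rollouts with made-up cue scores and confidences.
    group = [
        ([0.90, 0.80, 0.70, 0.85, 0.90], 0.95),
        ([0.60, 0.70, 0.90, 0.50, 0.60], 0.60),
        ([0.40, 0.50, 0.60, 0.40, 0.50], 0.40),
        ([0.80, 0.90, 0.80, 0.90, 0.80], 0.99),
    ]
    reliabilities = [1.0, 0.8, 0.6, 1.0, 0.9]   # assumed per-cue reliabilities
    rewards = [confidence_cooling(
                   reliability_weighted_reward(scores, reliabilities), conf)
               for scores, conf in group]
    print("rewards:   ", np.round(rewards, 3))
    print("advantages:", np.round(group_relative_advantages(rewards), 3))
```

The group-relative standardization is what makes the objective critic-free: advantages come from comparing rollouts of the same prompt against each other rather than from a learned value model, which keeps the process-level reward lightweight and label-free as the abstract claims.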
Similar Papers
Coherent Multimodal Reasoning with Iterative Self-Evaluation for Vision-Language Models
Computation and Language
Helps computers understand pictures and think step-by-step.
Video-R2: Reinforcing Consistent and Grounded Reasoning in Multimodal Language Models
CV and Pattern Recognition
Helps computers understand videos by watching carefully.
Self-Rewarding Vision-Language Model via Reasoning Decomposition
CV and Pattern Recognition
Teaches computers to see and describe pictures accurately.