Score: 1

Self-Rewarded Multimodal Coherent Reasoning Across Diverse Visual Domains

Published: December 27, 2025 | arXiv ID: 2512.22545v1

By: Jesen Zhang, Ningyuan Liu, Kaitong Cai, and more

Potential Business Impact:

Makes multimodal AI explain its step-by-step reasoning more clearly and keep it grounded in what the model actually sees.

Business Areas:
Image Recognition, Data and Analytics, Software

Multimodal LLMs often produce fluent yet unreliable reasoning, exhibiting weak step-to-step coherence and insufficient visual grounding, largely because existing alignment approaches supervise only the final answer while ignoring the reliability of the intermediate reasoning process. We introduce SR-MCR, a lightweight and label-free framework that aligns reasoning by exploiting intrinsic process signals derived directly from model outputs. Five self-referential cues -- semantic alignment, lexical fidelity, non-redundancy, visual grounding, and step consistency -- are integrated into a normalized, reliability-weighted reward that provides fine-grained process-level guidance. A critic-free GRPO objective, enhanced with a confidence-aware cooling mechanism, further stabilizes training and suppresses trivial or overly confident generations. Built on Qwen2.5-VL, SR-MCR improves both answer accuracy and reasoning coherence across a broad set of visual benchmarks; among open-source models of comparable size, SR-MCR-7B achieves state-of-the-art performance with an average accuracy of 81.4%. Ablation studies confirm the independent contributions of each reward term and the cooling module.

Page Count
21 pages

Category
Computer Science: Computer Vision and Pattern Recognition