CARE What Fails: Contrastive Anchored-REflection for Verifiable Multimodal Reasoning
By: Yongxin Wang, Zhicheng Yang, Meng Cao, and more
Group-relative reinforcement learning with verifiable rewards (RLVR) often wastes the most informative data it already has: the failures. When all rollouts are wrong, gradients stall; when one happens to be correct, the update usually ignores why the others are close-but-wrong, and credit can be misassigned to spurious chains. We present CARE (Contrastive Anchored REflection), a failure-centric post-training framework for multimodal reasoning that turns errors into supervision. CARE combines: (i) an anchored-contrastive objective that forms a compact subgroup around the best rollout and a set of semantically proximate hard negatives, performs within-subgroup z-score normalization with negative-only scaling, and includes an all-negative rescue to prevent zero-signal batches; and (ii) Reflection-Guided Resampling (RGR), a one-shot structured self-repair that rewrites a representative failure and re-scores it with the same verifier, converting near-misses into usable positives without any test-time reflection. CARE improves accuracy and training smoothness while explicitly increasing the share of learning signal that comes from failures. On Qwen2.5-VL-7B, CARE lifts macro-averaged accuracy by 4.6 points over GRPO across six verifiable visual-reasoning benchmarks; with Qwen3-VL-8B it reaches competitive or state-of-the-art results on MathVista and MMMU-Pro under an identical evaluation protocol.
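The abstract's two mechanisms are concrete enough to sketch. Below is a minimal, illustrative Python sketch of (i) the anchored-contrastive advantage computation and (ii) one-shot RGR. The function names, hyperparameters (subgroup_size, neg_scale, rescue_weight), the similarity-based rescue rule, and the generate/verify interfaces are all assumptions made for illustration, not the paper's implementation.

```python
import numpy as np

def care_advantages(rewards, sim_to_anchor, subgroup_size=4,
                    neg_scale=0.5, rescue_weight=0.1, eps=1e-6):
    # Sketch of the anchored-contrastive objective: form a compact subgroup
    # around the best rollout plus its most semantically similar failures,
    # z-score-normalize rewards within that subgroup, and damp the negative
    # side of the signal (negative-only scaling).
    rewards = np.asarray(rewards, dtype=float)
    sim = np.asarray(sim_to_anchor, dtype=float)
    adv = np.zeros_like(rewards)

    # All-negative rescue: if every rollout failed, emit a small graded
    # signal (here, by similarity to a repaired reference, e.g. from RGR)
    # rather than a zero-gradient batch. This exact rule is an assumption.
    if rewards.max() == 0.0:
        return rescue_weight * (sim - sim.mean()) / (sim.std() + eps)

    anchor = int(np.argmax(rewards))  # best rollout anchors the subgroup

    # Hard negatives: wrong rollouts ranked by semantic proximity to anchor.
    negatives = [i for i in np.argsort(-sim) if rewards[i] < rewards[anchor]]
    subgroup = [anchor] + negatives[: subgroup_size - 1]

    # Within-subgroup z-score normalization.
    r = rewards[subgroup]
    z = (r - r.mean()) / (r.std() + eps)

    # Negative-only scaling: shrink the push away from close-but-wrong
    # rollouts so it cannot swamp the pull toward the anchor.
    adv[subgroup] = np.where(z < 0.0, neg_scale * z, z)
    return adv

def reflection_guided_resample(generate, verify, prompt, failed_rollout):
    # One-shot RGR sketch: rewrite a representative failure once, re-score
    # it with the same verifier, and keep it as a positive if it now passes.
    # `generate` and `verify` are assumed callables, not the paper's API.
    repair_prompt = (
        f"{prompt}\n\nPrevious attempt (incorrect):\n{failed_rollout}\n\n"
        "Identify the error above, then rewrite a corrected solution."
    )
    repaired = generate(repair_prompt)
    reward = verify(prompt, repaired)
    return repaired, reward  # usable positive if reward > 0
```

In GRPO terms, these advantages would slot in where the group-normalized reward baseline is computed, before the clipped policy-gradient update; the rescue branch is what keeps all-failure batches from contributing zero signal.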
Similar Papers
GRPO-CARE: Consistency-Aware Reinforcement Learning for Multimodal Reasoning
Computer Vision and Pattern Recognition
Makes AI understand videos and explain its thinking.
Stabilizing Reinforcement Learning for Honesty Alignment in Language Models on Deductive Reasoning
Computation and Language
Teaches AI to reason honestly and avoid mistakes.
Conflict-Aware Soft Prompting for Retrieval-Augmented Generation
Computation and Language
Helps AI tell true facts from fake ones.