CASHEW: Stabilizing Multimodal Reasoning via Iterative Trajectory Aggregation
By: Chaoyu Li, Deeparghya Dutta Barua, Fei Tao, and more
Vision-language models achieve strong performance across a wide range of multimodal understanding and reasoning tasks, yet their multi-step reasoning remains unstable. Repeated sampling over the same input often produces divergent reasoning trajectories and inconsistent final predictions. To address this, we introduce two complementary approaches inspired by test-time scaling: (1) CASHEW, an inference-time framework that stabilizes reasoning by iteratively aggregating multiple candidate trajectories into higher-quality reasoning traces, with explicit visual verification filtering hallucinated steps and grounding reasoning in visual evidence, and (2) CASHEW-RL, a learned variant that internalizes this aggregation behavior within a single model. CASHEW-RL is trained using Group Sequence Policy Optimization (GSPO) with a composite reward that encourages correct answers grounded in minimal yet sufficient visual evidence, while adaptively allocating reasoning effort based on task difficulty. This training objective enables robust self-aggregation at inference. Extensive experiments on 13 image understanding, video understanding, and video reasoning benchmarks show significant performance improvements, including gains of up to +23.6 percentage points on ScienceQA and +8.1 percentage points on EgoSchema.
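The abstract describes an inference-time loop: sample several candidate reasoning trajectories, filter out steps that are not grounded in the visual input, and aggregate the survivors into a refined trace that seeds the next round. The sketch below illustrates that loop under stated assumptions; the function names (sample_trajectory, verify_against_image, aggregate) and the round/candidate counts are hypothetical placeholders, not the authors' actual API or hyperparameters.

```python
# Minimal sketch of CASHEW-style iterative trajectory aggregation at inference time.
# All callables here are assumptions standing in for model calls described in the
# abstract; the paper's actual prompts, verifier, and aggregation policy are not shown.

from typing import Callable, List, Optional

def cashew_inference(
    question: str,
    visual_input,                    # image or video frames
    sample_trajectory: Callable,     # (question, visual_input, context) -> reasoning trace
    verify_against_visual: Callable, # (trace, visual_input) -> bool, True if grounded
    aggregate: Callable,             # (question, visual_input, traces) -> refined trace
    num_candidates: int = 4,
    num_rounds: int = 3,
) -> Optional[str]:
    """Iteratively sample, verify, and aggregate reasoning trajectories."""
    context = None  # best aggregated trace so far, fed back into later rounds
    for _ in range(num_rounds):
        # 1) Repeatedly sample candidate reasoning trajectories for the same input.
        candidates: List[str] = [
            sample_trajectory(question, visual_input, context)
            for _ in range(num_candidates)
        ]
        # 2) Visual verification: drop candidates whose steps are not grounded
        #    in the visual evidence (hallucination filtering).
        grounded = [c for c in candidates if verify_against_visual(c, visual_input)]
        if not grounded:
            grounded = candidates  # fall back if every candidate was filtered out
        # 3) Aggregate the surviving trajectories into a single higher-quality trace.
        context = aggregate(question, visual_input, grounded)
    return context  # final refined reasoning trace / answer
```

CASHEW-RL, as described, would replace this explicit multi-call loop with a single model trained (via GSPO and a composite reward) to perform the aggregation internally; that training pipeline is not sketched here.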
Similar Papers
Video-R2: Reinforcing Consistent and Grounded Reasoning in Multimodal Language Models
CV and Pattern Recognition
Helps computers understand videos by watching carefully.
Guiding the Inner Eye: A Framework for Hierarchical and Flexible Visual Grounded Reasoning
CV and Pattern Recognition
Helps AI "see" and "think" about pictures better.
CogFlow: Bridging Perception and Reasoning through Knowledge Internalization for Visual Mathematical Problem Solving
CV and Pattern Recognition
Helps computers solve math problems with pictures.