SketchThinker-R1: Towards Efficient Sketch-Style Reasoning in Large Multimodal Models
By: Ruiyang Zhang, Dongzhan Zhou, Zhedong Zheng
Potential Business Impact:
Makes AI think faster and cheaper.
Despite the empirical success of extensive, step-by-step reasoning in large multimodal models, long reasoning processes inevitably incur substantial computational overhead in terms of higher token costs and increased response time, which undermines inference efficiency. In contrast, humans often employ sketch-style reasoning: a concise, goal-directed cognitive process that prioritizes salient information and enables efficient problem-solving. Inspired by this cognitive efficiency, we propose SketchThinker-R1, which incentivizes sketch-style reasoning ability in large multimodal models. Our method consists of three primary stages. In the Sketch-Mode Cold Start stage, we convert standard long reasoning processes into sketch-style reasoning and finetune the base multimodal model, instilling an initial sketch-style reasoning capability. Next, we train the SketchJudge Reward Model, which explicitly evaluates the model's thinking process and assigns higher scores to sketch-style reasoning. Finally, we conduct Sketch-Thinking Reinforcement Learning under the supervision of SketchJudge to further generalize the sketch-style reasoning ability. Experimental evaluation on four benchmarks reveals that SketchThinker-R1 achieves over a 64% reduction in reasoning token cost without compromising final answer accuracy. Qualitative analysis further shows that sketch-style reasoning focuses more on key cues during problem solving.
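To make the three-stage pipeline concrete, below is a minimal sketch of how the Sketch-Thinking RL reward could combine answer correctness, the SketchJudge score, and brevity into a single scalar. The function names, weighting scheme, and token budget are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical reward sketch for Sketch-Thinking Reinforcement Learning.
# Weights, the token budget, and the gating rule are assumptions for illustration.

def sketch_thinking_reward(
    reasoning_tokens: list[str],
    predicted_answer: str,
    gold_answer: str,
    sketch_judge_score: float,   # assumed to lie in [0, 1], from the SketchJudge reward model
    max_tokens: int = 512,       # assumed budget for the reasoning trace
    w_judge: float = 0.5,
    w_brevity: float = 0.5,
) -> float:
    """Combine answer correctness, SketchJudge score, and brevity into one scalar reward."""
    # Correctness acts as a gate: a short but wrong reasoning chain earns no reward.
    if predicted_answer.strip() != gold_answer.strip():
        return 0.0
    # Reward shorter reasoning traces, clipped at the assumed token budget.
    brevity = max(0.0, 1.0 - len(reasoning_tokens) / max_tokens)
    return 1.0 + w_judge * sketch_judge_score + w_brevity * brevity


if __name__ == "__main__":
    toy_trace = "Key cue: the chart peaks in March, so the answer is March.".split()
    print(sketch_thinking_reward(toy_trace, "March", "March", sketch_judge_score=0.9))
```

Under this kind of shaping, the policy is only rewarded for brevity and sketch-likeness when the final answer is correct, which matches the paper's claim of reducing token cost without compromising accuracy.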
Similar Papers
OneThinker: All-in-one Reasoning Model for Image and Video
CV and Pattern Recognition
One model understands images and videos for many tasks.
ProofSketch: Efficient Verified Reasoning for Large Language Models
Computation and Language
Makes AI think smarter, faster, and cheaper.