Group Pattern Selection Optimization: Let LRMs Pick the Right Pattern for Reasoning
By: Hanbin Wang, Jingwei Song, Jinpeng Li, and more
Large reasoning models (LRMs) exhibit diverse high-level reasoning patterns (e.g., direct solution, reflection-and-verification, and exploring multiple solutions), yet prevailing training recipes implicitly bias models toward a limited set of dominant patterns. Through a systematic analysis, we identify substantial accuracy variance across these patterns on mathematics and science benchmarks, revealing that a model's default reasoning pattern is often sub-optimal for a given problem. To address this, we introduce Group Pattern Selection Optimization (GPSO), a reinforcement learning framework that extends GRPO by incorporating multi-pattern rollouts, verifier-guided optimal pattern selection per problem, and attention masking during optimization to prevent the leakage of explicit pattern suffixes into the learned policy. By exploring a portfolio of diverse reasoning strategies and optimizing the policy on the most effective ones, GPSO enables the model to internalize the mapping from problem characteristics to optimal reasoning patterns. Extensive experiments demonstrate that GPSO delivers consistent and substantial performance gains across various model backbones and benchmarks, effectively mitigating pattern sub-optimality and fostering more robust, adaptable reasoning. All data and code are available at https://github.com/wanghanbinpanda/GPSO.
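To make the three ingredients of the abstract concrete, here is a minimal sketch of one GPSO training step. Everything in it is an illustrative assumption rather than the authors' code: the `PATTERNS` list, the `policy.tokenize`/`policy.generate`/`policy.update` interface, the `verifier` callable, and the use of a loss mask as a stand-in for the paper's attention masking are all hypothetical.

```python
# Hypothetical sketch of a GPSO step (not the authors' implementation):
# multi-pattern rollouts, verifier-guided pattern selection per problem,
# and masking of the explicit pattern suffix during optimization.

from dataclasses import dataclass

# Candidate high-level reasoning patterns, appended to the prompt as suffixes.
PATTERNS = [
    "Solve the problem directly.",
    "Solve, then reflect on and verify your steps.",
    "Explore multiple solution approaches before answering.",
]

@dataclass
class Rollout:
    tokens: list[int]      # prompt + pattern suffix + generated tokens
    loss_mask: list[int]   # 0 over the pattern-suffix positions, 1 elsewhere
    reward: float          # verifier score, e.g. 1.0 if the answer checks out

def gpso_step(policy, verifier, problem: str, n_per_pattern: int = 4) -> None:
    """Roll out every pattern, keep the best-scoring group for this problem,
    then run a GRPO-style update with the pattern suffix masked out."""
    groups: dict[str, list[Rollout]] = {}
    for pattern in PATTERNS:
        prompt_ids = policy.tokenize(problem + "\n")
        suffix_ids = policy.tokenize(pattern)
        rollouts = []
        for _ in range(n_per_pattern):
            gen_ids = policy.generate(prompt_ids + suffix_ids)
            # Mask the explicit pattern suffix so it cannot leak into the
            # learned policy (a loss-mask approximation of the paper's
            # attention masking).
            mask = [1] * len(prompt_ids) + [0] * len(suffix_ids) + [1] * len(gen_ids)
            reward = verifier(problem, gen_ids)  # verifier-guided scoring
            rollouts.append(Rollout(prompt_ids + suffix_ids + gen_ids, mask, reward))
        groups[pattern] = rollouts

    # Verifier-guided selection: optimize on the pattern whose rollout
    # group solves this particular problem most reliably.
    best = max(groups.values(), key=lambda rs: sum(r.reward for r in rs))

    # GRPO-style group-relative advantages within the selected group.
    mean_r = sum(r.reward for r in best) / len(best)
    var_r = sum((r.reward - mean_r) ** 2 for r in best) / len(best)
    std_r = var_r ** 0.5 or 1.0  # guard against a zero-variance group
    advantages = [(r.reward - mean_r) / std_r for r in best]

    policy.update(best, advantages)  # masked positions excluded from the loss
```

Under these assumptions, selecting at the group level keeps the GRPO baseline meaningful, and because the pattern suffix is masked during optimization, the model receives no suffix at inference time and must have internalized the problem-to-pattern mapping on its own.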