Structured Reasoning for Large Language Models
By: Jinyi Han, Zixiang Di, Zishang Jiang, and more
Potential Business Impact:
Makes AI think smarter, faster, and shorter.
Large language models (LLMs) achieve strong performance by generating long chains of thought, but longer traces often introduce redundant or ineffective reasoning steps. One typical behavior is that they perform unnecessary verification and revision even after reaching the correct answer. This limitation stems from the unstructured nature of reasoning trajectories and the lack of targeted supervision for critical reasoning abilities. To address this, we propose Structured Reasoning (SCR), a framework that decouples reasoning trajectories into explicit, evaluable, and trainable components. We implement SCR primarily through a Generate-Verify-Revise paradigm. Specifically, we construct structured training data and apply Dynamic Termination Supervision to guide the model in deciding when to terminate reasoning. To avoid interference between the learning signals for different reasoning abilities, we adopt a progressive two-stage reinforcement learning strategy: the first stage targets initial generation and self-verification, and the second stage focuses on revision. Extensive experiments on three backbone models show that SCR substantially improves reasoning efficiency and self-verification. Moreover, compared with existing reasoning paradigms, it reduces output token length by up to 50%.
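The Generate-Verify-Revise control flow described above can be sketched as a simple loop. This is a minimal illustration, not the paper's implementation: the `generate`, `verify`, and `revise` functions below are hypothetical stand-ins (here a toy arithmetic task) for the corresponding model calls, and the early exit plays the role of dynamic termination.

```python
def generate(problem):
    # Stand-in for the model's initial generation; deliberately imperfect
    # so the revise path is exercised in this toy example.
    return sum(problem) - 1

def verify(problem, answer):
    # Stand-in for self-verification: check the candidate answer.
    return answer == sum(problem)

def revise(problem, answer):
    # Stand-in for targeted revision of a rejected answer.
    return sum(problem)

def structured_reason(problem, max_rounds=3):
    """Generate once, verify, and terminate as soon as verification
    passes (dynamic termination); otherwise revise and re-verify."""
    answer = generate(problem)
    for _ in range(max_rounds):
        if verify(problem, answer):
            return answer  # terminate early: no redundant re-checking
        answer = revise(problem, answer)
    return answer
```

Decoupling the three stages this way is what makes each ability separately evaluable and trainable, which is the premise of the two-stage reinforcement learning strategy.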
Similar Papers
From Chains to Graphs: Self-Structured Reasoning for General-Domain LLMs
Computation and Language
Helps computers think better by drawing thought maps.
A Stepwise-Enhanced Reasoning Framework for Large Language Models Based on External Subgraph Generation
Computation and Language
Helps computers think better by using facts.
Training Language Models to Reason Efficiently
Machine Learning (CS)
Makes smart computer programs think faster, cheaper.