Thinking Before Constraining: A Unified Decoding Framework for Large Language Models
By: Ngoc Trinh Hung Nguyen, Alonso Silva, Laith Zumot and more
Potential Business Impact:
Lets computers write answers that are both smart and organized.
Natural generation allows Large Language Models (LLMs) to produce free-form responses with rich reasoning, but the lack of guaranteed structure makes outputs difficult to parse or verify. Structured generation, or constrained decoding, addresses this drawback by producing content in standardized formats such as JSON, ensuring consistent, guaranteed-parsable outputs, but it can inadvertently restrict the model's reasoning capabilities. In this work, we propose a simple approach that combines the advantages of both natural and structured generation. By allowing LLMs to reason freely until specific trigger tokens are generated, and only then switching to structured generation, our method preserves the expressive power of natural-language reasoning while ensuring the reliability of structured outputs. We evaluate our approach on several datasets covering both classification and reasoning tasks, achieving a gain of up to 27% in accuracy compared to natural generation, while requiring only a small overhead of 10-20 extra tokens.
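To make the two-phase idea concrete, here is a minimal sketch (not the authors' code) of "think, then constrain" decoding. It assumes a Hugging Face transformers causal LM, greedy decoding, a hypothetical trigger string `Answer:` marking the end of free reasoning, and a placeholder `constrained_generate` function standing in for any structured decoder (e.g., a JSON-schema-guided sampler).

```python
# Sketch of two-phase decoding: free-form reasoning until a trigger token,
# then a hand-off to a structured (constrained) decoder.
# MODEL_NAME, TRIGGER, and constrained_generate are illustrative assumptions.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"   # placeholder model for illustration
TRIGGER = "Answer:"   # hypothetical trigger string ending the thinking phase

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

def constrained_generate(prefix_ids: torch.Tensor) -> str:
    """Placeholder for a structured decoder (e.g., JSON-schema-guided generation).
    Here it just continues greedily for a few tokens."""
    out = model.generate(prefix_ids, max_new_tokens=32, do_sample=False)
    return tokenizer.decode(out[0, prefix_ids.shape[1]:], skip_special_tokens=True)

def think_then_constrain(prompt: str, max_thinking_tokens: int = 256) -> str:
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    generated = ""
    # Phase 1: natural (unconstrained) generation until the trigger appears.
    with torch.no_grad():
        for _ in range(max_thinking_tokens):
            next_id = model(ids).logits[:, -1, :].argmax(dim=-1, keepdim=True)
            ids = torch.cat([ids, next_id], dim=-1)
            generated += tokenizer.decode(next_id[0])
            if TRIGGER in generated:
                break
    # Phase 2: switch to structured generation for the final, parsable answer.
    return generated + constrained_generate(ids)

print(think_then_constrain("Q: What is 12 * 7? Think step by step.\n"))
```

In a real implementation the second phase would call a grammar- or schema-constrained decoder so the final answer is guaranteed to parse; the sketch only shows where that hand-off happens and why the overhead is limited to the few tokens generated before the trigger.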
Similar Papers
Structured Reasoning for Large Language Models
Computation and Language
Makes AI think smarter, faster, and more concisely.
Last Layer Logits to Logic: Empowering LLMs with Logic-Consistent Structured Knowledge Reasoning
Computation and Language
Fixes computer "thinking" to be more logical.
LLMs Have a Heart of Stone: Demystifying the Soft Thinking Ability of Large Reasoning Models
Computation and Language
Makes AI think more creatively and solve problems better.