Improving Latent Reasoning in LLMs via Soft Concept Mixing
By: Kang Wang, Xiangyu Duan, Tianyi Du
Potential Business Impact:
Teaches computers to think with fuzzy ideas.
Unlike human reasoning in abstract conceptual spaces, large language models (LLMs) typically reason by generating discrete tokens, which potentially limits their expressive power. The recent work Soft Thinking has shown that latent reasoning via soft concepts is a promising direction for LLMs, but LLMs are trained on discrete tokens. To reduce this gap between the soft concepts used in reasoning and the discrete tokens used in training, we propose Soft Concept Mixing (SCM), a soft-concept-aware training scheme that directly exposes the model to soft representations during training. Specifically, SCM constructs a soft concept vector by forming a probability-weighted average of token embeddings. This vector is then mixed into the model's hidden states, which embody rich contextual information. Finally, the entire latent reasoning process is optimized with Reinforcement Learning (RL). Experiments on five reasoning benchmarks demonstrate that SCM improves the reasoning performance of LLMs while maintaining stable training dynamics.
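The two core steps described in the abstract — a probability-weighted average of token embeddings, mixed into the hidden state — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the softmax-over-logits weighting and the convex-combination mixing rule with coefficient `alpha` are assumptions for the sake of the example.

```python
import numpy as np

def soft_concept_vector(logits, embeddings):
    """Probability-weighted average of token embeddings (the 'soft concept')."""
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()
    return probs @ embeddings  # (vocab,) @ (vocab, d) -> (d,)

def mix_into_hidden(hidden, soft_vec, alpha=0.5):
    """Hypothetical mixing rule: convex combination of hidden state and soft concept."""
    return (1.0 - alpha) * hidden + alpha * soft_vec

# Toy dimensions for illustration.
rng = np.random.default_rng(0)
vocab, d = 8, 4
E = rng.normal(size=(vocab, d))   # embedding table
logits = rng.normal(size=vocab)   # next-token logits from the model
h = rng.normal(size=d)            # current hidden state

v = soft_concept_vector(logits, E)
h_mixed = mix_into_hidden(h, v, alpha=0.3)
print(h_mixed.shape)
```

In the paper, this mixed latent state is then fed back into the reasoning process, and the whole loop is trained with RL rather than supervised next-token prediction.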
Similar Papers
LLMs are Single-threaded Reasoners: Demystifying the Working Mechanism of Soft Thinking
Computation and Language
Makes AI think more creatively and solve problems better.
LLMs Have a Heart of Stone: Demystifying the Soft Thinking Ability of Large Reasoning Models
Computation and Language
Makes AI think more creatively and solve problems better.
Latent Reasoning in LLMs as a Vocabulary-Space Superposition
Computation and Language
Makes computers think faster, using less power.