Improving Latent Reasoning in LLMs via Soft Concept Mixing

Published: November 21, 2025 | arXiv ID: 2511.16885v1

By: Kang Wang, Xiangyu Duan, Tianyi Du

Potential Business Impact:

Teaches computers to think with fuzzy ideas.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Unlike humans, who reason in abstract conceptual spaces, large language models (LLMs) typically reason by generating discrete tokens, which may limit their expressive power. The recent work Soft Thinking has shown that latent reasoning via soft concepts is a promising direction for LLMs, yet LLMs are still trained on discrete tokens. To reduce this gap between soft concepts at reasoning time and discrete tokens at training time, we propose Soft Concept Mixing (SCM), a soft-concept-aware training scheme that directly exposes the model to soft representations during training. Specifically, SCM constructs a soft concept vector as a probability-weighted average of token embeddings. This vector is then mixed into the model's hidden states, which carry rich contextual information. Finally, the entire latent reasoning process is optimized with Reinforcement Learning (RL). Experiments on five reasoning benchmarks demonstrate that SCM improves the reasoning performance of LLMs while maintaining stable training dynamics.
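
The core mechanism described in the abstract, a probability-weighted average of token embeddings mixed into the model's hidden state, can be sketched as below. This is a minimal illustration only: the function names, the linear-interpolation form of the mixing step, and the `alpha` and `temperature` parameters are assumptions for clarity, not the paper's exact formulation or RL objective.

```python
import torch
import torch.nn.functional as F

def soft_concept_vector(logits: torch.Tensor,
                        embedding_matrix: torch.Tensor,
                        temperature: float = 1.0) -> torch.Tensor:
    """Form a 'soft concept' as a probability-weighted average of embeddings.

    logits:           (batch, vocab_size) next-token logits
    embedding_matrix: (vocab_size, hidden_dim) input embedding table
    returns:          (batch, hidden_dim) soft concept vector
    """
    probs = F.softmax(logits / temperature, dim=-1)   # token distribution
    return probs @ embedding_matrix                   # expected embedding

def mix_into_hidden(hidden_state: torch.Tensor,
                    concept_vec: torch.Tensor,
                    alpha: float = 0.5) -> torch.Tensor:
    """Hypothetical mixing step: interpolate the soft concept vector into
    the hidden state before the next decoding step (illustrative only)."""
    return alpha * hidden_state + (1.0 - alpha) * concept_vec

# Example usage with random tensors standing in for model outputs.
if __name__ == "__main__":
    batch, vocab, hidden = 2, 32000, 4096
    logits = torch.randn(batch, vocab)
    emb = torch.randn(vocab, hidden)
    h = torch.randn(batch, hidden)
    mixed = mix_into_hidden(h, soft_concept_vector(logits, emb))
    print(mixed.shape)  # torch.Size([2, 4096])
```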

Country of Origin
🇨🇳 China

Page Count
7 pages

Category
Computer Science:
Computation and Language