ENTRA: Entropy-Based Redundancy Avoidance in Large Language Model Reasoning
By: Ruichu Cai, Haopeng Du, Qingwen Lin, and more
Large Reasoning Models (LRMs) often suffer from overthinking, generating unnecessarily long reasoning chains even for simple tasks. This leads to substantial computational overhead with limited performance gain, primarily due to redundant verification and repetitive generation. While prior work typically constrains output length or optimizes only for correctness, such coarse supervision fails to guide models toward concise yet accurate inference. In this paper, we propose ENTRA, an entropy-based training framework that suppresses redundant reasoning while preserving performance. ENTRA first estimates token-level importance with a lightweight Bidirectional Importance Estimation (BIE) method, which accounts for both prediction confidence and forward influence. It then computes a redundancy reward based on the entropy of low-importance tokens, normalized by its theoretical upper bound, and optimizes this reward via reinforcement learning. Experiments on mathematical reasoning benchmarks demonstrate that ENTRA reduces output length by 37% to 53% with no loss in accuracy, and in some cases with accuracy gains. Our approach offers a principled and efficient solution to reducing overthinking in LRMs and provides a generalizable path toward redundancy-aware reasoning optimization.
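To make the reward described in the abstract concrete, the sketch below shows one way an entropy-based redundancy reward over low-importance tokens could be computed, normalized by the theoretical maximum entropy log|V|. This is a minimal illustration under stated assumptions, not the paper's implementation: the importance scores, the threshold, and the function name `redundancy_reward` are all hypothetical stand-ins for ENTRA's BIE-based pipeline.

```python
import numpy as np

def redundancy_reward(token_probs, importance, importance_threshold=0.5):
    """Sketch of an entropy-based redundancy reward (illustrative only).

    token_probs: (T, V) array of per-step predictive distributions.
    importance:  (T,) array of token-level importance scores, assumed to
                 come from a bidirectional importance estimate; the
                 threshold value is an illustrative assumption.
    Returns a scalar in [0, 1]: the mean entropy of low-importance tokens,
    normalized by the theoretical upper bound log(V).
    """
    T, V = token_probs.shape
    max_entropy = np.log(V)  # theoretical upper bound on per-token entropy

    # Per-token Shannon entropy of the predictive distribution.
    eps = 1e-12
    entropy = -np.sum(token_probs * np.log(token_probs + eps), axis=-1)

    # Restrict to low-importance tokens, which are treated as candidates
    # for redundant reasoning.
    low_importance = importance < importance_threshold
    if not np.any(low_importance):
        return 0.0

    return float(np.mean(entropy[low_importance]) / max_entropy)
```

In a reinforcement-learning setup, a reward like this could be combined with a correctness reward so that the policy is penalized for high-entropy, low-importance spans rather than for output length alone.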
Similar Papers
Entropy-Guided Reasoning Compression
Computation and Language
Efficient Reinforcement Learning with Semantic and Token Entropy for LLM Reasoning
Artificial Intelligence
Think or Not? Exploring Thinking Efficiency in Large Reasoning Models via an Information-Theoretic Lens
Computation and Language