Score: 2

Revisiting LLM Reasoning via Information Bottleneck

Published: July 24, 2025 | arXiv ID: 2507.18391v1

By: Shiye Lei, Zhihao Cheng, Kai Jia, and more

BigTech Affiliations: ByteDance

Potential Business Impact:

Improves LLMs' accuracy on mathematical reasoning tasks via a lightweight regularizer that drops into existing RL post-training pipelines with a one-line code change.

Large language models (LLMs) have recently demonstrated remarkable progress in reasoning capabilities through reinforcement learning with verifiable rewards (RLVR). By leveraging simple rule-based rewards, RL effectively incentivizes LLMs to produce extended chain-of-thought (CoT) reasoning trajectories, progressively guiding them toward correct answers. However, existing approaches remain largely heuristic and intuition-driven, limiting the development of principled methodologies. In this paper, we present a theoretical characterization of LLM reasoning grounded in the information bottleneck (IB) principle, introducing IB-aware reasoning optimization (IBRO), a framework that encourages reasoning trajectories to be both informative about the final correct answer and generalizable across diverse prompts. We derive a practical token-level surrogate objective and propose an efficient approximation, resulting in a lightweight IB regularization method. This technique integrates seamlessly into existing RL-based post-training frameworks without additional computational overhead, requiring only a one-line code modification. Empirically, we validate IB regularization across multiple mathematical reasoning benchmarks and RL algorithms, demonstrating consistent improvements in LLM reasoning performance.
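The abstract does not spell out the surrogate's exact form, so the following is only a rough sketch: it assumes the token-level IB regularizer can be approximated by a policy-entropy term folded into a standard policy-gradient loss. The function name, tensor shapes, and the coefficient ib_coef are all hypothetical, not the paper's derivation; the point is to show how a "one-line" regularization can attach to an existing RL loss.

```python
# Hypothetical sketch of a token-level regularizer added to a
# policy-gradient loss. Not the paper's exact IBRO objective.
import torch
import torch.nn.functional as F

def pg_loss_with_ib(logits, actions, advantages, ib_coef=0.01):
    """Policy-gradient surrogate plus an assumed entropy-style IB term.

    logits:     (batch, seq, vocab) policy logits over response tokens
    actions:    (batch, seq) sampled token ids
    advantages: (batch, seq) per-token advantage estimates
    """
    logprobs = F.log_softmax(logits, dim=-1)
    # Log-probability of each token that was actually sampled.
    taken = logprobs.gather(-1, actions.unsqueeze(-1)).squeeze(-1)

    # Standard REINFORCE-style surrogate loss.
    pg_loss = -(advantages.detach() * taken).mean()

    # Token-level policy entropy, used here as a stand-in for the
    # paper's efficient IB approximation (an assumption on our part).
    entropy = -(logprobs.exp() * logprobs).sum(-1).mean()

    # The "one-line modification": a single extra term on the loss.
    return pg_loss - ib_coef * entropy
```

In this sketch the only difference from a vanilla policy-gradient loss is the final regularization term, which matches the abstract's claim that the method adds no extra computational overhead beyond a one-line change; the sign and weighting of the term are illustrative choices.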

Country of Origin
🇸🇬 🇨🇳 Singapore, China

Page Count
13 pages

Category
Computer Science:
Artificial Intelligence