Dynamic Thinking-Token Selection for Efficient Reasoning in Large Reasoning Models
By: Zhenyuan Guo, Tong Chen, Wenlong Meng, and more
Potential Business Impact:
Makes AI think faster by skipping extra steps.
Large Reasoning Models (LRMs) excel at solving complex problems by explicitly generating a reasoning trace before deriving the final answer. However, these extended generations incur a substantial memory footprint and computational overhead, bottlenecking LRMs' efficiency. This work uses attention maps to analyze the influence of reasoning traces and uncovers an interesting phenomenon: only a small set of decision-critical tokens in a reasoning trace steers the model toward the final answer, while the remaining tokens contribute negligibly. Building on this observation, we propose Dynamic Thinking-Token Selection (DynTS). This method identifies decision-critical tokens and retains only their associated Key-Value (KV) cache states during inference, evicting the remaining redundant entries to optimize efficiency.
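The abstract's core idea, scoring past tokens by the attention they receive and evicting the rest of the KV cache, can be illustrated with a minimal sketch. This is an illustrative assumption, not the paper's actual algorithm: the scoring rule (mean attention mass over heads and recent query steps), the `keep_ratio` parameter, and the function name `select_decision_critical` are all hypothetical.

```python
import numpy as np

def select_decision_critical(attn, kv_keys, kv_values, keep_ratio=0.25):
    """Toy sketch of attention-guided KV-cache eviction (assumed scheme,
    not the DynTS algorithm itself).

    attn: (heads, queries, past_tokens) attention weights.
    Scores each past token by the attention mass it receives, averaged
    over heads and query steps; keeps the top fraction and evicts the
    remaining KV entries.
    """
    scores = attn.mean(axis=(0, 1))              # per-token attention mass
    k = max(1, int(keep_ratio * scores.shape[0]))
    keep = np.sort(np.argsort(scores)[-k:])      # top-k indices, original order
    return keep, kv_keys[keep], kv_values[keep]

# Demo with random attention over 16 past tokens.
rng = np.random.default_rng(0)
attn = rng.random((4, 2, 16))
attn /= attn.sum(axis=-1, keepdims=True)         # normalize like softmax
keys = rng.standard_normal((16, 8))
values = rng.standard_normal((16, 8))
keep, k_kept, v_kept = select_decision_critical(attn, keys, values)
print(len(keep), k_kept.shape)                   # 4 tokens of 16 retained
```

With `keep_ratio=0.25`, only 4 of the 16 cached entries survive, so subsequent attention computations touch a quarter of the original KV cache; the real method's payoff is the same kind of memory and compute reduction, with the selection driven by decision-critical tokens.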
Similar Papers
State over Tokens: Characterizing the Role of Reasoning Tokens
Computation and Language
Lets computers "think" better by showing their steps.
DTS: Enhancing Large Reasoning Models via Decoding Tree Sketching
Artificial Intelligence
Finds faster, more accurate answers from AI.
Don't Think Longer, Think Wisely: Optimizing Thinking Dynamics for Large Reasoning Models
Artificial Intelligence
Makes AI think smarter, faster, and more accurately.