Focused Chain-of-Thought: Efficient LLM Reasoning via Structured Input Information

Published: November 27, 2025 | arXiv ID: 2511.22176v1

By: Lukas Struppek, Dominik Hintersdorf, Hannah Struppek, and more

Potential Business Impact:

Cuts LLM inference cost and latency by structuring the input, reducing reasoning tokens 2-3x without sacrificing accuracy.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Recent large language models achieve strong reasoning performance by generating detailed chain-of-thought traces, but this often leads to excessive token use and high inference latency. Existing efficiency approaches typically focus on model-centric interventions, such as reinforcement learning or supervised fine-tuning, to reduce verbosity. In contrast, we propose a training-free, input-centric approach. Inspired by cognitive psychology, we introduce Focused Chain-of-Thought (F-CoT), which separates information extraction from the reasoning process. F-CoT first organizes the essential information from a query into a concise, structured context and then guides the model to reason exclusively over this context. By preventing attention to irrelevant details, F-CoT naturally produces shorter reasoning paths. On arithmetic word problems, F-CoT reduces generated tokens by 2-3x while maintaining accuracy comparable to standard zero-shot CoT. These results highlight structured input as a simple yet effective lever for more efficient LLM reasoning.
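The two-stage idea described above can be sketched as a simple prompting pipeline. This is an illustrative sketch only: the prompt wordings and the `call_llm` helper are assumptions for demonstration, not the paper's actual prompts or implementation.

```python
# Illustrative sketch of Focused Chain-of-Thought (F-CoT) two-stage prompting.
# Stage 1 extracts a concise, structured context from the query; stage 2
# instructs the model to reason only over that context. The prompt texts and
# the call_llm interface are hypothetical, not from the paper.

EXTRACT_PROMPT = (
    "Extract only the facts and quantities needed to answer the question.\n"
    "List them as short bullet points.\n\n"
    "Question: {question}"
)

REASON_PROMPT = (
    "Using ONLY the context below, reason step by step and give the answer.\n\n"
    "Context:\n{context}\n\n"
    "Question: {question}"
)

def focused_cot(question: str, call_llm) -> str:
    """Run F-CoT-style two-stage prompting with any LLM callable.

    call_llm: a function mapping a prompt string to a completion string,
    e.g. a wrapper around an API client or a local model (assumed here).
    """
    # Stage 1: organize the essential information into a structured context.
    context = call_llm(EXTRACT_PROMPT.format(question=question))
    # Stage 2: reason exclusively over that context, ignoring the rest.
    return call_llm(REASON_PROMPT.format(context=context, question=question))

# Usage with a stub model standing in for a real LLM client:
def stub_llm(prompt: str) -> str:
    if "Extract" in prompt:
        return "- Tom has 3 apples\n- Tom buys 2 more apples"
    return "5"

answer = focused_cot("Tom has 3 apples and buys 2 more. How many?", stub_llm)
print(answer)
```

Because the second prompt restricts attention to the distilled context, the model has less irrelevant material to reason over, which is the mechanism the abstract credits for the shorter reasoning traces.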

Page Count
30 pages

Category
Computer Science:
Computation and Language