Budget-Aware Anytime Reasoning with LLM-Synthesized Preference Data

Published: January 16, 2026 | arXiv ID: 2601.11038v1

By: Xuanming Zhang, Shwan Ashrafi, Aziza Mirsaidova, and others

Potential Business Impact:

Helps AI systems deliver useful answers quickly under fixed compute budgets.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

We study the reasoning behavior of large language models (LLMs) under limited computation budgets. In such settings, producing useful partial solutions quickly is often more practical than exhaustive reasoning, which incurs high inference costs. Many real-world tasks, such as trip planning, require models to deliver the best possible output within a fixed reasoning budget. We introduce an anytime reasoning framework and the Anytime Index, a metric that quantifies how effectively solution quality improves as reasoning tokens increase. To further enhance efficiency, we propose an inference-time self-improvement method using LLM-synthesized preference data, where models learn from their own reasoning comparisons to produce better intermediate solutions. Experiments on NaturalPlan (Trip), AIME, and GPQA datasets show consistent gains across Grok-3, GPT-oss, GPT-4.1/4o, and LLaMA models, improving both reasoning quality and efficiency under budget constraints.
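The abstract describes the Anytime Index as a metric of how effectively solution quality improves as reasoning tokens increase. The paper's exact definition is not given here, so the sketch below shows one plausible formulation under that description: the area under the quality-vs-token-budget curve, normalized by the budget span, so that models reaching good intermediate solutions early score higher. The function name and formula are illustrative assumptions, not the authors' definition.

```python
def anytime_index(budgets, qualities):
    """Normalized trapezoidal area under quality(tokens).

    budgets:   increasing token counts at which intermediate
               solutions were scored.
    qualities: solution quality in [0, 1] at each budget.

    NOTE: a hypothetical reconstruction of an "anytime" metric,
    not the paper's exact Anytime Index.
    """
    if len(budgets) != len(qualities) or len(budgets) < 2:
        raise ValueError("need at least 2 (budget, quality) points")
    area = 0.0
    for i in range(1, len(budgets)):
        width = budgets[i] - budgets[i - 1]
        area += width * (qualities[i] + qualities[i - 1]) / 2.0
    # Normalize by total token span so the index lies in [0, 1]
    # when qualities are in [0, 1].
    return area / (budgets[-1] - budgets[0])

# A model that reaches high quality early scores higher than one that
# only improves near the end, even with the same final quality.
early = anytime_index([0, 100, 200, 400], [0.0, 0.7, 0.8, 0.9])
late = anytime_index([0, 100, 200, 400], [0.0, 0.1, 0.2, 0.9])
```

Under this toy formulation, `early` evaluates to 0.7 and `late` to 0.325, capturing the paper's point that producing useful partial solutions quickly matters under a limited budget.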

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Page Count
13 pages

Category
Computer Science:
Computation and Language