Score: 2

ERGO: Entropy-guided Resetting for Generation Optimization in Multi-turn Language Models

Published: October 15, 2025 | arXiv ID: 2510.14077v1

By: Haziq Mohammad Khalid, Athikash Jeyaganthan, Timothy Do, and more

Potential Business Impact:

Detects when an AI model loses track of a long chat and resets its context to recover accuracy.

Business Areas:
Semantic Search, Internet Services

Large Language Models (LLMs) suffer significant performance degradation in multi-turn conversations when information is presented incrementally. Given that multi-turn conversations characterize everyday interactions with LLMs, this degradation poses a severe challenge to real-world usability. We hypothesize that abrupt increases in model uncertainty signal misalignment in multi-turn LLM interactions, and we exploit this insight to dynamically realign conversational context. We introduce ERGO (Entropy-guided Resetting for Generation Optimization), which continuously quantifies internal uncertainty via Shannon entropy over next-token distributions and triggers adaptive prompt consolidation when a sharp spike in entropy is detected. By treating uncertainty as a first-class signal rather than a nuisance to eliminate, ERGO embraces variability in language and modeling, representing and responding to uncertainty. In multi-turn tasks with incrementally revealed instructions, ERGO yields a 56.6% average performance gain over standard baselines, increases aptitude (peak performance capability) by 24.7%, and decreases unreliability (variability in performance) by 35.3%, demonstrating that uncertainty-aware interventions can improve both accuracy and reliability in conversational AI.
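The abstract outlines the mechanism but not the code. Below is a minimal sketch of what an entropy-guided reset loop could look like, assuming access to per-step next-token probability distributions from the model. The spike rule, the `model.generate` interface, and the `consolidate_prompt` helper are hypothetical illustrations for clarity, not the authors' implementation.

```python
import math

def shannon_entropy(token_probs):
    """Shannon entropy (in nats) of one next-token probability distribution."""
    return -sum(p * math.log(p) for p in token_probs if p > 0.0)

def entropy_spike(history, current, factor=1.5):
    """Hypothetical spike rule: current turn entropy jumps well above the
    running mean of previous turns. The factor 1.5 is an assumed threshold."""
    if not history:
        return False
    baseline = sum(history) / len(history)
    return current > factor * baseline

def consolidate_prompt(context):
    """Hypothetical consolidation: restate all user turns so far as a single
    self-contained prompt, discarding the fragmented turn-by-turn history."""
    user_turns = [m["content"] for m in context if m["role"] == "user"]
    return "Restating the full task so far:\n" + "\n".join(user_turns)

def run_conversation(model, turns):
    """Multi-turn loop with entropy-triggered resets. `model.generate` is
    assumed to return (reply_text, per_step_token_distributions)."""
    context, entropies = [], []
    for user_msg in turns:
        context.append({"role": "user", "content": user_msg})
        reply, step_dists = model.generate(context)
        # Average per-token entropy is one simple way to score a whole turn.
        turn_entropy = sum(shannon_entropy(d) for d in step_dists) / max(len(step_dists), 1)
        if entropy_spike(entropies, turn_entropy):
            # Sharp uncertainty spike: consolidate the incremental context
            # into one realigned prompt and regenerate from it.
            context = [{"role": "user", "content": consolidate_prompt(context)}]
            reply, _ = model.generate(context)
        entropies.append(turn_entropy)
        context.append({"role": "assistant", "content": reply})
    return context
```

The key design choice this sketch captures is that entropy is monitored continuously but the reset fires only on a relative jump, so a model that is uniformly uncertain is left alone while a sudden spike, read as context misalignment, triggers consolidation.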

Country of Origin
🇬🇧 🇺🇸 United Kingdom, United States

Repos / Data Links

Page Count
14 pages

Category
Computer Science:
Computation and Language