ACR: Adaptive Context Refactoring via Context Refactoring Operators for Multi-Turn Dialogue
By: Jiawei Shen, Jia Zhu, Hanghui Guo, and more
Potential Business Impact:
Helps chatbots remember what you said before.
Large Language Models (LLMs) have shown remarkable performance in multi-turn dialogue. However, as an interaction grows longer, models still struggle to stay aligned with what was established earlier, to follow dependencies spanning many turns, and to avoid drifting into incorrect facts. Existing approaches primarily focus on extending the context window, introducing external memory, or applying context compression, yet these methods still face limitations such as contextual inertia and state drift. To address these challenges, we propose the Adaptive Context Refactoring (ACR) framework, which dynamically monitors and reshapes the interaction history to actively mitigate contextual inertia and state drift. ACR is built on a library of context refactoring operators and a teacher-guided self-evolving training paradigm that learns when to intervene and how to refactor, thereby decoupling context management from the reasoning process. Extensive experiments on multi-turn dialogue demonstrate that our method significantly outperforms existing baselines while reducing token consumption.
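The abstract describes a monitor that decides when the dialogue history needs intervention and a library of operators that reshape it. As a rough illustration of that idea (not the paper's actual operators, API, or training procedure), a minimal sketch might look like this; the operator names, the turn-count trigger, and all thresholds are illustrative assumptions:

```python
# Hypothetical sketch of an operator-based context-refactoring loop.
# Operator names and the length-based monitor are assumptions for
# illustration, not the ACR paper's actual components.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

Turn = Dict[str, str]  # e.g. {"role": "user", "content": "..."}


@dataclass
class ContextState:
    turns: List[Turn] = field(default_factory=list)


def merge_repeats(turns: List[Turn]) -> List[Turn]:
    """Collapse consecutive turns with identical content (a crude
    guard against state drift from repeated assertions)."""
    out: List[Turn] = []
    for t in turns:
        if not out or out[-1]["content"] != t["content"]:
            out.append(t)
    return out


def prune_stale(turns: List[Turn], keep_last: int = 6) -> List[Turn]:
    """Drop middle turns but keep the opening turn that anchors the
    task (a crude guard against contextual inertia)."""
    if len(turns) <= keep_last + 1:
        return turns
    return turns[:1] + turns[-keep_last:]


# The "library" of refactoring operators, applied in order.
OPERATORS: List[Callable[[List[Turn]], List[Turn]]] = [merge_repeats, prune_stale]


def maybe_refactor(state: ContextState, max_turns: int = 8) -> ContextState:
    """Monitor step: intervene only when history exceeds a turn budget.
    (ACR instead *learns* when to intervene; a fixed budget stands in here.)"""
    if len(state.turns) <= max_turns:
        return state
    turns = state.turns
    for op in OPERATORS:
        turns = op(turns)
    return ContextState(turns=turns)
```

The key design point the sketch mirrors is the decoupling: refactoring runs as a separate pass over the history, so the model's reasoning at each turn never has to manage its own context.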
Similar Papers
Agentic Context Engineering: Evolving Contexts for Self-Improving Language Models
Machine Learning (CS)
Helps AI remember more details for better thinking.
Adaptive Multi-Agent Response Refinement in Conversational Systems
Computation and Language
Makes chatbots smarter by checking facts and tailoring replies to you.
RefineCoder: Iterative Improving of Large Language Models via Adaptive Critique Refinement for Code Generation
Computation and Language
Helps computers write better code by fixing their own mistakes.