U-Fold: Dynamic Intent-Aware Context Folding for User-Centric Agents
By: Jin Su, Runnan Fang, Yeqiu Li, and more
Potential Business Impact:
Helps AI remember long conversations better.
Large language model (LLM)-based agents have been successfully deployed in many tool-augmented settings, but their scalability is fundamentally constrained by context length. Existing context-folding methods mitigate this issue by summarizing past interactions, yet they are typically designed for single-query or single-intent scenarios. In more realistic user-centric dialogues, we identify two major failure modes: (i) they irreversibly discard fine-grained constraints and intermediate facts that are crucial for later decisions, and (ii) their summaries fail to track evolving user intent, leading to omissions and erroneous actions. To address these limitations, we propose U-Fold, a dynamic context-folding framework tailored to user-centric tasks. U-Fold retains the full user-agent dialogue and tool-call history but, at each turn, uses two core components to produce an intent-aware, evolving dialogue summary and a compact, task-relevant tool log. Extensive experiments on τ-bench, τ²-bench, VitaBench, and harder context-inflated settings show that U-Fold consistently outperforms ReAct (achieving a 71.4% win rate in long-context settings) and prior folding baselines (with improvements of up to 27.0%), particularly on long, noisy, multi-turn tasks. Our study demonstrates that U-Fold is a promising step toward transferring context-management techniques from single-query benchmarks to realistic user-centric applications.
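The abstract describes the per-turn loop only at a high level, so the Python sketch below shows one plausible shape for it: keep the full history, but maintain (i) an intent-aware dialogue summary and (ii) a compact tool log as the model's working context. Every name here (FoldedContext, update_summary, fold_tool_result, on_turn, the llm callable) is a hypothetical illustration inferred from the abstract, not the authors' actual implementation.

from dataclasses import dataclass, field
from typing import Callable, Optional

LLM = Callable[[str], str]  # any text-in/text-out model endpoint

@dataclass
class FoldedContext:
    full_history: list[str] = field(default_factory=list)  # retained in full, never discarded
    summary: str = ""                                      # intent-aware, evolving dialogue summary
    tool_log: list[str] = field(default_factory=list)      # compact, task-relevant tool log

def update_summary(llm: LLM, ctx: FoldedContext, turn: str) -> str:
    # Component 1 (assumed): rewrite the summary so it tracks the user's
    # evolving intent and preserves fine-grained constraints.
    return llm(
        "Current summary:\n" + ctx.summary + "\n\n"
        "New user-agent exchange:\n" + turn + "\n\n"
        "Rewrite the summary, preserving every constraint and noting any "
        "change in what the user wants."
    )

def fold_tool_result(llm: LLM, ctx: FoldedContext, tool_result: str) -> str:
    # Component 2 (assumed): distill a raw tool output into a one-line,
    # task-relevant entry instead of appending it verbatim.
    return llm(
        "Task summary:\n" + ctx.summary + "\n\n"
        "Raw tool output:\n" + tool_result + "\n\n"
        "Extract, in one line, only the facts a later decision might need."
    )

def on_turn(llm: LLM, ctx: FoldedContext, turn: str,
            tool_result: Optional[str] = None) -> FoldedContext:
    # The full record is kept alongside the folded views, so nothing is
    # irreversibly lost; only the compact views enter the model's context.
    ctx.full_history.append(turn)
    ctx.summary = update_summary(llm, ctx, turn)
    if tool_result is not None:
        ctx.full_history.append(tool_result)
        ctx.tool_log.append(fold_tool_result(llm, ctx, tool_result))
    return ctx

Read this way, the folding is lossless at the storage level (full_history is kept) and lossy only in what fills the model's bounded working context (summary plus tool_log), which is what separates the approach from methods that discard past interactions outright.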
Similar Papers
AgentFold: Long-Horizon Web Agents with Proactive Context Management
Computation and Language
Helps AI remember more to do complex tasks.
FoldAct: Efficient and Stable Context Folding for Long-Horizon Search Agents
Machine Learning (CS)
Teaches AI to remember long conversations better.
Scaling Long-Horizon LLM Agent via Context-Folding
Computation and Language
Helps AI remember more for long tasks.