Score: 2

Scaling Long-Horizon LLM Agent via Context-Folding

Published: October 13, 2025 | arXiv ID: 2510.11967v1

By: Weiwei Sun, Miao Lu, Zhan Ling, and more

BigTech Affiliations: ByteDance

Potential Business Impact:

Lets AI agents handle longer tasks by compressing their working context without losing key results.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large language model (LLM) agents are fundamentally constrained by context length on long-horizon tasks. We introduce Context-Folding, a framework that empowers agents to actively manage their working context. An agent can procedurally branch into a sub-trajectory to handle a subtask and then fold it upon completion, collapsing the intermediate steps while retaining a concise summary of the outcome. To make this behavior learnable, we develop an end-to-end reinforcement learning framework, FoldGRPO, with specific process rewards to encourage effective task decomposition and context management. On complex long-horizon tasks (Deep Research and SWE), our folding agent matches or outperforms the ReAct baselines while using an active context 10× smaller, and significantly outperforms models that rely on summarization-based context management.
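The branch/fold mechanism described in the abstract can be sketched as a simple context manager: open a sub-trajectory for a subtask, then collapse its intermediate steps into a one-line summary. This is a minimal illustration, not the paper's implementation; all names (`FoldingContext`, `branch`, `fold`) are hypothetical.

```python
class FoldingContext:
    """Toy sketch of the fold/branch context pattern (names are hypothetical)."""

    def __init__(self):
        self.context = []          # active working context visible to the agent
        self._branch_start = None  # index where the current sub-trajectory began

    def append(self, step):
        self.context.append(step)

    def branch(self, subtask):
        # Open a sub-trajectory dedicated to one subtask.
        self._branch_start = len(self.context)
        self.context.append(f"[branch] {subtask}")

    def fold(self, summary):
        # Collapse the sub-trajectory: drop intermediate steps,
        # keep only a concise summary of the outcome.
        self.context = self.context[:self._branch_start]
        self.context.append(f"[folded] {summary}")
        self._branch_start = None


ctx = FoldingContext()
ctx.append("user: survey recent work on context management")
ctx.branch("search arXiv for relevant papers")
ctx.append("tool call: search(...)")
ctx.append("tool result: 40 hits ...")
ctx.fold("Found 3 relevant papers on context management.")
print(ctx.context)  # intermediate tool steps are gone; only the summary remains
```

After `fold`, the active context holds just the original user step and the summary, which is how the folding agent keeps its context far smaller than a ReAct trace that retains every intermediate step.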

Country of Origin
🇨🇳 China

Repos / Data Links

Page Count
22 pages

Category
Computer Science:
Computation and Language