Agent-Dice: Disentangling Knowledge Updates via Geometric Consensus for Agent Continual Learning

Published: January 7, 2026 | arXiv ID: 2601.03641v2

By: Zheng Wu, Xingyu Lou, Xinbei Ma, and more

Potential Business Impact:

Helps AI learn new things without forgetting old ones.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Large Language Model (LLM)-based agents significantly extend the utility of LLMs by interacting with dynamic environments. However, enabling agents to continually learn new tasks without catastrophic forgetting remains a critical challenge, known as the stability-plasticity dilemma. In this work, we argue that this dilemma fundamentally arises from the failure to explicitly distinguish between common knowledge shared across tasks and conflicting knowledge introduced by task-specific interference. To address this, we propose Agent-Dice, a parameter fusion framework based on directional consensus evaluation. Concretely, Agent-Dice disentangles knowledge updates through a two-stage process: geometric consensus filtering to prune conflicting gradients, and curvature-based importance weighting to amplify shared semantics. We provide a rigorous theoretical analysis that establishes the validity of the proposed fusion scheme and offers insight into the origins of the stability-plasticity dilemma. Extensive experiments in GUI agent and tool-use agent domains demonstrate that Agent-Dice achieves strong continual learning performance with minimal computational overhead and parameter updates. The code is available at https://github.com/Wuzheng02/Agent-Dice.
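To make the two-stage idea concrete, here is a minimal, hypothetical sketch of how consensus filtering and curvature weighting could combine per-task parameter updates. It is not the official Agent-Dice implementation; the function name, the sign-agreement consensus rule, the `consensus_thresh` parameter, and the use of a diagonal Fisher estimate as the curvature proxy are all assumptions for illustration. See the linked repository for the authors' actual formulation.

```python
import numpy as np

def fuse_task_updates(deltas, fisher_diags, consensus_thresh=0.5):
    """Illustrative sketch (not the official Agent-Dice code).

    Stage 1: geometric consensus filtering -- keep only coordinates whose
             update directions agree across tasks, pruning conflicts.
    Stage 2: curvature-based importance weighting -- average the surviving
             updates with weights from a diagonal curvature estimate
             (assumed here to be a per-task Fisher diagonal).

    deltas       : list of 1-D arrays, one parameter-update vector per task
    fisher_diags : list of 1-D arrays, diagonal curvature estimates per task
    """
    D = np.stack(deltas)          # shape (num_tasks, num_params)
    F = np.stack(fisher_diags)    # shape (num_tasks, num_params)

    # Stage 1: fraction of tasks whose update sign matches the sign of the
    # mean update at each coordinate; low agreement means conflict.
    mean_dir = np.sign(D.mean(axis=0))
    agreement = (np.sign(D) == mean_dir).mean(axis=0)
    keep = agreement >= consensus_thresh

    # Stage 2: curvature-weighted average of the per-task updates.
    weights = F / (F.sum(axis=0, keepdims=True) + 1e-12)
    fused = (weights * D).sum(axis=0)

    # Zero out coordinates that failed the consensus check.
    return fused * keep
```

In this sketch, coordinates where tasks push the parameters in conflicting directions are dropped (stability), while the remaining shared directions are amplified according to how important each task considers them (plasticity), mirroring the decomposition described in the abstract.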

Country of Origin
🇨🇳 China

Page Count
16 pages

Category
Computer Science:
Computation and Language