Agent-Dice: Disentangling Knowledge Updates via Geometric Consensus for Agent Continual Learning
By: Zheng Wu, Xingyu Lou, Xinbei Ma and more
Potential Business Impact:
Helps AI learn new things without forgetting old ones.
Large Language Model (LLM)-based agents significantly extend the utility of LLMs by interacting with dynamic environments. However, enabling agents to continually learn new tasks without catastrophic forgetting remains a critical challenge, known as the stability-plasticity dilemma. In this work, we argue that this dilemma fundamentally arises from the failure to explicitly distinguish between common knowledge shared across tasks and conflicting knowledge introduced by task-specific interference. To address this, we propose Agent-Dice, a parameter fusion framework based on directional consensus evaluation. Concretely, Agent-Dice disentangles knowledge updates through a two-stage process: geometric consensus filtering to prune conflicting gradients, and curvature-based importance weighting to amplify shared semantics. We provide a rigorous theoretical analysis that establishes the validity of the proposed fusion scheme and offers insight into the origins of the stability-plasticity dilemma. Extensive experiments in the GUI agent and tool-use agent domains demonstrate that Agent-Dice achieves outstanding continual learning performance with minimal computational overhead and parameter updates. The code is available at https://github.com/Wuzheng02/Agent-Dice.
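The two-stage fusion described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the sign-based consensus rule and the diagonal (squared-gradient) curvature estimate below are assumptions standing in for the actual geometric consensus filtering and curvature-based weighting, and all function names are hypothetical.

```python
import numpy as np

def consensus_filter(deltas):
    """Stage 1 (assumed form): elect a consensus sign per parameter by
    summed update magnitude, then zero out task updates that conflict
    with it, pruning conflicting gradient directions."""
    sign = np.sign(deltas.sum(axis=0))          # (num_params,)
    mask = np.sign(deltas) == sign              # keep agreeing components
    return deltas * mask, sign

def curvature_weighted_merge(deltas, curvature):
    """Stage 2 (assumed form): weight the surviving updates by a
    per-task diagonal curvature estimate (e.g. squared gradients),
    amplifying directions that matter across tasks."""
    filtered, _ = consensus_filter(deltas)
    weights = curvature / (curvature.sum(axis=0, keepdims=True) + 1e-8)
    return (weights * filtered).sum(axis=0)     # fused update, (num_params,)

# Toy usage: 3 task-specific updates over 8 parameters.
rng = np.random.default_rng(0)
deltas = rng.normal(size=(3, 8))
curvature = rng.random(size=(3, 8))
merged = curvature_weighted_merge(deltas, curvature)
print(merged.shape)
```

Under this sketch, a parameter pushed in opposite directions by different tasks contributes only its majority-sign components to the fused update, while the curvature weights decide how strongly each surviving task update counts.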