ARM: Role-Conditioned Neuron Transplantation for Training-Free Generalist LLM Agent Merging

Published: January 12, 2026 | arXiv ID: 2601.07309v1

By: Zhuoka Feng, Kang Chen, Sihan Zhao, and more

Potential Business Impact:

Merges multiple specialized LLM agents into a single generalist model without any retraining.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Interactive large language model (LLM) agents have advanced rapidly, but most remain specialized to a single environment and fail to adapt robustly to others. Model merging offers a training-free alternative by integrating multiple experts into a single model. In this paper, we propose Agent-Role Merging (ARM), an activation-guided, role-conditioned neuron transplantation method for merging LLM agents. ARM extends existing merging methods from static natural-language tasks to multi-turn agent scenarios and improves generalization across diverse interactive environments. This is achieved with a three-step framework: 1) constructing merged backbones, 2) selecting neurons via role-conditioned activation analysis, and 3) transplanting neurons for fine-grained refinement. Without gradient-based optimization, ARM improves cross-benchmark generalization while remaining efficient. Across diverse domains, the model obtained via ARM merging outperforms prior model-merging methods and domain-specific expert models, while demonstrating strong out-of-domain generalization.
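The paper itself does not include code here, but the activation-guided transplantation idea in steps 2 and 3 can be sketched at a toy scale. The sketch below is an illustrative assumption, not the authors' implementation: it scores each neuron (row of a weight matrix) by its mean absolute activation on role-conditioned inputs, then copies the expert's most role-active neurons into a merged backbone. All names (`role_activation_scores`, `transplant_neurons`, `top_k`) are hypothetical.

```python
import numpy as np

def role_activation_scores(weights, inputs):
    # Score each neuron (row of `weights`) by its mean absolute
    # activation over role-conditioned inputs; a higher score is
    # taken as a proxy for role relevance.
    acts = inputs @ weights.T            # (n_samples, n_neurons)
    return np.abs(acts).mean(axis=0)     # (n_neurons,)

def transplant_neurons(backbone_w, expert_w, role_inputs, top_k):
    # Copy the expert's top-k most role-active neuron weights into
    # the merged backbone, leaving all other neurons untouched.
    scores = role_activation_scores(expert_w, role_inputs)
    idx = np.argsort(scores)[-top_k:]    # indices of top-k neurons
    merged = backbone_w.copy()
    merged[idx] = expert_w[idx]
    return merged, idx

# Toy single-layer demo: 8 neurons, 4-dimensional inputs.
rng = np.random.default_rng(0)
backbone = rng.normal(size=(8, 4))
expert = rng.normal(size=(8, 4))
role_inputs = rng.normal(size=(16, 4))   # stand-in for role-conditioned prompts

merged, moved = transplant_neurons(backbone, expert, role_inputs, top_k=2)
```

Note this is gradient-free: the merge is a pure selection-and-copy over existing weights, which is what makes the overall procedure training-free.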

Country of Origin
🇨🇳 China

Page Count
17 pages

Category
Computer Science:
Artificial Intelligence