Score: 1

ReasonAny: Incorporating Reasoning Capability to Any Model via Simple and Effective Model Merging

Published: January 9, 2026 | arXiv ID: 2601.05560v1

By: Junyao Yang, Chen Qian, Dongrui Liu, and more

Potential Business Impact:

Lets AI learn new skills without forgetting old ones.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large Reasoning Models (LRMs) with long chain-of-thought reasoning have recently achieved remarkable success. Yet equipping domain-specialized models with such reasoning capabilities, referred to as "Reasoning + X", remains a significant challenge. While model merging offers a promising training-free solution, existing methods often suffer from a destructive performance collapse: they tend to both weaken reasoning depth and compromise domain-specific utility. Interestingly, we identify a counter-intuitive phenomenon underlying this failure: reasoning ability predominantly resides in parameter regions with low gradient sensitivity, contrary to the common assumption that domain capabilities correspond to high-magnitude parameters. Motivated by this insight, we propose ReasonAny, a novel merging framework that resolves the reasoning-domain performance collapse through Contrastive Gradient Identification. Experiments across safety, biomedicine, and finance domains show that ReasonAny effectively synthesizes "Reasoning + X" capabilities, significantly outperforming state-of-the-art baselines while retaining robust reasoning performance.
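To make the core idea concrete, here is a minimal sketch of gradient-sensitivity-guided merging, assuming PyTorch state dicts for a shared base model, a reasoning-tuned model, and a domain-tuned model. The squared-gradient sensitivity proxy, the `keep_ratio` parameter, and all function names are illustrative assumptions; this is not the paper's implementation of Contrastive Gradient Identification, only an example of transferring reasoning-model deltas at low-sensitivity coordinates as the abstract suggests.

```python
import torch

def low_sensitivity_mask(sensitivity: torch.Tensor, keep_ratio: float = 0.3) -> torch.Tensor:
    """Mark the fraction of coordinates with the LOWEST gradient sensitivity.

    Per the abstract's observation, reasoning ability is hypothesized to
    reside in these low-sensitivity regions. `sensitivity` could be, e.g.,
    accumulated squared gradients on domain data (an assumption here).
    """
    k = max(1, int(keep_ratio * sensitivity.numel()))
    threshold = torch.kthvalue(sensitivity.flatten(), k).values
    return sensitivity <= threshold

@torch.no_grad()
def merge_reasoning_into_domain(base_sd, reasoning_sd, domain_sd, sensitivity_sd,
                                keep_ratio: float = 0.3):
    """Add the reasoning-model delta (reasoning - base) to the domain model,
    but only at coordinates flagged as low-sensitivity, leaving the rest of
    the domain model untouched."""
    merged = {}
    for name, base in base_sd.items():
        delta = reasoning_sd[name] - base                      # reasoning task vector
        mask = low_sensitivity_mask(sensitivity_sd[name], keep_ratio)
        merged[name] = torch.where(mask, domain_sd[name] + delta, domain_sd[name])
    return merged
```

In this sketch the domain model's high-sensitivity parameters are preserved verbatim, which is one plausible way to avoid the reasoning-domain collapse the abstract describes; the paper's actual contrastive criterion may select parameters differently.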

Country of Origin
🇸🇬 🇨🇳 Singapore, China

Page Count
22 pages

Category
Computer Science:
Computation and Language