Leveraging Parameter Space Symmetries for Reasoning Skill Transfer in LLMs

Published: November 13, 2025 | arXiv ID: 2511.10850v1

By: Stefan Horoi, Sangwoo Cho, Supriyo Chakraborty, and more

Potential Business Impact:

Enables advanced reasoning skills to be transferred between large language models without redundant fine-tuning.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Task arithmetic is a powerful technique for transferring skills between Large Language Models (LLMs), but it often suffers from negative interference when models have diverged during training. We address this limitation by first aligning the models' parameter spaces, leveraging the inherent permutation, rotation, and scaling symmetries of Transformer architectures. We adapt parameter space alignment for modern Grouped-Query Attention (GQA) and SwiGLU layers, exploring both weight-based and activation-based approaches. Using this alignment-first strategy, we successfully transfer advanced reasoning skills to a non-reasoning model. Experiments on challenging reasoning benchmarks show that our method consistently outperforms standard task arithmetic. This work provides an effective approach for merging and transferring specialized skills across evolving LLM families, reducing redundant fine-tuning and enhancing model adaptability.
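To make the "alignment-first" idea concrete, below is a minimal sketch of task arithmetic with a parameter-space alignment step. It is not the paper's implementation: the layer names (`mlp.up_proj`, `mlp.gate_proj`, `mlp.down_proj`), the toy state-dict interface, and the Hungarian-matching heuristic for the hidden-unit permutation are all illustrative assumptions; the paper additionally handles GQA attention, rotation and scaling symmetries, and activation-based matching.

```python
# Minimal sketch of alignment-first task arithmetic (illustrative only).
# Assumes state dicts of CPU torch tensors for a single SwiGLU MLP block.
import torch
from scipy.optimize import linear_sum_assignment


def match_hidden_permutation(w_a: torch.Tensor, w_b: torch.Tensor) -> torch.Tensor:
    """Find a permutation of w_b's rows (hidden units) that best matches w_a.

    Uses the Hungarian algorithm on a row-similarity cost, a common
    weight-based matching heuristic for MLP hidden dimensions.
    """
    cost = -(w_a @ w_b.T)                    # higher inner product = better match
    _, col_ind = linear_sum_assignment(cost.numpy())
    return torch.as_tensor(col_ind)          # perm[i] = row of w_b matched to row i of w_a


def task_arithmetic_transfer(base, reasoning_ft, target, lam=1.0):
    """Transfer a reasoning skill: target + lam * (aligned(reasoning_ft) - aligned(base))."""
    # Align the donor family's hidden-unit ordering to the target model first.
    perm = match_hidden_permutation(target["mlp.up_proj"], base["mlp.up_proj"])
    merged = {}
    for name, w_t in target.items():
        w_b, w_r = base[name], reasoning_ft[name]
        if name in ("mlp.up_proj", "mlp.gate_proj"):   # permute output (hidden) dim
            w_b, w_r = w_b[perm], w_r[perm]
        elif name == "mlp.down_proj":                  # permute input (hidden) dim
            w_b, w_r = w_b[:, perm], w_r[:, perm]
        merged[name] = w_t + lam * (w_r - w_b)         # task vector in the aligned space
    return merged
```

The key design point mirrored here is ordering: the task vector (fine-tuned minus base) is computed only after both donor checkpoints have been mapped into the target's parameter space, which is what reduces the negative interference seen with plain task arithmetic on diverged models.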

Page Count
11 pages

Category
Computer Science:
Computation and Language