Leveraging Parameter Space Symmetries for Reasoning Skill Transfer in LLMs
By: Stefan Horoi, Sangwoo Cho, Supriyo Chakraborty, and more
Potential Business Impact:
Helps AI models share smart thinking skills better.
Task arithmetic is a powerful technique for transferring skills between Large Language Models (LLMs), but it often suffers from negative interference when models have diverged during training. We address this limitation by first aligning the models' parameter spaces, leveraging the inherent permutation, rotation, and scaling symmetries of Transformer architectures. We adapt parameter space alignment for modern Grouped-Query Attention (GQA) and SwiGLU layers, exploring both weight-based and activation-based approaches. Using this alignment-first strategy, we successfully transfer advanced reasoning skills to a non-reasoning model. Experiments on challenging reasoning benchmarks show that our method consistently outperforms standard task arithmetic. This work provides an effective approach for merging and transferring specialized skills across evolving LLM families, reducing redundant fine-tuning and enhancing model adaptability.
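To make the alignment-first idea concrete, below is a minimal sketch, not the paper's implementation, of weight-based permutation alignment for a single linear layer followed by task arithmetic. The function names (`align_layer`, `task_arithmetic_transfer`), the parameter dictionaries, and the scaling factor `alpha` are hypothetical; the sketch handles only the permutation symmetry of one layer and omits the rotation and scaling symmetries, the GQA/SwiGLU-specific handling, and the propagation of permutations across consecutive Transformer layers that the abstract describes.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def align_layer(W_base, W_donor):
    """Weight-based alignment: find the permutation of the donor layer's
    output neurons that best matches the base layer's weight rows."""
    # cost[i, j] = -<W_base row i, W_donor row j>; minimizing total cost
    # maximizes the summed inner products over the matching.
    cost = -W_base @ W_donor.T
    rows, cols = linear_sum_assignment(cost)
    P = np.zeros_like(cost)
    P[rows, cols] = 1.0
    return P  # P @ W_donor reorders donor rows to line up with the base model


def task_arithmetic_transfer(theta_base, theta_donor_base, theta_donor_ft, alpha=1.0):
    """Transfer a skill via task arithmetic in the aligned parameter space:
    theta_new = theta_base + alpha * P(theta_donor_ft - theta_donor_base),
    applied independently per layer (a simplification of the real setting)."""
    merged = {}
    for name, W_base in theta_base.items():
        P = align_layer(W_base, theta_donor_base[name])
        # Task vector computed after mapping the donor into the base's parameter space.
        task_vector = P @ theta_donor_ft[name] - P @ theta_donor_base[name]
        merged[name] = W_base + alpha * task_vector
    return merged


if __name__ == "__main__":
    # Toy usage: the donor is a row-permuted copy of the base, fine-tuned by a small shift.
    rng = np.random.default_rng(0)
    theta_base = {"mlp": rng.normal(size=(8, 4))}
    theta_donor_base = {"mlp": theta_base["mlp"][::-1].copy()}
    theta_donor_ft = {"mlp": theta_donor_base["mlp"] + 0.1}
    merged = task_arithmetic_transfer(theta_base, theta_donor_base, theta_donor_ft)
    print(merged["mlp"] - theta_base["mlp"])  # recovers the +0.1 shift after alignment
```

Without the alignment step, the same subtraction would mix mismatched neurons and produce the negative interference the abstract refers to; aligning first makes the donor's task vector meaningful in the base model's coordinates.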
Similar Papers
An Investigation of Robustness of LLMs in Mathematical Reasoning: Benchmarking with Mathematically-Equivalent Transformation of Advanced Mathematical Problems
Computation and Language
Tests if AI can do math, even when words change.
Investigating Task Arithmetic for Zero-Shot Information Retrieval
Information Retrieval
Combines AI knowledge for better search results.
AbstRaL: Augmenting LLMs' Reasoning by Reinforcing Abstract Thinking
Computation and Language
Teaches computers to solve math problems better.