High-Rank Structured Modulation for Parameter-Efficient Fine-Tuning

Published: January 12, 2026 | arXiv ID: 2601.07507v1

By: Yongkang Liu, Xing Li, Mengjie Zhao, and more

Potential Business Impact:

Enables large language models to be adapted to new tasks with far fewer trainable parameters, reducing compute and memory costs.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

As the number of model parameters increases, parameter-efficient fine-tuning (PEFT) has become the go-to choice for tailoring pre-trained large language models. Low-Rank Adaptation (LoRA), which approximates full-parameter fine-tuning with a low-rank update, is widely used to reduce resource requirements. However, decreasing the rank limits representational capacity compared to full-parameter fine-tuning. We present SMoA, a high-rank Structured MOdulation Adapter that uses fewer trainable parameters while maintaining a higher rank, thereby improving the model's representational capacity and offering greater performance potential. The core idea is to freeze the original pretrained weights and selectively amplify or suppress their important features across multiple subspaces. This subspace mechanism provides an efficient way to increase the capacity and complexity of the model. We conduct both theoretical analyses and empirical studies on various tasks. Experimental results show that SMoA outperforms LoRA and its variants on 10 tasks, and extensive ablation studies validate its effectiveness.
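
To make the contrast with LoRA concrete, here is a minimal PyTorch sketch. LoRALinear follows the standard low-rank update; StructuredModulationLinear is only a hypothetical illustration of the "amplify or suppress features across subspaces" idea from the abstract. The class names, the column-wise subspace split, and the per-subspace scaling are assumptions made for illustration, not the paper's actual SMoA formulation.

```python
# Sketch: LoRA-style low-rank update vs. a hypothetical structured-modulation
# adapter that rescales a frozen weight across subspaces (illustrative only).
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen linear layer plus a rank-r update: y = W x + B A x."""

    def __init__(self, in_features, out_features, rank=8):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features),
                                   requires_grad=False)  # frozen pretrained weight
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))

    def forward(self, x):
        return x @ self.weight.T + x @ self.lora_A.T @ self.lora_B.T


class StructuredModulationLinear(nn.Module):
    """Hypothetical sketch: split the input dimensions into k subspaces and
    learn a small scaling vector per subspace that amplifies or suppresses
    the corresponding columns of the frozen weight."""

    def __init__(self, in_features, out_features, num_subspaces=4):
        super().__init__()
        assert in_features % num_subspaces == 0
        self.weight = nn.Parameter(torch.randn(out_features, in_features),
                                   requires_grad=False)  # frozen pretrained weight
        # One scale per input feature, grouped by subspace: far fewer trainable
        # parameters than a dense update, while the modulated weight keeps the
        # full rank of the original matrix.
        self.scales = nn.Parameter(torch.ones(num_subspaces,
                                              in_features // num_subspaces))

    def forward(self, x):
        modulation = self.scales.reshape(-1)            # (in_features,)
        modulated_weight = self.weight * modulation     # broadcast over columns
        return x @ modulated_weight.T


if __name__ == "__main__":
    x = torch.randn(2, 16)
    print(LoRALinear(16, 32)(x).shape)                  # torch.Size([2, 32])
    print(StructuredModulationLinear(16, 32)(x).shape)  # torch.Size([2, 32])
```

In this toy setup the LoRA branch trains 384 parameters (rank 8), while the modulation branch trains only 16 scales yet leaves the effective weight full-rank, which is the kind of parameter-vs-rank trade-off the abstract emphasizes.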

Page Count
13 pages

Category
Computer Science:
Computation and Language