Score: 1

FRoD: Full-Rank Efficient Fine-Tuning with Rotational Degrees for Fast Convergence

Published: December 29, 2025 | arXiv ID: 2512.23485v1

By: Guoan Wan, Tianyu Chen, Fangzheng Feng, and more

Potential Business Impact:

Lets large AI models be adapted to new tasks faster, while training only a small fraction of their parameters.

Business Areas:
EdTech Education, Software

Parameter-efficient fine-tuning (PEFT) methods have emerged as a practical solution for adapting large foundation models to downstream tasks, reducing computational and memory costs by updating only a small subset of parameters. Among them, approaches like LoRA aim to strike a balance between efficiency and expressiveness, but they often suffer from slow convergence and limited adaptation capacity due to their inherent low-rank constraints. This trade-off hampers the ability of PEFT methods to capture the complex patterns needed for diverse tasks. To address these challenges, we propose FRoD, a novel fine-tuning method that combines hierarchical joint decomposition with rotational degrees of freedom. By extracting a globally shared basis across layers and injecting sparse, learnable perturbations into the scaling factors, FRoD enables flexible full-rank updates, enhancing expressiveness and efficiency and leading to faster, more robust convergence. On 20 benchmarks spanning vision, reasoning, and language understanding, FRoD matches full fine-tuning in accuracy while training only 1.72% of the parameters under identical training budgets.
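The abstract only outlines the mechanism, so the following is a minimal, non-authoritative PyTorch sketch of the general idea it describes: a frozen basis shared across layers, with a small learnable scaling vector as the only trainable parameters per layer. The class name `FRoDLikeLinear`, the arguments `shared_U`/`shared_V`, and the exact update rule are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class FRoDLikeLinear(nn.Module):
    """Illustrative sketch only: a frozen linear layer whose weight update is
    built from a frozen shared basis plus a small learnable scaling vector.
    Names and the update rule are assumptions, not the paper's method."""

    def __init__(self, base_layer: nn.Linear, shared_U: torch.Tensor,
                 shared_V: torch.Tensor):
        super().__init__()
        self.base = base_layer  # pretrained layer, kept frozen
        for p in self.base.parameters():
            p.requires_grad_(False)

        # Frozen bases assumed to be extracted once and shared across layers
        # (e.g., from a joint decomposition of the pretrained weights).
        # Shapes: U is (out_features, k), V is (in_features, k).
        self.register_buffer("U", shared_U)
        self.register_buffer("V", shared_V)

        # Only trainable parameters: one scaling factor per basis direction.
        # With k = min(out, in) the resulting update can be full rank even
        # though only k scalars are trained in this layer.
        k = shared_U.shape[1]
        self.scale = nn.Parameter(torch.zeros(k))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # delta_W = U @ diag(scale) @ V^T, added on top of the frozen weight.
        delta_w = self.U @ torch.diag(self.scale) @ self.V.T
        return self.base(x) + x @ delta_w.T

# Hypothetical usage: derive the bases from an existing layer's SVD and wrap it.
layer = nn.Linear(64, 64)
U, S, Vh = torch.linalg.svd(layer.weight.detach(), full_matrices=False)
wrapped = FRoDLikeLinear(layer, U, Vh.T)
out = wrapped(torch.randn(8, 64))
```

Initializing the scaling vector to zero keeps the wrapped layer's output identical to the pretrained layer at the start of fine-tuning, which is a common design choice in PEFT methods; whether FRoD does the same is not stated in this summary.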

Country of Origin
🇨🇳 China

Repos / Data Links

Page Count
32 pages

Category
Computer Science:
Machine Learning (CS)