AFA-LoRA: Enabling Non-Linear Adaptations in LoRA with Activation Function Annealing

Published: December 27, 2025 | arXiv ID: 2512.22455v1

By: Jiacheng Li, Jianchao Tan, Zhidong Yang and more

Potential Business Impact:

Lets AI models be fine-tuned more effectively without the cost of full-parameter training.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Low-Rank Adaptation (LoRA) is a widely adopted parameter-efficient fine-tuning (PEFT) method. However, its purely linear adaptation limits its expressive power, leaving a gap between linear adaptation and non-linear training. To bridge this gap, we propose AFA-LoRA, a novel training strategy that brings non-linear expressivity to LoRA while maintaining its seamless mergeability. Our key innovation is an annealed activation function that transitions from a non-linear to a linear transformation during training, allowing the adapter to draw on stronger representational capacity early on before converging to a mergeable linear form. We apply our method to supervised fine-tuning, reinforcement learning, and speculative decoding. The results show that AFA-LoRA narrows the performance gap between LoRA and full-parameter training. This work enables a more powerful and practical paradigm of parameter-efficient adaptation.
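
The core idea is simple enough to illustrate with a minimal sketch. The snippet below assumes a linear annealing schedule and a SiLU non-linearity inserted between the LoRA down- and up-projections; the class names, schedule, and choice of activation are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn


class AnnealedActivation(nn.Module):
    """Interpolates between a non-linearity and the identity as training
    progresses, so the adapter starts with extra expressivity and ends
    fully linear (and therefore mergeable). Illustrative sketch only."""

    def __init__(self, total_steps: int, nonlinearity: nn.Module = nn.SiLU()):
        super().__init__()
        self.total_steps = total_steps
        self.nonlinearity = nonlinearity
        self.step = 0  # current training step

    def anneal_factor(self) -> float:
        # Assumed schedule: linear decay from 1 (fully non-linear) to 0 (identity).
        return max(0.0, 1.0 - self.step / self.total_steps)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        alpha = self.anneal_factor()
        # Convex blend of f(x) and x; becomes the identity once alpha reaches 0.
        return alpha * self.nonlinearity(x) + (1.0 - alpha) * x

    def advance(self) -> None:
        self.step += 1


class AFALoRALinear(nn.Module):
    """Frozen base linear layer plus a low-rank adapter, with the annealed
    activation placed between the down- and up-projections."""

    def __init__(self, base: nn.Linear, rank: int, total_steps: int):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)  # standard LoRA-style zero init of the up projection
        self.act = AnnealedActivation(total_steps)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.up(self.act(self.down(x)))

    def merge(self) -> nn.Linear:
        # Valid only after annealing finishes (activation is the identity):
        # W_merged = W_base + W_up @ W_down, exactly as with plain LoRA.
        merged = nn.Linear(self.base.in_features, self.base.out_features,
                           bias=self.base.bias is not None)
        merged.weight.data = self.base.weight.data + self.up.weight.data @ self.down.weight.data
        if self.base.bias is not None:
            merged.bias.data = self.base.bias.data.clone()
        return merged
```

In this sketch, `advance()` would be called on the activation once per optimizer step; after the anneal factor reaches zero the adapter path is purely linear, so `merge()` folds it back into the base weight with no change to inference behavior.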

Page Count
10 pages

Category
Computer Science:
Machine Learning (CS)