AFA-LoRA: Enabling Non-Linear Adaptations in LoRA with Activation Function Annealing
By: Jiacheng Li, Jianchao Tan, Zhidong Yang and more
Potential Business Impact:
Makes AI learn better without needing more training.
Low-Rank Adaptation (LoRA) is a widely adopted parameter-efficient fine-tuning (PEFT) method, but its purely linear adaptation limits its expressive power, leaving a gap between what linear adapters and non-linear (full-parameter) training can represent. To bridge this gap, we propose AFA-LoRA, a novel training strategy that brings non-linear expressivity to LoRA while maintaining its seamless mergeability. Our key innovation is an annealed activation function that transitions from a non-linear to a linear transformation during training, allowing the adapter to exploit stronger representational capacity early on before converging to a mergeable linear form. We apply the method to supervised fine-tuning, reinforcement learning, and speculative decoding; the results show that AFA-LoRA narrows the performance gap between LoRA and full-parameter training. This work enables a more powerful and practical paradigm for parameter-efficient adaptation.
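The annealing idea can be pictured as blending a nonlinearity with the identity inside the LoRA bottleneck, with the blend weight scheduled toward fully linear over training so the adapter can still be merged into the base weight at the end. Below is a minimal PyTorch sketch of that idea; the SiLU nonlinearity, the linear annealing schedule, and the names `AnnealedLoRALinear`, `set_anneal`, and `merge` are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AnnealedLoRALinear(nn.Module):
    """LoRA adapter with an annealed activation between the A and B projections.

    The activation is a blend of a nonlinearity and the identity:
        h = (1 - alpha) * silu(A x) + alpha * (A x)
    where alpha is annealed from 0 (fully non-linear) to 1 (fully linear)
    during training, so the final adapter is purely linear and mergeable.
    (Schedule and nonlinearity are assumed for illustration.)
    """

    def __init__(self, base: nn.Linear, rank: int = 8, scaling: float = 1.0):
        super().__init__()
        self.base = base  # frozen pretrained linear layer
        self.lora_A = nn.Linear(base.in_features, rank, bias=False)
        self.lora_B = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_B.weight)  # adapter starts as a no-op
        self.scaling = scaling
        self.alpha = 0.0  # 0 = non-linear, 1 = linear (mergeable)

    def set_anneal(self, step: int, total_steps: int):
        # Simple linear schedule: non-linear at step 0, linear by the end.
        self.alpha = min(1.0, step / max(1, total_steps))

    def forward(self, x):
        h = self.lora_A(x)
        h = (1.0 - self.alpha) * F.silu(h) + self.alpha * h
        return self.base(x) + self.scaling * self.lora_B(h)

    @torch.no_grad()
    def merge(self) -> nn.Linear:
        # Only valid once alpha == 1: the adapter is then purely linear,
        # so B @ A folds into the frozen base weight.
        assert self.alpha >= 1.0, "anneal to linear before merging"
        merged = nn.Linear(self.base.in_features, self.base.out_features,
                           bias=self.base.bias is not None)
        merged.weight.copy_(self.base.weight +
                            self.scaling * self.lora_B.weight @ self.lora_A.weight)
        if self.base.bias is not None:
            merged.bias.copy_(self.base.bias)
        return merged
```

In this sketch, `set_anneal` would be called once per optimizer step from the training loop; after annealing completes, `merge()` produces a plain `nn.Linear` with no extra inference cost, matching the mergeability property the abstract describes.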
Similar Papers
AuroRA: Breaking Low-Rank Bottleneck of LoRA with Nonlinear Mapping
Machine Learning (CS)
Makes AI learn better with fewer changes.
Don't Forget the Nonlinearity: Unlocking Activation Functions in Efficient Fine-Tuning
Machine Learning (CS)
Makes AI smarter by changing how it learns.