Exploring Sparsity for Parameter Efficient Fine Tuning Using Wavelets
By: Ahmet Bilican, M. Akın Yılmaz, A. Murat Tekalp, and more
Potential Business Impact:
Makes AI learn better with less computer power.
Efficiently adapting large foundation models is critical, especially under tight compute and memory budgets. Parameter-Efficient Fine-Tuning (PEFT) methods such as LoRA offer only coarse control over the number of trainable parameters and lose effectiveness in few-parameter regimes. We propose Wavelet Fine-Tuning (WaveFT), a novel PEFT method that learns highly sparse updates in the wavelet domain of residual matrices. WaveFT allows precise control of the trainable parameter count, offering fine-grained capacity adjustment and excelling at remarkably low parameter budgets, potentially far below LoRA's minimum, which makes it well suited to extreme parameter-efficiency scenarios. Evaluated on personalized text-to-image generation with Stable Diffusion XL as the baseline, WaveFT significantly outperforms LoRA and other PEFT methods, especially at low parameter counts, achieving superior subject fidelity, prompt alignment, and image diversity.
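To make the idea concrete, below is a minimal PyTorch sketch of the core mechanism: a fixed number of trainable coefficients are placed at sparse positions in the wavelet domain and mapped back to a dense weight update through an inverse wavelet transform. This is an illustration only, not the authors' implementation; the single-level Haar transform, the WaveletSparseDelta name, and parameters such as k and scale are assumptions made for this example.

```python
# Illustrative sketch (not the authors' code): sparse trainable coefficients in
# the wavelet domain, turned into a dense weight update via an inverse transform.
import torch
import torch.nn as nn


def haar_idwt2(ll, lh, hl, hh):
    # Single-level 2D inverse Haar transform (orthonormal), written with plain
    # tensor ops so the coefficient-to-update mapping stays differentiable.
    h, w = ll.shape
    out = torch.zeros(2 * h, 2 * w, dtype=ll.dtype, device=ll.device)
    out[0::2, 0::2] = (ll + lh + hl + hh) / 2
    out[0::2, 1::2] = (ll + lh - hl - hh) / 2
    out[1::2, 0::2] = (ll - lh + hl - hh) / 2
    out[1::2, 1::2] = (ll - lh - hl + hh) / 2
    return out


class WaveletSparseDelta(nn.Module):
    # Hypothetical module (names are my own): k trainable coefficients at fixed
    # random positions in the wavelet plane; the dense weight update is their
    # inverse transform, added to a frozen base weight.
    def __init__(self, base_weight, k=64, scale=1.0):
        super().__init__()
        out_f, in_f = base_weight.shape
        assert out_f % 2 == 0 and in_f % 2 == 0, "sketch assumes even dimensions"
        self.register_buffer("base_weight", base_weight)
        self.scale = scale
        idx = torch.randperm(out_f * in_f)[:k]            # fixed sparsity pattern
        self.register_buffer("rows", torch.div(idx, in_f, rounding_mode="floor"))
        self.register_buffer("cols", idx % in_f)
        self.coeffs = nn.Parameter(torch.zeros(k))        # the only trained values

    def delta(self):
        out_f, in_f = self.base_weight.shape
        plane = torch.zeros(out_f, in_f, device=self.coeffs.device)
        plane = plane.index_put((self.rows, self.cols), self.coeffs)
        h, w = out_f // 2, in_f // 2
        return haar_idwt2(plane[:h, :w], plane[:h, w:],
                          plane[h:, :w], plane[h:, w:])

    def forward(self, x):
        return x @ (self.base_weight + self.scale * self.delta()).t()


# Usage sketch: wrap one frozen linear weight with 16 trainable coefficients.
layer = nn.Linear(64, 64, bias=False)
adapter = WaveletSparseDelta(layer.weight.detach().clone(), k=16)
out = adapter(torch.randn(8, 64))   # only adapter.coeffs receives gradients
```

Because k is an arbitrary integer rather than a rank, the number of trainable parameters can be dialed to essentially any value, which is the fine-grained capacity control the abstract refers to.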
Similar Papers
Generalized Tensor-based Parameter-Efficient Fine-Tuning via Lie Group Transformations
Machine Learning (CS)
Makes AI learn new things faster and cheaper.
Quantum-PEFT: Ultra parameter-efficient fine-tuning
Machine Learning (CS)
Makes AI learn faster with fewer computer parts.
1LoRA: Summation Compression for Very Low-Rank Adaptation
CV and Pattern Recognition
Makes big computer brains learn faster with less effort.