Exploring Sparsity for Parameter Efficient Fine Tuning Using Wavelets

Published: May 18, 2025 | arXiv ID: 2505.12532v2

By: Ahmet Bilican, M. Akın Yılmaz, A. Murat Tekalp, and more

Potential Business Impact:

Enables large AI models to be fine-tuned with far fewer trainable parameters, reducing compute and memory costs.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Efficiently adapting large foundation models is critical, especially under tight compute and memory budgets. Parameter-Efficient Fine-Tuning (PEFT) methods such as LoRA offer limited granularity and effectiveness in few-parameter regimes. We propose Wavelet Fine-Tuning (WaveFT), a novel PEFT method that learns highly sparse updates in the wavelet domain of residual matrices. WaveFT allows precise control over the number of trainable parameters, offering fine-grained capacity adjustment and excelling at remarkably low parameter counts, potentially far below LoRA's minimum, which makes it ideal for extreme parameter-efficiency scenarios. Evaluated on personalized text-to-image generation with Stable Diffusion XL as the baseline, WaveFT significantly outperforms LoRA and other PEFT methods, especially at low parameter counts, achieving superior subject fidelity, prompt alignment, and image diversity.
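The core idea in the abstract can be sketched as follows: instead of a low-rank factorization as in LoRA, the weight update is parameterized by a small, fixed set of trainable coefficients in the wavelet domain, and the dense update is recovered via an inverse wavelet transform. The sketch below is a minimal illustration under assumed details (a one-level orthonormal Haar transform, a random sparse support, and the layer size `d` are all hypothetical choices, not the paper's exact configuration):

```python
import numpy as np

def inverse_haar_2d(coeffs):
    """One-level inverse 2D orthonormal Haar transform.

    `coeffs` is a (2m, 2n) matrix laid out in [[LL, LH], [HL, HH]] subbands.
    """
    m, n = coeffs.shape[0] // 2, coeffs.shape[1] // 2
    ll, lh = coeffs[:m, :n], coeffs[:m, n:]
    hl, hh = coeffs[m:, :n], coeffs[m:, n:]
    out = np.zeros_like(coeffs, dtype=float)
    out[0::2, 0::2] = (ll + lh + hl + hh) / 2
    out[0::2, 1::2] = (ll - lh + hl - hh) / 2
    out[1::2, 0::2] = (ll + lh - hl - hh) / 2
    out[1::2, 1::2] = (ll - lh - hl + hh) / 2
    return out

rng = np.random.default_rng(0)
d = 8                                       # hypothetical layer dimension
W0 = rng.standard_normal((d, d))            # frozen pretrained weight
k = 5                                       # trainable-parameter budget (any k, not tied to a rank)
idx = rng.choice(d * d, size=k, replace=False)   # fixed sparse support in the wavelet domain
theta = np.zeros(d * d)
theta[idx] = 0.1 * rng.standard_normal(k)   # the only k trainable values
delta_W = inverse_haar_2d(theta.reshape(d, d))   # dense update from sparse wavelet coefficients
W = W0 + delta_W                            # adapted weight used at inference
```

Note how the budget `k` can be any integer, whereas a rank-`r` LoRA update on a `d x d` layer costs at least `2 * d * r` parameters; this is the fine-grained capacity control the abstract highlights.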

Page Count
22 pages

Category
Computer Science:
CV and Pattern Recognition