QWHA: Quantization-Aware Walsh-Hadamard Adaptation for Parameter-Efficient Fine-Tuning on Large Language Models
By: Hyesung Jeon, Seojune Lee, Beomseok Kang, and more
Potential Business Impact:
Makes AI smarter and faster using less computer power.
The demand for efficient deployment of large language models (LLMs) has driven interest in quantization, which reduces inference cost, and parameter-efficient fine-tuning (PEFT), which lowers training overhead. This has motivated the development of quantization-aware PEFT to produce accurate yet efficient quantized models. In this setting, reducing quantization error prior to fine-tuning is crucial for achieving high model accuracy. However, existing methods that rely on low-rank adaptation suffer from limited representational capacity. Recent Fourier-related transform (FT)-based adapters offer greater representational power than low-rank adapters, but their direct integration into quantized models often results in ineffective error reduction and increased computational overhead. To overcome these limitations, we propose QWHA, a method that integrates FT-based adapters into quantized models by employing the Walsh-Hadamard Transform (WHT) as the transform kernel, together with a novel adapter initialization scheme incorporating adaptive parameter selection and value refinement. We demonstrate that QWHA effectively mitigates quantization errors while facilitating fine-tuning, and that its design substantially reduces computational cost. Experimental results show that QWHA consistently outperforms baselines in low-bit quantization accuracy and achieves significant training speedups over existing FT-based adapters. The code is available at https://github.com/vantaa89/qwha.
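To make the mechanism described in the abstract concrete, below is a minimal PyTorch sketch of a Walsh-Hadamard-kernel adapter attached to a frozen (simulated) quantized linear weight. Everything here is an illustrative assumption rather than the repository's API: the names fwht and WHTAdapterLinear, the choice of a 1D transform along the input dimension, and the random placeholder coefficient positions are stand-ins; QWHA's actual initialization selects coefficient positions adaptively and refines their values to absorb quantization error before fine-tuning.

```python
import torch
import torch.nn as nn


def fwht(x: torch.Tensor) -> torch.Tensor:
    """Fast Walsh-Hadamard transform along the last dimension.

    Uses O(n log n) butterfly stages instead of a dense n x n matmul,
    which is where WHT kernels save compute relative to generic
    Fourier-related transform kernels.
    """
    n = x.shape[-1]
    assert n > 0 and n & (n - 1) == 0, "last dim must be a power of two"
    orig_shape = x.shape
    y = x.reshape(-1, n)
    h = 1
    while h < n:
        # Pair up blocks of length h and apply the (a + b, a - b) butterfly.
        y = y.reshape(-1, n // (2 * h), 2, h)
        a, b = y[:, :, 0, :], y[:, :, 1, :]
        y = torch.stack((a + b, a - b), dim=2).reshape(-1, n)
        h *= 2
    return y.reshape(orig_shape)


class WHTAdapterLinear(nn.Module):
    """Frozen (simulated-quantized) linear weight plus a WHT-domain adapter.

    The trainable parameters are a small set of spectral coefficients at
    fixed positions; the dense weight update is the inverse WHT of that
    sparse coefficient matrix (the WHT is its own inverse up to a 1/n
    factor). Positions are random placeholders here; QWHA instead picks
    them adaptively from the quantization error and refines their values.
    """

    def __init__(self, weight_q: torch.Tensor, num_coeffs: int = 256):
        super().__init__()
        out_f, in_f = weight_q.shape
        assert in_f & (in_f - 1) == 0, "in_features must be a power of two in this sketch"
        self.register_buffer("weight_q", weight_q)           # frozen quantized weight
        idx = torch.randperm(out_f * in_f)[:num_coeffs]      # placeholder positions
        self.register_buffer("rows", idx // in_f)
        self.register_buffer("cols", idx % in_f)
        self.coeffs = nn.Parameter(torch.zeros(num_coeffs))  # trainable spectral values

    def delta_weight(self) -> torch.Tensor:
        spec = torch.zeros_like(self.weight_q)
        spec[self.rows, self.cols] = self.coeffs
        # Inverse WHT along the input dimension turns the sparse spectrum
        # into a dense weight update that is not restricted to low rank.
        return fwht(spec) / spec.shape[-1]

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x @ (self.weight_q + self.delta_weight()).T
```

In an actual deployment the quantized weight would stay in its packed low-bit format and the adapter path would be evaluated through the butterfly stages directly; the dense update is materialized above only for readability, to show how a handful of trainable WHT coefficients can express a full-width weight correction.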
Similar Papers
Hyper Compressed Fine-Tuning of Large Foundation Models with Quantum Inspired Adapters
Machine Learning (CS)
Makes AI learn faster with less computer power.
Quantum-Enhanced LLM Efficient Fine Tuning
Quantum Physics
Makes AI smarter with less computer power.
LoTA-QAF: Lossless Ternary Adaptation for Quantization-Aware Fine-Tuning
Machine Learning (CS)
Makes smart computer brains work on small devices.