Quantum-PEFT: Ultra parameter-efficient fine-tuning
By: Toshiaki Koike-Akino, Francesco Tonin, Yongtao Wu, and more
Potential Business Impact:
Lets AI models learn new tasks with far fewer trainable parameters, cutting compute and memory costs.
This paper introduces Quantum-PEFT, which leverages quantum computations for parameter-efficient fine-tuning (PEFT). Unlike other additive PEFT methods, such as low-rank adaptation (LoRA), Quantum-PEFT exploits an underlying full-rank yet surprisingly parameter-efficient quantum unitary parameterization. With the Pauli parameterization, the number of trainable parameters grows only logarithmically with the ambient dimension, as opposed to linearly as in LoRA-based PEFT methods. Quantum-PEFT achieves a vanishingly smaller number of trainable parameters than the lowest-rank LoRA as dimensions grow, enhancing parameter efficiency while maintaining competitive performance. We apply Quantum-PEFT to several transfer learning benchmarks in language and vision, demonstrating significant advantages in parameter efficiency.
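To make the scaling claim concrete, here is a minimal Python sketch comparing parameter counts. The `pauli_unitary_param_count` helper is an illustrative assumption, not the paper's exact Quantum-PEFT construction: it assumes a generic circuit of single-qubit Pauli rotations over n = ceil(log2(dim)) qubits with a fixed number of layers, which grows as O(log dim). `lora_param_count` follows the standard LoRA count r * (d_in + d_out).

```python
import math

def lora_param_count(d_in: int, d_out: int, rank: int = 1) -> int:
    """Trainable parameters of a rank-r LoRA adapter: B (d_out x r) plus A (r x d_in)."""
    return rank * (d_in + d_out)

def pauli_unitary_param_count(dim: int, layers: int = 4) -> int:
    """Illustrative count for a Pauli-rotation parameterization of a dim x dim unitary.

    Assumption (not the paper's construction): the unitary acts on
    n = ceil(log2(dim)) qubits and each layer applies 3 Pauli rotations
    (RX, RY, RZ) per qubit, so the count grows as O(log dim).
    """
    n_qubits = math.ceil(math.log2(dim))
    return 3 * n_qubits * layers

if __name__ == "__main__":
    # Compare the linear growth of rank-1 LoRA with the logarithmic growth
    # of the illustrative Pauli-rotation parameterization.
    for d in (768, 4096, 16384):
        print(d, lora_param_count(d, d, rank=1), pauli_unitary_param_count(d))
```

Under these assumptions, a 4096 x 4096 layer needs 8192 trainable parameters for rank-1 LoRA but only 144 for a 12-qubit, 4-layer Pauli-rotation circuit, illustrating the logarithmic-versus-linear gap described in the abstract.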
Similar Papers
How Can Quantum Deep Learning Improve Large Language Models?
Quantum Physics
Makes AI learn new things much faster and cheaper.
PEFT A2Z: Parameter-Efficient Fine-Tuning Survey for Large Language and Vision Models
Computation and Language
Makes big AI models learn new things cheaply.
Exploring Sparsity for Parameter Efficient Fine Tuning Using Wavelets
CV and Pattern Recognition
Makes AI learn better with less computer power.