PEFT-Bench: A Parameter-Efficient Fine-Tuning Methods Benchmark
By: Robert Belanec, Branislav Pecher, Ivan Srba and more
Potential Business Impact:
Compares ways to fine-tune big AI models cheaply and fairly.
Despite the state-of-the-art performance that Large Language Models (LLMs) achieve on many tasks, their massive scale often leads to high computational and environmental costs, limiting their accessibility. Parameter-efficient fine-tuning (PEFT) methods address this challenge by reducing the number of trainable parameters while maintaining strong downstream performance. Yet despite the growing number of PEFT methods, current evaluations remain limited in the models and datasets they cover and are difficult to reproduce. To bridge this gap, we introduce PEFT-Bench, a unified end-to-end benchmark for evaluating diverse PEFT methods on autoregressive LLMs. We demonstrate its usage across 27 NLP datasets and 6 PEFT methods. To account for different PEFT training and inference factors, we also introduce the PEFT Soft Score Penalties (PSCP) metric, which takes trainable parameters, inference speed, and training memory usage into account.
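The abstract names the three resource factors PSCP considers but not its formula. Below is a minimal, purely illustrative Python sketch of how a task score could be softly penalized by those factors; the function name, penalty shapes, and reference budgets (param_ref, speed_ref, mem_ref) are assumptions for illustration, not the paper's actual definition.

```python
import math

def pscp_style_score(task_score, trainable_params, tokens_per_sec, train_memory_gb,
                     param_ref=1e8, speed_ref=50.0, mem_ref=40.0):
    """Hypothetical PSCP-like score: downstream task score scaled by soft
    penalties for trainable parameters, inference speed, and training memory.
    The reference budgets and penalty shapes are illustrative assumptions."""
    param_penalty = math.exp(-trainable_params / param_ref)   # fewer params -> closer to 1
    speed_penalty = min(1.0, tokens_per_sec / speed_ref)      # faster inference -> closer to 1
    memory_penalty = math.exp(-train_memory_gb / mem_ref)     # lower memory -> closer to 1
    return task_score * param_penalty * speed_penalty * memory_penalty

# Example: an adapter-style method with 4M trainable parameters.
print(pscp_style_score(task_score=0.82, trainable_params=4e6,
                       tokens_per_sec=45.0, train_memory_gb=18.0))
```

The design intent mirrored here is simply that two methods with equal accuracy should be ranked apart when one trains fewer parameters, runs faster at inference, or uses less training memory.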
Similar Papers
PEFT-Factory: Unified Parameter-Efficient Fine-Tuning of Autoregressive Large Language Models
Computation and Language
Makes AI models easier to train and compare.
PEFT A2Z: Parameter-Efficient Fine-Tuning Survey for Large Language and Vision Models
Computation and Language
Makes big AI models learn new things cheaply.
TS-PEFT: Token-Selective Parameter-Efficient Fine-Tuning with Learnable Threshold Gating
Computation and Language
Makes AI learn better by changing only parts.