PEFT-Factory: Unified Parameter-Efficient Fine-Tuning of Autoregressive Large Language Models
By: Robert Belanec, Ivan Srba, Maria Bielikova
Potential Business Impact:
Makes AI models easier to train and compare.
Parameter-Efficient Fine-Tuning (PEFT) methods address the increasing size of Large Language Models (LLMs). Currently, many newly introduced PEFT methods are challenging to replicate, deploy, or compare with one another. To address this, we introduce PEFT-Factory, a unified framework for efficient fine-tuning of LLMs using both off-the-shelf and custom PEFT methods. While its modular design supports extensibility, it natively provides a representative set of 19 PEFT methods, 27 classification and text-generation datasets covering 12 tasks, and both standard and PEFT-specific evaluation metrics. As a result, PEFT-Factory offers a ready-to-use, controlled, and stable environment that improves the replicability and benchmarking of PEFT methods. PEFT-Factory is a downstream framework derived from the popular LLaMA-Factory and is publicly available at https://github.com/kinit-sk/PEFT-Factory
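To make concrete what an off-the-shelf PEFT method of the kind PEFT-Factory wraps looks like in practice, below is a minimal sketch of LoRA fine-tuning using the Hugging Face transformers and peft libraries. This is a generic illustration, not PEFT-Factory's own API; the model name, target modules, and hyperparameters are placeholder assumptions.

# Illustrative LoRA setup (not PEFT-Factory's API); model name and
# hyperparameters are placeholder assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, TaskType

model_name = "meta-llama/Llama-2-7b-hf"  # placeholder base model
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# LoRA injects small low-rank adapter matrices into selected projection
# layers, so only a tiny fraction of the parameters is actually trained.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                       # rank of the low-rank update
    lora_alpha=16,             # scaling factor for the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumed attention projections
)

peft_model = get_peft_model(model, lora_config)
peft_model.print_trainable_parameters()  # typically well under 1% trainable
# peft_model can then be passed to a standard supervised fine-tuning loop.

A framework like PEFT-Factory standardizes this kind of setup across many PEFT methods and datasets, so that methods can be swapped and compared under the same training and evaluation conditions.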
Similar Papers
PEFT-Bench: A Parameter-Efficient Fine-Tuning Methods Benchmark
Computation and Language
Tests how to make AI smaller and faster.
PEFT A2Z: Parameter-Efficient Fine-Tuning Survey for Large Language and Vision Models
Computation and Language
Makes big AI models learn new things cheaply.
A Bayesian Hybrid Parameter-Efficient Fine-Tuning Method for Large Language Models
Machine Learning (CS)
Helps AI learn better from new information.