PEFT-Factory: Unified Parameter-Efficient Fine-Tuning of Autoregressive Large Language Models

Published: December 2, 2025 | arXiv ID: 2512.02764v1

By: Robert Belanec, Ivan Srba, Maria Bielikova

Potential Business Impact:

Lowers the cost of adapting large language models to new tasks and makes competing fine-tuning methods easier to replicate and benchmark.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Parameter-Efficient Fine-Tuning (PEFT) methods address the increasing size of Large Language Models (LLMs). Currently, many newly introduced PEFT methods are challenging to replicate, deploy, or compare with one another. To address this, we introduce PEFT-Factory, a unified framework for the efficient fine-tuning of LLMs using both off-the-shelf and custom PEFT methods. While its modular design supports extensibility, it natively provides a representative set of 19 PEFT methods, 27 classification and text generation datasets covering 12 tasks, and both standard and PEFT-specific evaluation metrics. As a result, PEFT-Factory provides a ready-to-use, controlled, and stable environment, improving the replicability and benchmarking of PEFT methods. PEFT-Factory is a downstream framework that originates from the popular LLaMA-Factory and is publicly available at https://github.com/kinit-sk/PEFT-Factory
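To make the idea concrete, below is a minimal sketch of parameter-efficient fine-tuning using the Hugging Face peft library, which implements LoRA, one of the adapter-style methods that frameworks like PEFT-Factory unify. This illustrates the general technique only; it is not PEFT-Factory's own API, and the model name and hyperparameters are placeholder choices.

```python
# Illustrative LoRA setup with the Hugging Face `peft` library.
# PEFT-Factory wraps methods like this behind a unified training and
# evaluation interface; this sketch is not its actual API.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Any autoregressive LLM works here; GPT-2 is a small placeholder choice.
base = AutoModelForCausalLM.from_pretrained("gpt2")

# LoRA freezes the base weights and trains small low-rank adapter
# matrices injected into the attention projections.
config = LoraConfig(
    r=8,                        # rank of the low-rank update
    lora_alpha=16,              # scaling factor for the adapter output
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of base weights
```

Training only the low-rank adapters while the base model stays frozen is what makes such methods parameter-efficient; the paper's contribution is providing 19 such methods under one controlled configuration, dataset, and evaluation pipeline.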

Repos / Data Links

https://github.com/kinit-sk/PEFT-Factory

Page Count
15 pages

Category
Computer Science:
Computation and Language