TS-PEFT: Token-Selective Parameter-Efficient Fine-Tuning with Learnable Threshold Gating

Published: November 20, 2025 | arXiv ID: 2511.16147v1

By: Dabiao Ma, Ziming Dai, Zhimin Xin, and more

Potential Business Impact:

Improves fine-tuning of large AI models by applying updates only at selected token positions rather than at every position.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

In the field of large models (LMs) for natural language processing (NLP) and computer vision (CV), Parameter-Efficient Fine-Tuning (PEFT) has emerged as a resource-efficient method that modifies a limited number of parameters while keeping the pretrained weights fixed. This paper investigates the traditional PEFT approach, which applies modifications to all position indices, and questions its necessity. We introduce a new paradigm called Token-Selective PEFT (TS-PEFT), in which a function S selectively applies PEFT modifications to a subset of position indices, potentially enhancing performance on downstream tasks. Our experimental results reveal that the indiscriminate application of PEFT to all indices is not only superfluous, but may also be counterproductive. This study offers a fresh perspective on PEFT, advocating for a more targeted approach to modifications and providing a framework for future research to optimize the fine-tuning process for large models.
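The abstract names only the selection function S and, via the title, a learnable threshold gate. The sketch below shows one plausible way such token-selective gating could sit on top of a frozen layer with a LoRA-style low-rank update. The scoring head, threshold parameter, and straight-through gradient trick are illustrative assumptions, not the paper's confirmed design.

```python
import torch
import torch.nn as nn

class TokenSelectiveLoRA(nn.Module):
    """Hypothetical sketch of token-selective PEFT (TS-PEFT).

    A LoRA-style low-rank update is applied only at token positions whose
    gating score exceeds a learnable threshold. The scorer and the
    straight-through estimator are illustrative assumptions, not the
    paper's exact formulation.
    """

    def __init__(self, d_model: int, rank: int = 8):
        super().__init__()
        self.lora_a = nn.Linear(d_model, rank, bias=False)   # down-projection
        self.lora_b = nn.Linear(rank, d_model, bias=False)   # up-projection
        nn.init.zeros_(self.lora_b.weight)                    # update starts at zero
        self.scorer = nn.Linear(d_model, 1)                   # per-token gating score
        self.threshold = nn.Parameter(torch.zeros(1))         # learnable threshold

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, d_model) from the frozen pretrained layer
        scores = self.scorer(hidden).squeeze(-1)              # (batch, seq_len)
        hard_gate = (scores > self.threshold).float()         # S: positions to modify
        soft_gate = torch.sigmoid(scores - self.threshold)    # differentiable surrogate
        # straight-through estimator: hard mask forward, soft gradient backward
        gate = hard_gate + soft_gate - soft_gate.detach()
        delta = self.lora_b(self.lora_a(hidden))              # low-rank PEFT update
        return hidden + gate.unsqueeze(-1) * delta            # update only selected tokens
```

In use, a module like this would wrap the output of a frozen attention or MLP sub-layer, with only `lora_a`, `lora_b`, `scorer`, and `threshold` trained; all pretrained weights stay fixed, matching the PEFT setting the abstract describes.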

Page Count
11 pages

Category
Computer Science:
Computation and Language