Score: 1

A Survey on Prompt Tuning

Published: July 8, 2025 | arXiv ID: 2507.06085v2

By: Zongqian Li, Yixuan Su, Nigel Collier

Potential Business Impact:

Teaches computers new tricks without changing their brains.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

This survey reviews prompt tuning, a parameter-efficient approach for adapting language models that prepends trainable continuous vectors to the input while keeping the model itself frozen. We classify existing approaches into two categories: direct prompt learning and transfer learning. Direct prompt learning methods include general optimization approaches, encoder-based methods, decomposition strategies, and mixture-of-experts frameworks. Transfer learning methods consist of general transfer approaches, encoder-based methods, and decomposition strategies. For each method, we analyze its design, innovations, insights, advantages, and disadvantages, with illustrative visualizations comparing the different frameworks. We identify challenges in computational efficiency and training stability, and discuss future directions for improving training robustness and broadening the scope of applications.
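
To make the core mechanism concrete, below is a minimal sketch of prompt tuning in PyTorch, assuming a generic frozen encoder backbone; identifiers such as PromptTunedModel, backbone, and num_prompt_tokens are illustrative and not taken from the survey.

```python
# Minimal prompt-tuning sketch: a small set of trainable continuous vectors
# ("soft prompts") is prepended to the token embeddings; the backbone and
# embedding table stay frozen. Illustrative only, not the survey's code.
import torch
import torch.nn as nn

class PromptTunedModel(nn.Module):
    def __init__(self, backbone: nn.Module, embed: nn.Embedding,
                 num_prompt_tokens: int = 20):
        super().__init__()
        self.backbone = backbone      # frozen language model body
        self.embed = embed            # frozen token embedding table
        for p in self.backbone.parameters():
            p.requires_grad = False
        for p in self.embed.parameters():
            p.requires_grad = False
        # The only trainable parameters: the continuous prompt vectors.
        d_model = embed.embedding_dim
        self.prompt = nn.Parameter(torch.randn(num_prompt_tokens, d_model) * 0.02)

    def forward(self, input_ids: torch.LongTensor):
        tok = self.embed(input_ids)                           # (B, T, d)
        prompt = self.prompt.unsqueeze(0).expand(tok.size(0), -1, -1)  # (B, P, d)
        x = torch.cat([prompt, tok], dim=1)                   # prepend soft prompt
        return self.backbone(x)

# Toy usage with a stand-in backbone (a TransformerEncoder), for illustration.
vocab, d_model = 1000, 64
embed = nn.Embedding(vocab, d_model)
layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
backbone = nn.TransformerEncoder(layer, num_layers=2)
model = PromptTunedModel(backbone, embed, num_prompt_tokens=8)

trainable = [p for p in model.parameters() if p.requires_grad]
print(sum(p.numel() for p in trainable))  # only the 8 x 64 prompt vectors train
```

Only the prompt vectors receive gradients, so the number of trainable parameters is the prompt length times the embedding dimension, independent of the size of the frozen backbone.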

Country of Origin
🇬🇧 United Kingdom

Repos / Data Links

Page Count
11 pages

Category
Computer Science:
Computation and Language