A Survey on Prompt Tuning
By: Zongqian Li, Yixuan Su, Nigel Collier
Potential Business Impact:
Teaches computers new tricks without changing their brains.
This survey reviews prompt tuning, a parameter-efficient approach for adapting language models by prepending trainable continuous vectors to the input while keeping the model frozen. We classify existing approaches into two categories: direct prompt learning and transfer learning. Direct prompt learning methods include general optimization approaches, encoder-based methods, decomposition strategies, and mixture-of-experts frameworks; transfer learning methods consist of general transfer approaches, encoder-based methods, and decomposition strategies. For each method, we analyze its design, innovations, insights, advantages, and disadvantages, with illustrative visualizations comparing different frameworks. We identify challenges in computational efficiency and training stability, and discuss future directions for improving training robustness and broadening application scope.
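To make the core mechanism concrete, below is a minimal sketch of prompt tuning in PyTorch: trainable continuous prompt vectors are prepended to the token embeddings while every backbone parameter stays frozen. This is not code from the survey; the class name SoftPromptModel, the toy Transformer encoder standing in for a pretrained language model, and all dimensions and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SoftPromptModel(nn.Module):
    """Prepends trainable soft prompts to token embeddings of a frozen backbone."""

    def __init__(self, backbone: nn.Module, embed: nn.Embedding, prompt_length: int = 20):
        super().__init__()
        self.backbone = backbone  # stand-in for the frozen pretrained model
        self.embed = embed        # frozen token-embedding table
        for p in self.backbone.parameters():
            p.requires_grad = False
        for p in self.embed.parameters():
            p.requires_grad = False
        # Trainable continuous prompt vectors, initialized from random token embeddings.
        init = embed.weight[torch.randint(0, embed.num_embeddings, (prompt_length,))]
        self.soft_prompt = nn.Parameter(init.clone())

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        tok = self.embed(input_ids)                                   # (batch, seq, dim)
        prompt = self.soft_prompt.unsqueeze(0).expand(tok.size(0), -1, -1)
        return self.backbone(torch.cat([prompt, tok], dim=1))         # prompts first

# Toy usage: a tiny Transformer encoder plays the role of the frozen language model.
dim, vocab = 64, 1000
embed = nn.Embedding(vocab, dim)
backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True), num_layers=2)
model = SoftPromptModel(backbone, embed, prompt_length=10)

# Only the soft prompt (10 x 64 values here) receives gradients.
optimizer = torch.optim.AdamW([model.soft_prompt], lr=1e-3)
out = model(torch.randint(0, vocab, (2, 16)))                          # (2, 10+16, 64)
print(out.shape, sum(p.numel() for p in model.parameters() if p.requires_grad))
```

The parameter efficiency comes from the last step: the optimizer only ever sees the soft prompt, so the number of trained values is prompt_length times the embedding dimension rather than the full model size.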
Similar Papers
Understanding Prompt Tuning and In-Context Learning via Meta-Learning
Machine Learning (CS)
Helps computers learn faster with better instructions.
ULPT: Prompt Tuning with Ultra-Low-Dimensional Optimization
Computation and Language
Makes big AI models learn new things faster.
Efficient and Effective Prompt Tuning via Prompt Decomposition and Compressed Outer Product
Computation and Language
Makes AI smarter with less computer power.