CBP-Tuning: Efficient Local Customization for Black-box Large Language Models
By: Jiaxuan Zhao, Naibin Gu, Yuchen Feng, and others
Potential Business Impact:
Lets AI learn new things privately on your computer.
The high cost of customizing large language models (LLMs) fundamentally limits their adaptability to user-specific needs. Consequently, LLMs are increasingly offered as cloud-based services, a paradigm that introduces critical limitations: providers struggle to support personalized customization at scale, while users face privacy risks when exposing sensitive data. To address this dual challenge, we propose Customized Black-box Prompt Tuning (CBP-Tuning), a novel framework that enables efficient local customization while preserving bidirectional privacy. Specifically, we design a two-stage framework: (1) a prompt generator trained server-side to capture domain-specific, task-agnostic capabilities, and (2) user-side gradient-free optimization that tailors soft prompts to individual tasks. This approach eliminates the need for users to access model weights or upload private data, requiring only a single customized vector per task to achieve effective adaptation. Evaluations of CBP-Tuning on commonsense reasoning, medical, and financial domain tasks demonstrate superior performance over baselines, showcasing its advantages in task-agnostic processing and privacy preservation.
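The abstract does not specify which gradient-free optimizer the user runs locally, so the sketch below is only one plausible instantiation: a two-point evolution-strategy estimator that perturbs a single soft-prompt vector, queries the provider's black-box scoring API, and moves toward higher scores using scores alone (no gradients, no weight access). The function `query_black_box` and all hyperparameters are hypothetical stand-ins, not the paper's method.

```python
import numpy as np

# Hypothetical stand-in for the provider's black-box API: it accepts a soft
# prompt vector and returns a task score (e.g., accuracy on a small local
# validation set). No gradients or model weights are exposed to the user.
def query_black_box(soft_prompt: np.ndarray) -> float:
    # Toy objective for illustration: reward prompts close to a hidden optimum.
    hidden_optimum = np.full_like(soft_prompt, 0.5)
    return -float(np.mean((soft_prompt - hidden_optimum) ** 2))

def gradient_free_tune(dim: int = 64, iters: int = 200, sigma: float = 0.1,
                       lr: float = 0.05, seed: int = 0) -> np.ndarray:
    """Tune a single soft-prompt vector with a two-point evolution-strategy
    estimator: perturb, query the black box, step toward better scores."""
    rng = np.random.default_rng(seed)
    prompt = rng.normal(scale=0.1, size=dim)  # user-side local initialization
    for _ in range(iters):
        noise = rng.normal(size=dim)
        score_plus = query_black_box(prompt + sigma * noise)
        score_minus = query_black_box(prompt - sigma * noise)
        # Finite-difference estimate of the ascent direction from scores alone.
        prompt += lr * (score_plus - score_minus) / (2 * sigma) * noise
    return prompt

if __name__ == "__main__":
    tuned = gradient_free_tune()
    print("final score:", query_black_box(tuned))
```

In the paper's setting, the vector being tuned would be initialized from the server-side prompt generator rather than at random, and the only artifact the user keeps is this single customized vector per task; private data never leaves the user's machine.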
Similar Papers
All You Need is One: Capsule Prompt Tuning with a Single Vector
Computation and Language
Makes AI understand tasks better with less effort.
Advanced Black-Box Tuning of Large Language Models with Limited API Calls
Artificial Intelligence
Makes smart computer programs learn faster and more cheaply.
Medical Knowledge Intervention Prompt Tuning for Medical Image Classification
Computer Vision and Pattern Recognition
Helps AI understand medical images better.