CrossPT: Exploring Cross-Task Transferability through Multi-Task Prompt Tuning
By: Ahmad Pouramini, Hesham Faili
Potential Business Impact:
Teaches AI to do many jobs better.
Prompt tuning offers a parameter-efficient way to adapt large pre-trained language models to new tasks, but most existing approaches are designed for single-task settings and fail to share knowledge across related tasks. We propose Cross-task Prompt Tuning (CrossPT), a modular framework for multi-task prompt tuning that enables controlled knowledge transfer while maintaining task-specific specialization. CrossPT decomposes each target prompt into shared, pre-trained source prompts and task-specific private prompts, combined via a learned attention mechanism. To support robust transfer, we systematically investigate key design factors, including prompt initialization, the balance between shared and private prompts, the number of source prompts, learning rates, task prefixes, and label semantics. Empirical results on GLUE and related benchmarks show that CrossPT achieves higher accuracy and robustness than conventional prompt tuning and related methods, particularly in low-resource scenarios, while maintaining strong parameter efficiency.
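The decomposition described in the abstract can be pictured with a short sketch. The module below is an illustrative reconstruction based only on the abstract, not the authors' released code; the class name, shapes, and the softmax attention over source prompts are assumptions.

```python
import torch
import torch.nn as nn

class CrossPTPrompt(nn.Module):
    """Sketch of the CrossPT-style prompt decomposition: a target task's soft
    prompt is built from shared source prompts plus a task-specific private
    prompt, mixed by a learned attention. Names and shapes are assumptions."""

    def __init__(self, num_sources: int, prompt_len: int, dim: int):
        super().__init__()
        # Shared source prompts, pre-trained on source tasks.
        self.source_prompts = nn.Parameter(torch.randn(num_sources, prompt_len, dim))
        # Task-specific private prompt, trained only on the target task.
        self.private_prompt = nn.Parameter(torch.randn(prompt_len, dim))
        # Learned attention logits over the source prompts for this target task.
        self.attn_logits = nn.Parameter(torch.zeros(num_sources))

    def forward(self) -> torch.Tensor:
        # Softmax attention decides how much each shared source prompt contributes.
        weights = torch.softmax(self.attn_logits, dim=0)           # (num_sources,)
        shared = torch.einsum("s,sld->ld", weights, self.source_prompts)
        # Combine shared knowledge with the task-specific private prompt.
        return shared + self.private_prompt                        # (prompt_len, dim)

# Example: the composed prompt would be prepended to the input embeddings
# of a frozen pre-trained language model.
prompt = CrossPTPrompt(num_sources=6, prompt_len=20, dim=768)()
print(prompt.shape)  # torch.Size([20, 768])
```

In this reading, only the attention logits and the private prompt are target-task parameters, which keeps the method parameter-efficient while letting related tasks share the source prompts.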
Similar Papers
MetaTPT: Meta Test-time Prompt Tuning for Vision-Language Models
CV and Pattern Recognition
Helps AI understand new pictures better.
PromptBridge: Cross-Model Prompt Transfer for Large Language Models
Computation and Language
Makes AI prompts work on different AI brains.
Dynamic Prompt Fusion for Multi-Task and Cross-Domain Adaptation in LLMs
Computation and Language
Helps computers learn many tasks better.