Learning Optimal Prompt Ensemble for Multi-source Visual Prompt Transfer
By: Jianhua Liu, Liwen Cao, Yanru Wu, and more
Potential Business Impact:
Combines knowledge from multiple pre-trained AI prompts for better learning on new tasks.
Prompt tuning has emerged as a lightweight strategy for adapting foundation models to downstream tasks, particularly in resource-constrained settings. As pre-trained prompts become valuable assets, combining multiple source prompts offers a promising way to enhance generalization on new tasks by leveraging complementary knowledge. However, naive aggregation overlooks the fact that different source prompts contribute to the target task to different degrees. To address this, we propose HGPrompt, a dynamic framework that learns optimal ensemble weights by jointly maximizing an information-theoretic transferability metric and minimizing gradient conflicts through a novel regularization strategy. Specifically, we propose a differentiable prompt transferability metric that captures the discriminability of prompt-induced features on the target task. Meanwhile, HGPrompt matches the gradient variances with respect to the different source prompts using the Hessian and Fisher information, suppressing gradient conflicts among them and ensuring stable, coherent knowledge transfer. Extensive experiments on the large-scale VTAB benchmark demonstrate state-of-the-art performance, validating the effectiveness of HGPrompt in learning an optimal ensemble for multi-source prompt transfer.
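For intuition, here is a minimal, self-contained PyTorch sketch of the two ingredients the abstract describes; it is not the authors' implementation. The toy encoder, the transferability proxy (a simple between-class/within-class scatter ratio standing in for the paper's information-theoretic metric), and the gradient-variance penalty (a cheap stand-in for the Hessian/Fisher-based regularizer) are all illustrative assumptions:

import torch

K, P, D, C, N = 4, 8, 32, 10, 64   # sources, prompt length, embed dim, classes, batch

torch.manual_seed(0)
source_prompts = [torch.randn(P, D) for _ in range(K)]   # frozen pre-trained prompts
logits_w = torch.zeros(K, requires_grad=True)            # ensemble weights (pre-softmax)

def encode(prompt, x):
    # Stand-in for a frozen foundation model conditioned on a prompt.
    return torch.tanh(x + prompt.mean(dim=0))

def transferability(feats, labels):
    # Toy discriminability proxy: between-class separation over
    # within-class scatter (higher = features separate classes better).
    classes = labels.unique()
    centers = torch.stack([feats[labels == c].mean(dim=0) for c in classes])
    between = (centers.unsqueeze(0) - centers.unsqueeze(1)).pow(2).sum(-1).mean()
    within = torch.stack([feats[labels == c].var(dim=0, unbiased=False).mean()
                          for c in classes]).mean()
    return between / (within + 1e-6)

x = torch.randn(N, D)              # dummy target-task inputs
y = torch.randint(0, C, (N,))      # dummy target-task labels
opt = torch.optim.Adam([logits_w], lr=1e-2)

for step in range(100):
    w = torch.softmax(logits_w, dim=0)
    prompt = sum(w[k] * source_prompts[k] for k in range(K))   # weighted ensemble
    feats = encode(prompt, x)

    # (i) maximize the transferability of the ensembled prompt's features
    loss = -transferability(feats, y)

    # (ii) penalize the spread of per-source gradients, a simplified stand-in
    # for the Hessian/Fisher-based gradient-variance matching described above
    grads = torch.autograd.grad(loss, logits_w, create_graph=True)[0]
    loss = loss + 0.1 * grads.var()

    opt.zero_grad()
    loss.backward()
    opt.step()

print("learned ensemble weights:", torch.softmax(logits_w, dim=0).detach())

The key design point mirrored here is that the ensemble weights are the only trainable parameters: the source prompts and the backbone stay frozen, and the weights are shaped jointly by a feature-discriminability objective and a term that discourages conflicting gradient signals across sources.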
Similar Papers
HGMP: Heterogeneous Graph Multi-Task Prompt Learning
Machine Learning (CS)
Helps computers understand complex data better.
Heterogeneous Graph Prompt Learning via Adaptive Weight Pruning
Machine Learning (CS)
Makes computer learning faster and better.
MAPGD: Multi-Agent Prompt Gradient Descent for Collaborative Prompt Optimization
Artificial Intelligence
Helps AI understand instructions better and faster.