Score: 1

Learning Optimal Prompt Ensemble for Multi-source Visual Prompt Transfer

Published: April 9, 2025 | arXiv ID: 2504.12311v4

By: Jianhua Liu, Liwen Cao, Yanru Wu, and more

Potential Business Impact:

Combines knowledge from multiple pre-trained prompts to improve model adaptation to new tasks.

Business Areas:
Media and Entertainment

Prompt tuning has emerged as a lightweight strategy for adapting foundation models to downstream tasks, particularly for resource-constrained systems. As pre-trained prompts become valuable assets, combining multiple source prompts offers a promising approach to enhance generalization for new tasks by leveraging complementary knowledge. However, naive aggregation often overlooks that different source prompts have different contribution potential for the target task. To address this, we propose HGPrompt, a dynamic framework that learns optimal ensemble weights. These weights are optimized by jointly maximizing an information-theoretic metric for transferability and minimizing gradient conflicts via a novel regularization strategy. Specifically, we propose a differentiable prompt transferability metric that captures the discriminability of prompt-induced features on the target task. Meanwhile, HGPrompt matches the gradient variances with respect to different source prompts based on the Hessian and Fisher Information, ensuring stable and coherent knowledge transfer while suppressing gradient conflicts among the source prompts. Extensive experiments on the large-scale VTAB benchmark demonstrate the state-of-the-art performance of HGPrompt, validating its effectiveness in learning an optimal ensemble for multi-source prompt transfer.
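The abstract describes the method only at a high level, so the sketch below illustrates one plausible reading of it in PyTorch: softmax ensemble weights over frozen source prompts, a Fisher-style class-separability score standing in for the paper's information-theoretic transferability metric, and a squared-gradient (diagonal Fisher approximation) variance penalty standing in for the Hessian/Fisher variance-matching regularizer. Everything here (PromptEnsemble, discriminability, encode, the toy linear "backbone") is a hypothetical simplification, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptEnsemble(nn.Module):
    """Softmax-weighted combination of K frozen source prompts of shape (K, L, D)."""
    def __init__(self, source_prompts):
        super().__init__()
        self.register_buffer("prompts", source_prompts)   # frozen source prompts
        self.logits = nn.Parameter(torch.zeros(source_prompts.size(0)))

    def forward(self):
        w = F.softmax(self.logits, dim=0)                 # learned ensemble weights
        return torch.einsum("k,kld->ld", w, self.prompts), w

def discriminability(features, labels, eps=1e-6):
    """Fisher-style ratio of between-class to within-class variance: a simple
    differentiable stand-in for the paper's transferability metric."""
    mu = features.mean(0)
    between = features.new_zeros(())
    within = features.new_zeros(())
    for c in labels.unique():
        fc = features[labels == c]
        mc = fc.mean(0)
        between = between + fc.size(0) * (mc - mu).pow(2).sum()
        within = within + (fc - mc).pow(2).sum()
    return between / (within + eps)

# Toy frozen "backbone": a fixed linear map standing in for a pre-trained ViT.
torch.manual_seed(0)
K, L, D, N, C = 4, 8, 32, 64, 5
frozen = nn.Linear(D, D)
for p in frozen.parameters():
    p.requires_grad_(False)

def encode(x, prompt):
    # Hypothetical prompt-conditioned encoding: shift inputs by the pooled prompt.
    return frozen(x + prompt.mean(0))

sources = torch.randn(K, L, D)                            # pre-trained source prompts
ensemble = PromptEnsemble(sources)
opt = torch.optim.Adam([ensemble.logits], lr=0.1)
x, y = torch.randn(N, D), torch.randint(0, C, (N,))      # toy target-task batch

for step in range(50):
    prompt, w = ensemble()
    feats = encode(x, prompt)
    transfer = discriminability(feats, y)
    # Squared gradients of the objective w.r.t. each ensemble logit act as a
    # diagonal Fisher proxy; penalizing their spread discourages any one source
    # prompt's gradient signal from dominating (a crude conflict regularizer).
    g = torch.autograd.grad(-transfer, ensemble.logits, create_graph=True)[0]
    conflict = g.pow(2).var()
    loss = -transfer + conflict
    opt.zero_grad()
    loss.backward()
    opt.step()

print("learned ensemble weights:", F.softmax(ensemble.logits, dim=0).tolist())
```

In the full method, these two terms would be the paper's information-theoretic transferability metric and its Hessian/Fisher-based variance-matching regularizer; both are deliberately reduced here to simple differentiable proxies so the weight-learning loop is self-contained.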

Country of Origin
🇨🇳 China

Page Count
10 pages

Category
Computer Science:
Computation and Language