Key-Value Pair-Free Continual Learner via Task-Specific Prompt-Prototype
By: Haihua Luo, Xuming Ran, Zhengji Li, and more
Potential Business Impact:
Teaches computers new things without forgetting old ones.
Continual learning aims to enable models to acquire new knowledge while retaining previously learned information. Prompt-based methods have shown remarkable performance in this domain; however, they typically rely on key-value pairing, which can introduce inter-task interference and hinder scalability. To overcome these limitations, we propose a novel approach employing task-specific Prompt-Prototype (ProP), thereby eliminating the need for key-value pairs. In our method, task-specific prompts facilitate more effective feature learning for the current task, while corresponding prototypes capture the representative features of the input. During inference, predictions are generated by binding each task-specific prompt with its associated prototype. Additionally, we introduce regularization constraints during prompt initialization to penalize excessively large values, thereby enhancing stability. Experiments on several widely used datasets demonstrate the effectiveness of the proposed method. In contrast to mainstream prompt-based approaches, our framework removes the dependency on key-value pairs, offering a fresh perspective for future continual learning research.
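The inference rule described in the abstract — binding each task-specific prompt to its prototype and predicting from their match, with no key-value lookup — can be sketched roughly as follows. This is an illustrative outline, not the authors' implementation: `extract_features` is a hypothetical stand-in for a prompted backbone, and the cosine-similarity scoring is an assumption about how prompts and prototypes might be compared.

```python
import numpy as np

def extract_features(x, prompt):
    # Hypothetical stand-in for a prompt-conditioned backbone
    # (e.g. a frozen ViT): here the task-specific prompt simply
    # shifts the input representation.
    return x + prompt

def predict(x, prompts, prototypes):
    """Predict by binding each task's prompt to its prototypes.

    prompts[t]     : prompt vector learned for task t
    prototypes[t]  : array of shape (num_classes_t, dim), one
                     prototype per class of task t
    Returns the (task, class) pair with the highest cosine
    similarity between the prompted features and a prototype.
    """
    best_task, best_class, best_sim = None, None, -np.inf
    for t, (prompt, protos) in enumerate(zip(prompts, prototypes)):
        feat = extract_features(x, prompt)
        feat = feat / np.linalg.norm(feat)
        # Cosine similarity of the prompted features to each
        # class prototype of task t.
        sims = (protos @ feat) / np.linalg.norm(protos, axis=1)
        c = int(np.argmax(sims))
        if sims[c] > best_sim:
            best_task, best_class, best_sim = t, c, sims[c]
    return best_task, best_class
```

Because each task is scored only through its own prompt-prototype pair, no shared key-value store is queried across tasks, which is the scalability point the abstract emphasizes.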
Similar Papers
RainbowPrompt: Diversity-Enhanced Prompt-Evolving for Continual Learning
CV and Pattern Recognition
Helps computers learn new things without forgetting old ones.
Retrieval-augmented Prompt Learning for Pre-trained Foundation Models
Computation and Language
Helps computers learn better from less data.
Automatic Prompt Generation via Adaptive Selection of Prompting Techniques
Computation and Language
Makes computers understand instructions better automatically.