Embracing Collaboration Over Competition: Condensing Multiple Prompts for Visual In-Context Learning
By: Jinpeng Wang, Tianci Luo, Yaohua Zha, and more
Potential Business Impact:
Helps computers learn tasks by looking at examples.
Visual In-Context Learning (VICL) enables adaptively solving vision tasks by leveraging pixel demonstrations, mimicking human-like task completion through analogy. Prompt selection is critical in VICL, but current methods assume the existence of a single "ideal" prompt in a pool of candidates, which in practice may not hold. Multiple suitable prompts may exist, yet individually they often fall short, leading to difficulties in selection and the exclusion of useful context. To address this, we propose a new perspective: prompt condensation. Rather than relying on a single prompt, candidate prompts collaborate to efficiently integrate informative contexts without sacrificing resolution. We devise Condenser, a lightweight external plugin that compresses relevant fine-grained context across multiple prompts. Optimized end-to-end with the backbone, Condenser ensures accurate integration of contextual cues. Experiments demonstrate that Condenser outperforms state-of-the-art methods across benchmark tasks, showing superior context compression, scalability with more prompts, and greater computational efficiency than ensemble methods, positioning it as a highly competitive solution for VICL. Code is open-sourced at https://github.com/gimpong/CVPR25-Condenser.
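The abstract describes Condenser as a lightweight module that merges fine-grained context from several candidate prompts into one condensed prompt, trained end-to-end with the backbone. The paper's actual architecture is not given here; the sketch below is only an illustrative NumPy toy of one plausible mechanism — learned queries cross-attending over the pooled patch features of all candidate prompts. Every name, shape, and weight here is an assumption for illustration, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def condense_prompts(prompt_feats, queries, Wk, Wv):
    """Condense several prompt feature maps into one compact prompt.

    prompt_feats: (num_prompts, num_patches, dim) candidate prompt features
    queries:      (num_slots, dim) learned condensation queries (hypothetical)
    Wk, Wv:       (dim, dim) key/value projection matrices
    Returns a (num_slots, dim) condensed prompt representation.
    """
    # Flatten all candidate prompts into one pool of patch tokens,
    # so every prompt can contribute context to the condensed output.
    tokens = prompt_feats.reshape(-1, prompt_feats.shape[-1])   # (P*N, dim)
    keys = tokens @ Wk
    values = tokens @ Wv
    # Scaled dot-product attention: each slot softly selects patches
    # from across all prompts rather than from a single "ideal" one.
    attn = softmax(queries @ keys.T / np.sqrt(queries.shape[-1]))  # (slots, P*N)
    return attn @ values                                           # (slots, dim)

# Toy usage with random weights (stand-ins for learned parameters)
rng = np.random.default_rng(0)
dim, num_prompts, num_patches, num_slots = 8, 3, 16, 4
feats = rng.normal(size=(num_prompts, num_patches, dim))
queries = rng.normal(size=(num_slots, dim))
Wk = rng.normal(size=(dim, dim))
Wv = rng.normal(size=(dim, dim))
condensed = condense_prompts(feats, queries, Wk, Wv)
print(condensed.shape)
```

The key property the sketch tries to convey is collaboration: the attention weights span patches from all candidate prompts at once, so useful context excluded by single-prompt selection can still flow into the condensed representation.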
Similar Papers
Exploring Task-Level Optimal Prompts for Visual In-Context Learning
Artificial Intelligence
Teaches computers to learn faster with fewer examples.
T2T-VICL: Unlocking the Boundaries of Cross-Task Visual In-Context Learning via Implicit Text-Driven VLMs
CV and Pattern Recognition
Helps AI understand different picture tasks together.
ConText: Driving In-context Learning for Text Removal and Segmentation
CV and Pattern Recognition
Helps computers read messy text better.