Visual Instruction Pretraining for Domain-Specific Foundation Models
By: Yuxuan Li, Yicheng Zhang, Wenhao Tang, and more
Potential Business Impact:
Uses reasoning to teach computers to see specialized images, such as satellite and medical scans, better.
Modern computer vision is converging on a closed loop in which perception, reasoning and generation mutually reinforce one another. However, this loop remains incomplete: the top-down influence of high-level reasoning on the foundational learning of low-level perceptual features remains underexplored. This paper addresses this gap by proposing a new paradigm for pretraining foundation models in downstream domains. We introduce Visual insTruction Pretraining (ViTP), a novel approach that directly leverages reasoning to enhance perception. ViTP embeds a Vision Transformer (ViT) backbone within a Vision-Language Model and pretrains it end-to-end using a rich corpus of visual instruction data curated from target downstream domains. ViTP is powered by our proposed Visual Robustness Learning (VRL), which compels the ViT to learn robust and domain-relevant features from a sparse set of visual tokens. Extensive experiments on 16 challenging remote sensing and medical imaging benchmarks demonstrate that ViTP establishes new state-of-the-art performance across a diverse range of downstream tasks. The code is available at https://github.com/zcablii/ViTP.
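To make the "sparse set of visual tokens" idea concrete, here is a minimal sketch of how such a selector could sit between a ViT backbone and the language model during pretraining. The module name `SparseVisualTokenSelector`, the keep ratio, and the random-subset strategy are assumptions for illustration only, not the authors' VRL implementation; see the linked repository for the actual code.

```python
# Minimal sketch of sparse visual-token selection, assuming random subsampling
# of ViT patch tokens during pretraining (illustrative only, not the paper's VRL).
import torch
import torch.nn as nn


class SparseVisualTokenSelector(nn.Module):
    """Keeps a small random subset of ViT patch tokens while training,
    pushing the backbone to pack robust, domain-relevant features into
    whichever tokens survive."""

    def __init__(self, keep_ratio: float = 0.25):
        super().__init__()
        self.keep_ratio = keep_ratio  # assumed hyperparameter, not from the paper

    def forward(self, visual_tokens: torch.Tensor) -> torch.Tensor:
        # visual_tokens: (batch, num_tokens, dim) output of the ViT backbone
        if not self.training:
            return visual_tokens  # keep all tokens at inference time
        b, n, d = visual_tokens.shape
        n_keep = max(1, int(n * self.keep_ratio))
        # Draw a different random token subset for every image in the batch.
        scores = torch.rand(b, n, device=visual_tokens.device)
        keep_idx = scores.topk(n_keep, dim=1).indices            # (batch, n_keep)
        keep_idx = keep_idx.unsqueeze(-1).expand(-1, -1, d)      # (batch, n_keep, dim)
        return torch.gather(visual_tokens, dim=1, index=keep_idx)


if __name__ == "__main__":
    selector = SparseVisualTokenSelector(keep_ratio=0.25)
    selector.train()
    tokens = torch.randn(2, 196, 768)  # e.g. 14x14 patches from a ViT-B
    sparse = selector(tokens)
    print(sparse.shape)  # torch.Size([2, 49, 768])
```

In a full pipeline, the surviving tokens would be projected into the language model's embedding space and trained end-to-end against instruction-following targets from the downstream domain.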
Similar Papers
ViPER: Empowering the Self-Evolution of Visual Perception Abilities in Vision-Language Model
CV and Pattern Recognition
Helps computers see details better.
Rethinking Visual Intelligence: Insights from Video Pretraining
CV and Pattern Recognition
Video models learn faster than text models.
Can You Learn to See Without Images? Procedural Warm-Up for Vision Transformers
CV and Pattern Recognition
Teaches computers to learn faster with less data.