Can You Learn to See Without Images? Procedural Warm-Up for Vision Transformers
By: Zachary Shinnick, Liangze Jiang, Hemanth Saratchandran, and more
Potential Business Impact:
Teaches computers to learn faster with less data.
Transformers show remarkable versatility across domains, suggesting the existence of inductive biases that are beneficial across modalities. In this work, we explore a new way to instil such generic biases in vision transformers (ViTs) by pretraining on procedurally generated data devoid of visual or semantic content. We generate this data with simple algorithms such as formal grammars, so the resulting data bears no relationship to either natural or synthetic images. We use this procedurally generated data to pretrain ViTs in a warm-up phase that bypasses their visual patch-embedding mechanism, encouraging the models to internalise abstract computational priors. When followed by standard image-based training, this warm-up significantly improves data efficiency, convergence speed, and downstream performance. On ImageNet-1k, for example, allocating just 1% of the training budget to procedural data improves final accuracy by over 1.7%. In terms of its effect on performance, 1% of procedurally generated data is thus equivalent to 28% of the ImageNet-1k data. These findings suggest a promising path toward new data-efficient and domain-agnostic pretraining strategies.
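To make the idea concrete, here is a minimal sketch of what such a warm-up could look like, assuming a PyTorch setup: token sequences sampled from a toy formal grammar are fed straight into a transformer encoder through a small token embedding, bypassing the visual patch embedding, and the encoder weights would then seed the ViT before standard image training. The grammar, model sizes, synthetic objective, and names such as `grammar_sequence` and `WarmupViT` are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of the warm-up idea (not the authors' code): pretrain a ViT-style
# transformer encoder on procedurally generated symbol sequences, bypassing the visual
# patch embedding. Grammar, sizes, and the synthetic objective are illustrative assumptions.
import random
import torch
import torch.nn as nn

VOCAB, SEQ_LEN, DIM, N_CLASSES = 16, 64, 192, 2

def grammar_sequence(seq_len=SEQ_LEN):
    """Expand a toy context-free grammar (S -> a S b | c) into a fixed-length token sequence."""
    rules = {"S": [["a", "S", "b"], ["c"]]}
    stack, out = ["S"], []
    while stack and len(out) < seq_len:
        sym = stack.pop()
        if sym in rules:
            stack.extend(reversed(random.choice(rules[sym])))
        else:
            out.append(ord(sym) % VOCAB)
    out += [0] * (seq_len - len(out))  # pad to fixed length
    return torch.tensor(out)

class WarmupViT(nn.Module):
    """Transformer blocks shared with a ViT; a token embedding stands in for the patch embed."""
    def __init__(self):
        super().__init__()
        self.token_embed = nn.Embedding(VOCAB, DIM)  # bypasses the visual patch embedding
        layer = nn.TransformerEncoderLayer(d_model=DIM, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)  # reused for images later
        self.head = nn.Linear(DIM, N_CLASSES)

    def forward(self, tokens):
        x = self.encoder(self.token_embed(tokens))
        return self.head(x.mean(dim=1))

model = WarmupViT()
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):  # stands in for the small (~1% of budget) warm-up phase
    tokens = torch.stack([grammar_sequence() for _ in range(32)])
    labels = tokens.sum(dim=1) % N_CLASSES  # synthetic label: a simple property of the sequence
    loss = loss_fn(model(tokens), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()

# After the warm-up, the encoder weights would initialise the ViT's transformer blocks,
# a fresh patch-embedding layer would be attached, and standard image training would follow.
```

The point of the sketch is that only the transformer blocks carry the warm-up, not the patch embedding, which is consistent with the abstract's framing that the models internalise abstract computational priors rather than anything visual.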
Similar Papers
Visual Instruction Pretraining for Domain-Specific Foundation Models
CV and Pattern Recognition
Teaches computers to see better by thinking.
Separating Knowledge and Perception with Procedural Data
CV and Pattern Recognition
Teaches computers to recognize things from drawings.
Do Vision Transformers See Like Humans? Evaluating their Perceptual Alignment
CV and Pattern Recognition
Checks whether computers see like people; bigger models do worse.