Concepts or Skills? Rethinking Instruction Selection for Multi-modal Models
By: Andrew Bai, Justin Cui, Ruochen Wang, and more
Potential Business Impact:
Helps AI models see and understand images better by picking the training examples that match a target task.
Vision-language instruction tuning serves two main purposes: learning visual concepts and learning visual skills. In this paper, we find that vision-language benchmarks fall into a dichotomy: each predominantly benefits from training on instructions with either similar skills or similar visual concepts. Inspired by this discovery, we design a simple targeted training data selection method to optimize performance on a given benchmark. We first extract the concepts/skills from the benchmark, determine whether the benchmark predominantly benefits from similar concepts or skills, and finally select the instructions whose concepts/skills match best. Experiments on 10+ benchmarks validate the effectiveness of our targeted data selection method, showing +0.9% over the best existing baseline averaged over all benchmarks and +1.5% on the skill-focused subset. Our findings underscore the importance of recognizing the inherent trade-off within instruction selection, which requires balancing the acquisition of conceptual knowledge against visual skills.
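To make the selection step concrete, here is a minimal sketch of how targeted instruction selection could work once concepts or skills have been extracted as tags. This is an illustration, not the paper's actual implementation: the function name select_instructions, the tag-overlap scoring, and the candidate dictionary format are all assumptions introduced for this example.

```python
def select_instructions(
    benchmark_tags: set,   # concepts or skills extracted from the target benchmark (assumed given)
    candidates: list,      # each candidate: {"instruction": str, "tags": set} (hypothetical format)
    budget: int,           # number of training instructions to keep
) -> list:
    """Toy greedy selection: rank candidate instructions by how many
    benchmark tags they share, then keep the top `budget` of them."""
    def overlap(cand):
        # Score = number of shared concept/skill tags with the benchmark.
        return len(benchmark_tags & cand["tags"])

    ranked = sorted(candidates, key=overlap, reverse=True)
    return ranked[:budget]


# Hypothetical usage: a skill-focused benchmark about counting objects.
benchmark_tags = {"counting", "spatial reasoning"}
candidates = [
    {"instruction": "How many apples are on the table?", "tags": {"counting", "fruit"}},
    {"instruction": "Describe the painting's style.", "tags": {"art", "style"}},
    {"instruction": "Which object is left of the chair?", "tags": {"spatial reasoning"}},
]
print(select_instructions(benchmark_tags, candidates, budget=2))
```

In practice one would likely replace the exact tag overlap with a softer similarity (e.g., embedding-based matching), but the core idea, ranking training instructions by how well their concepts or skills match the target benchmark, stays the same.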
Similar Papers
Examining Vision Language Models through Multi-dimensional Experiments with Vision and Text Features
CV and Pattern Recognition
Fixes AI mistakes when looking at pictures.
Vision Language Models: A Survey of 26K Papers
CV and Pattern Recognition
Shows how AI research is changing fast.
MathSight: A Benchmark Exploring Have Vision-Language Models Really Seen in University-Level Mathematical Reasoning?
CV and Pattern Recognition
Tests if computers *really* see math problems.