Concepts or Skills? Rethinking Instruction Selection for Multi-modal Models

Published: August 14, 2025 | arXiv ID: 2508.10339v1

By: Andrew Bai, Justin Cui, Ruochen Wang, and more

Potential Business Impact:

Smarter selection of training data helps vision-language models see and understand images more accurately.

Vision-language instruction tuning serves two main purposes: learning visual concepts and learning visual skills. In this paper, we find that vision-language benchmarks fall into a dichotomy: each mainly benefits from training on instructions with either similar skills or similar visual concepts. Motivated by this finding, we design a simple targeted training-data selection method to optimize performance on a given benchmark. We first extract the concepts/skills from the benchmark, determine whether the benchmark predominantly benefits from similar concepts or similar skills, and finally select the instructions with the most closely matching concepts/skills. Experiments on 10+ benchmarks validate the effectiveness of our targeted data selection method, showing +0.9% over the best existing baseline averaged over all benchmarks and +1.5% on the skill-focused subset. Our findings underscore the importance of recognizing the inherent trade-off within instruction selection, which requires balancing the acquisition of conceptual knowledge against visual skills.
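
To make the three-step procedure in the abstract concrete, here is a minimal sketch of targeted instruction selection. It is not the authors' implementation: the data fields (`concepts`, `skills`, `prefers_skills`) and the helper `benchmark_prefers_skills` are hypothetical placeholders, and the concept/skill tags are assumed to have been extracted beforehand.

```python
# Sketch of targeted instruction selection, following the three steps in the abstract:
# 1) extract concepts/skills from the benchmark, 2) decide whether the benchmark
# benefits more from concept- or skill-matched data, 3) pick the best-matching instructions.
# All field and function names here are illustrative assumptions.
from typing import Dict, List


def benchmark_prefers_skills(benchmark_examples: List[Dict]) -> bool:
    # Placeholder heuristic: the paper determines this empirically by checking
    # whether skill-matched or concept-matched training data helps the benchmark more.
    # Here we simply assume a precomputed flag is attached to the benchmark metadata.
    return bool(benchmark_examples and benchmark_examples[0].get("prefers_skills", False))


def select_instructions(
    benchmark_examples: List[Dict],
    candidate_pool: List[Dict],
    budget: int,
) -> List[Dict]:
    # Step 1: collect the concept and skill tags present in the target benchmark.
    bench_concepts = {tag for ex in benchmark_examples for tag in ex.get("concepts", [])}
    bench_skills = {tag for ex in benchmark_examples for tag in ex.get("skills", [])}

    # Step 2: choose which kind of similarity to match on for this benchmark.
    match_on = "skills" if benchmark_prefers_skills(benchmark_examples) else "concepts"
    target_tags = bench_skills if match_on == "skills" else bench_concepts

    # Step 3: rank candidate instructions by tag overlap and keep the top `budget`.
    def overlap(example: Dict) -> int:
        return len(target_tags & set(example.get(match_on, [])))

    ranked = sorted(candidate_pool, key=overlap, reverse=True)
    return ranked[:budget]
```

In practice the scoring function could be replaced by any similarity measure over concept/skill representations; the sketch only illustrates the select-by-matching idea, not the paper's exact ranking.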

Page Count
11 pages

Category
Computer Science:
Computer Vision and Pattern Recognition