What Matters in Data Curation for Multimodal Reasoning? Insights from the DCVLR Challenge
By: Yosub Shin, Michael Buriek, Boris Sobolev, and more
Potential Business Impact:
Teaches computers to learn better from fewer examples.
We study data curation for multimodal reasoning through the NeurIPS 2025 Data Curation for Vision-Language Reasoning (DCVLR) challenge, which isolates dataset selection by fixing the model and training protocol. Using a compact curated dataset derived primarily from Walton Multimodal Cold Start, our submission placed first in the challenge. Through post-competition ablations, we show that difficulty-based example selection on an aligned base dataset is the dominant driver of performance gains. Increasing dataset size does not reliably improve mean accuracy under the fixed training recipe; it mainly reduces run-to-run variance. Commonly used diversity and synthetic-augmentation heuristics provide no additional benefit and often degrade performance. These results characterize DCVLR as a saturation-regime evaluation and highlight the central role of alignment and difficulty in data-efficient multimodal reasoning.
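Since the abstract credits difficulty-based example selection as the dominant driver, here is a minimal sketch of one plausible form such selection could take: score each candidate example by the base model's estimated pass rate and keep the hardest examples within a budget. The helper names (`estimate_pass_rate`, `select_hard_examples`), the `eval_fn` interface, and the 0.5 pass-rate threshold are illustrative assumptions, not the authors' exact procedure.

```python
# Hedged sketch of difficulty-based example selection. Assumes a generic
# eval_fn(example) -> bool that reports whether the fixed base model
# answers the example correctly on one sampled attempt. All names and the
# pass-rate threshold are illustrative, not the paper's exact recipe.
from typing import Callable, Dict, List


def estimate_pass_rate(example: Dict,
                       eval_fn: Callable[[Dict], bool],
                       n_samples: int = 8) -> float:
    """Estimate how often the base model solves this example."""
    return sum(eval_fn(example) for _ in range(n_samples)) / n_samples


def select_hard_examples(pool: List[Dict],
                         eval_fn: Callable[[Dict], bool],
                         budget: int,
                         max_pass_rate: float = 0.5) -> List[Dict]:
    """Keep up to `budget` examples the base model struggles with."""
    scored = [(estimate_pass_rate(ex, eval_fn), ex) for ex in pool]
    # Drop examples the model already solves most of the time, then
    # sort hardest first (lowest pass rate = most difficult).
    hard = [(rate, ex) for rate, ex in scored if rate <= max_pass_rate]
    hard.sort(key=lambda pair: pair[0])
    return [ex for _, ex in hard[:budget]]
```

Under this reading, the curated set concentrates on examples near or beyond the base model's current ability, which is consistent with the ablation finding that difficulty, not raw size or diversity heuristics, drives the gains.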
Similar Papers
OctoMed: Data Recipes for State-of-the-Art Multimodal Medical Reasoning
Artificial Intelligence
Teaches AI to understand medical images and text.
Vision-G1: Towards General Vision Language Reasoning with Multi-Domain Data Curation
CV and Pattern Recognition
Helps computers understand pictures and solve problems.
Better Reasoning with Less Data: Enhancing VLMs Through Unified Modality Scoring
CV and Pattern Recognition
Cleans up computer vision data for better understanding.