Reasoning Multimodal Large Language Model: Data Contamination and Dynamic Evaluation
By: Ming Liu, Wensheng Zhang
Potential Business Impact:
Tests AI to see if it truly understands.
Multimodal Large Language Models (MLLMs) show impressive vision-language benchmark performance, yet growing concerns about data contamination (test set exposure during training) risk masking true generalization. This concern extends to reasoning MLLMs, which are often fine-tuned via reinforcement learning from potentially contaminated base models. We propose a novel dynamic evaluation framework to rigorously assess MLLM generalization, moving beyond static benchmarks. Instead of perturbing inputs, we perturb the task itself: using the same visual input, models are evaluated across a family of tasks (e.g., QA, captioning, question posing, verification) to probe diverse capabilities. This task perturbation reveals whether model performance is robust or reliant on superficial task-specific cues. Our approach is analogous to loss-landscape sharpness: models that are overfit to, or contaminated on, a single task (sharp minima) falter under task shifts, unlike models with generalizable solutions (flatter minima). We develop an automated pipeline in which a calibrated judge scores open-ended generations (captions, questions), with calibration via paraphrase and corruption sampling. Applying this framework to leading image/video MLLMs on benchmarks including MME, RealWorldQA, and CVRR-ES, we analyze each model's cross-task "ability vector." We demonstrate that fine-tuning on simulated test data (extreme contamination) drastically sharpens task-specific performance but harms overall generalization. Our dynamic task perturbation offers deeper insight into MLLM generalization, distinguishing genuine understanding from spurious leakage or overfitting.
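To make the task-perturbation idea concrete, here is a minimal sketch in Python. Everything in it is assumed rather than taken from the paper: `generate` and `judge` stand in for hypothetical model and calibrated-judge interfaces, the prompt templates are placeholders, and `sharpness_proxy` is one plausible way to turn per-task score spread into a sharpness-style signal, not the authors' metric.

```python
import statistics
from typing import Callable, Dict, List

# Task family from the abstract: the visual input stays fixed while the task
# framing changes. These prompt templates are illustrative placeholders, not
# the authors' actual prompts.
TASK_TEMPLATES: Dict[str, str] = {
    "qa": "Answer the question about the image: {question}",
    "captioning": "Describe the image in one sentence.",
    "question_posing": "Pose a question that this image could answer.",
    "verification": "Is this statement true of the image? {statement}",
}

def ability_vector(
    generate: Callable[[str, bytes], str],  # hypothetical MLLM call: (prompt, image) -> text
    judge: Callable[[str, str], float],     # hypothetical calibrated judge: (task, output) -> score in [0, 1]
    image: bytes,
    context: Dict[str, str],                # fills the {question}/{statement} slots
) -> Dict[str, float]:
    """Evaluate one image across the whole task family; the per-task scores
    form the model's cross-task 'ability vector' for that input."""
    vector: Dict[str, float] = {}
    for task, template in TASK_TEMPLATES.items():
        prompt = template.format(**context)  # str.format ignores unused keys
        output = generate(prompt, image)
        vector[task] = judge(task, output)
    return vector

def sharpness_proxy(vectors: List[Dict[str, float]]) -> float:
    """Rough analogue of loss-landscape sharpness: a model that excels on one
    task but collapses under task shifts shows a large per-input score spread."""
    spreads = [statistics.pstdev(v.values()) for v in vectors]
    return statistics.mean(spreads)
```

Under these assumptions, a high `sharpness_proxy` flags a model whose single-task strength does not transfer across the task family, which is the behavioral signature the abstract attributes to contamination or overfitting.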
Similar Papers
Rethinking the effects of data contamination in Code Intelligence
Software Engineering
Finds if computer code is copied unfairly.
Contamination Detection for VLMs using Multi-Modal Semantic Perturbation
Machine Learning (CS)
Finds if AI saw test answers before learning.
Exploring and Evaluating Multimodal Knowledge Reasoning Consistency of Multimodal Large Language Models
Computation and Language
Makes computers understand pictures and words better.