Empowering Reliable Visual-Centric Instruction Following in MLLMs
By: Weilei He, Feng Ju, Zhiyuan Fan, and more
Potential Business Impact:
Helps AI follow instructions using pictures and words.
Evaluating the instruction-following (IF) capabilities of Multimodal Large Language Models (MLLMs) is essential for rigorously assessing how faithfully model outputs adhere to user-specified intentions. Nevertheless, existing benchmarks for evaluating MLLMs' instruction-following capability focus primarily on verbal instructions in the textual modality. This limitation hinders a thorough analysis of instruction-following capabilities, because it overlooks the implicit constraints embedded in the semantically rich visual modality. To address this gap, we introduce VC-IFEval, a new benchmark accompanied by a systematically constructed dataset that evaluates MLLMs' instruction-following ability in multimodal settings. Our benchmark systematically incorporates vision-dependent constraints into instruction design, enabling a more rigorous and fine-grained assessment of how well MLLMs align their outputs with both the visual input and the textual instructions. Furthermore, by fine-tuning MLLMs on our dataset, we achieve substantial gains in visual instruction-following accuracy and adherence. Through extensive evaluation across representative MLLMs, we provide new insights into the strengths and limitations of current models.
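To make the idea of a vision-dependent constraint concrete, here is a minimal Python sketch of how such a constraint might be scored. This is our own illustration, not the paper's released code; every name in it (Constraint, mentions, the example values) is hypothetical, and it assumes a simple pass/fail string check where a real benchmark would use richer verifiers.

```python
# A minimal sketch (not VC-IFEval's actual code) of scoring one
# vision-dependent constraint. All names here are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Constraint:
    """One instruction constraint whose ground truth comes from the image."""
    description: str                    # shown to the model in the prompt
    ground_truth: str                   # derived from the visual input
    check: Callable[[str, str], bool]   # (model_output, ground_truth) -> pass?

def mentions(output: str, truth: str) -> bool:
    """Pass if the visually grounded value appears in the model's output."""
    return truth.lower() in output.lower()

# Example: the image (not shown) determines that the largest object is red,
# and the instruction requires the answer to state that color explicitly.
constraint = Constraint(
    description="State the color of the largest object in the image.",
    ground_truth="red",
    check=mentions,
)

model_output = "The largest object in the image is a red balloon."
score = 1.0 if constraint.check(model_output, constraint.ground_truth) else 0.0
print(f"constraint satisfied: {score}")  # -> constraint satisfied: 1.0
```

The key design point this sketch tries to capture is that the ground truth lives in the image rather than the text, so a model that ignores the visual input cannot satisfy the constraint by parsing the instruction alone.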
Similar Papers
M-IFEval: Multilingual Instruction-Following Evaluation
Computation and Language
Tests AI's understanding in many languages.
MCIF: Multimodal Crosslingual Instruction-Following Benchmark from Scientific Talks
Computation and Language
Tests AI that understands talking, seeing, and reading.
MM-IFEngine: Towards Multimodal Instruction Following
Computer Vision and Pattern Recognition
Teaches AI to follow picture instructions precisely.