Score: 1

Empowering Reliable Visual-Centric Instruction Following in MLLMs

Published: January 6, 2026 | arXiv ID: 2601.03198v1

By: Weilei He, Feng Ju, Zhiyuan Fan, and more

Potential Business Impact:

Helps AI systems follow instructions that combine pictures and words.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Evaluating the instruction-following capabilities of Multimodal Large Language Models (MLLMs) is essential for rigorously assessing how faithfully model outputs adhere to user-specified intentions. However, existing benchmarks for MLLM instruction following focus primarily on verbal instructions in the textual modality. This narrow focus hinders thorough analysis, as it overlooks the implicit constraints embedded in the semantically rich visual modality. To address this gap, we introduce VC-IFEval, a new benchmark with a systematically constructed dataset that evaluates MLLMs' instruction-following ability in multimodal settings. The benchmark incorporates vision-dependent constraints into instruction design, enabling a more rigorous and fine-grained assessment of how well MLLMs align their outputs with both the visual input and the textual instruction. Furthermore, fine-tuning MLLMs on our dataset yields substantial gains in visual instruction-following accuracy and adherence. Through extensive evaluation of representative MLLMs, we provide new insights into the strengths and limitations of current models.
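To make "vision-dependent constraint" concrete: it is an instruction whose pass/fail condition can only be checked against facts extracted from the image, not from the text alone. The paper's actual evaluation schema is not reproduced in this summary, so the Python sketch below is a hypothetical illustration under that assumption; all names (`VisualConstraintItem`, `sentence_count_matches_people`, `evaluate`, `generate`) are invented for the example and are not the benchmark's real API.

```python
# Hypothetical sketch of a vision-dependent instruction-following check.
# All field and function names are illustrative assumptions, not VC-IFEval's schema.
from dataclasses import dataclass
from typing import Callable

@dataclass
class VisualConstraintItem:
    image_path: str                      # visual input the constraint depends on
    instruction: str                     # textual instruction given to the MLLM
    ground_truth: dict                   # facts extracted from the image (e.g., object counts)
    check: Callable[[str, dict], bool]   # verifier: (model_output, ground_truth) -> pass/fail

def sentence_count_matches_people(output: str, truth: dict) -> bool:
    """Pass if the response uses exactly one sentence per person in the image."""
    sentences = [s for s in output.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    return len(sentences) == truth["num_people"]

item = VisualConstraintItem(
    image_path="examples/street_scene.jpg",
    instruction="Describe the scene using exactly one sentence per visible person.",
    ground_truth={"num_people": 3},
    check=sentence_count_matches_people,
)

def evaluate(items, generate: Callable[[str, str], str]) -> float:
    """Score a model (passed in as a generate(image, instruction) callable) on constraint adherence."""
    passed = sum(
        item.check(generate(item.image_path, item.instruction), item.ground_truth)
        for item in items
    )
    return passed / len(items)
```

The key design point this illustrates: a text-only instruction checker could verify the sentence count, but only a verifier with access to image-derived ground truth can decide whether that count is the *right* one, which is what distinguishes this setting from purely textual instruction-following benchmarks.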

Country of Origin
🇨🇳 🇭🇰 Hong Kong, China

Page Count
23 pages

Category
Computer Science:
Machine Learning (CS)