IF-VidCap: Can Video Caption Models Follow Instructions?
By: Shihao Li, Yuanxing Zhang, Jiangtao Wu, and more
Potential Business Impact:
Teaches computers to describe videos as you ask.
Although Multimodal Large Language Models (MLLMs) have demonstrated proficiency in video captioning, practical applications require captions that follow specific user instructions rather than exhaustive, unconstrained descriptions. Current benchmarks, however, primarily assess descriptive comprehensiveness while largely overlooking instruction-following capabilities. To address this gap, we introduce IF-VidCap, a new benchmark for evaluating controllable video captioning, which contains 1,400 high-quality samples. Distinct from existing video captioning or general instruction-following benchmarks, IF-VidCap incorporates a systematic framework that assesses captions along two dimensions: format correctness and content correctness. Our comprehensive evaluation of over 20 prominent models reveals a nuanced landscape: despite the continued dominance of proprietary models, the performance gap is closing, with top-tier open-source solutions now achieving near-parity. Furthermore, we find that models specialized for dense captioning underperform general-purpose MLLMs on complex instructions, indicating that future work should simultaneously advance both descriptive richness and instruction-following fidelity.
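To make the two-dimensional evaluation concrete, here is a minimal Python sketch of a caption scorer that reports format correctness and content correctness separately. This is an illustrative stand-in, not the paper's actual pipeline: the constraint names (`exactly_three_bullets`, `max_50_words`, `json_object`) and the keyword-based content check are hypothetical simplifications of IF-VidCap's rubric, which the abstract does not detail.

```python
import json
from dataclasses import dataclass


@dataclass
class CaptionScore:
    format_correct: bool   # did the caption obey the structural constraint?
    content_correct: bool  # did the caption cover the required video facts?


def check_format(caption: str, constraint: str) -> bool:
    # Hypothetical constraint names; the benchmark's real taxonomy is richer.
    if constraint == "exactly_three_bullets":
        bullets = [ln for ln in caption.splitlines() if ln.lstrip().startswith("-")]
        return len(bullets) == 3
    if constraint == "max_50_words":
        return len(caption.split()) <= 50
    if constraint == "json_object":
        try:
            return isinstance(json.loads(caption), dict)
        except ValueError:
            return False
    return True  # unknown constraint: treat as unconstrained


def check_content(caption: str, required_facts: list[str]) -> bool:
    # Naive keyword proxy; a real evaluator would use an LLM judge
    # to verify each required fact against the caption.
    lowered = caption.lower()
    return all(fact.lower() in lowered for fact in required_facts)


def score_caption(caption: str, constraint: str, required_facts: list[str]) -> CaptionScore:
    return CaptionScore(
        format_correct=check_format(caption, constraint),
        content_correct=check_content(caption, required_facts),
    )


# Example: the caption satisfies the format constraint but misses one fact,
# so the two dimensions disagree.
print(score_caption(
    "- A dog runs across a field\n- It catches a frisbee\n- The owner waves",
    constraint="exactly_three_bullets",
    required_facts=["dog", "frisbee", "beach"],
))
```

Keeping the two booleans separate mirrors the benchmark's split between format and content: a caption can follow the requested structure perfectly while describing the video wrongly, or vice versa, and a single combined score would hide which failure occurred.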
Similar Papers
Empowering Reliable Visual-Centric Instruction Following in MLLMs
Machine Learning (CS)
Helps AI follow instructions using pictures and words.
Generalizing Verifiable Instruction Following
Computation and Language
Teaches chatbots to follow tricky rules exactly.
Instruction-Following Evaluation of Large Vision-Language Models
Computation and Language
Teaches AI to follow instructions better.