Marco-Bench-MIF: On Multilingual Instruction-Following Capability of Large Language Models
By: Bo Zeng, Chenyang Lyu, Sinuo Liu, and more
Potential Business Impact:
Helps evaluate how well AI follows instructions in many languages.
Instruction-following has become a key capability to evaluate in Large Language Models (LLMs). However, existing datasets, such as IFEval, are either predominantly monolingual and centered on English or simply machine-translated into other languages, limiting their applicability in multilingual contexts. In this paper, we present a carefully curated extension of IFEval into a localized multilingual version named Marco-Bench-MIF, covering 30 languages with varying levels of localization. Our benchmark addresses linguistic constraints (e.g., modifying capitalization requirements for Chinese) and cultural references (e.g., substituting region-specific company names in prompts) via a hybrid pipeline combining translation with verification. Through a comprehensive evaluation of 20+ LLMs on Marco-Bench-MIF, we found that: (1) a 25-35% accuracy gap exists between high- and low-resource languages; (2) model scale substantially impacts performance (by 45-60%), yet script-specific challenges persist; and (3) machine-translated data underestimates accuracy by 7-22% versus localized data. Our analysis identifies key challenges in multilingual instruction following, including keyword consistency preservation and compositional constraint adherence across languages. Marco-Bench-MIF is available at https://github.com/AIDC-AI/Marco-Bench-MIF.
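To make the constraint-localization idea in the abstract concrete, below is a minimal Python sketch of IFEval-style verifiable checks. This is not the authors' released code, and all function names are hypothetical; it only illustrates why an English capitalization constraint must be adapted for Chinese, while keyword-presence and sentence-count constraints can remain language-agnostic.

```python
import re
import unicodedata

# Minimal sketch of IFEval-style verifiable constraint checks, illustrating
# why some constraints need localization. All names here are hypothetical,
# not taken from the Marco-Bench-MIF codebase.

def check_all_uppercase(response: str) -> bool:
    """English-style constraint: 'answer in all capital letters'.
    Chinese characters carry no case, so this check is vacuous for
    Chinese text and a localized benchmark must replace it."""
    cased = [ch for ch in response if ch.isupper() or ch.islower()]
    return bool(cased) and all(ch.isupper() for ch in cased)

def check_keyword_presence(response: str, keywords: list[str]) -> bool:
    """Language-agnostic constraint: required keywords must appear verbatim.
    NFC normalization avoids Unicode composition mismatches that plain
    substring matching can hit in non-Latin scripts."""
    norm = unicodedata.normalize("NFC", response)
    return all(unicodedata.normalize("NFC", kw) in norm for kw in keywords)

def check_min_sentences(response: str, n: int) -> bool:
    """Structural constraint: at least n sentences. The sentence-final
    punctuation set includes CJK full-width marks alongside ASCII ones."""
    parts = re.split(r"[.!?\u3002\uff01\uff1f]+", response)
    return sum(1 for p in parts if p.strip()) >= n

resp = "这是第一句。这是第二句，包含关键词。"
print(check_all_uppercase(resp))                # False: no cased characters
print(check_keyword_presence(resp, ["关键词"]))  # True
print(check_min_sentences(resp, 2))             # True
```

The keyword check also hints at the "keyword consistency preservation" challenge named in the abstract: once prompts are translated or localized, the verifier must agree exactly on which surface form of each keyword counts as present.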
Similar Papers
XIFBench: Evaluating Large Language Models on Multilingual Instruction Following
Computation and Language
Tests how well computers understand instructions in many languages.
M-IFEval: Multilingual Instruction-Following Evaluation
Computation and Language
Tests how well AI follows instructions in many languages.
MaXIFE: Multilingual and Cross-lingual Instruction Following Evaluation
Computation and Language
Tests how well computers follow instructions in many languages.