XIFBench: Evaluating Large Language Models on Multilingual Instruction Following
By: Zhenyu Li, Kehai Chen, Yunfei Long, and more
Potential Business Impact:
Tests how well computers understand instructions in many languages.
Large Language Models (LLMs) have demonstrated remarkable instruction-following capabilities across various applications. However, their performance in multilingual settings remains poorly understood, as existing evaluations lack fine-grained constraint analysis. We introduce XIFBench, a comprehensive constraint-based benchmark for assessing the multilingual instruction-following abilities of LLMs, featuring a novel taxonomy of five constraint categories and 465 parallel instructions across six languages spanning different resource levels. To ensure consistent cross-lingual evaluation, we develop a requirement-based protocol that leverages English requirements as semantic anchors, which are then used to validate translations across languages. Extensive experiments with various LLMs reveal notable variations in instruction-following performance across resource levels and identify key influencing factors such as constraint category, instruction complexity, and cultural specificity.
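The abstract does not describe the protocol's implementation, so the following is a minimal sketch of what a requirement-based, constraint-level evaluation might look like. The `Requirement` dataclass, its `check` predicates, and `constraint_satisfaction_rate` are illustrative assumptions, not XIFBench's actual API; the point is only that English-derived requirements can score parallel responses in any language against the same anchors, keeping per-language results comparable.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical types illustrating a requirement-based protocol: each
# instruction carries constraint-level requirements derived from its
# English version, acting as language-independent semantic anchors.

@dataclass
class Requirement:
    constraint_category: str      # e.g. "format", "length", "content"
    description: str              # English requirement used as the anchor
    check: Callable[[str], bool]  # True if the response satisfies it

def constraint_satisfaction_rate(response: str,
                                 requirements: list[Requirement]) -> float:
    """Fraction of anchored requirements the response satisfies."""
    if not requirements:
        return 1.0
    satisfied = sum(1 for r in requirements if r.check(response))
    return satisfied / len(requirements)

# Example: the same anchored requirements score a response regardless of
# the language the instruction was posed in.
reqs = [
    Requirement("length", "Answer in at most 50 words",
                lambda s: len(s.split()) <= 50),
    Requirement("format", "Answer contains a bulleted list",
                lambda s: any(line.lstrip().startswith("-")
                              for line in s.splitlines())),
]

print(constraint_satisfaction_rate("- point one\n- point two", reqs))  # 1.0
```

In practice, open-ended content constraints would likely be judged by an LLM rather than a hand-written predicate; the predicate form above is just the simplest runnable stand-in for a per-constraint check.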
Similar Papers
MaXIFE: Multilingual and Cross-lingual Instruction Following Evaluation
Computation and Language
Tests how well computers follow instructions in many languages.
EIFBENCH: Extremely Complex Instruction Following Benchmark for Large Language Models
Computation and Language
Tests if AI can follow many steps at once.
MCIF: Multimodal Crosslingual Instruction-Following Benchmark from Scientific Talks
Computation and Language
Tests AI that understands talking, seeing, and reading.