MaXIFE: Multilingual and Cross-lingual Instruction Following Evaluation
By: Yile Liu, Ziwei Ma, Xiu Jiang and more
Potential Business Impact:
Tests how well computers follow instructions in many languages.
With the rapid adoption of large language models (LLMs) in natural language processing, the ability to follow instructions has emerged as a key measure of their practical utility. However, existing evaluation methods often focus on single-language scenarios, overlooking the challenges and differences present in multilingual and cross-lingual contexts. To address this gap, we introduce MaXIFE: a comprehensive benchmark that assesses instruction-following capabilities across 23 languages with 1,667 verifiable instruction tasks. MaXIFE combines Rule-Based Evaluation with Model-Based Evaluation to balance efficiency and accuracy. We applied MaXIFE to several leading commercial LLMs, establishing baseline results for future comparisons. By providing a standardized tool for multilingual instruction-following evaluation, MaXIFE aims to advance research and development in natural language processing.
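The abstract does not show what a rule-based check on a verifiable instruction looks like, so here is a minimal Python sketch, assuming a task whose constraints (a minimum bullet count, a word limit, forbidden phrases) can be verified programmatically. All names here (VerifiableTask, check_response, the rule keys) are hypothetical illustrations, not MaXIFE's actual schema or code.

# Minimal sketch of rule-based evaluation for a verifiable instruction task.
# Hypothetical names and rule keys; not taken from the MaXIFE benchmark.
from dataclasses import dataclass, field


@dataclass
class VerifiableTask:
    prompt: str
    # e.g. {"min_bullets": 3, "max_words": 120, "forbidden": ["sorry"]}
    rules: dict = field(default_factory=dict)


def check_response(task: VerifiableTask, response: str) -> bool:
    """Return True only if the response satisfies every programmatic rule."""
    if "min_bullets" in task.rules:
        bullets = [ln for ln in response.splitlines()
                   if ln.lstrip().startswith(("-", "*"))]
        if len(bullets) < task.rules["min_bullets"]:
            return False
    if "max_words" in task.rules:
        if len(response.split()) > task.rules["max_words"]:
            return False
    if "forbidden" in task.rules:
        lowered = response.lower()
        if any(w.lower() in lowered for w in task.rules["forbidden"]):
            return False
    return True


task = VerifiableTask(
    prompt="List three advantages of unit testing as bullet points.",
    rules={"min_bullets": 3, "max_words": 120},
)
print(check_response(task, "- fast feedback\n- catches regressions\n- documents behavior"))  # True

Checks like these are cheap and deterministic, which is why rule-based evaluation scales to many languages; constraints that rules cannot verify (for example, whether the response is actually written in the requested language, or whether it is fluent) are the kind of criteria a model-based judge would cover in the combined setup the abstract describes.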
Similar Papers
XIFBench: Evaluating Large Language Models on Multilingual Instruction Following
Computation and Language
Tests how well computers understand instructions in many languages.
MCIF: Multimodal Crosslingual Instruction-Following Benchmark from Scientific Talks
Computation and Language
Tests AI that understands talking, seeing, and reading.
When Instructions Multiply: Measuring and Estimating LLM Capabilities of Multiple Instructions Following
Computation and Language
Helps computers follow many commands better.