EIFBENCH: Extremely Complex Instruction Following Benchmark for Large Language Models
By: Tao Zou, Xinghua Zhang, Haiyang Yu, and more
Potential Business Impact:
Tests whether AI models can follow instructions with many tasks and constraints at once.
With the development and widespread application of large language models (LLMs), the "Model as Product" paradigm is evolving rapidly and demands stronger capabilities for addressing complex user needs, which often require precise workflow execution and the accurate understanding of multiple tasks. However, existing benchmarks, which focus on single-task settings with limited constraints, lack the complexity needed to reflect real-world scenarios. To bridge this gap, we present the Extremely Complex Instruction Following Benchmark (EIFBENCH), meticulously crafted to enable a more realistic and robust evaluation of LLMs. EIFBENCH not only includes multi-task scenarios that allow comprehensive assessment across diverse task types concurrently, but also integrates a variety of constraints that replicate complex operational environments. Furthermore, we propose the Segment Policy Optimization (SegPO) algorithm to enhance an LLM's ability to accurately execute multi-task workflows. Evaluations on EIFBENCH reveal considerable performance gaps in existing LLMs when challenged with these extremely complex instructions, underscoring the need for ongoing optimization to navigate the intricate challenges posed by real-world LLM applications.
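The abstract names SegPO but does not describe its formulation. As a minimal illustrative sketch only, not the paper's actual method, one way to read "segment policy optimization" is to split a response to a multi-task instruction into segments, one per sub-task, score each segment separately, and apply a clipped policy-gradient update with per-segment advantages instead of a single sequence-level advantage. All names and shapes below are hypothetical.

```python
import numpy as np

def segment_advantages(rewards: np.ndarray) -> np.ndarray:
    """Normalize rewards per segment (column) across a group of
    sampled responses (rows), so each sub-task contributes its own
    credit signal rather than one sequence-level scalar.

    rewards: shape (num_samples, num_segments); rewards[i, j] is the
    reward for segment j (sub-task j) of sampled response i.
    """
    mean = rewards.mean(axis=0, keepdims=True)
    std = rewards.std(axis=0, keepdims=True) + 1e-8  # avoid divide-by-zero
    return (rewards - mean) / std

def clipped_segment_loss(log_ratios: np.ndarray,
                         advantages: np.ndarray,
                         eps: float = 0.2) -> float:
    """PPO-style clipped surrogate objective, applied per segment.

    log_ratios: log(pi_new / pi_old) averaged over the tokens of each
    segment; same shape as advantages.
    """
    ratios = np.exp(log_ratios)
    unclipped = ratios * advantages
    clipped = np.clip(ratios, 1.0 - eps, 1.0 + eps) * advantages
    # Minimize the negative of the clipped objective.
    return -np.minimum(unclipped, clipped).mean()

# Toy example: 4 sampled responses to one instruction with 3 sub-tasks,
# each sub-task scored pass/fail (0 or 1).
rng = np.random.default_rng(0)
rewards = rng.integers(0, 2, size=(4, 3)).astype(float)
adv = segment_advantages(rewards)
log_ratios = rng.normal(0.0, 0.05, size=rewards.shape)
print(clipped_segment_loss(log_ratios, adv))
```

The only difference from a plain sequence-level objective in this sketch is the per-segment (axis=0) normalization, which lets a response be rewarded for the sub-tasks it completed even when other sub-tasks in the same response failed.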
Similar Papers
XIFBench: Evaluating Large Language Models on Multilingual Instruction Following
Computation and Language
Tests how well computers understand instructions in many languages.
When Instructions Multiply: Measuring and Estimating LLM Capabilities of Multiple Instructions Following
Computation and Language
Measures how well computers follow many commands at once.
CodeAlignBench: Assessing Code Generation Models on Developer-Preferred Code Adjustments
Software Engineering
Tests whether AI can revise code the way developers prefer.