Evaluating Robustness of Large Language Models in Enterprise Applications: Benchmarks for Perturbation Consistency Across Formats and Languages
By: Tara Bogavelli, Oluwanifemi Bamgbose, Gabrielle Gauthier Melançon, and more
Potential Business Impact:
Shows how reliably AI systems follow instructions even when the wording or format changes slightly.
Enterprise LLM applications require consistently high quality and reliable performance across diverse scenarios, demanding robustness to minor variations. Existing research shows that even small prompt changes can lead to substantial differences in output, but it has mainly focused on a narrow set of perturbations with small academic datasets, limiting its relevance to real-world applications. To address this, we present a comprehensive benchmark suite that evaluates robustness across multiple perturbation types, including general text edits (e.g., punctuation, whitespace), formatting changes (e.g., JSON, YAML), multilingual and cross-lingual inputs, and positional variations in instructions. Evaluating 11 models ranging from 4B to 120B+ parameters, we find that minor perturbations reduce performance by up to 40 percentage points on key enterprise metrics. Critically, we demonstrate that the relationship between model size and robustness is more nuanced than conventional assumptions suggest: an 8B parameter model (Ministral 3 8B) outperforms most larger models, while another 8B model (Llama 3.1 8B) performs worst overall.
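The core measurement the abstract describes is a perturbation-consistency check: score a model on a clean prompt, re-send the same task under small edits or format changes, and report the drop in the task metric. The sketch below illustrates that idea under stated assumptions; `query_model` and `score_fn` are placeholders for a real LLM client and task metric, and the perturbation functions are illustrative, not the paper's actual benchmark suite.

```python
import json
import random
import string

# Placeholder for a real LLM client; the paper's models and harness are not
# reproduced here, so this function is purely illustrative.
def query_model(prompt: str) -> str:
    raise NotImplementedError("Plug in your own LLM client.")

def perturb_whitespace(prompt: str) -> str:
    """General text edit: randomly double some inter-word spaces."""
    return " ".join(
        word + (" " if random.random() < 0.2 else "")
        for word in prompt.split(" ")
    )

def perturb_punctuation(prompt: str) -> str:
    """General text edit: drop roughly a third of punctuation characters."""
    return "".join(
        ch for ch in prompt
        if ch not in string.punctuation or random.random() > 0.3
    )

def as_json_envelope(prompt: str) -> str:
    """Formatting change: wrap the same instruction in a JSON envelope."""
    return json.dumps({"instruction": prompt}, ensure_ascii=False)

def consistency_drop(base_prompt, perturbations, score_fn):
    """Score the clean prompt, then each perturbed variant, and return the gap
    in the task metric per perturbation (positive = performance lost)."""
    base_score = score_fn(query_model(base_prompt))
    return {
        name: base_score - score_fn(query_model(perturb(base_prompt)))
        for name, perturb in perturbations.items()
    }

# Example wiring (score_fn would be a task-specific metric such as exact match):
# drops = consistency_drop(
#     "Extract the invoice total from the text below.",
#     {"whitespace": perturb_whitespace,
#      "punctuation": perturb_punctuation,
#      "json": as_json_envelope},
#     score_fn=my_exact_match_metric,
# )
```

Reporting the drop per perturbation type, rather than a single averaged score, is what lets a benchmark of this kind attribute a large loss to, say, a JSON reformatting rather than whitespace noise.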
Similar Papers
On Robustness and Reliability of Benchmark-Based Evaluation of LLMs
Computation and Language
Shows that the way benchmark tests are posed can make capable AI models score worse than they are.
A Multi-Language Perspective on the Robustness of LLM Code Generation
Software Engineering
Tests how reliably AI writes code across different programming languages.
Evaluating and Improving Robustness in Large Language Models: A Survey and Future Directions
Computation and Language
Surveys ways to measure and improve how reliable AI models are.